New Backup Behavior in SQL 2014

Comments: 1 Comment
Published on: April 1, 2014

As has been well publicized, today is the official general availability date for SQL Server 2014.  You can read more about all of that here.

Something that hasn't received much attention is a new feature that is a game changer.  I'm not referring to the advancements with In-Memory OLTP (aka Hekaton).  The real game changer, in my opinion, is the way backups will be treated in 2014.

SQL Server 2014 brings the added functionality of encryption to database backups.  This is a significant improvement for securing data at rest.  It applies to databases with and without TDE enabled, and to backups both compressed and uncompressed.

The beauty of this functionality is that all backups will now be encrypted by default.  What this means is that you need not configure anything on your end to make it happen.  Even though it is enabled by default, you can change the encryption method should you choose.

Another interesting note with this new default behavior is that all of your database backups will fail out of the box.  You might ask why.  Well, there are some prerequisites that must be met in order for the encrypted backup to succeed.

Here are those pre-reqs, with a quick T-SQL sketch after the list.

  1. Create a Database Master Key for the master database.
  2. Create a certificate or asymmetric key to use for backup encryption.
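
To make that concrete, here is a minimal sketch of those two steps followed by an encrypted backup.  The database name, certificate name, file path, and password below are placeholders of my own choosing; adjust them for your environment.

[codesyntax lang="tsql"]
USE master;
GO
-- Step 1: create the Database Master Key in master (placeholder password).
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Us3AStr0ngerP@ssw0rdTh@nThis!';
GO
-- Step 2: create a certificate to use for backup encryption.
CREATE CERTIFICATE BackupEncryptCert
    WITH SUBJECT = 'Certificate for backup encryption';
GO
-- With the prerequisites met, an encrypted (and compressed) backup succeeds.
BACKUP DATABASE MyDatabase
TO DISK = N'C:\Backups\MyDatabase.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupEncryptCert);
[/codesyntax]

One caution that is no joke: back up that certificate and its private key somewhere safe, because without them an encrypted backup cannot be restored.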

If you have not created your DMK, your backups will silently fail: the backup reports success yet produces nothing usable, and you will be none the wiser until you try to restore it.  That is really the way you want to conduct your duties as a DBA, right?
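
One way to avoid that unpleasant surprise is to verify every backup right after taking it rather than at restore time.  A quick sketch, again with a placeholder path:

[codesyntax lang="tsql"]
-- Confirm the backup is complete and readable without actually restoring it.
-- (For an encrypted backup, the certificate must exist on the verifying server.)
RESTORE VERIFYONLY
FROM DISK = N'C:\Backups\MyDatabase.bak';
[/codesyntax]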

As you move forward with your SQL 2014 implementation, be sure to create those DMKs and verify that your backups are safe.

Oh, and in case you haven't noticed, pay attention to today's date.

 

SQL 2012 Has Launched

Categories: News, Professional, SSC
Comments: No Comments
Published on: April 1, 2012


Unless you have been hiding under a rock, you have probably heard that SQL 2012 has launched.  With it, some really cool features are now available to use in our SQL database environments.

Some of these cool new features include some new DMVs, the AlwaysOn feature, and columnstore indexes.

But did you know that there are some new recommended best practices?

Today, I want to share with you a new best practice that involves table naming schemes.

With the release of SQL 2012, it is now advised that tables be named in such a manner that the name is obfuscated.  The reason for this is to slow down any would-be attacker from quickly finding which table holds the most sensitive data.  Currently it is not too far-fetched for an attacker to look in a database and figure out where the orders or customer confidential information may be placed.  After all, it is fairly common to name those tables in such a manner that the name describes the data.

Now, though, the best practice is pushing in a new direction, and it is no longer even a good idea to use legible words.  The drive is for full obfuscation of the table name.  In an effort to demonstrate, here is a quick script that will create several tables with obfuscated names.

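The original script did not survive into this archive, so here is a minimal sketch in the same spirit: it creates a handful of tables whose names are random hex strings instead of anything legible.  (The original, as noted below, went a step further and drew a picture with the names.)

[codesyntax lang="tsql"]
-- Create five tables with deliberately illegible, obfuscated names.
DECLARE @i INT = 1, @name sysname, @sql NVARCHAR(MAX);

WHILE @i <= 5
BEGIN
    -- A GUID with the dashes stripped makes a suitably meaningless name.
    SET @name = N'T_' + REPLACE(CAST(NEWID() AS CHAR(36)), '-', '');
    SET @sql  = N'CREATE TABLE dbo.' + QUOTENAME(@name)
              + N' (Id INT IDENTITY(1,1) PRIMARY KEY, Payload NVARCHAR(256) NULL);';
    EXEC sp_executesql @sql;
    SET @i += 1;
END;
[/codesyntax]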

I like this new standard.  Now, I will just design images into my table names that describe the database as a whole.  As you can see, this particular database would be useful for maybe a space cadet or a hobbyist with a high enthusiasm for rockets.  Or maybe it just works really well for any demo involved with the SQL 2012 launch.  Other than a neat little picture, the tables are completely obfuscated.

Conclusion

Keep best practices in mind whenever designing a database.  Maintaining best practices can help safeguard the data you were hired to manage and protect.  In addition to that, make sure you forget every other word of this post.  It is complete and utter hogwash.  Happy April Fools.

Indexing Just Got Easy

Comments: 1 Comment
Published on: April 1, 2011

As database professionals, one of the things we should be familiar with is the use of indexes.  In SQL Server an index helps to improve query performance – when properly implemented.  There are some penalties in play when an index is created.  These penalties are typically associated with insert, update, and delete operations.  Other penalties might revolve around a poorly defined index that causes more work to be done for a given query.  It is in our best interest to strike a balance between benefit and penalty through analysis and tuning of these indexes.  We want the benefits to offset the penalties for the index to be of worth to the query and the system.

In SQL Server we have been given an array of tools over time to help us with this effort.  Microsoft gave us the Index Tuning Wizard in SQL 2000.  Then we got a decent upgrade with the Database Engine Tuning Advisor in SQL 2005.  Microsoft has been attempting to make a better, more agile product for the database professional so s/he can spend time doing other equally important duties.  It appears that we now have a new tool to help us do just that.

First, let’s recap the predecessors.

Index Tuning Wizard

This tool was born in SQL 2000.  The intent was to give the database professional a view into the big picture – not just one index for one query, isolated from other tables, views, and queries.  When used properly, this tool could give the professional a decent idea of the indexes that were needed throughout the system.  A common faux pas was blanket acceptance of the proposed indexes – good or bad.  Yes, this tool did recommend many bad indexes.  You can learn more about the tool, if you would like, from here.

Database Engine Tuning Advisor

This tool was introduced by Microsoft with SQL 2005.  DTA made significant advancements over its predecessor.  The main purpose is much the same as the Index Tuning Wizard's.  DTA did produce more reliable results (IMHO) and seemed to be more efficient at doing it.  Unfortunately, there was still the possibility of getting some bad recommendations.  And yes, the bigger issue was still blanket acceptance of all the recommended indexes.  You can learn more about this tool here.  This tool is available in SQL 2005, 2008, and 2008 R2.

GAIHA (pronounced like GAIA / GAYA)

GAIHA is Genetic Algorithm Index Heuristic Automation.  This name was chosen for multiple reasons, one of which is the pronunciation similarity with Gaya.  For some, Gaya connotes "enlightened."  The hope for this product is to really help the database reach that next level and appear to be enlightened because it runs that much more seamlessly and effortlessly.

Rumor has it that GAIHA is on the docket to be released with a future version of SQL Server.  This tool has the same prime directive as its predecessors.  As the name implies, though, it's like DTA on steroids.  Furthermore, it is entirely automated.  Just as you can enable auto-update of statistics, you will be able to enable GAIHA.  Much the same as new statistics get created based on data usage and data volume, indexes will auto-create based on query usage.

The automation of GAIHA will use advanced algorithms and heuristics.  Think of the heuristics as the recording of usage patterns – the system learning, over and over again, how it is used.  That then triggers an algorithm to run hundreds of generations of possible indexes and pick the one that is the best fit.  Your query that ran in ten seconds today will run in ten milliseconds tomorrow.  What's more, all you did was use the system.

Is there the potential for bad indexes?  Yes.  Would you still be able to create your own indexes?  Yes.  If enabled, would this be entirely maintenance free?  No.  I don't imagine it to be entirely maintenance free.  Take the auto-update on your statistics, for example.  Though you may have that enabled, you still have to do manual statistics updates on occasion.  I envision the same kind of behavior in the future.

Conclusion

There you have it.  The tools keep getting better and better – especially when the next-generation tool is entirely fictional.  There is no such tool; it was completely derived from imagination.  Would it be nice to have?  Sure.  I envision the DTA being many magnitudes better if it were re-created to be similar to a product by Quest called TOAD.  I liked the feature in TOAD from many moons ago that allowed me to input a query and see it refactored 100 different ways with varying performance gains.  That would be extremely useful for many database professionals.

If you would like to see a genetic algorithm at work in SQL Server, check out this blog by Gail Shaw.

Happy April Fools.
