Ghosts – an eXtrasensory Experience

This is the last article in a mini-series diving into the existence of ghosts and how to find them within your database.

So far this has been a fun and rewarding dive into Elysium to see and chat with these entities.  In the previous articles, we unearthed some means of seeing these things manifest themselves.  You can take a look at those previous articles here.

For this article, I had planned to discuss another undocumented method to look into ghost records and their existence, based on what was said on an MSDN blog.  But after a lot of research, testing, and finally reaching out to Paul Randal, I determined that it won't work.  So that idea was flushed all the way to Tartarus.

Let it be made very clear that DBCC DBTABLE does not offer a means to see the ghosts.  Paul and I agree that the other article that mentioned DBTABLE really should have been referring to DBCC PAGE instead.

Despite flushing the idea to Tartarus, it was not a fruitless dive.  It was just not useful for the purpose of showing ghosts via that DBCC command.  I still gained value from the dive!!

All of that said, the remainder of the plan still applies and it should be fun.

Really, at this point what is there that hasn't been done about the ghosts?  Well, if you are well tuned to these apparitions, you may have felt the urge to explore them with Extended Events – sometimes called XE for short.

As has been done in the past, before we board Charon’s boat to cross the River Styx to Hades to find these ghosts in Elysium, one really needs to run the setup outlined here.

With the framework in place, you are now ready to explore with XE.

Look at that! There are several possible events that could help us track these ghosts.  Or at the least we could get to know how these ghosts are handled deep down in the confines of Hades, err I mean the database engine.

Ghost_XE

 

From these possible events, I opted to work with ghost_cleanup and ghost_cleanup_task_process_pages_for_db_packet.  The sessions I defined to trap our ghost tracks are as follows.
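For reference, here is a minimal sketch of how those two sessions might be defined.  The event and target names come from the sqlserver and package0 packages, but the histogram source (database_id) and the session options are assumptions, so adjust them to your own environment.

[codesyntax lang="tsql"]
-- Sketch only: histogram bucketing on database_id is an assumption.
CREATE EVENT SESSION [GhostHunt] ON SERVER
ADD EVENT sqlserver.ghost_cleanup
ADD TARGET package0.histogram
    ( SET filtering_event_name = N'sqlserver.ghost_cleanup'
        , source = N'database_id'
        , source_type = 0 )        -- 0 = bucket on an event field
WITH (STARTUP_STATE = OFF);
GO

CREATE EVENT SESSION [SoulSearch] ON SERVER
ADD EVENT sqlserver.ghost_cleanup_task_process_pages_for_db_packet
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE = OFF);
GO

ALTER EVENT SESSION [GhostHunt] ON SERVER STATE = START;
ALTER EVENT SESSION [SoulSearch] ON SERVER STATE = START;
[/codesyntax]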

You can see there are two sessions defined for this trip down the Styx.  Each session aptly named for our journey.  The first (GhostHunt) is defined to trap ghost_cleanup and sends that information to a histogram target.  The second (SoulSearch) is defined to use the other event, and is configured to send to the ring_buffer.  Since the second event has a “count” field defined as a part of the event, it will work fine to just send it to the ring buffer for later evaluation.

Once I have the traps, I mean event sessions, defined, I can now resume the test harness from the delete step as was done in the previous articles.  The following Delete is what I will use.
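For completeness, the delete looks something like the following sketch.  The table and predicate are placeholders standing in for the test harness from the earlier setup articles; the only important part is that it deletes enough rows to generate a healthy batch of ghosts.

[codesyntax lang="tsql"]
-- Placeholder table and predicate; substitute the objects from your own setup.
DELETE
FROM dbo.Halloween
WHERE HolidayDate = '20141030';
[/codesyntax]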

Before running that delete, though, I checked the event session data to confirm a starting baseline.  Prior to the delete, I had the following in my histogram target.

 

predelete_count

 

After running the delete, and checking my histogram again, I see the following results.

post_count

 

You can see from this that in addition to the 25 pre-existing ghosts, we had another 672 ghosts (666 of which were from the delete).

This is how I was able to investigate the GhostHunt Extended Event Histogram.
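For reference, a query along the lines of the following sketch can be used to read that histogram target.  The XML shredding assumes the default histogram output format and the database_id bucketing assumed in the session definition above.

[codesyntax lang="tsql"]
SELECT
      slot.value('(value)[1]', 'INT')     AS database_id
    , slot.value('(@count)[1]', 'BIGINT') AS ghost_cleanup_count
FROM
(
    SELECT CAST(st.target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_targets AS st
        ON s.address = st.event_session_address
    WHERE s.name = N'GhostHunt'
        AND st.target_name = N'histogram'
) AS tgt
CROSS APPLY tgt.target_data.nodes('HistogramTarget/Slot') AS x(slot);
[/codesyntax]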

But what about looking at the other event session?

Let’s look at how we can go and investigate that session first and then look at some of the output data.
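Here is a sketch of one way to shred the ring_buffer for the SoulSearch session.  The payload field names I reference (database_id and count) are assumptions; confirm them against sys.dm_xe_object_columns for the event before relying on them.

[codesyntax lang="tsql"]
-- Confirm the actual payload fields first, e.g.:
--   SELECT name FROM sys.dm_xe_object_columns
--   WHERE object_name = 'ghost_cleanup_task_process_pages_for_db_packet'
--     AND column_type = 'data';
SELECT
      evt.value('(@timestamp)[1]', 'DATETIME2')                AS event_time
    , evt.value('(data[@name="database_id"]/value)[1]', 'INT') AS database_id
    , evt.value('(data[@name="count"]/value)[1]', 'INT')       AS ghost_count
FROM
(
    SELECT CAST(st.target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_targets AS st
        ON s.address = st.event_session_address
    WHERE s.name = N'SoulSearch'
        AND st.target_name = N'ring_buffer'
) AS tgt
CROSS APPLY tgt.target_data.nodes('RingBufferTarget/event') AS x(evt);
[/codesyntax]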

ghostclean

 

Cool!  Querying the SoulSearch session has produced some information for various ghosts in the database.  Unlike the histogram session that shows how many ghosts have been cleaned, this session shows us some page ids that could contain some ghosts – in the present.  I can take page 1030111 for instance and examine the page with DBCC PAGE as follows.
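The DBCC PAGE call would look something like this sketch; the database name and file id are placeholders, and the page id is the one pulled from the session output.

[codesyntax lang="tsql"]
DBCC TRACEON(3604);                      -- send DBCC PAGE output to the client
DBCC PAGE('GhostDemoDB', 1, 1030111, 3); -- placeholder db name and file id; print option 3 dumps the rows
[/codesyntax]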

 

 

Look at that page and result!! We have found yet another poltergeist.

RIP

Once again we have been able to journey to the depths of the database engine and explore the ghosts that might be there.  This just happens to illustrate one possible means of investigating those ghosts.  That said, I would not necessarily run these types of event sessions on a persistent basis.  I would only run them if there seems to be an issue with the ghost cleanup process, or on a sandbox server if you have a strong penchant to learn.

Some good information can be learned.  It can also give a little insight into how much data is being deleted on a routine basis from your database.  As a stretch, you could even use something like this to get a better handle on the data you support.  Just be cautious with the configuration of the XE session and understand that there could be a negative impact on a very busy server.  And certainly proceed at your own risk.

Database Ghosts

Recently you may have seen my first article on this topic over at SQL Solutions Group.  If not, here is the link so you can read that article first before proceeding here.  This article is intended as the first of a two-part follow-up to that article.

Now that you have read that other article, we can recap it a little bit.

You will recall that in that article, I discussed the presence of ghosts in your database.  I also discussed that those ghosts are a good thing.  They have some benefits, one of which is a bit of a performance boost for some operations, like the rollback of a delete.

In that article I discussed one method with which you could see these ghost records.  In this article, I would like to share an alternate method for seeing these ghosts.

In order to explore this alternate means, let’s go ahead and follow the first few steps from the other article to get the setup complete so our second ghost hunting foray can begin.

Now with the setup complete, we should once again confirm that we have appropriate data available for the hunt.  Once again a simple query can suffice to show the data.
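The check can be as simple as the following sketch (the table name is a placeholder for whatever the setup script created).

[codesyntax lang="tsql"]
SELECT TOP (100) *
FROM dbo.Halloween;   -- placeholder for the table created in the setup script
[/codesyntax]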

Upon execution of the check script, we should see something similar to the following data-set.

datasample

 

Great, the data is confirmed and we should have a wonderful chance to find some ghosts once again.  In the previous attempt, we needed an additional plasma blaster in the form of a trace flag.  In this attempt we will hunt these ghosts without that tool and see what we can find.  Why attempt it in this way?  Well, it is simply because I'd rather not use a trace flag if it is not necessary.  If you recall, that trace flag had a couple of noted effects.  One of those effects was that it turned off the ghost cleanup process.  If I can avoid it, I'd rather leave the containment unit intact.

Now, due to that clerical error of putting Halloween on the wrong date, we need to follow the prescribed policy to delete records prior to creating the proper records.
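A sketch of that delete, wrapped in an explicit transaction, follows.  The table and predicate are placeholders; the important part is that the transaction is deliberately left open.

[codesyntax lang="tsql"]
BEGIN TRANSACTION;

DELETE
FROM dbo.Halloween                  -- placeholder table from the setup script
WHERE HolidayDate = '20141030';     -- placeholder predicate for the "wrong date" records

-- intentionally no COMMIT/ROLLBACK yet; we will resolve the transaction after the hunt
[/codesyntax]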

Notice that we issued the delete in a transaction this time around.  We are going to leave that transaction open for a bit while we hunt those ghosts.

The first step is to verify that some ghosts might be present.  To do that we should run a query such as the following.
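A sketch of such a query, using sys.dm_db_index_physical_stats, is shown below.  The table name is a placeholder, and DETAILED (or SAMPLED) mode is required for the ghost counters to be populated.

[codesyntax lang="tsql"]
SELECT
      OBJECT_NAME(ps.object_id) AS table_name
    , ps.index_id
    , ps.ghost_record_count
    , ps.version_ghost_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Halloween'), NULL, NULL, 'DETAILED') AS ps
WHERE ps.ghost_record_count > 0;
[/codesyntax]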

From that query we should see something like the following.

idxstats_ghosts

Once again we are on the right track.  We can further confirm the existence of these ghosts through a little more investigation and monitoring.  Let’s try the trick with the dblog function again.
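A sketch of that trick with the undocumented fn_dblog() function might look like the following; treat it as exploratory and unsupported.

[codesyntax lang="tsql"]
SELECT
      [Current LSN]
    , Operation
    , Context
    , AllocUnitName
    , [Page ID]
    , [Slot ID]
FROM fn_dblog(NULL, NULL)
WHERE Context = 'LCX_MARK_AS_GHOST';
[/codesyntax]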

With that query, we should see something like the following result set.

dblog_ghosts

 

Very cool.  We once again can see that these phantasms are in the database.  We have enough information that we can proceed on to the next stage.  We can pass the PageID into DBCC PAGE in order to investigate the ghosts on the page.  If we use the PageID that is circled in green in the preceding result set, we can get a better feel for these specters.
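As a sketch (database name and numbers are placeholders): the Page ID column from fn_dblog comes back as file:page in hex, so convert the page portion to decimal before handing it to DBCC PAGE.

[codesyntax lang="tsql"]
DBCC TRACEON(3604);
-- e.g. a Page ID of 0001:0000c8a0 is file 1, page 0xC8A0 = 51360 (placeholder values)
DBCC PAGE('GhostDemoDB', 1, 51360, 3);
[/codesyntax]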

And we may see results such as the following.

page_ghosts2

Recall that the log says this page has ghost records on it.  When we check the page with DBCC PAGE we can definitely see that there are ghosts on the page.  This is very cool.  Note that had we tried to check for ghost records via the PFS page, we would not have been able to see the ghost count the way we could when the trace flag was enabled.

Once again we have been able to display the existence of ghosts in the database.  In order to get these ghosts to move on from the database to their afterlife, we merely need to commit the transaction or roll back the transaction.

Stay tuned for the next article in this mini-series about ghosts.  Who knows, we may even have a bonus Halloween article since this is Halloween month.

Effects of sp_rename on Stored Procedures

There comes a time when mistakes are made.  Sometimes those mistakes can be as annoying as a spelling mistake during the creation of a stored procedure.  When a mistake such as that happens, we are given a few choices.  One could rename the stored procedure, drop and recreate it, or simply leave the mistake alone.

When choosing to rename the stored procedure, one may quickly reach for the stored procedure that can be readily used for renaming various objects.  That procedure was provided by Microsoft after all and is named sp_rename.  Reaching for that tool, however, might be a mistake.  Here is what is documented about the use of sp_rename to rename a stored procedure.  That documentation can be read at this link on MSDN.

We recommend you do not use this statement to rename stored procedures, triggers, user-defined functions, or views; instead, drop the object and re-create it with the new name.

And later in the same documentation, one can read the following.

Renaming a stored procedure, function, view, or trigger will not change the name of the corresponding object name in the definition column of the sys.sql_modules catalog view. Therefore, we recommend that sp_rename not be used to rename these object types. Instead, drop and re-create the object with its new name.

Now, a chief complaint against dropping and recreating the stored procedure, as recommended, is that the process can cause permissions issues.  I am less concerned about the permissions issue and see it more as a nuisance that is easily overcome with good documentation and a few quick script executions to restore the permissions.  Despite that, I think we might have a means to address the rename and permissions issue, which will be shared later in this article.

Using sp_rename

When using sp_rename, it would be good to understand what happens and what one might expect to see.  Let’s use the following script to create a stored procedure to step through an exercise to rename a stored procedure and evaluate the results.
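The original script is not reproduced here, but a minimal sketch of the same exercise follows; the procedure name and query are assumptions, while the pattern (create, check metadata, rename, check again) matches the walk-through.

[codesyntax lang="tsql"]
CREATE PROCEDURE dbo.RenameMe
AS
    SELECT name, object_id, create_date
    FROM sys.objects
    WHERE type = 'P';
GO

-- baseline: name in sys.objects vs. definition held in sys.sql_modules
SELECT o.name, m.definition
FROM sys.objects AS o
INNER JOIN sys.sql_modules AS m
    ON m.object_id = o.object_id
WHERE o.name = 'RenameMe';
GO

EXEC sp_rename 'dbo.RenameMe', 'RenamedProc';
GO

-- after the rename: sys.objects shows the new name, the definition still shows the old one
SELECT o.name, m.definition
FROM sys.objects AS o
INNER JOIN sys.sql_modules AS m
    ON m.object_id = o.object_id
WHERE o.name = 'RenamedProc';
GO
[/codesyntax]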

When I execute that series of batches, I will get an output that matches the following.

renameme

 

When looking at the results we can see that the use of sp_rename does indeed change the name of the stored procedure as it is represented via sys.objects and metadata.  We can also see that the definition of the stored procedure does not change as it is held within the metadata.

If I choose to check the definition through OBJECT_DEFINITION() instead of sys.sql_modules, the result is the same: sys.sql_modules calls OBJECT_DEFINITION() to produce the definition that is seen in the catalog view.

Well, that does pose a potential problem.  We see that the object definition is unchanged and may report the name as being different than what the object name truly is.  What happens if I execute the stored procedure?  Better yet, if I can execute the stored procedure and then capture the sql text associated to that plan, what would I see?

Yes!  The renamed stored procedure does indeed execute properly.  I even get three results back for that execution.  Better yet, I get an execution plan which I can pull a plan_hash from in order to evaluate the sql text associated to the plan.  In case you are wondering, the execution plan does contain the statement text of the procedure.  But for this case, I want to look at the entire definition associated to the plan rather than the text stored in the plan.  In this particular scenario, I only see the body of the procedure and not the create statement that is held in metadata.

plan

For this particular execution and plan, I can see a plan_hash of 0xE701AFB2D865FA71.  I can now take this and provide it to the following query to find the full proc definition from metadata.
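A sketch of that query, tying the query_plan_hash back to the batch text through sys.dm_exec_query_stats and sys.dm_exec_sql_text, might look like this.

[codesyntax lang="tsql"]
SELECT DISTINCT
      qs.query_plan_hash
    , st.text AS full_sql_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.query_plan_hash = 0xE701AFB2D865FA71;
[/codesyntax]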

And after executing that query, I can see results similar to the following.

execplancache_text

 

Now is that because in some way the query that was just run was also running OBJECT_DEFINITION()?  Let’s look at the execution plan for both OBJECT_DEFINITION() and the query that was just run.

obj_def_plan

 

Looking at the XML for that particular plan, we see only the XML supporting that plan.  There is no further function callout and the plan is extremely simple.

Now looking at the plan for the query involving the query_plan_hash we will see the following.

fngetsql

 

Looking at this graphical plan, we can see that we are calling FNGETSQL.  Looking at the XML for this plan, we can verify that FNGETSQL is the only function call used to retrieve the full sql text associated with this plan execution.  FNGETSQL is an internal function for SQL Server used to build internal tables that might be used by various DMOs.  You can read just a bit more about that here.

What now?

After all of that, it really looks pessimistic for sp_rename.  The procedure renames the object but does not properly handle the metadata and stored procedure definition.  So does that mean we are stuck with drop and create as the Microsoft documentation suggests?

If you have access to the full procedure definition you could issue an alter statement.  In the little example that I have been using, I could issue the following statement.
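Carrying on with the hypothetical names from the earlier sketch, that statement would be along these lines.

[codesyntax lang="tsql"]
ALTER PROCEDURE dbo.RenamedProc
AS
    SELECT name, object_id, create_date
    FROM sys.objects
    WHERE type = 'P';
GO
[/codesyntax]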

After executing that script, I could check sys.sql_modules once again and find a more desirable result.

And my results…

finallymatching

 

If you don't have the text to create the proc, you could use SSMS to script it out for you.  It is as simple as right-clicking the proc in question, selecting modify and then executing the script.  It should script out with the correct proc name (the beauty of SMO) and then you can get the metadata all up to snuff in your database.

Of course, if you prefer, you could just drop and recreate the procedure.  Then reapply all of the pertinent permissions.  That is pretty straightforward too.

SSIS: Value does not Fall Within the Expected Range

Categories: News, Professional, SSC
Published on: July 17, 2014

Every now and again I find myself working with SSIS for one reason or another.  Every now and again I find myself fighting with SSIS because it provides less than useful error messages.  This is one of those error messages that can be a pain until you understand what can be causing it.  That understanding, unfortunately, doesn't make the message any more useful or any less painful.

The error message "Value does not Fall Within the Expected Range" is tied to the Execute SQL Task that can be placed in the control flow.

Let’s assume we have such a task in our control flow as demonstrated by the attached image.  We’ll call the Execute SQL Task “Parse Client Code.”

Inside of this task, we will probably have something like what is seen in the next image.

parametermapping

Here we have attached parameters to the task and even assigned those parameters to “variable” names that we might use within the query.  In this case, we are just trying to parse a code from a filename that can be used in downstream processing.  The code might look like the following.

[codesyntax lang="tsql"]
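-- Illustrative sketch only; the variable and column names are assumptions.
-- This is the "before" version: the parameter is referenced by the name that
-- was entered on the Parameter Mapping page, which the OLE DB Execute SQL Task
-- does not support.
SELECT SUBSTRING(@FileName, 1, CHARINDEX('_', @FileName) - 1) AS ClientCode;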

[/codesyntax]

If I run that task at this point, I will get an error.  The error could be for any number of reasons based on the setup that I just showed.  The most common is that the Parameter Name should not really be a name at all; it should be the ordinal position at which the parameter is used in the script, and the ordinal position is 0-based.

The second issue is the data type that was selected in error.  The value being passed is a name, not a guid.  This means I should change the data type to the varchar type from the drop down that is available on the parameter screen under data type.

The next issue is the use of the variable name in the script itself.  One should use a ? instead of variable names.  So, this script should be fixed to look like the following.

[codesyntax lang="tsql"]
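-- Corrected sketch: the incoming value is referenced with ?, which is mapped to
-- ordinal position 0 (data type VARCHAR) on the Parameter Mapping page.
DECLARE @FileName VARCHAR(255);
SET @FileName = ?;

SELECT SUBSTRING(@FileName, 1, CHARINDEX('_', @FileName) - 1) AS ClientCode;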

[/codesyntax]

And the parameter screen should be made to look like the following.

paramfixed

These quick fixes can eliminate or even prevent what might possibly be a headache when dealing with SSIS.

Now, what if you need to have more than one parameter for your query?  No problem.  The same principles apply.  Just map your parameters according to proper data type and to the ordinal position that the parameter needs to be used within the query and you should be all set.

Is your Team Willing to Take Control?

TSQL2sDay150x150

The calendar tells us that once again we have reached the second Tuesday of the month.  In the SQL Community, this means a little party as many of you may already know.  This is the TSQL Tuesday Party.

This month represents the 56th installment of this party.  This institution was implemented by Adam Machanic (b|t) and is hosted by Dev Nambi (b|t) this month.

The topic chosen for the month is all about the art of being able to assume.

In many circles, to assume something carries a negative connotation.  It is less drastic when you have a bit of evidence to support the assumption; in that case, it would be closer to a presumption.  I will not be discussing either of those connotations.

What is this Art?

Before getting into this art that was mentioned, I want to share a little background story.

Let’s try to paint a picture of a common theme I have seen in environment after environment.  There are eight or nine different teams.  Among these teams you will find multiple teams to support different data environments.  These data environments could include a warehouse team, an Oracle team, and a SQL team.

As a member of the SQL team, you have the back-end databases that support the most critical application for your employer/client.  As a member of the SQL team, one of your responsibilities is to ingest data from the warehouse or from the Oracle environment.

Since this is a well oiled machine, you have standards defined for the ingestion, source data, and the destination.  Right here we could throw out a presumption (it is well founded) that the standards will be followed.

Another element to consider is the directive from management that the data being ingested is not to be altered by the SQL team to make the data conform to standards.  That responsibility lies squarely on the shoulder of the team providing the data.  Should bad data be provided, it should be sent back to the team providing it.

Following this mandate, you find that bad data is sent to the SQL team on a regular basis and you report it back to have the data, process, or both fixed.  The next time the data comes it appears clean.  Problem solved, right?  Then it happens again, and again, and yet again.

Now it is up to you.  Do you continue to just report that the data could not be imported yet again due to bad data?  Or do you now assume the responsibility and change your ingestion process to handle the most common data mistakes that you have seen?

I am in favor of assuming the responsibility.  Take the opportunity to make the ingestion process more robust.  Take the opportunity to add better error handling.  Take the opportunity to continue reporting back that there was bad data.  All of these things can be done in most cases to make the process more seamless and to have it perform better.

By assuming the responsibility to make the process more robust and to add better reporting/logging to your process, you can only help the other teams to make their process better too.

While many may condemn assumptions, I say proceed with your assumptions.  Assume more responsibility.  Assume better processes by making them better yourself.  If it means rocking the boat, go ahead – these are good assumptions.

If you don’t, you are applying the wrong form of assumption.  By not assuming the responsibility, you are assuming that somebody else will or that the process is good enough.  That is bad in the long run.  That would be the only burning “elephant in the room”.

elephants

From here, it is up to you.  How are you going to assume in your environment?

Using Synonyms to Extend SSIS

Published on: July 3, 2014

There are a million and one uses for synonyms.  There are at least that many uses for SSIS.  The reality is, not all of those uses are good for you, your data, or your database.

Recently I wrote an article about some good and some bad with synonyms.  You can read that article from my work blog site here.  Today, I just want to expand on that article a little bit.  I glossed over some things pretty quickly in that article that I thought might be fun to explain in a little more detail.

The Bad – a little closer look

First, let's take a little closer look at the bad.  More specifically, in the example I used, there was a vendor that created a synonym for everything.  I really do mean everything.  This is one case where using the word "literally" would be accurate.  On the client database, I could run a quick query such as the following and see over 7300 synonyms.

[codesyntax lang="tsql"]
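-- A sketch of the inventory query; the windowed COUNT(*) is the "derived"
-- column mentioned below, showing the overall total alongside each synonym.
SELECT
      s.name
    , s.base_object_name
    , COUNT(*) OVER() AS TotalSynonyms
FROM sys.synonyms AS s
ORDER BY s.name;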

[/codesyntax]

In the case of this client and this vendor, 7300+ synonyms is far too many.  It just led to mass confusion.  If you were to run that query, you might see something like the following image.

massSynonyms

I added a “derived” column to show the total count of synonyms and the record name as it relates to that total.  That is a crazy amount of synonyms.  That just makes me think that somebody got into the krazy kool-aid one day, was bored and gave life to a synonym beast.

The Good – With more detail

On the flip side, in the aforementioned article, I talked about synonyms as a means to tweak performance in SSIS.  Normally I would not tout a synonym as a performance tweak.  So I want to caution that the performance gains are specific to SSIS and a handful of those millions of uses for SSIS.

Let’s just begin with a little bit of background.  For that background, some pictures will be really handy.  So here goes.

SpiceSource

In the preceding image we see a very simple segment of a data flow.

The data source uses a sql command to fetch the data required for the data flow.  In the beginning, it is very straightforward.  You probably have some package lying around with something similar to this.

In the following image, we see what the SQL Command was for that data source circled in red in the previous image.

SQLWithout

In the next image we see a slight tweak to the query.  This time to include a reference to a table that is defined/obfuscated by a synonym.

SQLWith

At this point I can hear some of you saying, "Ok, I see what he is doing," while many others are wondering why I just made the query more complex than the previous example.

Well as luck would have it, this change serves a few purposes.  1) The data has been staged in a separate database.  That database has a different name in every environment (recall the aforementioned article).  So the synonym minimizes code changes when deploying the package.  2) The synonym allows us to confirm that there is data in the stage table and that the data matches a ClientCode in the destination table.  3) Lastly, the synonym reduces my dataset, which reduces memory requirements and also gets the data loaded faster (because it is smaller).
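For illustration, the synonym behind this pattern could be created along these lines (all names here are hypothetical); only the synonym definition changes between environments, while the package's SQL command stays the same.

[codesyntax lang="tsql"]
-- Hypothetical names: the staging database name differs per environment.
CREATE SYNONYM dbo.StageClient FOR StageDB_DEV.dbo.Client;
-- In QA or production, re-point the synonym (e.g. StageDB_PROD.dbo.Client)
-- and the SSIS package requires no code change at all.
[/codesyntax]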

In addition to this minor tweak, I can also do something like the following.

WithoutSynonym

In the preceding image, we see two datasources.  Each datasource is subsequently sorted and then eventually joined.  Much like the previous example, due to naming standards and an effort to try and minimize code changes during deployments, at least one datasource is pulling in too much data.  The data is filtered down due to the Join transformation, but this is not very efficient.

WithSynonym

Through the use of a synonym, the datasources can be reduced to a single datasource.  This will eliminate the need for the Sort transformations and Join transformation.  Removing those three transformations reduced memory requirements.  And like the previous example, since we can trim down the number of records, the data flow will run a little bit faster too.

BigSQLWith

As you can see, the code is simple.  It's not a super TSQL trick or anything to add a synonym into an existing query.  It just gets referenced like any other table.  Once again, in this case, the synonym is pointing to a table in a staging database.  That table has been loaded as a part of an ETL process and now needs to be manipulated a little bit through some other transformations and then inserted eventually into a "target" database.
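Since the image may be hard to read, here is a rough sketch of that kind of source query, using the hypothetical names from above.

[codesyntax lang="tsql"]
SELECT
      st.ClientCode
    , st.ClientName
    , st.LoadDate
FROM dbo.StageClient AS st            -- synonym pointing at the staging database
INNER JOIN dbo.Client AS c            -- destination table in the target database
    ON c.ClientCode = st.ClientCode;  -- only pull rows for clients that already exist
[/codesyntax]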

Conclusion

As with tuning stored procedures or any TSQL, a similar technique was used here: reducing the datasets to contain just the data that is needed for the flow.  To facilitate that reduction to just the essential data, I employed synonyms.

The reasons for using a synonym in this case were to: 1) restrict data to precisely what was needed, 2) ensure data being loaded was “constrained” by data in the destination table (e.g. only load for a specific client that does exist), and 3) minimize code changes during deployments.

When dealing with databases that serve the same purpose but follow some absurd naming standard that changes the name between environments, it can become cumbersome to maintain code during deployments.  This is particularly true when dealing with cross database joins or lookups.

Auditing and Event SubClasses

Categories: News, Professional, Scripts, SSC
Published on: May 28, 2014

A recent discussion got me to thinking about auditing.  To be honest, it got started with a complaint about some documentation that seemed overly light on the various fields related to auditing as it stands in SQL Server.

In talking to the person who raised the valid concern about the lack of good documentation, I was curious why he suddenly had so many questions about auditing and its functionality within SQL Server.  Reflecting on the answers, it seems that it made good sense and it all kind of fell into place with the whole Audit Life Cycle.  I hadn’t previously considered the Life Cycle, but it makes sense to map it out.  Here is a simple rendition of what an audit Life Cycle might entail.

 

 

AuditCycle_trans

 

 

In order to audit anything, it is necessary to know what you really want to audit, why you want to audit it and how to achieve those goals within the tools given to you.  In that vein, it makes sense that one would need to study up on the topic to figure out what different things meant within the tool.

Of course, once you start collecting that data, then you also need to figure out how to measure it and then to determine if adjustments to the auditing plan need to be made.  In the end, it boils down to what data is being collected, what you are doing with that data, and what that data represents.

In our simple discussion, the data trying to be understood was related to the Event Subclass field in this View (sys.trace_subclass_values) and in this Microsoft document (one of several).  The beauty of this field is that it is not just tied to Auditing, but you will also find it in Profiler, server side traces, and Extended Events.

With so little information to help understand what the field values represent, maybe it is better to just turn to the data to help understand what the values might represent or how to interpret them.  To do this, we can query a few catalog views as in the following query.

[codesyntax lang="tsql"]
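-- A sketch joining the trace catalog views to map subclass values to names;
-- the filter on event names containing "Audit" matches the description below.
SELECT
      tc.name AS category_name
    , te.name AS event_name
    , tsv.subclass_value
    , tsv.subclass_name
FROM sys.trace_events AS te
INNER JOIN sys.trace_categories AS tc
    ON te.category_id = tc.category_id
INNER JOIN sys.trace_subclass_values AS tsv
    ON tsv.trace_event_id = te.trace_event_id
WHERE te.name LIKE '%Audit%'
ORDER BY te.name, tsv.subclass_value;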

[/codesyntax]

With the above query, I can filter down to just the Event Types that have Audit in the name.  Or I could add a different filter so I can try and better understand the different subclasses in a more focused effort.

I hope this helps in your efforts to provide a better auditing or “profiling” type of experience in your environment.

 

Can you partition a temporary table?

Reading that title, you might sit and wonder why you would ever want to partition a temporary table.  I too would wonder the same thing.  That notwithstanding, it is an interesting question that I wanted to investigate.

The investigation started with a fairly innocuous venture into showing some features that do apply to temp tables but are commonly mistaken as limitations (i.e. assumed not to work with temp tables).  To show this, I set off to create a script with reproducible results to demonstrate these features.  I have included all of those in the same script I will provide that demonstrates the answer to the partitioning question.

In fact, let's just jump to that script now.

[codesyntax lang="tsql"]
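-- A condensed sketch of the idea rather than the original script.  Since a temp
-- table lives in tempdb, the partition function and scheme are created there
-- (an assumption about the approach); run this on a sandbox instance.
USE tempdb;
GO

-- one boundary per month of 2014 = 13 partitions with RANGE RIGHT
CREATE PARTITION FUNCTION pf_Months (DATETIME)
AS RANGE RIGHT FOR VALUES
    ('20140101','20140201','20140301','20140401','20140501','20140601',
     '20140701','20140801','20140901','20141001','20141101','20141201');
GO

CREATE PARTITION SCHEME ps_Months
AS PARTITION pf_Months ALL TO ([PRIMARY]);
GO

-- a temp table with a primary key and a default constraint (two "mythical" features)
CREATE TABLE #hubbabubba
(
      ID       INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED
    , SomeDate DATETIME NOT NULL DEFAULT (GETDATE())
    , Filler   CHAR(50) NOT NULL DEFAULT ('x')
);
GO

-- a nonclustered index: another feature often assumed impossible on temp tables
CREATE NONCLUSTERED INDEX IX_hubbabubba_Filler ON #hubbabubba (Filler);
GO

-- the clustered index is added after creation and is tied to the partition scheme
CREATE CLUSTERED INDEX CIX_hubbabubba_SomeDate
    ON #hubbabubba (SomeDate)
    ON ps_Months (SomeDate);
GO

-- pseudo-random dates spread across 2014
INSERT INTO #hubbabubba (SomeDate)
SELECT TOP (10000) DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 365, '20140101')
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;
GO

-- quick check of row distribution across the partitions
-- (the original post used Kendra Little's distribution script; this simple
-- sys.partitions check stands in for it)
SELECT p.partition_number, p.rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('tempdb..#hubbabubba')
  AND p.index_id = 1
ORDER BY p.partition_number;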

[/codesyntax]

In the beginning (after dropping objects if they exist), I start by creating a temp table that busts a couple of mythical limitations.  These mythical creatures are the beliefs that temp tables can't have indexes or that they can't have constraints.

In this script, I show that a temp table (#hubbabubba) can indeed have indexes created on it (clustered and nonclustered).  I also demonstrate the creation of two different kinds of constraints on the #hubbabubba table.  The two constraints are a primary key and a default constraint.  That stuff was easy!!

To figure out whether or not one could partition a temporary table, I needed to do more than simply create a "test" temp table.  I had to create a partitioning function and a partitioning scheme and then tie that partition scheme to a clustered index that I created after table creation.  Really, these are all the same steps as creating partitioning on a standard (non-temporary) table.

With the partition function, scheme, and table created, it was time to populate the table with enough random data to give a fair distribution.  You see, I created a partition boundary for each month of the year 2014.  To see partitioning in action, I wanted to see data in each of those partitions.

That brings us to the final piece of the whole script.  Kendra Little produced a script for viewing distribution of data across the partitions so I used her script to demonstrate our data distribution.  If you run the entire script including the data distribution segment at the end, you will see that there are 13 partitions with each of the monthly partitions containing data.

The distribution of data into the different partitions demonstrates soundly that partitioning can not only be created on a temporary table, but that it can be used.  As for the secondary question today “Why would you do that?”, I still do not know.  The only reason that pops into my mind is that you would do it purely for demonstration purposes.  I can’t think of a production scenario where partitioning temporary data would be a benefit.  If you know of a use case, please let me know.

Supported Compatibility Levels in SQL Server

Categories: News, Professional, Scripts, SSC
Published on: May 21, 2014

It has been well documented and is well known that SQL Server supports certain older versions of SQL Server in a compatibility mode.  This setting is something that can be configured on the database properties level.  You can quickly change to an older compatibility level or revert the change to a newer compatibility level.
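For reference, changing the level itself is a single statement (the database name and target level here are placeholders).

[codesyntax lang="tsql"]
ALTER DATABASE [SomeDatabase] SET COMPATIBILITY_LEVEL = 110;   -- 110 = SQL Server 2012 behavior
[/codesyntax]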

Changing the compatibility level is sometimes necessary.  Knowing what compatibility modes are available for each database is also somewhat necessary.  The common rule of thumb has been the current version and two prior versions.  But even with that, sometimes it is warm and fuzzy to be able to see the supported versions in some sort of format other than through the GUI for database properties.

Sure, one could go and check Books Online.  You can find that information there.  Or you could fire up this script and run with the output (as a guideline).
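The original script is not shown here, but a rough sketch that derives a guideline list from the instance's major version (using the "current plus two prior versions" rule of thumb) could look like this.

[codesyntax lang="tsql"]
DECLARE @MajorVersion INT =
    CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR(20)), 4) AS INT);

SELECT cl.CompatLevel, cl.VersionName
FROM (VALUES
          (80 , 'SQL Server 2000')
        , (90 , 'SQL Server 2005')
        , (100, 'SQL Server 2008 / 2008 R2')
        , (110, 'SQL Server 2012')
        , (120, 'SQL Server 2014')
     ) AS cl (CompatLevel, VersionName)
WHERE cl.CompatLevel BETWEEN (@MajorVersion - 2) * 10 AND @MajorVersion * 10;
[/codesyntax]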

 

This script will return results such as the following.

Picture0002

And if we wanted to see the results for a SQL Server 2014 installation, we would see the following.

Picture0003

The output is displayed in the same format you might see it if you were to use the Database Properties GUI.  That said, if you are using the GUI in SQL Server 2014, you might run into the following.

Picture0006

Notice the additional compatibility level for SQL 2005?  If you check the documentation, you will probably find that compatibility level 90 is not supported in SQL 2014.  In fact it says that if a database is in 90 compatibility, it will be upgraded to 100 automatically (SQL 2008).  You can find all of that and more here.

If you tried to select compatibility 90, you might end up with an error.  If you are on 2014 CTP2, you will probably be able to change the compat level without error.

Anyway, this is the message you might see when trying to change to compatibility 90.

Picture0005

They sometimes say that “seeing is believing.”  Well in this case, you may be seeing a compatibility level in the 2014 GUI that just isn’t valid.  Keep that in mind when using the GUI or trying to change compatibility modes.

New Backup Behavior in SQL 2014

Published on: April 1, 2014

As has been well publicized, today is the official Release To Manufacturing date for SQL Server 2014.  You can read more about all of that here.

Something that hasn't received much attention is a new feature that is a game changer.  I'm not referring to the advancements with the In-Memory OLTP (aka Hekaton).  The real game changer in my opinion is the way backups will be treated in 2014.

SQL Server 2014 brings the added functionality of encryption to the database backups.  This is a significant improvement to securing data at rest.  This functionality applies to databases that have been TDE enabled as well as those that are not TDE enabled.  This functionality also applies to backups that are compressed and backups that are not compressed.

The beauty of this functionality is that all backups will be encrypted now by default.  What this means is that you need not configure anything on your end to make it happen.  Despite it being enabled by default, you can change the encryption method should you choose.

Another interesting note with this new default behavior is that all of your database backups will fail out of the box.  You might ask why.  Well, there are some pre-requisites that must be met in order for the encrypted backup to succeed.

Here are those pre-reqs, followed by a sketch of what they might look like in script form.

  1. Create a Database Master Key for the master database.
  2. Create a certificate or asymmetric Key to use for backup encryption.
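A sketch of those pre-reqs in script form (with an explicitly encrypted backup thrown in) might look like the following; the certificate name, database name, and path are placeholders.

[codesyntax lang="tsql"]
USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Us3AStr0ng#Passphrase!';
GO
CREATE CERTIFICATE BackupEncryptCert
    WITH SUBJECT = 'Certificate for backup encryption';
GO
BACKUP DATABASE [SomeDatabase]
TO DISK = 'C:\Backups\SomeDatabase.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupEncryptCert);
[/codesyntax]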

If you have not created your DMK, your backups will fail and you will be none the wiser until you try to restore that backup.  That is really the way you want to conduct your duties as a DBA, right?  You guessed it, the backup shows that it succeeds yet it does nothing.

As you move forward with your SQL 2014 implementation, ensure you create those DMKs and ensure your backups are safe.

Oh and in case you haven’t noticed, pay attention to today’s date.

 
