Database Drops in SQL 2012

In the previous article on this topic (which can be read here), I discussed the problem of having a database get dropped and the need to find out who dropped the database and when they dropped it.  In that article, I demonstrated how the information (at least some of it) could be found through the use of Extended Events.

What I forgot to mention is that the method I shared was for SQL Server 2014. While the events do exist for SQL Server 2012, there is a minor modification that needs to be made in order to avoid the dreaded error message that says something like:

Msg 25713, Level 16, State 23, Line 1
The value specified for event attribute or predicate source, “object_type”, event, “object_created”, is invalid.

I am sure many of us would rather not have to deal with such a terrible thing as an error when we want to do something that should just work. Well, here is the fix for that error if you tried to deploy that XE Session to a previous version (such as 2012).
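Here is a sketch of the fix (the key value shown is illustrative only; confirm it against sys.dm_xe_map_values on your own instance):

-- SQL 2014 accepts the friendly map_value in the session predicate:
--   WHERE (object_type = 'DATABASE')
-- SQL 2012 requires the numeric map_key instead. Find the key first:
SELECT mv.map_key, mv.map_value
FROM sys.dm_xe_map_values mv
WHERE mv.name = 'object_type'
    AND mv.map_value = 'DATABASE';

-- Then use the numeric key in the predicate on 2012, e.g.:
--   WHERE (object_type = (16964))  -- illustrative; use the key returned above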

Do you see that change? Instead of using the map_value in 2012, one must use the map_key. This was a good change in 2014, allowing us to use human-friendly terms instead of needing to look up the map_key for everything like in 2012.

In following the theme from the previous article, here is the rest of the setup for testing this XEvent session to see how it would trap that data and how to parse the data from the session.
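The test itself is nothing fancy; create and drop a throwaway database so the session has something to trap (the database name is arbitrary):

CREATE DATABASE AuditDropTest;
GO
DROP DATABASE AuditDropTest;
GO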

Enjoy!

Nice and simple for a test. This should be enough to have trapped some data for our XEvent session.  Let’s parse the data and take a look.
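Something like the following will shred the event file into rows (a sketch; the output path is an assumption and should match your session definition, and the field names can be confirmed via sys.dm_xe_object_columns):

SELECT x.event_data.value('(event/@name)[1]', 'varchar(50)') AS event_name
    , x.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time
    , x.event_data.value('(event/data[@name="database_name"]/value)[1]', 'nvarchar(128)') AS database_name
    , x.event_data.value('(event/data[@name="ddl_phase"]/text)[1]', 'varchar(20)') AS ddl_phase
    , x.event_data.value('(event/action[@name="nt_username"]/value)[1]', 'nvarchar(128)') AS nt_username
FROM (
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\AuditDatabaseDrop*.xel', NULL, NULL, NULL)
    ) x
ORDER BY event_time;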

With the XEvent session started, and a database that has been created and dropped, this last query will produce some data to help track who has been adding or removing databases from your SQL Instance.  If you have run the example code, you should see data very similar to what is shown in this next image.

[Image: xe_results (query output from the XEvent session)]

In the attached image, I have separated the pair of statements for the DB create from the pair of statements for the DB drop (recall that I mentioned each will fire twice).  I hope this serves you well in your attempts to rein in control of your environment and discover who may be creating rogue databases or even dropping the databases that are essential to the business.

Database In Recovery Update

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: May 18, 2015

Many moons ago, I published a post that contained a script to aid in easing the anxiety that comes when a database is “In Recovery”. When I pulled that script out to try to use it on a SQL 2012 box, I got a nasty error.  Thanks to that nasty error, I have updated the script to work on both SQL 2012 and SQL 2014.

If you would like to first read the previous post, I invite you to click this link.

Here is the version of the script that will work for SQL 2012 and 2014.
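As a rough sketch of the approach (not the original script), the recovery progress messages can be pulled from the error log; note the N prefixes on the search strings, since xp_readerrorlog is picky about parameter types on the newer versions:

DECLARE @ErrorLog TABLE (
    LogDate DATETIME
    , ProcessInfo VARCHAR(64)
    , LogText VARCHAR(MAX)
    );

-- Parameters: log number (0 = current), log type (1 = SQL error log), search strings
INSERT INTO @ErrorLog (LogDate, ProcessInfo, LogText)
EXEC sys.xp_readerrorlog 0, 1, N'Recovery of database', N'', NULL, NULL;

SELECT LogDate
    , LogText
FROM @ErrorLog
ORDER BY LogDate DESC;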


Extended Events and Data Types

Comments: No Comments
Published on: April 14, 2015

[Image: TSQL2sDay150x150 (T-SQL Tuesday logo)]

Today is another one of those installments in the long-running TSQL party held monthly (a.k.a. TSQL2sDay).

This month we have an open invitation from Mike Donnelly (blog | twitter), asking us to talk about something new we have learned and then to teach about it. You can read the invitation in Mike’s own words on his blog.

Coincidentally, the topic is pretty straightforward and easy on the one hand, while somewhat difficult on the other.  Mike said: “The topic this month is straight forward, but very open ended. You must learn something new and then write a blog post explaining it.” I find the topic difficult because I will usually blog about a topic when I have learned something new. On the other hand, sharing new stuff is pretty straightforward and enjoyable.  Enter the brain split!

So, what have I learned recently?

Quite a bit.  But what would I really like to share on this occasion?

For today, I would like to share more information about extended events.  XEvents are great.  There is a wealth of information to be garnered from XEvents.  Furthermore, XEvents provide a great opportunity to keep learning.

While researching some events to help troubleshoot a specific issue, it dawned on me that there was some info that I had only looked at when I was looking at specific events.  I started wondering how much of that info was out there.  So here I will share some of this information that is available to you via queries within SQL Server.  Much of this info is attainable through the re-purposing of some scripts I shared previously – here.

Custom Data Types

This wasn’t too much of a surprise because I had seen them throughout and taken advantage of the custom data types to get better information.  But I might consider these custom data types to be more of the EAV model coming through than true custom data types.  One can expose the custom data types through an evaluation of the data in the sys.dm_xe_map_values DMV.  Let’s take a look at a script that would lay the groundwork to see these data types.
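Here is a sketch of such a script (not necessarily the original); it simply matches each event column’s type to the entries in that DMV:

SELECT DISTINCT oc.object_name AS event_name
    , oc.name AS column_name
    , oc.type_name AS custom_data_type
FROM sys.dm_xe_object_columns oc
WHERE EXISTS (
        SELECT 1
        FROM sys.dm_xe_map_values mv
        WHERE mv.name = oc.type_name
        )
ORDER BY oc.type_name
    , oc.object_name;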

Evaluating this data, one will see that in addition to the “standard” datatypes such as integer, there will be a “wait_types” data type.  This data type will map to all of the wait types available through extended events.  Additionally, the event that is associated with each of these custom data types is exposed through this query.  When getting ready to use an extended event, knowing the kinds of data that will be exposed through a data point in the session will make the session data more usable.  Knowing there is a custom data type (yes, it is really just a key value pair) can be extremely helpful.

Collection Flags

Many of the available events have “customizable” collection flags exposed.  Understanding that these collection flags can be on or off is essential to saving some hair.  Not all data is automatically collected for all events.  Some prime examples of such events that do not automatically collect certain pieces of data are sp_statement_completed and object_created.  The nice thing about these flags is that they have a value of “customizable” in the column_type field.  Another good thing with these flags is that the description field gives a little documentation on what the behavior should be for the “on” and “off” states.

There is a good reason that some of those may be off by default.  The addition of this information may cause an additional load or may be information overload.  It is up to the consumer to determine if the data is going to be of significant importance.  Once determined, enable or disable the flag as appropriate.
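A quick query exposes those flags along with their defaults and descriptions (a sketch):

SELECT oc.object_name AS event_name
    , oc.name AS flag_name
    , oc.column_value AS default_value
    , oc.description
FROM sys.dm_xe_object_columns oc
WHERE oc.column_type = 'customizable'
ORDER BY oc.object_name
    , oc.name;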

These queries provide a good amount of information about the extent of custom data types as well as the collection flags that may be available to use when creating event sessions in SQL Server.  Understanding that this data and these options are there is important to capturing better event info.

Audit who Dropped the Database

Categories: News, Professional, Scripts, SSC
Comments: 7 Comments
Published on: April 7, 2015

In the first article on this topic (which can be read here), I discussed the problem of having a database get dropped and the need to find out who dropped the database and when they dropped it.  In that article, I demonstrated how the information (at least some of it) could be found through querying the default trace.  It is a good enough solution if you have not disabled the default trace.

On the other hand, what if you need to do something more flexible?  What if you wanted to track this down through a more robust tool such as extended events?  XEvents has what you need to be able to properly track these types of events.  In this article, I share how to use Extended Events to capture this kind of audit data.

There are a few things to note with this event session.  First is that I am trapping two separate events: sqlserver.object_deleted and sqlserver.object_created.  The next important note is that I have to enable a flag on both events to trap the database name.  That is done like this: SET collect_database_name = (1).  The last note is something that should be noticed after enabling the session and performing a couple of trials.  The events I am using will fire twice for every DROP or CREATE operation.  This happens due to the ddl_phase for each.  There is one event fired for the start of the operation and then another when it commits or hits a rollback.  Because of this, I am outputting the ddl_phase in my query to read from the session data.  Let’s run a quick test and see how this data can be queried.

Also of importance is to note the file path for the output file. If the path does not exist or you do not have permissions to the directory, you will get an error message and the session will not be created.
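Now for a quick test (the database name is arbitrary):

CREATE DATABASE DropMe;
GO
DROP DATABASE DropMe;
GO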

Nice and simple for a test. This should be enough to have trapped some data for our XEvent session.  Let’s parse the data and take a look.
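This sort of query will shred the event file into rows (a sketch; the path matches the session sketch above, and the field names can be confirmed via sys.dm_xe_object_columns):

SELECT x.event_data.value('(event/@name)[1]', 'varchar(50)') AS event_name
    , x.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time
    , x.event_data.value('(event/data[@name="database_name"]/value)[1]', 'nvarchar(128)') AS database_name
    , x.event_data.value('(event/data[@name="ddl_phase"]/text)[1]', 'varchar(20)') AS ddl_phase
    , x.event_data.value('(event/action[@name="nt_username"]/value)[1]', 'nvarchar(128)') AS nt_username
FROM (
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\AuditDatabaseDrop*.xel', NULL, NULL, NULL)
    ) x
ORDER BY event_time;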

With the XEvent session started, and a database that has been created and dropped, this last query will produce some data to help track who has been adding or removing databases from your SQL Instance.  If you have run the example code, you should see data very similar to what is shown in this next image.

[Image: xe_results (query output from the XEvent session)]

In the attached image, I have separated the pair of statements for the DB create from the pair of statements for the DB drop (recall that I mentioned each will fire twice).  I hope this serves you well in your attempts to rein in control of your environment and discover who may be creating rogue databases or even dropping the databases that are essential to the business.  Enjoy!!

Audit Schema Change Report

Comments: 4 Comments
Published on: March 11, 2015

In a recent article on SSG, I discussed how to use Extended Events to function in a Profiler like fashion.  You can read about that here.  I recommend reading that article first because it helps to lay some of the groundwork for this article.

Within Management Studio, from a right-click context menu, there is an option to run a report called “Schema Changes History”.  The name of this report might indicate that there is some means to find when a change occurred in the schema and potentially what the change was, as well as who made the change.  It does say “Schema Changes History” after all.

If you run the report, you would be able to see a report similar to the following.

[Image: Change_Report (sample output of the Schema Changes History report)]


This looks like it could be a pretty useful report.  If I drill into the collapsed items, I can get better information.  But, how exactly is this report producing this kind of data?  I don’t have a database audit running to trap this information.  I also don’t have SSDT-BI installed, so I can’t export the report and check the report definition.

I could run a trace and refresh the report data and see what pops there.  I would really hate to use Profiler though, and I really don’t want to go through the hassle of creating a server side trace to figure out the source data definitions.  Then the lightbulb moment occurs.  I already have an XE session that can trace the SQL queries just like Profiler.  Why not use a better tool to do the job and satisfy my curiosity at the same time?

So, go read that article I wrote for SSG to get the XE session definition, in order to follow along a little better.  With that XEvent session running, I can refresh the report data and start to evaluate what is happening in order to produce the audit report.

By this point, you have probably thought to yourself that the source of the data must be coming from the default trace.  If you thought that, then you are correct.  It is a good presumption based on a knowledge of what kind of data is being tracked in the default trace.  But I really wanted to know for certain what the source of the data was.  As I perused the XEvent session data, I noticed several queries similar to the following in the results.

[Image: XE_Capture (session data viewed in the XE GUI)]

Yes, I am using the GUI (*shrug*) in this image to explore the data from the session.  That is just for display friendliness.  But, as you look through those queries you start to see a few patterns and some obvious signs that the source of the data is the default trace.  Here is a cleaner look at the code being used to get the data-set.

Now, I must confess that I altered it a bit to make sure it was doing what I thought.  Just a little thing like swapping the missing table variable definition for a quick insert into a temp table.  But the reality is, it is pretty much the source code of the report.  It is not the prettiest code, but it works.

Due to the lack of prettiness to the code and the seemingly overly complex means to get to the end result, I decided I wanted to “tune” it up a little bit.
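Here is my take on a simplified equivalent (a sketch, not the report’s verbatim source) that reads the same schema change events straight out of the default trace:

DECLARE @TracePath NVARCHAR(260);

SELECT @TracePath = t.path
FROM sys.traces t
WHERE t.is_default = 1;

SELECT tt.StartTime
    , te.name AS EventName
    , tt.DatabaseName
    , tt.ObjectName
    , tt.LoginName
    , tt.ApplicationName
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) tt
INNER JOIN sys.trace_events te
    ON tt.EventClass = te.trace_event_id
WHERE tt.EventClass IN (46, 47, 164) -- Object:Created, Object:Deleted, Object:Altered
    AND tt.EventSubClass = 1 -- commit phase only (verify via sys.trace_subclass_values)
ORDER BY tt.StartTime DESC;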

Now, I have something that is easier to read and maintain (my opinion) and works well.  It can also be used easily enough in an RDL should you wish to create an additional report different from the canned report in SSMS.  Or just continue to use it from within Management Studio and check the results without the overhead of the report presentation.

Last Execution of a Proc

SQL Server is full of good stuff.  There are plenty of features to be used.  Plenty of applications to support it.  And there is even plenty of metadata within SQL Server to help you better understand your data and the queries that are being run.

It just so happens that a couple of clients requested some information on this additional metadata.  Both of the clients wanted something just a little different from the other.  After a little bit of thought, it became pretty clear that what they wanted was definitely available within SQL Server.  The first client simply wanted to know the last time a procedure had been executed.  That is relatively easy enough – granted the procedure has been executed and granted the execution statistics have not been evicted.

The second client also wanted to know some information about the execution of a stored procedure.  But this time, they wanted to get the execution plan.  There are a few ways to trap an execution plan.  You could run a trace, an XE session, or you could execute the query and grab the plan.  But what if you didn’t already have an XE session or a trace running, and the client does not want you to execute the query to trap that plan?

Well, that is not a problem because SQL Server stores this stuff.  As long as the query and plan have not been evicted from the plan cache then you can get the metadata you desire.

Metadata to the Rescue

The metadata that we seek happens to be accessible through the use of several dynamic management views.  These are sometimes called dynamic management objects and are great to use to get to know your data and environment.  This time around, I am mostly interested in the following dynamic management objects: sys.dm_exec_query_stats, sys.dm_exec_cached_plans, sys.dm_exec_sql_text, and sys.dm_exec_query_plan.  I am using these objects because I want to trap execution stats along with the query SQL statement as well as the cached plan for that query.

So this is what I came up with to aid each of the clients in getting the data they sought.
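Here is a sketch along those lines (the procedure name in the filter is a placeholder):

SELECT OBJECT_NAME(st.objectid, st.dbid) AS proc_name
    , qs.last_execution_time
    , qs.execution_count
    , st.text AS proc_text
    , qp.query_plan
FROM sys.dm_exec_query_stats qs
INNER JOIN sys.dm_exec_cached_plans cp
    ON qs.plan_handle = cp.plan_handle
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE cp.objtype = 'Proc'
    AND OBJECT_NAME(st.objectid, st.dbid) = N'MyProc' -- placeholder procedure name
ORDER BY qs.last_execution_time DESC;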

Pretty simple and straightforward.  Now, I have this quick query to find the last time a proc was run, as well as a means to trap the execution plan for that query.  If I run that query, I would be able to see something like the following.

[Image: Capture (sample results showing last execution time and plan)]

I hope this is useful to you and hope you enjoy.

Minor ailments and Healthy SQL

Comments: 1 Comment
Published on: January 13, 2015

[Image: TSQL2sDay150x150 (T-SQL Tuesday logo)]

One of the things that DBAs love to do is keep their servers running and healthy.  A healthy server, after all, is your ticket to a stress-free day and a full night’s sleep.  Granted, this is not a guarantee, but it sure helps make life easier.

We are always looking for the big ticket items to keep the servers tuned and purring.  But from time to time, like with humans, it’s not Ebola that takes us down but the little sniffles and minor aches and pains that keep us from doing our best.

This month as a part of the TSQL Tuesday party, Robert Pearl (blog | twitter) has asked us to write about Healthy SQL and the things that make SQL go yum.

Nagging Cough

Like that tickle in the back of the throat, that makes you cough now and again, there is an occasional tickle that can bug SQL Server and cause some pain here and there.  I have written about this nagging cough in the past.

The problem with this ailment is that since it is not always a cold, or always present, it is oftentimes missed and frequently hard to diagnose.  I talked about this previously as one of those things that should be checked from time to time.  This is that nagging synonym* cough.  You can read about it here.

That’s great, we can take a little cough syrup, fix some synonyms and feel a lot better about SQL Server in the morning.

Aching Joints

Ever have one of those trick knees that decides to give out on you out of the blue?  You might be walking up the stairs (or down the stairs) and suddenly the knee is gone and you end up flat on your face.  And it could be great most of the time; it’s just that once in a while the knee decides to give a little twinge of pain, fold up, and drop you to the floor.

SQL Server has a similar problem with this next one.  On a frequent basis, I come across a bad knee within SQL Server in the form of a linked server.  Sure, I can hear you saying that linked servers are always bad.  Fair enough!  I have seen linked servers work wonderfully, but most of the time they cause pain.

The type of pain with a linked server I would like to share today is around more of an edge case (like standing on the edge of the stairs with your knee about to give way).

As a good DBA would want to do, you may want to restore your databases to a test environment on a routine basis to ensure the backups worked and that you have a functioning recovery point for your databases in the event of a disaster.  I was working on just that sort of thing for a client recently when I ran into this beautifully pain-filled knee.

This customer had not one, not two, not even three instances of this problem.  They had a glorious 492 instances of this problem.  The vendor for this client decided the best thing to do would be to replicate the production database to a separate database to help offload performance.  This other database happens to be on the same server (so no performance offload).  While that is not the most intelligent thing to do, it is not the end of this ailment.

In addition to the replication, there were 492 views that utilized linked servers to UNION ALL data between the two databases.  The data in each table between the two databases (in this case) is the same.  So we have a linked server to UNION ALL this data between two databases on the same server that is replicated.  Wowza!

Now, due to this linked server proliferation in the views to get to data the long way around, when restoring this database to a test environment there is a lot of cleanup work left to be done.  After all, the restore of the database is only a piece of the healthy backup puzzle.  You would want to test the data and the application against it.

Gratefully, this kind of cleanup can be made easy by doing a simple search and replace when querying sys.sql_modules to find any views or stored procedures that need to be updated to work in the test environment.  Here is an example of such a script to help fix that problem.
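Something along these lines works (a sketch; the linked server names are placeholders):

SELECT OBJECT_SCHEMA_NAME(sm.object_id) AS SchemaName
    , OBJECT_NAME(sm.object_id) AS ObjectName
    , REPLACE(REPLACE(sm.definition, N'CREATE VIEW', N'ALTER VIEW'), N'[PRODSERVER]', N'[TESTSERVER]') AS ModCode -- placeholder server names
FROM sys.sql_modules sm
WHERE sm.definition LIKE N'%[[]PRODSERVER]%';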

With the results from a query such as that, I can either change the views en masse or copy all of the results from the ModCode column and paste them into a new window.  Using a regex (to replace all GO statements with a GO plus a newline, as shown in the pic) or something like SQL Prompt to prettify the code would make it even more useful from there.

[Image: Capture (regex replace of GO statements in the query results)]

Of course, this only helps address the issue with the linked servers in the views.  The same problem would exist with stored procedures.  It is up to the DBA to know which ones need to be changed and which should not be changed.  Just understand that linked servers present yet another nagging symptom that keeps your server from being healthy, and, worse, they help keep you from being healthy (remember the lack of sleep they cause).

If you are interested, I also have this article to help you find those pesky linked servers before you start diving down the restore rabbit hole.  The article will help evaluate the scope of linked server use and has a query to help identify linked servers.

*A funny afterthought: both of these niggles that can decrease the health of your SQL Server have ties back to linked servers.  If you read the links, you will see what I mean.

Audit Database File Size Changes

[Image: dbsizechange]

Recently I shared an article on how to track the growths and shrinks that occur within database files.  I shared that article here; you should read it before proceeding with today’s article.

Near the end of that article I declared that it was a really good method to track that information within SQL Server 2008.  What if you happen to be on SQL 2012 or SQL 2014 (as of this writing)?  Will it work there too?

Does it Work Past 2008?

It is a good question.  Does that extended event session I shared work in SQL Server 2012 or 2014?  Let’s take a quick look.  I am skipping the setup scripts for that XE session in this article, so you will need to get them from the previous article in order to follow along in this segment.

Before we can investigate if this extended event session will work, let’s take a quick look to confirm that the session is running on the server.  We can do that with a query similar to the following.
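A sketch of that query (the session name is assumed from the previous article, so substitute your own):

SELECT ses.name AS session_name
    , CASE 
        WHEN xs.name IS NULL
            THEN 'Stopped'
        ELSE 'Running'
        END AS run_state
FROM sys.server_event_sessions ses
LEFT OUTER JOIN sys.dm_xe_sessions xs
    ON ses.name = xs.name
WHERE ses.name = N'DBFileSizeChange'; -- session name assumed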

Running that query will produce results very similar to the following.

[Image: SessionCheck (the session exists and is running)]

This is good news.  If you noticed, I am querying a couple of views to get this information.  In the server_event_sessions catalog view I can determine if the event exists.  When checking the dm_xe_sessions DMV, I can see if the session is running or not by whether or not the session exists in the view.  When the session is enabled and running, then the DMV will return a record for it. Otherwise, the DMV does not hold a record for the session.

With a session running, we can now validate if it is running properly by running any script that will cause the files to grow or shrink.  Recall that in the previous article, it was shown that any growth or shrink operation will cause an event to fire with this session.  Here is a sample of the query I am running.
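Here is the gist of it, run against the Sandbox2 database from the previous article (the logical file names are assumptions):

USE Sandbox2;
GO

-- Segment 1: current file sizes
SELECT name
    , size / 128 AS size_mb
FROM sys.database_files;

-- Segment 2: shrink the files to trigger size change events
DBCC SHRINKFILE (Sandbox2, 5); -- logical data file name assumed
DBCC SHRINKFILE (Sandbox2_log, 5); -- logical log file name assumed

-- Segment 3: file sizes after the shrink
SELECT name
    , size / 128 AS size_mb
FROM sys.database_files;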

That query has three distinct segments.  The first is just to check my file sizes.  The second segment performs my file shrink operations.  And the final segment checks the file sizes again.  Here is what the first and third segments would look like on the Sandbox2 database that I used in the previous article.

[Image: filesizes_vold (file sizes before and after the shrink)]

With the evidence that we have the session running and that there was indeed a file size change, let’s now check the event session data and confirm whether or not the session is working on this SQL 2014 server.
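Here is a sketch of such a query against a ring_buffer target (adjust the session name and target to match the session from the previous article):

SELECT n.value('(@name)[1]', 'varchar(50)') AS event_name
    , n.value('(@timestamp)[1]', 'datetime2') AS event_time
FROM (
    SELECT CAST(st.target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions s
    INNER JOIN sys.dm_xe_session_targets st
        ON s.address = st.event_session_address
    WHERE s.name = N'DBFileSizeChange' -- session name assumed
    ) AS tab
CROSS APPLY tab.target_data.nodes('RingBufferTarget/event') AS q(n);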

The preceding is the query I am using to query the session data.  Running that query will produce the following results.

[Image: nodata (empty result set)]

That’s right!  There is no session data despite the event having occurred and despite the session running currently.  We also know that this session works (we demonstrated it on SQL 2008).  So there must be a bug, something is broken.  Right?

What Now?

Since the extended event obviously no longer works, we are stuck with few options.  We could always try falling back to the default trace.  After all, I demonstrated that the default trace is already trapping information about file shrinks.  That was discussed in the prior article and here as well.

So, what if we tried to go and capture all of the same information from the default trace?  We could certainly try that.  Assuming that the default trace is still running on the server, this query could get us pretty close.
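A sketch of that attempt (the event class IDs cover the auto grow/shrink events plus the DBCC event):

DECLARE @TracePath NVARCHAR(260);

SELECT @TracePath = t.path
FROM sys.traces t
WHERE t.is_default = 1;

SELECT tt.StartTime
    , te.name AS EventName
    , tt.DatabaseName
    , tt.FileName
    , tt.IntegerData * 8 / 1024.0 AS size_change_mb -- IntegerData = change in 8KB pages
    , tt.TextData -- populated only for the DBCC event (116)
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) tt
INNER JOIN sys.trace_events te
    ON tt.EventClass = te.trace_event_id
WHERE tt.EventClass IN (92, 93, 94, 95, 116) -- Data/Log File Auto Grow, Data/Log File Auto Shrink, Audit DBCC Event
ORDER BY tt.StartTime DESC;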

And this does a fairly decent job of getting the info we seek.  Sadly, though, it does not trap all of the necessary information.  Only the DBCC event (event 116) traps the sql statement that triggered the event to be recorded in the default trace.  But for the most part it can be a decent swing at getting the information.  Without the sql statements tied to the event, I’d rather not use it because it really just shows me how many times the size changed, what time the event occurred, and the size of the change.

There has got to be some other way of getting this to work in extended events.  A good question to ask is “Why does the extended event no longer work?”

A little digging, and one might eventually find a document that can shed some light on the problem.  Reading this document, we can see why the event no longer works.  It has been deprecated.  What?  After one release, they decide to take away a critical piece of information?  How can that be?

Time to back up those findings with something a little more authoritative such as this.  Looking at this article, we see that indeed the event was deprecated.  But wait a minute, the event was not just deprecated, it was also replaced with a new event.  We are in business so let’s do some querying within event sessions.

Back in Business

We could have probably spared some time by checking the available events in SQL Server with this next query.  However, the events used in the previous event session still exist; if they did not, the session creation would have failed.  This can be a bit misleading, so it is good to have the information from Microsoft that the events have been deprecated and merged into a single event.
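A sketch of that check:

SELECT o.name AS event_name
    , o.description
FROM sys.dm_xe_objects o
WHERE o.object_type = 'event'
    AND o.name LIKE '%file_size%';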

This produces the desired results with the new event name specified in that Microsoft article.

[Image: 2012filesizeevent (query results showing the replacement event)]

Based on this information, a rewrite of the extended event session is possible and necessary.  We can update the extended event session that audits when a database file changes in size.  This will look something like the following session.
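Here is a sketch of the rewritten session (the session name, action list, and target are my assumptions):

CREATE EVENT SESSION [DBFileSizeChange2012] ON SERVER -- session name assumed
ADD EVENT sqlserver.database_file_size_change (
    ACTION (sqlserver.sql_text, sqlserver.username, sqlserver.client_hostname)
    )
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [DBFileSizeChange2012] ON SERVER STATE = START;
GO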

And since I happened to have that session also running at the same time as the shrinkfiles that were run previously in this article, I can go ahead and check to see if anything was captured.  To check the session data, I will use the following query.
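A sketch of that query (the field names for the new event are assumptions on my part; confirm them via sys.dm_xe_object_columns):

SELECT n.value('(@timestamp)[1]', 'datetime2') AS event_time
    , n.value('(data[@name="database_id"]/value)[1]', 'int') AS database_id
    , n.value('(data[@name="file_id"]/value)[1]', 'int') AS file_id
    , n.value('(data[@name="size_change_kb"]/value)[1]', 'bigint') AS size_change_kb
    , n.value('(data[@name="total_size_kb"]/value)[1]', 'bigint') AS total_size_kb
    , n.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
FROM (
    SELECT CAST(st.target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions s
    INNER JOIN sys.dm_xe_session_targets st
        ON s.address = st.event_session_address
    WHERE s.name = N'DBFileSizeChange2012'
    ) AS tab
CROSS APPLY tab.target_data.nodes('RingBufferTarget/event') AS q(n);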

In this new event for 2012 and beyond, there is different data that is captured.  This means that I have access to better information about what is happening to my database files with regards to the size changes (growths and shrinks).

Conclusion

If you just so happen to be running on SQL Server 2012 or later, you will need to change your event sessions that were tracking file changes.  It is a bit of an exercise to make the change and can be frustrating, but it is well worth it.  The improved data that can be captured is going to help better control and oversee the environment.

Auditing and Event SubClasses

Categories: News, Professional, Scripts, SSC
Comments: 2 Comments
Published on: May 28, 2014

A recent discussion got me thinking about auditing.  To be honest, it started with a complaint about some documentation that seemed overly light on the various fields related to auditing as it stands in SQL Server.

In talking to the person who raised the valid concern about the lack of good documentation, I was curious why he suddenly had so many questions about auditing and its functionality within SQL Server.  Reflecting on the answers, it seems that it made good sense and it all kind of fell into place with the whole Audit Life Cycle.  I hadn’t previously considered the Life Cycle, but it makes sense to map it out.  Here is a simple rendition of what an audit Life Cycle might entail.

[Image: AuditCycle_trans (a simple rendition of the audit life cycle)]

In order to audit anything, it is necessary to know what you really want to audit, why you want to audit it and how to achieve those goals within the tools given to you.  In that vein, it makes sense that one would need to study up on the topic to figure out what different things meant within the tool.

Of course, once you start collecting that data, then you also need to figure out how to measure it and then to determine if adjustments to the auditing plan need to be made.  In the end, it boils down to what is the data to be collected, what are you doing with that data and what does that data represent.

In our simple discussion, the data we were trying to understand related to the Event Subclass field in the sys.trace_subclass_values catalog view and in this Microsoft document (one of several).  The beauty of this field is that it is not just tied to auditing; you will also find it in Profiler, server-side traces, and Extended Events.

With so little information to help understand what the field values represent, maybe it is better to just turn to the data to help understand what the values might represent or how to interpret them.  To do this, we can query a few catalog views as in the following query.

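A sketch of such a query (not necessarily the original) looks like this:

SELECT tc.name AS CategoryName
    , te.name AS EventName
    , tsv.subclass_name
    , tsv.subclass_value
FROM sys.trace_categories tc
INNER JOIN sys.trace_events te
    ON tc.category_id = te.category_id
INNER JOIN sys.trace_subclass_values tsv
    ON te.trace_event_id = tsv.trace_event_id
--WHERE te.name LIKE '%Audit%' -- uncomment to focus on the audit events
ORDER BY tc.name
    , te.name
    , tsv.subclass_value;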

With the above query, I can filter down to just the Event Types that have Audit in the name.  Or I could add a different filter so I can try and better understand the different subclasses in a more focused effort.

I hope this helps in your efforts to provide a better auditing or “profiling” type of experience in your environment.


Can you partition a temporary table?

Reading that title, you might sit and wonder why you would ever want to partition a temporary table.  I too would wonder the same thing.  That notwithstanding, it is an interesting question that I wanted to investigate.

The investigation started with a fairly innocuous venture into showing some features that do apply to temp tables but are commonly mistaken as limitations (i.e., things that supposedly don’t work with temp tables).  To show this, I set off to create a script with reproducible results to demonstrate these features.  I have included all of those in the same script I will provide that demonstrates the answer to the partitioning question.

In fact, let’s just jump to that script now.

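Here is a condensed sketch of that script (#hubbabubba is the table named in this post; the remaining object names are stand-ins):

USE tempdb; -- the partition function and scheme must live where the temp tables live
GO

IF OBJECT_ID('tempdb..#hubbabubba') IS NOT NULL
    DROP TABLE #hubbabubba;
IF OBJECT_ID('tempdb..#PartitionedBubba') IS NOT NULL
    DROP TABLE #PartitionedBubba;
IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'TmpMonthPS')
    DROP PARTITION SCHEME TmpMonthPS;
IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'TmpMonthPF')
    DROP PARTITION FUNCTION TmpMonthPF;
GO

-- Myth busting: a temp table with a primary key, a default constraint, and indexes
CREATE TABLE #hubbabubba (
    BubbaID INT IDENTITY(1, 1) NOT NULL CONSTRAINT PK_hubbabubba PRIMARY KEY CLUSTERED
    , FlavorName VARCHAR(20) NOT NULL CONSTRAINT DF_hubbabubba_Flavor DEFAULT ('original')
    );

CREATE NONCLUSTERED INDEX IX_hubbabubba_Flavor ON #hubbabubba (FlavorName);
GO

-- A partition function and scheme with a boundary for each month of 2014
CREATE PARTITION FUNCTION TmpMonthPF (DATETIME)
AS RANGE RIGHT
FOR VALUES ('20140101', '20140201', '20140301', '20140401', '20140501', '20140601', '20140701', '20140801', '20140901', '20141001', '20141101', '20141201');
GO

CREATE PARTITION SCHEME TmpMonthPS AS PARTITION TmpMonthPF ALL TO ([PRIMARY]);
GO

-- The partitioned temp table: the clustered index is created after the table, on the scheme
CREATE TABLE #PartitionedBubba (
    EventDate DATETIME NOT NULL
    , Filler CHAR(100) NOT NULL DEFAULT 'x'
    );

CREATE CLUSTERED INDEX CIX_PartitionedBubba ON #PartitionedBubba (EventDate) ON TmpMonthPS (EventDate);
GO

-- Populate with pseudo-random dates spread across 2014
INSERT INTO #PartitionedBubba (EventDate)
SELECT TOP (10000) DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 365, '20140101')
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2;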

In the beginning (after dropping the objects if they exist), I start by creating a temp table that busts a couple of mythical limitations.  These mythical creatures are the claims that temp tables can’t have indexes or that they can’t have constraints.

In this script, I show that a temp table (#hubbabubba) can indeed have indexes created on it (clustered and nonclustered).  I also demonstrate the creation of two different kinds of constraints on the #hubbabubba table.  The two constraints are a primary key and a default constraint.  That stuff was easy!!

To figure out whether or not one could partition a temporary table, I needed to do more than simply create a “test” temp table.  I had to create a partition function and a partition scheme, and then tie that partition scheme to a clustered index that I created after table creation.  Really, these are all the same steps as creating partitioning on a standard (non-temporary) table.

With the partition function, scheme, and table created, it was time to populate the table with enough random data to make for a fair distribution.  You see, I created a partition boundary for each month of the year 2014.  To see partitioning in action, I wanted to see data in each of the partitions.

That brings us to the final piece of the whole script.  Kendra Little produced a script for viewing the distribution of data across partitions, so I used her script to demonstrate our data distribution.  If you run the entire script, including the data distribution segment at the end, you will see that there are 13 partitions, with each of the monthly partitions containing data.
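If you do not have her script handy, a quick peek at the catalog gives the row counts per partition (a minimal substitute):

SELECT p.partition_number
    , p.rows
FROM tempdb.sys.partitions p
WHERE p.object_id = OBJECT_ID('tempdb..#PartitionedBubba') -- stand-in table name from the sketch above
    AND p.index_id = 1
ORDER BY p.partition_number;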

The distribution of data into the different partitions demonstrates soundly that partitioning can not only be created on a temporary table, but that it can be used.  As for the secondary question today “Why would you do that?”, I still do not know.  The only reason that pops into my mind is that you would do it purely for demonstration purposes.  I can’t think of a production scenario where partitioning temporary data would be a benefit.  If you know of a use case, please let me know.
