Shredding Extended Event Actions

The other day I wrote about that torrid love/hate relationship DBAs tend to have with XML. In that same post, I promised a follow-up post about XML in Extended Events.

Well, today I present to you another opportunity for you to renew your love/hate relationship with XML.

In the previous post (which you can read here), I discussed just one facet of shredding the XML related to Extended Events. Today, we have “actions” to discuss. I am not going to get into what an “action” is today. I will save that for a near future post.

For the shredding of the actions, I will use the TreeHuggerCPU Event Session I used in the last article (for the sake of consistency). The session doesn’t necessarily need to be running. I will just pull the actions related to the session from the metadata accessible via the system catalog views.

Should the session be running (and in my case it is for demonstration purposes), I could open the session data and view it as XML and see something like the following:

[Image: action_xml]


I have highlighted two different types of nodes available in the XML data. In the previous article, I discussed the “data” nodes and I have highlighted that in red here. Today, we are talking actions, and those are highlighted in green this time around. It is the “action” nodes that we will be shredding via the following script.
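A minimal sketch of such a query, pulling the actions from the catalog views mentioned above (not the original script; the @SessionName and @EventName parameters are my illustrative additions, and either can be left NULL to widen the search):

-- Hedged sketch: list the actions tied to deployed event sessions via the
-- system catalog views. Filter by session and/or event, or neither.
DECLARE @SessionName sysname = N'TreeHuggerCPU'
      , @EventName   sysname = NULL;

SELECT es.name AS SessionName
     , ev.name AS EventName
     , ea.package AS ActionPackage
     , ea.name AS ActionName
FROM sys.server_event_sessions es
    INNER JOIN sys.server_event_session_events ev
        ON es.event_session_id = ev.event_session_id
    INNER JOIN sys.server_event_session_actions ea
        ON ev.event_session_id = ea.event_session_id
        AND ev.event_id = ea.event_id
WHERE (es.name = @SessionName OR @SessionName IS NULL)
    AND (ev.name = @EventName OR @EventName IS NULL)
ORDER BY SessionName, EventName, ActionName;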


With this script, I can search for all actions tied to an XE session, for the actions tied to one event within a single session, for all actions tied to a specific event across multiple sessions, or for all actions tied to any event in any session deployed to the server.

Combine this with the script from the previous post and suddenly all of that XML becomes several times easier to work with.

Tables, Columns and Defaults Oh My!

Comments: No Comments
Published on: August 19, 2015

Picture this if you will. You inherit a new database, whether through a job change, the discovery of a black box database server, or the acquisition of a new application.

No matter the cause, you are now responsible for maintaining that database, and you really need to become acquainted with it. Part of that acquaintance is learning how things are related and figuring out what data goes where. Not just what data, but what type of data.

One of the things I am frequently asked to figure out is which data type belongs with which column, and whether that same column exists in more than one table. It's nice when things are named consistently and you are able to tie things together by column name.

There are also times when that consistently named column needs its data type confirmed across each implementation. I have run into cases where the same column is defined differently in different tables.

So, what to do to tie this all together and get that intimate acquaintance with your database?

Luckily, I have a script that will help you figure out some of these connections.

First, the script.
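What follows is a minimal sketch of the kind of query involved; only the @ColName parameter is taken from the trial run below, and everything else is illustrative:

-- Hedged sketch: every table containing the given column name, with the
-- column's data type, size, and any default constraint attached to it.
DECLARE @ColName sysname = N'AccountNumber';

SELECT SCHEMA_NAME(t.schema_id) AS SchemaName
     , t.name AS TableName
     , c.name AS ColumnName
     , ty.name AS DataType
     , c.max_length AS MaxLengthBytes
     , c.[precision] AS ColPrecision
     , c.scale AS ColScale
     , dc.name AS DefaultName
     , dc.definition AS DefaultDefinition
FROM sys.tables t
    INNER JOIN sys.columns c
        ON t.object_id = c.object_id
    INNER JOIN sys.types ty
        ON c.user_type_id = ty.user_type_id
    LEFT OUTER JOIN sys.default_constraints dc
        ON c.default_object_id = dc.object_id
WHERE c.name = @ColName
ORDER BY SchemaName, TableName;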

Now, let’s give it a trial run. See if you can spot the difference by running that against the AdventureWorks database using the value “AccountNumber” for the @ColName parameter.

[Image: ColData_Queryresult]


Maybe there is a legitimate reason for that column to have a different data type specification than the other two. Maybe there isn't. One thing is certain: you will walk away with some questions to ask and, hopefully, a better understanding!

Table Size Script Update for 2015

Categories: News, Professional, Scripts, SSC
Comments: 2 Comments
Published on: August 3, 2015

As recently promised, I have an update to an old tried-and-true script for calculating the size of an object.

That promise was doled out in the previous article, where I discussed the internal table types. You can read the article here. Information from that article was used in this script, so it may be worth a read prior to taking a gander at this new revision.

And of course, previous releases can be found through the link chain here.

The script needed an update last time around due to the internal table types that had been updated in SQL Server. Those internal tables haven't really changed in the latest release (at least as far as calculating space is concerned). But, for one, I didn't like needing to update multiple places in the script. For another, it was difficult to know what exactly the internal table numbers represented. So I added that documentation to this script.

In addition to that little change, I also wanted to figure out if a table might have been spread across multiple filegroups. Due to this, I added in a delimited list of filegroups.

And the last minor update is that the script now has an option to exclude MS-shipped objects, include them, or look at just the MS-shipped objects.

All of this is included in the effort of trying to get a more complete view of the size of the objects within a database and to be able to do so without needing to loop through the objects.
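Condensed to its core, the approach looks something like this sketch (the @MSShipped switch and its values are my illustrative choices, not the script's actual parameters):

-- Condensed sketch (not the full script): reserved/used space per object,
-- a comma-delimited list of filegroups, and an MS-shipped filter.
-- @MSShipped: 0 = exclude MS-shipped objects, 1 = include all, 2 = MS-shipped only.
DECLARE @MSShipped tinyint = 0;

SELECT SCHEMA_NAME(o.schema_id) AS SchemaName
     , o.name AS ObjName
     , SUM(ps.reserved_page_count) * 8 / 1024.0 AS ReservedMB
     , SUM(ps.used_page_count) * 8 / 1024.0 AS UsedMB
     , STUFF((SELECT ',' + fg.name
                FROM sys.indexes i
                    INNER JOIN sys.filegroups fg
                        ON i.data_space_id = fg.data_space_id
                WHERE i.object_id = o.object_id
                GROUP BY fg.name
                FOR XML PATH('')), 1, 1, '') AS FileGroups -- partitioned tables would also need sys.destination_data_spaces
FROM sys.dm_db_partition_stats ps
    INNER JOIN sys.objects o
        ON ps.object_id = o.object_id
WHERE (@MSShipped = 1)
    OR (@MSShipped = 0 AND o.is_ms_shipped = 0)
    OR (@MSShipped = 2 AND o.is_ms_shipped = 1)
GROUP BY o.object_id, o.schema_id, o.name
ORDER BY ReservedMB DESC;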

Note: a quick update moves the index creation outside of the table creation rather than inline, since inline index creation is a feature not supported in older versions of SQL Server.

Internal Tables and Space Used

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: July 29, 2015

Some of the beauty of SQL Server is the complexity that it holds. Some of that complexity is right before our eyes, and some of it requires a little diving into the internals to figure out what is happening or even why it is complex.

I enjoy the complexity that is offered to us through this great technology. Because of some of the hidden complexity, I found myself in a rabbit hole recently trying to figure out what exactly was happening with how table size is being calculated by default.

I have written previously on how to find some of the information pertinent to table size. And sadly, thanks to recent rabbit hole excursions, I found that some of the information in those previous articles was wrong (accordingly, there is even a bit of an error in some documentation, but that is a different story – Connect item filed here).

One of the most common means to calculate size in a database is through the use of sp_spaceused. Over the recent editions, there have been a few minor updates to this stored procedure. But a common theme in every update has been to reference the sys.internal_tables system table to get some internal_types excluded from certain calculations.

I found myself wanting to know just exactly what these internal_types were. I searched BOL and Google. I found some references to some of the table types but they always seemed to just be in code. There was a short list in BOL that had some information, but it was far from complete. What to do? What to do?

What did I do?

Since I couldn't find all of the internal_types for the internal_tables, I was left with but one last thing to do. Well, actually, I resorted to asking around a bit first. I asked a group of MCMs and some people (e.g. Paul Randal – b|t) for a little help. After those inquiries, I found myself still short of a complete list (complete, that is, as far as the exclusion list for sp_spaceused is concerned).

My last resort, for the time being, was to begin testing. I tested various features and configurations until I was able to come up with a complete list (again, with regard to sp_spaceused). In addition to that complete list, I found a handful of additional internal table types.

Now, this investigation down the rabbit hole was not just for my own enjoyment. I have to admit it was rather frustrating; I ran into test failure after test failure trying to find the exact internal table types that were referenced in that blasted stored procedure.

I was asked by a friend (blog | twitter) why I was submitting myself to this kind of pain and why it was so important. Well, it's not just for my enjoyment. SPOILER ALERT: I have an update planned for the table space script, and it needs better information than the "because it says so in BOL" explanation I had given in previous releases of the script.

But I digress. That will all be better discussed in the next installment of the script. As for today, I want to share my findings of this expedition into the nooks and crannies of the database engine.

The script

I have hard-coded a few things in this script that you will possibly need to change. The most important is a hard-coded reference to the AdminDB database for the string splitter that I use. There is a note about that string splitter in the script. To use the same one I have employed (by Jeff Moden, with community contributions), you can download it from here.

You can see that I have included the types I have found to date for versions from 2005 through 2016. In addition, SQL Server 2016 has the same exclusions (for now) as 2012 and 2014. That is, at least within sp_spaceused. In 2016, sp_spaceused does make a call to an extended stored proc called sp_spaceused_remote_data_archive, and I do not yet know what is within that proc. You can bet, though, that it is related to the new Stretch feature.
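As a small taste of what the script reports (this is only a hedged sketch, without the version-by-version type documentation or the string splitter), the internal tables and their types can be listed alongside their space like so:

-- Minimal sketch: internal tables, their internal_type codes, and space used.
SELECT it.internal_type
     , it.internal_type_desc
     , it.name AS InternalTable
     , OBJECT_NAME(it.parent_id) AS ParentObject
     , SUM(ps.reserved_page_count) * 8 AS ReservedKB
FROM sys.internal_tables it
    LEFT OUTER JOIN sys.dm_db_partition_stats ps
        ON it.object_id = ps.object_id
GROUP BY it.internal_type, it.internal_type_desc, it.name, it.parent_id
ORDER BY it.internal_type, InternalTable;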

The Tease!

Stay tuned! The new release for the table space script will be coming to you on the other side of this short blogging break!

What’s the Size of that Index?

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: July 27, 2015

I have previously written about finding what the big space consumers are within the database. I even shared scripts in those articles on how to find those big consumers (read all about it here).

That is grand when it comes to quickly finding which objects happen to be consuming the greatest amount of space within the database. It really helps to quickly assess where some extra TLC may need to be spent. But what about more specific details? What if you wished to figure out the size of each of the indexes within that object?

Well, the solution is pretty simple. I have another script to help determine which indexes are the big space consumers within the specified table for the given database.


First, let’s see what that script is.
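Here is a minimal sketch of the approach (the schema and table names are illustrative AdventureWorks defaults):

-- Hedged sketch: per-index space for one table from sys.dm_db_partition_stats,
-- with a GROUPING SETS rollup so the index total can be compared back to the
-- table total from the table space script.
DECLARE @SchemaName sysname = N'Sales'
      , @TableName  sysname = N'SalesOrderDetail';

SELECT CASE WHEN GROUPING(i.name) = 1 THEN N'*** Table Total ***'
            ELSE ISNULL(i.name, N'HEAP')
       END AS IndexName
     , SUM(ps.reserved_page_count) * 8 AS ReservedKB
     , SUM(ps.used_page_count) * 8 AS UsedKB
FROM sys.dm_db_partition_stats ps
    INNER JOIN sys.indexes i
        ON ps.object_id = i.object_id
        AND ps.index_id = i.index_id
WHERE ps.object_id = OBJECT_ID(@SchemaName + N'.' + @TableName)
GROUP BY GROUPING SETS ((i.name), ())
ORDER BY GROUPING(i.name), ReservedKB DESC;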

As you can see, I am parsing a few different measures on space usage from the dynamic management view sys.dm_db_partition_stats. This will tell me information such as how much space is being reserved for each index as well as how much space is actually being used on each index. Then I use a grouping set to summarize that size information so I can then compare the total index size back to the size I would have seen from the table space script (discussed lightly earlier).

Why might I want to do something like this?

That is a really good question! The first obvious answer to a question such as this is the off chance that you discover (again, from the use of the previous script) that one of your terabyte tables is actually only 150GB of data, with the rest of that space consumption coming from all of the indexes on that table.

When I see that a table has more space allocated to indexes than to data, I might be curious which indexes are contributing to that consumption. In addition, I can scan the numbers much faster to see if there are multiple indexes that are exactly the same size. If there are, I might want to evaluate and dig a little deeper into the definitions of those indexes. Nobody would love to have 850GB of indexes on a table only to find that three or four of those indexes are exact duplicates.

From there, I might even decide to look at the usage stats for the indexes in question. Why go about it in this fashion? Well, on some databases it is much faster to run a few quick queries than to try and run a big query against the entire database to get all of the index usage stats. I am sure you are familiar with those times when trying to query physical and operational stats takes hours to complete.

But But But…

I can hear you thinking right now; another question has popped into your head. What about sp_spaceused? That old proc does work just fine for this as well. But there is something to be said for knowing what contributes to the size of an index and being able to work through how it is calculated. Additionally, with a minor tweak, this query can provide that detailed information very quickly for all objects in the database.

Tweaked query
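A hedged sketch of that minor tweak: drop the single-object filter and group by object as well, yielding per-index sizes for every user table in the database.

-- Minimal sketch: same measures as above, for all user objects at once.
SELECT OBJECT_SCHEMA_NAME(ps.object_id) AS SchemaName
     , OBJECT_NAME(ps.object_id) AS TableName
     , ISNULL(i.name, N'HEAP') AS IndexName
     , SUM(ps.reserved_page_count) * 8 AS ReservedKB
     , SUM(ps.used_page_count) * 8 AS UsedKB
FROM sys.dm_db_partition_stats ps
    INNER JOIN sys.indexes i
        ON ps.object_id = i.object_id
        AND ps.index_id = i.index_id
WHERE OBJECTPROPERTY(ps.object_id, 'IsMSShipped') = 0
GROUP BY ps.object_id, i.name
ORDER BY SchemaName, TableName, ReservedKB DESC;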


Database Settings Changes – Red Handed

Comments: 1 Comment
Published on: July 8, 2015

One of my pet peeves (and consequently a frequent topic of discussion) is finding that a database setting (or any setting, really) has changed without knowing about it. Worse yet is finding that the change has occurred and nobody claims any knowledge of it or of having done the deed.

This happened again recently: a database was set to single_user and suddenly performance in the database tanked. Change the database back to multi_user and the performance issues were magically resolved.

Fortunately there is a means to combat this. Well, sort of. The default trace in SQL Server does capture the event that occurs when the database is set to single_user or read_only. Unfortunately, all that is captured is that an Alter Database occurred. There is no direct means of mapping that event to the statement or setting that changed.

This inadequacy got me thinking: the default trace is looking at a specific set of "events," so why wouldn't that set of events be available within Extended Events? It only seems logical! So I decided to query the event catalog and, lo and behold, I found just the event I was seeking – object_altered. Combine this with a recently used predicate (object_type = 'DATABASE') and we are well on our way to having just the trap to catch the source of these database changes red-handed.

It is easy enough to create this particular session, but the event does not capture the "whodunnit" without a little extra prodding. So, I added a couple of actions to get that information – sqlserver.nt_username, sqlserver.server_principal_name, and sqlserver.client_hostname. Additionally, the event does not explicitly tell me what setting changed – just that some setting changed. For this, I added the sql_text action so I could correlate each event to the actual setting being changed. Then, to cap it all off, I made sure the predicate specified that we only care about database settings changes, as previously mentioned.
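A minimal sketch of such a session (the session name and file target path are my illustrative choices; adjust the path to a directory that exists):

-- Hedged sketch of the session described above.
CREATE EVENT SESSION [DBSettingChange] ON SERVER
ADD EVENT sqlserver.object_altered
(
    ACTION
    (
        sqlserver.nt_username
      , sqlserver.server_principal_name
      , sqlserver.client_hostname
      , sqlserver.sql_text
    )
    WHERE (object_type = 'DATABASE')
)
ADD TARGET package0.event_file
(SET filename = N'C:\Database\XE\DBSettingChange.xel')
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [DBSettingChange] ON SERVER STATE = START;
GO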

Running the session and then testing some settings changes should prove fruitful to capturing good info. Here are a few of the tests that I ran and the results of those tests (by querying the session data).
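For example, a couple of hedged test statements along those lines (AdventureWorks2014, as in the results below):

-- Illustrative tests: flip a database's settings so the session has changes to trap.
ALTER DATABASE AdventureWorks2014 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
ALTER DATABASE AdventureWorks2014 SET MULTI_USER;
GO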

Now to try and look at the results.
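A hedged sketch of querying the session data (the file path matches the illustrative session above):

-- Shred the event_file: when the change happened, who did it, from where, and
-- the statement (via the sql_text action) that changed the setting.
SELECT event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time
     , event_data.value('(event/action[@name="server_principal_name"]/value)[1]', 'varchar(128)') AS WhoDidIt
     , event_data.value('(event/action[@name="client_hostname"]/value)[1]', 'varchar(128)') AS FromWhere
     , event_data.value('(event/action[@name="sql_text"]/value)[1]', 'varchar(max)') AS TheStatement
FROM
(
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\DBSettingChange*.xel', NULL, NULL, NULL)
) AS tab
ORDER BY event_time;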

[Image: DB Change Data]


There you have it! I have just been caught red-handed changing my AdventureWorks2014 database to single_user and multi_user.

Bonus

For more ideas on settings and changes and so forth, Andy Yun (blog | twitter) has invited everyone to participate in TSQL2SDAY on this very topic: our experiences with "default settings" and what you might change them to! You can read about it here. I have another article coming up that will fit nicely with that theme. Let's just call this a preview, and maybe it can help you get those tsql2sday juices flowing.

Database Drops in SQL 2012

In the previous article on this topic (which can be read here), I discussed the problem of having a database get dropped and the need to find out who dropped the database and when they dropped it.  In that article, I demonstrated how the information (at least some of it) could be found through the use of Extended Events.

What I forgot to mention is the method I shared was for SQL Server 2014. While the events do exist for SQL Server 2012, there is a minor modification that needs to be made in order to avoid the dreaded error message that says something like:

Msg 25713, Level 16, State 23, Line 1
The value specified for event attribute or predicate source, “object_type”, event, “object_created”, is invalid.

I am sure many of us would rather not have to deal with such a terrible thing as an error when we want to do something that should just work. Well, here is the fix for that error if you tried to deploy that XE Session to a previous version (such as 2012).

Do you see that change? Instead of using the map_value, in 2012 one must use the map_key. This was a good change in 2014, allowing us to use human-friendly terms instead of needing to look up the map_key for everything as in 2012.
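To find the key to substitute into the predicate, a quick lookup against the maps catalog does the trick (a hedged sketch; the returned map_key is what the 2012 session definition must compare object_type against):

-- Look up the numeric map_key behind the friendly 'DATABASE' map_value.
-- In 2012 the session predicate must use that key, e.g. WHERE (object_type = <map_key>).
SELECT mv.name, mv.map_key, mv.map_value
FROM sys.dm_xe_map_values mv
WHERE mv.name = N'object_type'
    AND mv.map_value = N'DATABASE';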

In following the theme from the previous article, here is the rest of the setup for testing this XEvent session to see how it would trap that data and how to parse the data from the session.
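A minimal test harness, assuming a throwaway database name:

-- Create and then drop a database so the session has something to trap.
CREATE DATABASE XETestDB;
GO
DROP DATABASE XETestDB;
GO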

Enjoy!

Nice and simple for a test. This should be enough to have trapped some data for our XEvent session.  Let’s parse the data and take a look.

With the XEvent session started, and a database that has been created and dropped, this last query will produce some data to help track who has been adding or removing databases from your SQL Instance.  If you have run the example code, you should see data very similar to what is shown in this next image.

[Image: xe_results]

In the attached image, I have separated the pair of statements for the DB create from the pair of statements for the DB drop (recall that I mentioned each will fire twice). I hope this serves you well in your attempts to rein in control of your environment and discover who may be creating rogue databases or even dropping the databases that are essential to the business.

Database In Recovery Update

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: May 18, 2015

Many moons ago, I published a post that contained a script to aid in easing the anxiety that comes when a database is “In Recovery”. When I pulled that script out to try and use it on a SQL 2012 box, I got a nasty error.  Thanks to that nasty error, I have updated the script to now work on SQL 2012 and SQL 2014.

If you would like to first read the previous post, I invite you to click this link.

Here is the version of the script that will work for SQL 2012 and 2014.
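What follows is only a hedged sketch of the idea behind the script, not the full updated version: read the error log for the recovery progress messages that SQL Server writes while a database is in recovery.

-- Minimal sketch: pull the "Recovery of database ... complete" messages from
-- the current error log to gauge recovery progress.
DECLARE @ErrorLog TABLE
(
    LogDate datetime
  , ProcessInfo varchar(64)
  , LogText varchar(max)
);

INSERT INTO @ErrorLog (LogDate, ProcessInfo, LogText)
EXECUTE sys.xp_readerrorlog 0, 1, N'Recovery of database', NULL;

SELECT LogDate, LogText
FROM @ErrorLog
ORDER BY LogDate DESC;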


Audit who Dropped the Database

Categories: News, Professional, Scripts, SSC
Comments: 8 Comments
Published on: April 7, 2015

In the first article on this topic (which can be read here), I discussed the problem of having a database get dropped and the need to find out who dropped the database and when they dropped it.  In that article, I demonstrated how the information (at least some of it) could be found through querying the default trace.  It is a good enough solution if you have not disabled the default trace.

On the other hand, what if you need to do something more flexible?  What if you wanted to track this down through a more robust tool such as extended events?  XEvents has what you need to be able to properly track these types of events.  In this article, I share how to use Extended Events to capture this kind of audit data.

There are a few things to note with this event session.  First, I am trapping two separate events: sqlserver.object_deleted and sqlserver.object_created.  Next, I have to enable a flag on both events to trap the database name.  That is done like this: SET collect_database_name = (1).  Last is something you will notice after enabling the session and performing a couple of trials: the events I am using will fire twice for every DROP or CREATE operation.  This happens due to the ddl_phase for each – one event fires for the start of the operation and another when the operation commits or hits a rollback.  Because of this, I am outputting the ddl_phase in my query that reads the session data.  Let's run a quick test and see how this data can be queried.
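A minimal sketch of such a session (the session name and file path are illustrative, not the original):

-- Hedged sketch: trap database-level CREATE and DROP, including database name.
CREATE EVENT SESSION [AuditDBDrop] ON SERVER
ADD EVENT sqlserver.object_created
(
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.client_hostname)
    WHERE (object_type = 'DATABASE')
),
ADD EVENT sqlserver.object_deleted
(
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.client_hostname)
    WHERE (object_type = 'DATABASE')
)
ADD TARGET package0.event_file
(SET filename = N'C:\Database\XE\AuditDBDrop.xel');
GO
ALTER EVENT SESSION [AuditDBDrop] ON SERVER STATE = START;
GO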

Also of importance is to note the file path for the output file. If the path does not exist or you do not have permissions to the directory, you will get an error message and the session will not create.

Nice and simple for a test. This should be enough to have trapped some data for our XEvent session.  Let’s parse the data and take a look.
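A hedged sketch of parsing the session data (the file path matches the illustrative session above):

-- Shred the captured events, including ddl_phase so both the Begin and
-- Commit/Rollback rows for each CREATE or DROP are visible.
SELECT event_data.value('(event/@name)[1]', 'varchar(50)') AS event_name
     , event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time
     , event_data.value('(event/data[@name="database_name"]/value)[1]', 'varchar(128)') AS database_name
     , event_data.value('(event/data[@name="ddl_phase"]/text)[1]', 'varchar(50)') AS ddl_phase
     , event_data.value('(event/action[@name="server_principal_name"]/value)[1]', 'varchar(128)') AS WhoDidIt
FROM
(
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\AuditDBDrop*.xel', NULL, NULL, NULL)
) AS tab
ORDER BY event_time;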

With the XEvent session started, and a database that has been created and dropped, this last query will produce some data to help track who has been adding or removing databases from your SQL Instance.  If you have run the example code, you should see data very similar to what is shown in this next image.

[Image: xe_results]

In the attached image, I have separated the pair of statements for the DB create from the pair of statements for the DB drop (recall that I mentioned each will fire twice).  I hope this serves you well in your attempts to rein in control of your environment and discover who may be creating rogue databases or even dropping the databases that are essential to the business.  Enjoy!!

Last Execution of a Proc

SQL Server is full of good stuff.  There are plenty of features to be used, plenty of applications to support it, and even plenty of metadata within SQL Server to help you better understand your data and the queries that are being run.

It just so happens that a couple of clients requested some of this additional metadata, and each wanted something a little different from the other.  After a little bit of thought, it became pretty clear that what they wanted was definitely available within SQL Server.  The first client simply wanted to know the last time a procedure had been executed.  That is relatively easy – provided the procedure has been executed and the execution statistics have not been evicted.

The second client also wanted to know some information about the execution of a stored procedure, but this time they wanted the execution plan.  There are a few ways to trap an execution plan: you could run a trace, run an XE session, or execute the query and grab the plan.  But what if you didn't already have an XE session or a trace running, and the client does not want you to execute the query just to trap that plan?

Well, that is not a problem, because SQL Server stores this stuff.  As long as the query and plan have not been evicted from the plan cache, you can get the metadata you desire.

Metadata to the Rescue

The metadata that we seek happens to be accessible through the use of several dynamic management views.  These are sometimes called dynamic management objects and are great for getting to know your data and environment.  This time around, I am mostly interested in the following dynamic management objects: sys.dm_exec_query_stats, sys.dm_exec_cached_plans, sys.dm_exec_sql_text, and sys.dm_exec_query_plan.  I am using these objects because I want to trap execution stats along with the query SQL statement as well as the cached plan for that query.

So this is what I came up with to aid each of the clients in getting the data they sought.
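Here is a minimal sketch along those lines (the @ProcName value is an illustrative AdventureWorks proc; results exist only while the plan remains in cache):

-- Hedged sketch: last execution time, statement text, and cached plan for a
-- given proc, pulled from the plan cache DMOs named above.
DECLARE @ProcName sysname = N'uspGetBillOfMaterials';

SELECT OBJECT_NAME(st.objectid, st.dbid) AS ProcName
     , qs.last_execution_time
     , qs.execution_count
     , st.text AS QueryText
     , qp.query_plan
FROM sys.dm_exec_query_stats qs
    INNER JOIN sys.dm_exec_cached_plans cp
        ON qs.plan_handle = cp.plan_handle
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE cp.objtype = N'Proc'
    AND OBJECT_NAME(st.objectid, st.dbid) = @ProcName;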

Pretty simple and straightforward.  Now I have a quick query to find the last time a proc was run, as well as a means to trap the execution plan for that query.  If I run that query, I am able to see something like the following.

[Image: Capture]

I hope this is useful to you and hope you enjoy.
