Shredding Extended Event Actions

The other day I wrote about that torrid love/hate relationship DBAs tend to have with XML. In that same post, I promised I would have a follow-up post about XML in Extended Events.

Well, today I present to you another opportunity for you to renew your love/hate relationship with XML.

In the previous post (which you can read here), I discussed just one facet of shredding the XML related to Extended Events. Today, we have “actions” to discuss. I am not going to get into what an “action” is today. I will save that for a near future post.

For the shredding of the actions, I will use the TreeHuggerCPU Event Session I used in the last article (for the sake of consistency). The session doesn’t necessarily need to be running. I will just pull the actions related to the session from the metadata accessible via the system catalog views.

Should the session be running (and in my case it is for demonstration purposes), I could open the session data and view it as XML and see something like the following:

[Image: session data viewed as XML, with the "data" nodes highlighted in red and the "action" nodes highlighted in green]

 

I have highlighted two different types of nodes available in the XML data. In the previous article, I discussed the “data” nodes and I have highlighted that in red here. Today, we are talking actions, and those are highlighted in green this time around. It is the “action” nodes that we will be shredding via the following script.
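The original script is not reproduced here, but a minimal sketch of the idea might look like the following, assuming the TreeHuggerCPU session from the previous article (the varchar(max) aliases are deliberately generic, and a NULL parameter widens the search):

DECLARE @SessionName sysname = N'TreeHuggerCPU'  -- NULL for all sessions
      , @EventName   sysname = NULL;             -- NULL for all events

SELECT  es.name  AS SessionName
      , ese.name AS EventName
      , esa.name AS ActionName
        -- generate the XQuery snippet that shreds this action from the event XML
      , 'event_data.value(''(event/action[@name="' + esa.name
        + '"]/value)[1]'', ''varchar(max)'') AS [' + esa.name + ']' AS ActionParseStatement
FROM sys.server_event_sessions es
INNER JOIN sys.server_event_session_events ese
        ON es.event_session_id = ese.event_session_id
INNER JOIN sys.server_event_session_actions esa
        ON ese.event_session_id = esa.event_session_id
       AND ese.event_id = esa.event_id
WHERE es.name = ISNULL(@SessionName, es.name)
  AND ese.name = ISNULL(@EventName, ese.name);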

 

With this script, I can search for all actions tied to an XE session, for the actions tied to one event within a single XE session, for all actions tied to a specific event across multiple sessions, or for all actions tied to any event in any event session deployed to the server.

Combine this with the previous script and suddenly all of that XML just got several times easier to work with.

Shredding XML in XEvents

One of the biggest pains with Extended Events is the thing we love to hate – XML. XML is so foreign to many DBAs. It's not relational and often brings nightmares of parsing and performance issues.

Despite that, Extended Events takes advantage of XML much like we have seen in so many other areas of SQL Server. You might be familiar with execution plans, SSRS, SSIS, or maybe even the ring buffer. If you look hard enough, you will find XML within the database engine. Whether you love or hate it, you still have to deal with it. Today, I want to dive into a means of dealing with XML, as far as Extended Events is concerned.

Mad Scientist Lab

Let's head on over to the lab to dive into XML head first. As an example, I will be using a session I have previously written about – here.

If the session is already deployed – great. If not, you may need to create it in order to execute these scripts successfully. Note that I am starting the session, fetching some data, and then stopping the session. To see some data, you may want to wait a few cycles before stopping the event session.

The sole purpose is to let me take a look at the session data in XML format. Now that I have some data, I would see something like the following:

For today's lab, I just want to focus on the "data" node, saving the attributes of the event node and the action node(s) for another discussion.

[Image: session data viewed as XML, with the "data" node highlighted]

 

The “data” node happens to be the data that is directly tied to an Extended Event event. When looking at the event metadata, this would be called the event columns. Knowing that all of these columns follow a fairly standard format can make it a bit easier to figure out how to query this data. One of the daunting things with XML is figuring out how to query the XML data to make it more relational – a format we may be more accustomed to seeing (as DBAs).

Because figuring out how to query the XML is a daunting task, and because the XML follows a predictable enough format, I decided to simplify the entire process. Why not write some code that will write the XML parsing code for me? And that is what we have here.

This script will take an event session name, an Extended Event event name, or a combination of both (imagine having multiple events tied to a session) and produce the XML parse statements automagically. The script only produces generic column aliases, so editing those after generating the parse statements is left to the user.
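As a hedged sketch of what such a generator might look like (not the original script; the session name is an assumption and the aliases are deliberately generic):

DECLARE @SessionName sysname = N'TreeHuggerCPU'  -- NULL for all sessions
      , @EventName   sysname = NULL;             -- NULL for all events

SELECT  es.name  AS SessionName
      , ese.name AS EventName
        -- generate one value() line per data column defined on the event
      , 'event_data.value(''(event/data[@name="' + oc.name
        + '"]/value)[1]'', ''varchar(max)'') AS [' + oc.name + '],' AS DataParseStatement
FROM sys.server_event_sessions es
INNER JOIN sys.server_event_session_events ese
        ON es.event_session_id = ese.event_session_id
INNER JOIN sys.dm_xe_object_columns oc
        ON oc.object_name = ese.name
       AND oc.column_type = 'data'
WHERE es.name = ISNULL(@SessionName, es.name)
  AND ese.name = ISNULL(@EventName, ese.name);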

With this script, I can quickly retrieve all of the XML parse statements for all of the data nodes within the session or event that I specify. This can significantly reduce the amount of time taken to produce a usable script to consume the event session data.

This is just the script to parse the event data. If there are actions tied to the session, this will not produce the statements for those actions. The script for that will be provided in a future article. Stay tuned!

Compressing Outcomes

Categories: News, Professional, SSC
Published on: August 24, 2015

You find yourself cruising along, just doing your thing as a database administrator. Then the CIO plops down into a spare chair in your cubicle to talk about the weather.

Well, not entirely true there. The CIO dropped in because the alerts for disk space have been blowing up everybody's inbox. He informs you that the situation isn't going to change anytime soon because new shelves for the SAN just can't be ordered at the moment. In the end, you are given a directive – though not in so many words. Fix the problem, and do it without more disk space.

Fantastic, you think to yourself. What to do now? Then you get the Gru light-bulb moment and it is on!

Enter compression. Ok, compression isn't something entirely new. It has shipped with Enterprise Edition since SQL Server 2008 and can be just the thing to save the day (and everybody's inbox before they start ignoring all alerts).

The decision has been made and the biggest tables in the database are now compressed using page compression. But you are concerned that it didn't work because the space savings aren't what you had hoped they would be. Lucky for you, you are running SQL Server 2012 and have the ability to dive into the internals to confirm whether your worry is fact or fiction.

Using this handy dandy query that you are about to copy and paste from the internet, you can quickly investigate whether there are pages in the table that were unaffected by your attempts to compress it.

Thanks to an undocumented function called dm_db_database_page_allocations, we can investigate quickly whether there are pages that failed to compress. We can also see which compression state they are in – if they are compressed at all.
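A minimal sketch of such a query, assuming SQL Server 2012 or later and a hypothetical dbo.MyBigTable (the function is undocumented, so its behavior may vary between builds):

SELECT  dpa.page_type_desc
      , dpa.is_page_compressed           -- 1 = page compression actually applied
      , COUNT(*) AS PageCount
FROM sys.dm_db_database_page_allocations(
         DB_ID(), OBJECT_ID('dbo.MyBigTable'), NULL, NULL, 'DETAILED') AS dpa
WHERE dpa.is_allocated = 1
  AND dpa.page_type_desc = 'DATA_PAGE'
GROUP BY dpa.page_type_desc, dpa.is_page_compressed;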

Adding a script like this to your repository can be an easy aid in verifying whether your actual results match your expectations. It saves a bit of time and lets you move on to bigger and badder things – like 2048.

In addition to looking at the compression status for each page, I have thrown in a little extra. Call it the "considerations" column. Based on activity hitting the table or index, you may want to consider a different level of compression. This additional data on the end of the output will help start you in that direction.

Tables, Columns and Defaults Oh My!

Published on: August 19, 2015

Picture this if you will. You inherit a new database, whether through a job change, the discovery of a black-box database server, or the acquisition of a new application.

No matter the cause, you are now responsible for maintaining that database and you really need to become acquainted with it. Part of that acquaintance is learning how things are related and figuring out what data goes where. Not just what data, but what type of data.

One of the things I am frequently asked to do is figure out which data type belongs with which column. Furthermore, does that same column exist in more than one table? It's nice when things are named consistently and you are able to tie things together by column name.

There are also times when that consistently named column needs its data type confirmed across each of the implementations. I have run into cases where the column is defined differently in different tables.

So, what to do to tie this all together and get that intimate acquaintance with your database?

Luckily, I have a script that will help you figure out some of these connections.

First, the script.
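A minimal sketch of such a script (the @ColName parameter matches the trial run below; the default-constraint lookup is my own addition in keeping with the post title):

DECLARE @ColName sysname = N'AccountNumber';

SELECT  OBJECT_SCHEMA_NAME(c.object_id) AS SchemaName
      , OBJECT_NAME(c.object_id) AS TableName
      , c.name AS ColumnName
      , t.name AS DataType
      , c.max_length
      , c.precision
      , c.scale
      , ISNULL(dc.definition, 'No default') AS DefaultDefinition
FROM sys.columns c
INNER JOIN sys.types t
        ON c.user_type_id = t.user_type_id
LEFT JOIN sys.default_constraints dc
        ON c.default_object_id = dc.object_id
WHERE c.name = @ColName
  AND OBJECTPROPERTY(c.object_id, 'IsUserTable') = 1
ORDER BY TableName, SchemaName;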

Now, let’s give it a trial run. See if you can spot the difference by running that against the AdventureWorks database using the value “AccountNumber” for the @ColName parameter.

[Image: query results showing the AccountNumber column across multiple tables]

 

Maybe there is a legitimate reason for that column to have a different data type specification than the other two. Maybe there isn't. One thing is certain: you will be able to walk away with some questions to ask and hopefully gain a better understanding!

Table Size Script Update for 2015

Categories: News, Professional, Scripts, SSC
Published on: August 3, 2015

As was recently promised, I have an update to an old tried-and-true script for calculating the size of an object.

That promise was doled out in the previous article, where I discussed the internal table types. You can read the article here. Information from that article was used in this script – so it may be worth a read prior to taking a gander at this new revision.

And of course, previous releases can be found through the link chain here.

The script was in need of an update the last time around due to the internal table types that had been updated in SQL Server. Those internal tables haven't really changed in the latest release (at least as far as calculating space is concerned). But, for one, I didn't like needing to update multiple places in the script. For another, it was difficult to know what the internal table numbers actually represented. So I added that documentation to this script.

In addition to that little change, I also wanted to figure out if a table might have been spread across multiple filegroups. To cover that, I added a delimited list of filegroups to the output.
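A hedged sketch of just that filegroup-list technique, in isolation from the full script (FOR XML PATH is one common way to build the delimited list):

SELECT  OBJECT_NAME(i.object_id) AS TableName
        -- flatten the distinct data space names into one comma-delimited string;
        -- note: for partitioned objects this returns the partition scheme name
      , STUFF((SELECT DISTINCT ', ' + ds.name
               FROM sys.indexes i2
               INNER JOIN sys.data_spaces ds
                       ON i2.data_space_id = ds.data_space_id
               WHERE i2.object_id = i.object_id
               FOR XML PATH('')), 1, 2, '') AS FileGroupList
FROM sys.indexes i
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
GROUP BY i.object_id;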

And the last minor update is that the script now has an option to exclude MS Shipped objects, include them, or look at just the MS Shipped objects.

All of this is included in the effort of trying to get a more complete view of the size of the objects within a database and to be able to do so without needing to loop through the objects.

Note: A quick update to move the index creation outside of the table creation instead of inline since that is a feature not supported in older versions of SQL Server.

What’s the Size of that Index?

Categories: News, Professional, Scripts, SSC
Published on: July 27, 2015

I have previously written about finding what the big space consumers are within the database. I even shared scripts in those articles on how to find those big consumers (read all about it here).

That is grand when it comes to quickly finding which objects happen to be consuming the greatest amount of space within the database. It really helps to quickly assess where some extra TLC may need to be spent. But what about more specific details? What if you wished to figure out the size of each of the indexes within that object?

Well, the solution is pretty simple. I have another script to help determine which indexes are the big space consumers within the specified table for the given database.

 

First, let’s see what that script is.
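The original script is not shown here; a minimal sketch of the approach it describes, assuming a hypothetical dbo.MyBigTable:

SELECT  OBJECT_NAME(ps.object_id) AS TableName
      , i.name AS IndexName        -- the NULL row is the table total from the grouping set
      , SUM(ps.reserved_page_count) * 8 / 1024.0 AS ReservedMB
      , SUM(ps.used_page_count) * 8 / 1024.0 AS UsedMB
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.indexes i
        ON ps.object_id = i.object_id
       AND ps.index_id = i.index_id
WHERE ps.object_id = OBJECT_ID('dbo.MyBigTable')
GROUP BY GROUPING SETS ( (OBJECT_NAME(ps.object_id), i.name)
                       , (OBJECT_NAME(ps.object_id)) );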

As you can see, I am pulling a few different measures on space usage from the dynamic management view sys.dm_db_partition_stats. This tells me how much space is reserved for each index as well as how much space is actually used by each index. Then I use a grouping set to summarize the size information so I can compare the total index size back to the size I would have seen from the table space script (discussed briefly above).

Why might I want to do something like this?

That is a really good question! The first obvious answer is the off chance that you discover (again, from the use of the previous script) that one of your terabyte tables holds only 150GB of data and the rest of that space consumption comes from all of the indexes on that table.

When I see that a table has more space allocated to indexes than to data, I might be curious which indexes are contributing to that consumption. In addition, I can quickly scan the numbers to see if multiple indexes are the exact same size. If they are, I might want to dig a little deeper into the definitions of those indexes. Nobody would love to have 850GB of indexes on a table only to find that three or four of those indexes are exact duplicates.

From there, I might even decide to look at the usage stats for the indexes in question. Why go about it in this fashion? Well, on some databases it is much faster to run a few quick queries than to try and run a big query against the entire database to get all of the index usage stats. I am sure you are familiar with those times when trying to query physical and operational stats takes hours to complete.

But But But…

I can hear you thinking right now. Another question has popped into your head. What about sp_spaceused? That old proc works just fine for this as well. But there is something to be said for knowing what contributes to the size of an index and working through how to calculate it yourself. Additionally, with a minor tweak, this query can provide that (detailed) information very quickly for all objects in the database.

Tweaked query
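A hedged sketch of that tweak (not the original): drop the single-object filter so the same rollup runs against every user table in the database.

SELECT  OBJECT_NAME(ps.object_id) AS TableName
      , i.name AS IndexName        -- the NULL row is the table total
      , SUM(ps.reserved_page_count) * 8 / 1024.0 AS ReservedMB
      , SUM(ps.used_page_count) * 8 / 1024.0 AS UsedMB
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.indexes i
        ON ps.object_id = i.object_id
       AND ps.index_id = i.index_id
WHERE OBJECTPROPERTY(ps.object_id, 'IsUserTable') = 1
GROUP BY GROUPING SETS ( (OBJECT_NAME(ps.object_id), i.name)
                       , (OBJECT_NAME(ps.object_id)) )
ORDER BY ReservedMB DESC;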

 

Database Drops in SQL 2012

In the previous article on this topic (which can be read here), I discussed the problem of having a database get dropped and the need to find out who dropped the database and when they dropped it.  In that article, I demonstrated how the information (at least some of it) could be found through the use of Extended Events.

What I forgot to mention is that the method I shared was for SQL Server 2014. While the events do exist for SQL Server 2012, there is a minor modification that needs to be made in order to avoid the dreaded error message that says something like:

Msg 25713, Level 16, State 23, Line 1
The value specified for event attribute or predicate source, “object_type”, event, “object_created”, is invalid.

I am sure many of us would rather not have to deal with such a terrible thing as an error when we want to do something that should just work. Well, here is the fix for that error if you tried to deploy that XE Session to a previous version (such as 2012).
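The full session definition is not repeated here, but a hedged sketch of the relevant change looks like this (look the key up rather than hard-coding it):

-- On the 2012 instance, look up the numeric key behind the friendly value:
SELECT map_key, map_value
FROM sys.dm_xe_map_values
WHERE name = N'object_type'
  AND map_value = N'DATABASE';

-- SQL Server 2014 accepts the friendly map_value in the event predicate:
--     WHERE (object_type = 'DATABASE')
-- SQL Server 2012 requires the numeric map_key returned above instead:
--     WHERE (object_type = <map_key from the query above>)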

Do you see that change? Instead of using the map_value in 2012, one must use the map_key. This was a good change in 2014, allowing us to use human-friendly terms instead of needing to look up the map_key for everything like in 2012.

In following the theme from the previous article, here is the rest of the setup for testing this XEvent session to see how it would trap that data and how to parse the data from the session.

Enjoy!

Nice and simple for a test. This should be enough to have trapped some data for our XEvent session.  Let’s parse the data and take a look.

With the XEvent session started, and a database that has been created and dropped, this last query will produce some data to help track who has been adding or removing databases from your SQL Instance.  If you have run the example code, you should see data very similar to what is shown in this next image.

[Image: query results showing paired create/drop events from the XEvent session]

In the attached image, I have separated the pair of statements for the DB create from the pair of statements for the DB drop (recall that I mentioned each will fire twice). I hope this serves you well in your attempts to rein in control of your environment and discover who may be creating rogue databases or dropping databases that are essential to the business.

Database In Recovery Update

Categories: News, Professional, Scripts, SSC
Published on: May 18, 2015

Many moons ago, I published a post that contained a script to aid in easing the anxiety that comes when a database is "In Recovery". When I pulled that script out to try it on a SQL 2012 box, I got a nasty error. Thanks to that nasty error, I have updated the script to work on SQL 2012 and SQL 2014.

If you would like to first read the previous post, I invite you to click this link.

Here is the version of the script that will work for SQL 2012 and 2014.
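The full script is not reproduced here; as a hedged stand-in, two common ways to gauge "In Recovery" progress look like this (not necessarily the same approach the full script takes):

-- 1) Undocumented, but widely used: scan the current error log for the
--    recovery messages that report percent complete.
EXEC sys.xp_readerrorlog 0, 1, N'Recovery of database', N'complete';

-- 2) While recovery runs as a request, percent_complete is exposed here:
SELECT  session_id
      , DB_NAME(database_id) AS DatabaseName
      , command
      , percent_complete
      , estimated_completion_time / 60000 AS EstMinutesRemaining
FROM sys.dm_exec_requests
WHERE command IN (N'DB STARTUP', N'RESTORE DATABASE');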

 

Extended Events and Data Types

Published on: April 14, 2015


Today is another installment in the long-running monthly T-SQL blog party (a.k.a. TSQL2SDAY).

This month we have an open invitation from Mike Donnelly (blog | twitter), asking us to talk about something new we have learned and then to teach about it. You can read the invitation in Mike’s own words on his blog.

Coincidentally, the topic is pretty straightforward and easy on the one hand, while somewhat difficult on the other. Mike said: "The topic this month is straight forward, but very open ended. You must learn something new and then write a blog post explaining it." I find the topic difficult because I will usually blog about a topic when I have learned something new. On the other hand, sharing new stuff is pretty straightforward and enjoyable. Enter the brain split!

So, what have I learned recently?

Quite a bit.  But what would I really like to share on this occasion?

For today, I would like to share more information about extended events.  XEvents are great.  There is a wealth of information to be garnered from XEvents.  Furthermore, XEvents provide a great opportunity to keep learning.

While researching some events to help troubleshoot a specific issue, it dawned on me that there was some info I had only ever examined in the context of individual events. I started wondering how much of that info was out there. So here I will share some of this information that is available to you via queries within SQL Server. Much of this info is attainable through the re-purposing of some scripts I shared previously – here.

Custom Data Types

This wasn't too much of a surprise because I had seen them throughout and taken advantage of them to get better information. I might consider these custom data types to be more of the EAV model coming through than true custom data types. One can expose them through an evaluation of the data in the map_values DMV. Let's take a look at a script that would lay the groundwork to see these data types.
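A minimal sketch of such a groundwork script (my own approximation): any column whose type name appears in the map_values DMV is one of these custom, map-based data types.

-- columns whose type is really a key/value map rather than a scalar type
SELECT DISTINCT
        oc.type_name   AS CustomDataType
      , oc.object_name AS EventName
      , oc.name        AS ColumnName
FROM sys.dm_xe_object_columns oc
INNER JOIN sys.dm_xe_map_values mv
        ON oc.type_name = mv.name
       AND oc.type_package_guid = mv.object_package_guid
ORDER BY CustomDataType, EventName;

-- drill into the key/value pairs behind one such type:
SELECT map_key, map_value
FROM sys.dm_xe_map_values
WHERE name = N'wait_types'
ORDER BY map_key;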

Evaluating this data, one will see that in addition to the "standard" data types such as integer, there will be a "wait_types" data type. This data type maps to all of the wait types available through extended events. Additionally, the event associated with each of these custom data types is exposed through this query. When getting ready to use an extended event, knowing the kinds of data that will be exposed through a data point in the session will make the session data more usable. Knowing there is a custom data type (yes, it is really just a key/value pair) can be extremely helpful.

Collection Flags

Many of the available events have “customizable” collection flags exposed.  Understanding that these collection flags can be on or off is essential to saving some hair.  Not all data is automatically collected for all events.  Some prime examples of such events that do not automatically collect certain pieces of data are sp_statement_completed and object_created.  The nice thing about these flags is that they have a value of “customizable” in the column_type field.  Another good thing with these flags is that the description field gives a little documentation on what the behavior should be for the “on” and “off” states.
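A hedged sketch of a query to expose those flags (my own approximation):

SELECT  oc.object_name AS EventName
      , oc.name AS FlagName
      , oc.column_value AS DefaultValue   -- typically 0 (off) or 1 (on)
      , oc.description                    -- documents the on/off behavior
FROM sys.dm_xe_object_columns oc
WHERE oc.column_type = 'customizable'
ORDER BY EventName, FlagName;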

There is a good reason that some of those may be off by default.  The addition of this information may cause an additional load or may be information overload.  It is up to the consumer to determine if the data is going to be of significant importance.  Once determined, enable or disable the flag as appropriate.

These queries provide a good amount of information about the extent of custom data types as well as the collection flags that may be available to use when creating event sessions in SQL Server.  Understanding that this data and these options are there is important to capturing better event info.

Audit who Dropped the Database

Categories: News, Professional, Scripts, SSC
Published on: April 7, 2015

In the first article on this topic (which can be read here), I discussed the problem of having a database get dropped and the need to find out who dropped the database and when they dropped it.  In that article, I demonstrated how the information (at least some of it) could be found through querying the default trace.  It is a good enough solution if you have not disabled the default trace.

On the other hand, what if you need to do something more flexible?  What if you wanted to track this down through a more robust tool such as extended events?  XEvents has what you need to be able to properly track these types of events.  In this article, I share how to use Extended Events to capture this kind of audit data.

There are a few things to note with this event session. First is that I am trapping two separate events: sqlserver.object_deleted and sqlserver.object_created. The next important note is that I have to enable a flag on both events to trap the database name. That is done like this: SET collect_database_name = (1). The last note is something that should be noticed after enabling the session and performing a couple of trials: the events I am using will fire twice for every DROP or CREATE operation. This happens due to the ddl_phase for each. One event fires for the start of the operation and another when the operation commits or hits a rollback. Because of this, I am outputting the ddl_phase in my query that reads the session data. Let's run a quick test and see how this data can be queried.
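The session definition itself is not reproduced above, so here is a hedged reconstruction along the lines described – the session name, actions, and file path are my own placeholders:

CREATE EVENT SESSION [AuditDatabaseDDL] ON SERVER
ADD EVENT sqlserver.object_created (
    SET collect_database_name = (1)      -- the flag discussed above
    ACTION (sqlserver.server_principal_name, sqlserver.client_hostname)
    WHERE (object_type = 'DATABASE')),
ADD EVENT sqlserver.object_deleted (
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.client_hostname)
    WHERE (object_type = 'DATABASE'))
ADD TARGET package0.event_file (
    SET filename = N'C:\Database\XE\AuditDatabaseDDL.xel')
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [AuditDatabaseDDL] ON SERVER STATE = START;
GO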

Also important is the file path for the output file. If the path does not exist or you do not have permissions to the directory, you will get an error message and the session will not be created.
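A quick test (the database name is a throwaway placeholder) to give the session something to trap:

CREATE DATABASE AuditDropTest;
GO
DROP DATABASE AuditDropTest;
GO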

Nice and simple for a test. This should be enough to have trapped some data for our XEvent session.  Let’s parse the data and take a look.
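A hedged sketch of that parse query, assuming the event_file target and placeholder path from the session sketch above:

SELECT  event_data.value('(event/@name)[1]', 'varchar(64)') AS EventName
      , event_data.value('(event/@timestamp)[1]', 'datetime2') AS EventTime
      , event_data.value('(event/data[@name="database_name"]/value)[1]', 'nvarchar(128)') AS DatabaseName
      , event_data.value('(event/data[@name="ddl_phase"]/text)[1]', 'nvarchar(30)') AS DdlPhase
      , event_data.value('(event/action[@name="server_principal_name"]/value)[1]', 'nvarchar(128)') AS WhoDidIt
      , event_data.value('(event/action[@name="client_hostname"]/value)[1]', 'nvarchar(128)') AS FromHost
FROM (SELECT CONVERT(xml, event_data) AS event_data
      FROM sys.fn_xe_file_target_read_file(
               N'C:\Database\XE\AuditDatabaseDDL*.xel', NULL, NULL, NULL)) AS xevents
ORDER BY EventTime;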

With the XEvent session started, and a database that has been created and dropped, this last query will produce some data to help track who has been adding or removing databases from your SQL Instance.  If you have run the example code, you should see data very similar to what is shown in this next image.

[Image: query results showing paired create/drop events from the XEvent session]

In the attached image, I have separated the pair of statements for the DB create from the pair of statements for the DB drop (recall that I mentioned each will fire twice). I hope this serves you well in your attempts to rein in control of your environment and discover who may be creating rogue databases or dropping databases that are essential to the business. Enjoy!!
