Database Dropped

What do you do when a developer comes to you and asks, “Where did the database go?  The database was there one minute, and the next it was not.”  Only one thing could be worse than the feeling of losing a database on your watch, and that would be losing a production database. It’s like magic—it’s there, and then it disappears. To compound the issue, when asking people if they know what might have happened, all will typically deny, deny, deny.

What do you do when you run into that missing database situation and the inevitable denial that will ensue?  This is when an audit can save the day.  Through an audit, you can discover who dropped the database and when it happened.  Then you have hard data to take back to the team to again ask what happened.  Taking the info from a previous article of mine, we can alter the script I published there and re-use it for our needs here.

This script will now query the default trace to determine when a database was dropped or created.  I am limiting this result set through the use of this filter: ObjectType = 16964.  In addition to that, I have also trimmed the result-set down to just look for drop or create events.
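
In case the script from that article is not handy, here is a minimal sketch of the idea (the column list is trimmed for brevity):

DECLARE @path NVARCHAR(260);
SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

SELECT gt.DatabaseName, gt.LoginName, gt.HostName, gt.ApplicationName, gt.StartTime,
    CASE gt.EventClass
        WHEN 46 THEN 'Object:Created'
        WHEN 47 THEN 'Object:Deleted'
    END AS EventName
FROM sys.fn_trace_gettable(@path, DEFAULT) gt
WHERE gt.ObjectType = 16964 /* 16964 = Database */
    AND gt.EventClass IN (46, 47) /* create and drop events only */
ORDER BY gt.StartTime;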

This is the type of information that is already available within the default trace.  What if you wished to not be entirely dependent on the default trace for that information?  As luck would have it, you don’t need to be solely dependent on the default trace.  Instead you can use the robust tool called extended events.  If you would like to be able to take advantage of Extended Events to track this information, I recommend you read my follow-up article here.

This has been a republication of my original content first posted here.

Events By Feature

Within the world of SQL Server there are a few things one can be certain of – things will change. This is true of the features in SQL Server. Extended Events, too, is constantly evolving, which underscores this constant change.

With all of this change occurring within SQL Server, it is sometimes difficult to figure out how to troubleshoot or track issues that relate to new features. Beyond figuring out how to troubleshoot the new features, there is the need to understand what tools are available to help with that troubleshooting.

It is no big secret that Profiler cannot help you trace any of the new features. To help perform the task of tracing events related to new features one must take advantage of the power of Extended Events. Even knowing that you can use XEvents, there is still a need to know what is related to the feature.

In this article, I will show how you can discover the events related to specific features. In addition, I will provide enough base information to help you perform other discovery type queries on your own as you continue your exploration of XEvents. In case you are a bit behind in your exploration of XEvents, I have the perfect solution for you – my series on the topic that is continually growing. You can explore the full list of articles in the series by visiting the table of contents – here.

Events and Features

More and more I am being asked how to track which events belong to which features. I have also been seeing more people ask for a way to list the new features supported by XEvents. Thankfully there is adequate information within the XEvent metadata to help retrieve this type of information. The downside is that there is a bit of homework that must be done across a few versions of SQL Server to help produce the desired information. I have done that work and pulled the information into the following query to help make it easier for anybody else wishing to dive in and compare features and events across versions of SQL Server.
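
The full cross-version aggregate is too long to reproduce here, but as a hedged illustration of the kind of metadata query it is built on, the following lists the events available in the current instance along with their package (which serves in this sketch as a rough stand-in for feature):

SELECT p.name AS package_name, o.name AS event_name, o.description
FROM sys.dm_xe_packages p
INNER JOIN sys.dm_xe_objects o
    ON p.guid = o.package_guid
WHERE o.object_type = 'event'
ORDER BY p.name, o.name;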

In the full version of that query, I have provided an aggregation of the events and features across multiple versions of SQL Server. This aggregate of data is necessary to help see how the features are changing and how the events are changing within the features that remain constant from one version to the next.

Once that data is aggregated, I then provide a couple of sample queries that show what can be done with the data. I need to stress here that I have not provided an exhaustive set of queries to explore this data seven ways to Sunday. I have left those types of exercises to the reader to explore and experiment with.

The first example query shows how to pull the new features that can be “Traced” in the current version of SQL Server on which the query is being executed. The second example provides a difference count between versions of SQL Server to help illustrate the evolution of XEvents within SQL Server. The third query is a simple query to list out the number of events for each feature in your version of SQL Server.

From here, one might wish to explore all of the events that are related to a specific feature. This would be easily accomplished by querying out the data from the #presel temp table based on the feature name. Orrrr…one could query the desired feature by following the guidelines in this article.


In the article today, I have shown how it is possible to see the new events and how they relate to the various features within SQL Server. Being able to correlate events that can trap information about new features can and will help you evolve into that rock-star DBA you are trying to become!

Finding the Right Path

I have a fairly large backlog of articles that are in progress or that are planned for my series about Extended Events. Despite the backlog and planned articles, every now and then something else comes up that bumps things around. This article is an example of bumping the schedule around. You can see some of the backlog and the full list of articles in the series by visiting the table of contents – here.

I bring that up for a couple of reasons. First and foremost, the topic was recently raised as a “need” by some colleagues. Second, I saw the need and recognized that it was not covered by anything I had already written.

What is this gaping hole in the coverage of Extended Events? To be honest, it is not a very complicated topic or very difficult gap to fill. It’s just something that has been overlooked. The gap boils down to this: how does one consistently find the correct path to the Extended Event Log file (XEL file)?

Filling the Gap

The gap I will be working to fill in this article deals with consistently finding the file path for Extended Event (XE) sessions. This gap arises due to a few different things, such as the ability to define a target in different manners, the ability to move the logs directory, or even the fact that a target may not be added to the session (let alone a file target). These causes can all contribute to a bit of frustration and may present as different symptoms when trying to get the file path.

One additional complication is tied to the running state of a session. I talked briefly about that complication in my article about better practices. The state of the session can have an impact and can cause frustration when trying to retrieve the file path. All of these things are considerations that must be made when trying to retrieve the file path.

To find the file path, let’s start with some basics. The default path for XEL files is the log directory of the instance. In addition to this default behavior, metadata about both running and deployed sessions is accessible from DMVs and system catalogs.

Log Path

The first basic to tackle is the log path for the instance. I can query for the log path of the error log for the instance and rely on that as the path of my xel files. Should I choose this method, then I could execute the following query.
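
A sketch of that query follows; it assumes the error log and the default XEL location share the instance log directory:

DECLARE @errlog NVARCHAR(260) = CAST(SERVERPROPERTY('ErrorLogFileName') AS NVARCHAR(260));

/* strip the file name off the end, keeping the trailing backslash */
SELECT LEFT(@errlog, LEN(@errlog) - CHARINDEX('\', REVERSE(@errlog)) + 1) AS LogPath;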

Executing that query on my SQL 2014 instance produces the following results:


Now the obvious problem with this method is that it relies on the assumption that all of your XE sessions use the default log path. If you have declared your sessions to use a file target and did not specify a path, then the assumption is safe. However, if you are detail oriented, you have probably been somewhat explicit in how you define your event file target. This brings us to the next topic – defining the path.

Defining The Path

While a bit of a bird-walk, it is necessary to cover this topic at this juncture. This brief discussion will help in understanding some of the other issues with retrieving the path consistently.

When defining the path of the event file, there is a bit of flexibility in how one can define the file to be used. You can either declare the file as just the file name, or you can define the file as the folder path along with the file name. Both methods are completely legitimate. Unfortunately, this flexibility is what causes some of the pain with retrieving the file path consistently.

Let’s take a look at two quick, and acceptable, ways to add an event file to an XE Session. I will be re-using a session from a previous article for these examples.
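
Here is a hedged reconstruction of that first example (the event, actions, and folder are assumptions; the point is the fully qualified filename):

CREATE EVENT SESSION AuditSelects ON SERVER
ADD EVENT sqlserver.degree_of_parallelism (
    ACTION (sqlserver.session_id, sqlserver.sql_text)
    WHERE sqlserver.database_name = N'AdventureWorks2014')
ADD TARGET package0.event_file (
    /* full folder path plus file name declared explicitly */
    SET filename = N'C:\Database\XE\AuditSelects.xel')
WITH (STARTUP_STATE = ON);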

This is a pretty standard format I use for creating my sessions. The section I want to highlight though is the set of the filename near the end of the script. I will generally define the entire path for my xel files when I create a session – just as I have done in this example. Defining this path helps me to know exactly where I am putting the session for starters. Equally as important is that this path is easier to retrieve from metadata because I have explicitly defined the path.

Take this next example of the same session but with one minor difference.
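
A hedged version of that second example (identical except for the filename attribute):

CREATE EVENT SESSION AuditSelectsShort ON SERVER
ADD EVENT sqlserver.degree_of_parallelism (
    ACTION (sqlserver.session_id, sqlserver.sql_text)
    WHERE sqlserver.database_name = N'AdventureWorks2014')
ADD TARGET package0.event_file (
    /* file name only - no folder path declared */
    SET filename = N'AuditSelectsShort.xel')
WITH (STARTUP_STATE = ON);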

The minor difference in this example is just in how the filename was defined. It is technically accurate and acceptable to only use the filename instead of the path and filename as I did in the previous example. The problem here comes from the retrieval of the path from metadata.

Getting the Path

In the previous two examples, I showed two methods of defining the filename within a session. With the former example, the full path is stored, which means I can more easily find it. However, if I use the script from the beginning of the article to find my xel file paths, then for that session I have made an inaccurate assumption. This leads to an inconsistency in how the file path is fetched. For the latter example, the assumptions concerning the default log path would be valid and could be applied here.

Since the assumptions do not prove to be consistent or accurate across the board, we need to both evaluate how the data for each would look and we would need to see how to retrieve this path more consistently. First up is looking at how the data for each of these example sessions would be stored in metadata.
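
A sketch of such a metadata query, assuming the two example sessions from above:

SELECT es.name AS session_name, esf.name AS field_name, esf.value AS field_value
FROM sys.server_event_sessions es
INNER JOIN sys.server_event_session_targets est
    ON es.event_session_id = est.event_session_id
INNER JOIN sys.server_event_session_fields esf
    ON est.event_session_id = esf.event_session_id
    AND est.target_id = esf.object_id
WHERE es.name LIKE 'AuditSelect%'
    AND esf.name = 'filename';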

In the preceding query, I am looking at the deployed session metadata for two sessions, both of which start with “AuditSelect”. In the sys.server_event_session_fields view, there is an attribute called name that contains the value “filename”. This value will only exist if the session has an event file target defined for the session. Running the query will yield the following results (on my system).


In this result set, I can see there is one of each of the two event file definition methods I described in the previous section. In the green highlight you will see that I have the entire filepath. In the red highlight, you will only see the filename. Nothing special has been done to this data in the return as you can verify from the posted script. These are the actual stored values. So this would indeed seem like we have a bit of a problem, right?

Well, this is where we get to be a little bit imaginative and employ a script such as the following:
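
What follows is a hedged sketch of that script (the original handles more conditions); it falls back to the instance log directory whenever only a bare file name was stored:

DECLARE @errlog NVARCHAR(260) = CAST(SERVERPROPERTY('ErrorLogFileName') AS NVARCHAR(260));
DECLARE @logpath NVARCHAR(260) = LEFT(@errlog, LEN(@errlog) - CHARINDEX('\', REVERSE(@errlog)) + 1);

SELECT es.name AS session_name,
    CASE
        WHEN CAST(esf.value AS NVARCHAR(260)) LIKE '%\%'
            THEN CAST(esf.value AS NVARCHAR(260))               /* full path declared */
        ELSE @logpath + CAST(esf.value AS NVARCHAR(260))        /* assume default log dir */
    END AS xel_path
FROM sys.server_event_sessions es                               /* session is deployed */
INNER JOIN sys.server_event_session_targets est
    ON es.event_session_id = est.event_session_id
INNER JOIN sys.server_event_session_fields esf
    ON est.event_session_id = esf.event_session_id
    AND est.target_id = esf.object_id
WHERE est.name = 'event_file'                                   /* session has a file target */
    AND esf.name = 'filename';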

Wow, that is considerably longer than the first example in the article. That is true! This script accounts for a few of the conditions that cause inconsistencies (but not yet all of them) in retrieving the file path for a deployed event session. The key here is to understand that this will fetch the path whether you enter a complete path or not while creating the session. I also threw a check in there to confirm that the session has a file target, as well as a check to ensure the session is deployed to the server.

Looking at the output, I get results that are somewhat friendly and easy to understand without any of the guesswork.


What if the session has never been started and did not have the full path declared? Well, that is one of the inconsistencies I am working on still.

There is an alternative to this method as well. For the time being, this method would also be recommended in the event the session being researched happens to be one of the system sessions that is “private”. This next code chunk will show two different methods to parse the file path from running session metadata in the DMVs.
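
Here is a hedged approximation of those two methods:

/* Method 1: read the resolved file name from the running target's columns */
SELECT s.name AS session_name,
    CAST(soc.column_value AS NVARCHAR(260)) AS xel_path
FROM sys.dm_xe_sessions s
INNER JOIN sys.dm_xe_session_object_columns soc
    ON s.address = soc.event_session_address
WHERE soc.object_name = 'event_file'
    AND soc.column_name = 'filename';

/* Method 2: shred the actual file name out of the target_data XML */
SELECT s.name AS session_name,
    CAST(st.target_data AS XML).value('(EventFileTarget/File/@name)[1]', 'NVARCHAR(260)') AS xel_file
FROM sys.dm_xe_sessions s
INNER JOIN sys.dm_xe_session_targets st
    ON s.address = st.event_session_address
WHERE st.target_name = 'event_file';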


In the article today, I have shown some of the internals of retrieving file paths for Extended Event sessions. I dove into metadata to pull out the path for the session and discussed some concerns with some of these methods. In the end, you have a few viable options to help retrieve the file path in a more consistent fashion.

Audit All Select Statements

Legislation and regulation sometimes dictate that certain activities must be tracked within a database. On occasion, it will be required that queries be audited and tracked to reach compliance with the legislation or regulation. To achieve this compliance, this article will demonstrate how to use Extended Events to audit statements being executed within the database.

Over the course of a few recent articles, I have included little hints here and there alluding to this article. In fact, now would be a good time to review one of these articles in preparation for what I will be sharing today. Take a moment to refresh your memory on this article about finding the right event – here.


I have to be a little honest here. Prior to somebody asking how they could possibly achieve a statement audit via extended events, I had not considered it as a tool for the job. I would have relied on Audit (which is Extended Event related), or some home grown set of triggers. In this particular request, Audit was not fulfilling the want and custom triggers were not an option. Another option might have been the purchase of third party software, but there are times when budget does not allow for nice expensive shiny software.

So, with a little prodding, I hopped into the metadata and poked around a bit to see what I could come up with to achieve this low-budget audit solution.

Low-Budget Audit

Using the handy scripts I have shown previously (and that I just linked to), I was able to explore the Extended Events metadata and find just what may work properly. The requirements in this case were that it needed to be done in XEvents and that it must capture SELECT statements. To find the events that seemed plausible for this task, I plugged the term “SELECT” into my queries as follows:
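
A sketch along these lines (the same pattern as in my help-queries article):

SELECT o.name AS event_name, o.description
FROM sys.dm_xe_objects o
WHERE o.object_type = 'event'
    AND (o.name LIKE '%select%'
        OR o.description LIKE '%select%');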

From this query, there was really only one event that made any sense for my task – “degree_of_parallelism”. There was another event returned in the result set, but it made no sense to me as a possible candidate event for auditing select statements (it was ucs_transmitter_reclassify). From the results, I then took the keyword associated to degree_of_parallelism to see what else might be pertinent. I plugged that keyword “execution” into the following query:
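
A sketch of that keyword lookup follows; it resolves each event’s KEYWORD column through the map values:

SELECT o.name AS event_name, mv.map_value AS keyword
FROM sys.dm_xe_objects o
INNER JOIN sys.dm_xe_object_columns oc
    ON o.name = oc.object_name
    AND o.package_guid = oc.object_package_guid
INNER JOIN sys.dm_xe_map_values mv
    ON oc.type_name = mv.name
    AND oc.type_package_guid = mv.object_package_guid
    AND mv.map_key = CAST(oc.column_value AS INT)
WHERE o.object_type = 'event'
    AND oc.name = 'KEYWORD'
    AND mv.map_value = 'execution';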

The results of the previous query will be quite a bit more substantial. Some events make sense to include in the audit and some do not. I opted not to include any of the additional events in order to keep things as clean and simple as possible.

More Requirements

I now have the base events covered that I want to use for my audit. That said, my base requirements are just not extensive enough. When I audit something, I really want to know who did it, when it was done, where it originated, and what it was that they did. The degree_of_parallelism event will capture all of the select statements, but it does not meet these additional requirements.

In order to meet the additional requirements, I will attach a handful of actions to the degree_of_parallelism event. The addition of the actions will provide all of the data I want and need. Combining the event and actions together, I came up with the following session definition.
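
Here is a hedged reconstruction of that session (the target path and exact action list are assumptions):

CREATE EVENT SESSION AuditSelects ON SERVER
ADD EVENT sqlserver.degree_of_parallelism (
    ACTION (
        sqlserver.username,          /* who */
        sqlserver.client_hostname,   /* where it originated */
        sqlserver.client_app_name,
        sqlserver.session_id,
        sqlserver.sql_text,          /* what they did */
        sqlserver.database_name)
    WHERE sqlserver.database_name = N'AdventureWorks2014'
        AND statement_type = 1       /* 1 = SELECT */
        AND sqlserver.client_app_name <> N'Microsoft SQL Server Management Studio - Transact-SQL IntelliSense')
ADD TARGET package0.event_file (
    SET filename = N'C:\Database\XE\AuditSelects.xel')
WITH (STARTUP_STATE = ON);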

This is a very simple session to pull together. I have added a few things in on my predicate to help limit the scope of the session to the AdventureWorks2014 database and to make sure I am not trapping events related to code completion tools. Notice here also that there is a potential chance to optimize this session. Can you find it? Hint: It may be in the predicate. Second hint: re-read this article about predicates.

Now the trick to what makes this work to audit only the select statements is right there in the predicate. I have instructed the session to ignore any statement_type that is not a value of 1. As it turns out, statement_type of 1 is a select statement. To see how these values map, here is a quick query and the correlated results.
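
The map lookup is a quick query against sys.dm_xe_map_values:

SELECT mv.map_key, mv.map_value
FROM sys.dm_xe_map_values mv
WHERE mv.name = 'statement_type'
ORDER BY mv.map_key;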


Based on these values, should you want to audit a different query type, just change the predicate to use the map_key value that corresponds to the desired statement type.


Testing this session is rather simple. Here is a quick and dirty script to help test it.
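
Something like the following works (assuming AdventureWorks2014 and the session definition above):

/* Run a few selects in the audited database... */
USE AdventureWorks2014;
GO
SELECT TOP (10) p.FirstName, p.LastName
FROM Person.Person p;
GO

/* ...then read the captured events back from the file target
   (path assumed to match the session definition above) */
SELECT CAST(fx.event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\AuditSelects*.xel', NULL, NULL, NULL) fx;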


Building low cost solutions is a common requirement for the data professional. When given the opportunity, try things out to see what you can build to provide the solution. In this case, I have demonstrated how Extended Events, however unlikely a candidate, can provide a working solution to help audit any select statements occurring within your database.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.

BIT9 PathNames

We live in a day and age when security (data, network, server, etc.) is seemingly at the forefront of the daily news. In our quest to improve security, we always seem to find more and more products that are supposed to help with that security. Sometimes, those products require a data repository. As it happens, it is not uncommon for that repository to be in a database. Today, I am looking into a specific issue with the Bit9 product.


Why might I be looking into this particular product? There are some very good reasons, as a matter of fact. One really big reason is that I could find no decent information about this issue. The bigger reason is the seemingly indifferent level of response and the delays I saw from the Bit9 support channels.

In this specific case, the first response from their support channels took more than two days. Subsequent responses were more than a week later. For the client, this actually caused delays in a project they were working on. Was the issue significant? It was not an outage-causing issue, but it was one that did seem to continue to grow and cause concerns with disk space.

The Issue

It may be appropriate to discuss what Bit9 does prior to breaking into the details about the issue. If you have never had any experience with this product before, you may be running a product from a competitor such as Symantec. Bit9 is an endpoint protection tool from Carbon Black. It helps to protect against malware and endpoint attacks. In short, it is a security tool to protect your computing enterprise.

Go ahead and click the image to link to their site if you wish to learn more about how it works. For me, the nitty gritty specifics of how it works are a bit outside the scope of this article.

In short, Bit9 keeps a record of every filename and every filepath for every client machine. It stores these in a SQL Server database. This shouldn’t be too much of an issue. For one instance serving about 300 client machines, we saw about 1.2 million paths being stored. On another instance with far fewer clients and far fewer actual paths on those clients (physically checked), we had 114 million paths stored in the database. The difference in size was 20GB vs ~160GB. When the server is installed with all defaults, you can imagine how this looked for this instance – a very bloated OS volume.

Investigating this issue from a database perspective, I looked to find what was consuming so much space in the database. To do that, I ran my tablespace script and discovered the following:
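
A hedged stand-in for my tablespace script, showing the general approach (space consumed per table):

SELECT s.name + '.' + t.name AS table_name,
    SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.tables t
    ON ps.object_id = t.object_id
INNER JOIN sys.schemas s
    ON t.schema_id = s.schema_id
GROUP BY s.name, t.name
ORDER BY reserved_mb DESC;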


A significant standout there with the pathnames table. Looking at the definition of the table I see something far more interesting and disturbing all at once.
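
A heavily hedged approximation of the table shape being described (column names and types are placeholders, not the vendor schema):

CREATE TABLE dbo.pathnames (
    pathname_id BIGINT NOT NULL,
    pathname NVARCHAR(400) NOT NULL,
    pathname_copy NVARCHAR(400) NOT NULL   /* exact duplicate of pathname */
);
CREATE NONCLUSTERED INDEX ix_pathnames_pathname ON dbo.pathnames (pathname);
CREATE NONCLUSTERED INDEX ix_pathnames_copy ON dbo.pathnames (pathname_copy);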


I see a table with three columns and two large string fields. Each of these fields has a non-clustered index on it. This may or may not be such a big problem (other than the fact that the size of the non-clustered indexes on this table is much larger than the data) except that each of the string fields is an exact duplicate of the other. That’s right. Within this table, the data is duplicated into the second string field and each field has its own index. Not only does it appear that I have a ton of duplicated data, it appears I have entirely useless indexes (neither had been touched for a read since the server had been up).

I inquired about this to the folks at Bit9, both from a design perspective and from an archival perspective. Absolute silence on the design (expected). The inquiry about archival (or purge) of non-essential data did fetch a response – albeit a painfully slow response. The basic question was: “Is it safe to purge or archive old or unnecessary data?” The response we received was “run this and we will tell you what to do next.”

Great, they sent a script to help determine the state of data within the database. I am not posting their script here. Suffice it to say that the script they sent was not very pretty. They query about 20 tables, union the results from those tables, then perform a not in operation to see how many of the pathnames are invalid. No problem. Executing the script did reveal the following:


There seems to be the problem. 95.89% of the rows being stored in the pathnames table are orphaned records! This is a bit of a problem. The software does not appear to manage removal of invalid paths. From here, I knew what the course of action needed to be and acted on it. A big piece of the equation was provided from the results of the script. Another piece was provided in the makeup of the script. From these pieces of information, I created a purge script to help manage the orphan problem. I then put that script into an agent job and set it to run on a weekly basis.

If you find yourself in the same boat, here is the script needed to purge the data. If running this, I recommend disabling the two non-clustered indexes and then performing the delete (especially if you sit at 96% orphaned and over 100 million rows). After deleting the mass amount of orphans, go ahead and rebuild the indexes to stay in compliance with the software contract until Bit9 responds about the index requirement and the schema of the table.
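
I am not reproducing the vendor logic here, but the purge amounts to a batched delete of pathnames rows that no referencing table still uses. A heavily hedged sketch (table and column names are placeholders; derive the real referencing list from the Bit9 support script):

/* #referenced_paths is assumed to be pre-populated with every pathname id
   still referenced by the ~20 tables from the vendor script */
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) p
    FROM dbo.pathnames AS p
    WHERE NOT EXISTS (
        SELECT 1
        FROM #referenced_paths rp
        WHERE rp.pathname_id = p.pathname_id);

    IF @@ROWCOUNT = 0
        BREAK;   /* no orphans remain */
END;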

Yep! It is just like that. The worst part of the whole thing is actually the delete. The delete operation will touch about 37 indexed views that must be maintained as the rows are deleted. Be prepared for a long running process if you have a situation like the one I encountered.

For kicks, here is what that plan would look like in one of my favorite tools (SQL Sentry Plan Explorer).


What is your pain scale and story?


Easily Shred Event Data

It has been nearly a year since I started an extensive series about Extended Events. Previous to that start, I had already written a bunch of articles about Extended Events.

Among the batch of articles preceding the 60 day series one can find an article about shredding extended event data. Then as a part of the series (the first article in the 60 day series) one can find how to shred the action payload data attached to an extended event session.

You can read the article on shredding XEvent payload data here. Then you can follow that up by reading how to shred the Action data here. And once you are done with those and you really want to learn more about Extended Events, please read the whole series which can be found here.

All of that reading to catch up should keep you busy for a tad bit.


A year is a really long time to go without updating a script – according to some. Today, I have an update for both of the XML shredding scripts you just read about in the list of articles mentioned moments earlier in this post.

Before I dive into the script, let’s revisit a sample of the XML from an event session payload. Within an event session, you can see that there are nodes for both the data and the actions (highlighted in green and red). With all of this information found within the same XML for the session, it somewhat makes sense to try and parse all of the data at once.
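
Since the original image is not reproduced here, the following invented-but-representative snippet shows the shape of that XML and how both node types shred:

DECLARE @sample XML = N'
<event name="degree_of_parallelism" package="sqlserver" timestamp="2016-01-01T00:00:00.000Z">
  <data name="dop"><value>1</value></data>
  <data name="statement_type"><value>1</value><text>SELECT</text></data>
  <action name="sql_text" package="sqlserver"><value>SELECT 1;</value></action>
  <action name="username" package="sqlserver"><value>sa</value></action>
</event>';

SELECT
    @sample.value('(event/data[@name="dop"]/value)[1]', 'int') AS dop,
    @sample.value('(event/data[@name="statement_type"]/text)[1]', 'nvarchar(60)') AS statement_type,
    @sample.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text;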


In addition to parsing all of the XML for the data and the actions at the same time, it seems to also make sense to generate the statements that would parse the XML within a single effort. As you would have noted, my previous scripts were just that – scripts. That implies executing a separate script, one for the data and one for the actions. Maybe it would make more sense to execute a single script.

It is that notion of a single script that constitutes this upgraded version of the script.

One Script

In the following script, I have tried to accomplish just that – a single script to create the entire XML parser for me, for you, and for anybody wishing to use it. I don’t want to have to remember the subtle nuances of how to parse each of the events each time I need to parse the session data. I want something that is quick, easy, and repeatable.

With all of that said, here is the script that I now use to parse my session data. You should notice that its use has been simplified even as the script itself has become more extensive.
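
At its heart, the script reads the session’s event columns from metadata and emits one value() shred line per column. A trimmed sketch of that core idea (the full script goes much further):

DECLARE @SessionName sysname = N'AuditSelects';

SELECT 'event_data.value(''(event/data[@name="' + oc.name
    + '"]/value)[1]'', ''nvarchar(max)'') AS ' + QUOTENAME(oc.name) + ','
FROM sys.server_event_sessions es
INNER JOIN sys.server_event_session_events ese
    ON es.event_session_id = ese.event_session_id
INNER JOIN sys.dm_xe_packages p
    ON p.name = ese.package
INNER JOIN sys.dm_xe_object_columns oc
    ON oc.object_name = ese.name
    AND oc.object_package_guid = p.guid
    AND oc.column_type = 'data'
WHERE es.name = @SessionName;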

Not only does this script slice and dice for you…Wait that’s not quite right.

This script doesn’t just generate the XML shredding strings for the payload and action data. This script will also produce the rest of the tsql statements that should precede and conclude the XML parsing statements. Yes, it uses dynamic sql. That is to your advantage in this case.

Could there be other enhancements? Absolutely! More will be coming.

Enjoy this script and happy parsing.

Extra Extra – Read All About It!

From the comments, you will see that Brent Ozar (blog | twitter) made an excellent suggestion. To be honest, I considered sharing some examples during the initial write of this article. I had opted against it then, not considering it in the same way that Brent puts it. So, in this addendum I have a quick example using an XEvent Session that I have not yet written about in all of my articles on Extended Events (pseudo spoiler alert).

Let’s use the AuditSelects XEvent Session I have deployed and running (remember a session can be deployed but may be in the “stopped” state). If I want to figure out what data has been captured for this session, I can take the session name and plug it into the script shown in this article.


Ignore the green tsql comments for now (you can read them direct from the script). Highlighted in gold is the variable for the session name. By entering a valid session name for this variable, I can restrict the results to just the metadata for that specific session. In addition, note that I have highlighted, in a pale chartreuse, two variables that will give me unique column names for each of the events and actions within each session. This is important because some events use the same attribute names as other events. If events with the same attribute names are used in the same session, you will get a duplication of data. Maybe you want the duplicate data. Maybe you don’t.

The session name variable is used in each of the next three code segments. Each segment in the script will generate sql statements that will need to be copied into a new query window. I will leave the further breakdown of the script as an exercise for you. What I want to do here is show the results from executing this script.

When I execute the script, I will have four result sets returned to me (in grid mode). It will look something like shown in the following image.


Breaking this down into three colors is probably the easiest way to see how things fit together. Highlighted in red, I have the build-out of the pre-processing statements and the build of the “select” statement that will be returning our data to us. Then in blue comes the guts of the query – all of the beautiful XML parsing statements. The segments in blue correlate to the columns in the select list. And at the bottom of the image I have the finishing pieces that include my predicates, from, and joins. Each of the highlighted segments will then be copied and pasted to a new query window and look something like the following.


And there you have it. Sure there are still some manual steps in it, but you no longer need to memorize all of that glorious XML parsing syntax. You only need to copy and paste with this version of the script.

Enjoy your adventures in parsing XML!

Another Query to Help Find the Right Event

Finding the right event or combination of events to monitor may seem like quite a daunting task with so many events to explore and (frequently) too little familiarity with Extended Events. In this follow-up article to Extended Event Help Queries, I will share another means to quickly explore the Extended Events metadata in the effort to find the precise event to fit your needs.

Today I will be sharing a new query or two that I have used on more than one occasion to help track down the event(s) that I wanted to, at least, try while troubleshooting specific problems within the SQL Server Instance. I found these queries useful once again in a recent opportunity to help out some friends from the SQL Community. (Maybe, just maybe, these are really just a spin of other previously shared queries.) I will share the most recent experience in another follow-up post.

For the sake of posterity, I am also adding this to the 60 Days of Extended Events Table of Contents.

Is There an Event that Contains pertinent Data?

In my previous article I demonstrated how to find an event based solely on the name or description of the event. This is fantastic if the event name (or description) contains one of the magical words you have used. What if the event name or description has nothing to do with the terms you selected? Or, what if the data you seek may be attached to the event but wouldn’t necessarily stand out as a description for that event (by name or description details for that event)?

Now comes the more difficult task right? If the name or description of the event doesn’t relate to the search terms then you just might overlook a few events and be stuck trying to troubleshoot a problem. An equally big problem this could cause is yet another invisible barrier to using Extended Events. It would be easy to slide down the slippery slope and not transition to Extended Events just because an event, applicable to the problem at hand, could not be found.

This is where searching on the payload comes in handy. Remember from a previous article that the payload is the event data that is attached to each event. Imagine how much more accurate we can become by extending our search from the name and description of the event object to also search on the payload names and descriptions. That is exactly what I will now show.
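
A sketch of that payload search, using “select” and “statement” as the terms:

SELECT oc.object_name AS event_name, oc.name AS column_name, oc.description
FROM sys.dm_xe_object_columns oc
INNER JOIN sys.dm_xe_objects o
    ON oc.object_name = o.name
    AND oc.object_package_guid = o.package_guid
WHERE o.object_type = 'event'
    AND oc.column_type = 'data'
    AND (oc.name LIKE '%select%' OR oc.description LIKE '%select%'
        OR oc.name LIKE '%statement%' OR oc.description LIKE '%statement%');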

Basically, I am searching against the object_columns metadata dmv for Extended Events just to find any columns that match my theoretical needs. In this case, I am trying to find anything that may help me find events related to the terms “select” or “statement”. When run, there will be quite a few results for these terms. That may actually be a bit of a problem (just the same as if nothing were returned in the results). There is a fix for that (well probably several fixes as you begin to tweak the query).

One possible fix is to also include map_values to refine the search. Another fix might be to change the search terms. And still other fixes might include tweaking the predicate. For today, I am going to demonstrate the inclusion of map_values as a secondary tweak to help isolate the needed event.
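
A sketch of the version with the map_values requirement folded in:

SELECT oc.object_name AS event_name, oc.name AS column_name, mv.map_value
FROM sys.dm_xe_object_columns oc
INNER JOIN sys.dm_xe_map_values mv
    ON oc.type_name = mv.name
    AND oc.type_package_guid = mv.object_package_guid
WHERE oc.column_type = 'data'
    AND (oc.name LIKE '%statement%' OR oc.description LIKE '%statement%')
    AND mv.map_value LIKE '%select%';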

In this iteration, my result-set is trimmed significantly due to the predicate and the requirement for a match based on the map_value name. Ultimately, this trimmed my results down to precisely the events that I needed in this case. With this slightly more advanced version, I have greater visibility into the Extended Event realm to help me find just the right event to match my needs.

Are there other Events related to the newfound Event?


This is some pretty awesome stuff so far. But it doesn’t end right there. Many events are tied together to help provide a more holistic view of the problem. These events are tied together via keyword. You can read more about keywords here. By searching for events based on a keyword, we can get a lot more intelligent about the sessions we create with Extended Events.

Taking the single result from the last query and then using the SearchKeyword value, for that event, in this next query – I can potentially go from a Great DBA to a Super DBA.
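
A sketch of that keyword-driven query, including the payload columns for each matching event (the keyword value “execution” is assumed from the earlier result):

SELECT o.name AS event_name, col.name AS column_name, col.description
FROM sys.dm_xe_objects o
INNER JOIN sys.dm_xe_object_columns kw
    ON o.name = kw.object_name
    AND o.package_guid = kw.object_package_guid
INNER JOIN sys.dm_xe_map_values mv
    ON kw.type_name = mv.name
    AND kw.type_package_guid = mv.object_package_guid
    AND mv.map_key = CAST(kw.column_value AS INT)
INNER JOIN sys.dm_xe_object_columns col
    ON o.name = col.object_name
    AND o.package_guid = col.object_package_guid
    AND col.column_type = 'data'
WHERE o.object_type = 'event'
    AND kw.name = 'KEYWORD'
    AND mv.map_value = 'execution'
ORDER BY o.name, col.name;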

I intentionally included the columns from each of the matching events. Doing this, I can also see what potential payload may be trapped when I attempt to trap any of the events from this result set.


There is great power in Extended Events. There is more power in being able to query the metadata for Extended Events in order to trap the appropriate information to properly troubleshoot a problem.

Have fun with these scripts, and I hope you enjoy them.

Extended Event Help Queries

Enjoy this article re-publication from my original work at SQL Solutions Group.

When working with Extended Events, there are times when a little more information is, well, helpful.  You know you want to use extended events to try and monitor for a specific thing to happen.  But sometimes you don’t know if there is an event for that “thing”, or maybe you don’t know if there is a session already in place to do that specific task (or if it is even running), or maybe you just need better information about the event to see what kind of data it captures when it is fired.

Sometimes, this information is easy enough to find.  Sometimes, it is less than evident how to find the wanted information.  Some of the information may be visible through the GUI (if you are using SQL Server 2012 or later), and sometimes it is just flat out easier to run a quick script.  Today, I would like to share a few quick scripts to help find some of this information that can make the life of the DBA just a bit easier.

Does an Event Exist?

First let’s tackle the problem of discovery.  When we want to use extended events to try and troubleshoot a problem or to capture more information, it is really good to know if such an event exists.  There are many events that capture data for various different things within SQL Server.  More and more events are being added with each release.  More and more data is being made available to the DBA to help perform a better job and to help the DBA better understand what is really happening within the database environment.

In order to determine if there might be an event that can provide the data for that one “thing” that may be happening within your environment, we could start by querying the SQL Server internals.  This next query will do just that for us.
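
A sketch of that discovery query:

SELECT o.name AS event_name, o.description
FROM sys.dm_xe_objects o
WHERE o.object_type = 'event'
    AND (o.name LIKE '%growth%' OR o.name LIKE '%file_size%');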

This will query the DMVs related to Extended Events and provide us with the event names that contain the terms used in the filter / predicate.  In this case, I am looking for any events that contain the terms “growth” or “file_size”.  This is in an effort to try and find anything that might be related to database files changing sizes.  You may have seen one of the previous articles on tracking file size changes here or here.

The use of the right term to try and trap the right data for the problem that is happening can greatly decrease the time required to find the correct event.  There may be a bit of an art involved in finding the correct search term.  A little practice can help improve the ability to find the appropriate event more quickly.

Is the Data any Good?

Knowing if an Extended Event exists for the desired problem is one thing.  Knowing what data can be parsed from that event is another thing.  Many times, we may not know what kind of data can be trapped with each of the events.  Many times, we may just not be trapping enough data.  And of course, knowing what kind of data can be trapped by the event may help us in filtering that data down to what is usable for efficient troubleshooting.

Finding that data is just a matter of another simple query against the DMVs.  Let’s take a look at the query that can help us find the data we need:
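
A sketch of that query (joining out to the packages for completeness):

SELECT p.name AS package_name, o.name AS event_name,
    oc.name AS column_name, oc.type_name, oc.description
FROM sys.dm_xe_packages p
INNER JOIN sys.dm_xe_objects o
    ON p.guid = o.package_guid
INNER JOIN sys.dm_xe_object_columns oc
    ON o.name = oc.object_name
    AND o.package_guid = oc.object_package_guid
WHERE o.object_type = 'event'
    AND o.name = 'trace_flag_changed'
    AND oc.column_type = 'data';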

In this case, I have taken the name of the event about which I want to learn more details.  In addition, I only want to retrieve the data columns that retrieve data that could change.  As you can see, I am investigating “trace_flag_changed”.  You may recall that particular event from a previous article and you can read more about it here.

You may also be looking at the query and thinking it seems a little overly complex for what we want.  Well, it is.  Here is an alternate version:
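
Something along these lines suffices:

SELECT oc.name AS column_name, oc.type_name, oc.description
FROM sys.dm_xe_object_columns oc
WHERE oc.object_name = 'trace_flag_changed'
    AND oc.column_type = 'data';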

The discovery of the columns available in each event will give us some options to filter as well as some options in what kind of data we parse from the resulting event.

Did I Already Create that Session?

This is arguably less critical than the others.  After all, some will just drop and recreate the session.  But why drop the session if you don’t need to?  On the other hand, a quick glimpse in the GUI could also display that information.  Sadly that doesn’t work for all versions of SQL Server (e.g. SQL Server 2008).  Maybe you have the session running, or maybe it is simply created and not in the running state.

Finding these sessions and whether they are running or not is rather simple.  Just run this next query and it will help you figure out if the session is there or not.  All you need to do is plug in the name of the session and away you go.
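
A sketch of that check (plug your session name into @SessionName):

DECLARE @SessionName sysname = N'system_health';

SELECT es.name AS session_name,
    CASE WHEN xs.name IS NULL THEN 'Stopped' ELSE 'Running' END AS session_state
FROM sys.server_event_sessions es
LEFT OUTER JOIN sys.dm_xe_sessions xs
    ON es.name = xs.name
WHERE es.name = @SessionName;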

This query has been confirmed to work in SQL 2008 and up through SQL 2014.  It should simplify some of the attempts to better understand what is happening with any sessions that might be out there.

These are a few quick and simple queries to help you along your way to becoming better acquainted with Extended Events.  I hope these will be useful to you and that you will get to know and use extended events regularly.

SQL Server Desired Enhancements

Happy Belated Birthday

The monthly Data Professionals blog party has come and gone. It happens the second Tuesday of every month – or at least is supposed to happen on that day. This month, the formidable Chris Yates (blog | twitter) has invited everybody to a birthday party – of sorts. As with many birthdays, there is always somebody that wishes you a happy belated birthday. For this party, it is my turn to offer up that belated birthday. It just so happens there was some coordination between Chris and myself for this belated birthday.

Read all about the invite from Chris’ blog. If you missed the link, here it is again – right here.

Plastic Surgery – Desired Enhancements

SQL Server 2016 has come with a ton of cool features, bells, whistles and well cool stuff (yes redundant). That aside, what are some of the really cool features that I would love to see in SQL Server? Let’s run through them (And yes, I will be a bit greedy. It is standard operating procedure when asking for gifts, right?).

Gift #1

I need some way of being able to reproduce a production database cleanly and efficiently in a different environment. Sure, I can script everything and develop an elaborate process to ensure I got an exact duplicate of the stats, stat steps, stats histogram, schema, procedures, indexes, etc etc etc. Being able to do all of that cleanly and efficiently is the key. This is a pretty big want from clients and could be extremely useful.

Microsoft has heard the pleas. Introduced with SP2 for SQL Server 2014 there is a new DBCC statement to do exactly that – DBCC CloneDatabase. Check out all the details here.
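
The syntax is pleasantly simple; for example (assuming AdventureWorks2014):

/* schema and statistics only - no data */
DBCC CLONEDATABASE (AdventureWorks2014, AdventureWorks2014_Clone);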

Gift #1, let’s check that off the list.

Gift #2

Instant File Initialization is fantastic and a huge time saver. Unfortunately this only applies to the data files. We need something like this implemented for the transaction log. Currently the transaction log must be “zeroed” or 0-stamped when new space is allocated. This mechanism can delay transactions and impact performance if there happens to be a required file growth or even when trying to manually grow the file or even restore the database.

Believe it or not, Microsoft has addressed this request as well. Microsoft has changed how the transaction log is stamped for a significant performance improvement. This is a part of SQL Server 2016. Bob Dorr explains it very well in his blog post on the topic. You can read his blog post here.

Wow, two for two. We can check gift #2 off the list.

Gift #3

Availability Groups seems to get bogged down under heavy load. The redo and log send seem to get backed up and can have a significant impact on production operations. We need the log transport to be faster.  No, check that. Not just faster it needs to be 2-3x faster.

SQL Server 2016 comes to the rescue again. Amped up on SQL Steroids, Availability Groups has seen a significant improvement in log transport speeds to the secondaries. Some report it as at least twice as fast. The bottleneck has been moved out of the SQL Engine and it has really amped things up from a performance perspective. Here is a supporting article by Jimmy May on the topic – though it doesn’t go deep into the specifics.

Mark another one off the gift registry. Think we can maintain this pace?

Gift #4

Statistics seem to become stale for smaller tables which dramatically affects performance of certain queries. These tables will not see 20% of the rows updated in the leading edge any time before the turn of the year but they would likely change within the six months following the turn of the new year. We need to be able to force these stats to auto-update more regularly without extra intervention.

Fair enough, we already have a trace flag that can help with that (TF 2371). Maybe the environment or management is resistant to having trace flags implemented for something such as this. You never know what the political red tape may dictate.
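
For reference, enabling that flag globally yourself is a one-liner:

DBCC TRACEON (2371, -1);   /* -1 applies the flag globally */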

SQL Server 2016 to the rescue again!!! SQL Server 2016 has this trace flag enabled automatically. You don’t need to do anything extra special. What this means is that those stats on the smaller tables may actually get updated without intervention despite the smaller volume of change to the rows in the table.

That is four for four. Should we take this birthday party to Vegas? Don’t assume I have stacked the deck either! ;0)

Gift #5

I am getting very frustrated with the constant clearing of usage stats every time I rebuild an index. Just because I rebuild the index, it does not mean that I no longer need the usage stats from prior to the index rebuild. I need to be able to see the usage stats for the time spanning before and after the rebuild without creating a custom process to capture that information. Sure, it may not be an insanely difficult task to perform, but it is an extra process I have to build out. It’s the principle of the matter.

SQL Server 2016 to the rescue yet again. This age old bug of usage stats being cleared is finally fixed. It is frustrating to say the least to have to deal with this kind of bug. It is a huge relief to have it fixed and be able to get a consistent clear picture of the usage information since the server has been up.

For more information, you could read this article by Kendra Little – here.

Cha-ching. We are now five for five.

Gift #6

Digging a little deeper on this one. I would really love to see an enhancement to Resource Governor. Not just any enhancement will do. I need it to be enhanced so it will also affect the reporting services engine and the integration services engine in addition to the database engine. I want to be able to use RG to prevent certain reports from over consuming resources within the SSRS engine. Or for that matter, I want to make sure certain SSIS packages do not consume too much memory. If I can implement constraints on resources for these two engines it would be a huge improvement.

We will have to wait for a while on this one. It is currently not scheduled for delivery.

Gift #7

This one is going to be a little tougher. It’s not in place. It would be a fantastic gift in my opinion. I would like some tool such as Extended Events to be able to monitor the workload and determine the best recommended trace flags to implement. There are many trace flags that are far from well known but could be extremely helpful to production environments based on the workload and internal workflow. Not all trace flags are built for all environments. An analysis through some automated tool for best recommended flags to implement (again solely at your discretion) would be fantastic.

Gift #8

Get Profiler out of Management Studio finally. Enough said there. There really is no good solid reason in my opinion to keep it around. It is deprecated. It is hardly helpful with 2014 or 2016 and it is just dead weight. Extended Events really is the better way to go here.

Last Request


Can we please fix the spelling of JSON? It really needs to be spelled correctly. That spelling is: JASON.

Awesome SQL Server Feature

The second Tuesday of April 2016 is now upon us and you know what that means. Well, I hope you know what that means.

It is time for TSQL Tuesday. It is now the 77th edition of this monthly blog party. This month the host is Jens Vestergaard (blog | twitter) and he insists we do a little soul searching to figure out what it is about SQL Server that really makes our hearts go pitter patter. Ok, so he didn’t really put it that way but you get the point, right? What is it about SQL Server that ROCKS in your opinion?

Well, I think there are a lot of really cool features in SQL Server that ROCK! It really is hard to pick just one feature because there are a lot of really good features that can make life so much easier as a database professional. Then again, there is that topic that bubbles to the top in my articles – a lot. If you haven’t followed my blog, here is a quick clue: click here.

Why is this feature so AWESOME?

Truth be told, there are a ton of reasons why I really like it. Before diving into the why, I need to share an experience.

A client using Microsoft Dynamics AX to manage the Point of Sale (POS) systems for their retail chain has been running into a problem with the POS database at each store. Approximately a year ago, this client had upgraded most of the store databases from SQL Server Express Edition to Standard Edition due to the size restriction of the Express Edition. This SKU upgrade was necessary because the database had grown to exceed 10GB. Most of this growth was explicitly related to the INVENTDIM table consuming 3.5GB of space in the data file.

Right here, you may be asking what the big deal is. Just upgrade the SKU to Standard Edition and don’t worry about the size of the database. I mean, that is an easy fix, right? Sure, that may be perfectly acceptable in an environment with one or maybe even a handful of servers. Imagine a retail chain with more than 120 stores and a database at each store. Now extrapolate Standard Edition licensing costs for all of those stores. Suddenly we are talking a pretty big expense just to upgrade. All of that just because one table chews up 35% of the size limitation of a data file in SQL Server Express Edition.

What if there was an alternative with SQL Express to mitigate that cost and maintain the POS functionality? Enter the SYNONYM! You may recall from a previous post a thing or two that I have said about synonyms in SQL Server. There is good and bad to be had with this feature and most of the bad comes from implementation and not the feature itself.

Using a synonym, I can extend this database beyond the 10GB limitation – or at least that is the proposed theory. To make this work properly, the plan was to create a new database, copy the INVENTDIM table from the POS database to this new database, rename the old INVENTDIM table in the POS database, create a synonym referencing the new table in the new database, and then select from the table to confirm functionality. Sounds easy right? Here is the script that basically goes with that set of steps.
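
Here is a hedged sketch of that script (the POS database name POSDB is a placeholder; AxRetailDIM is the new database):

USE master;
GO
CREATE DATABASE AxRetailDIM;
GO
/* copy the table into the new database */
SELECT *
INTO AxRetailDIM.dbo.INVENTDIM
FROM POSDB.dbo.INVENTDIM;
GO
USE POSDB;
GO
/* keep the original in place, just out of the way */
EXEC sp_rename 'dbo.INVENTDIM', 'INVENTDIM_OLD';
GO
/* the synonym points the old name at the new database */
CREATE SYNONYM dbo.INVENTDIM FOR AxRetailDIM.dbo.INVENTDIM;
GO
SELECT TOP (10) * FROM dbo.INVENTDIM;   /* confirm the synonym resolves */
GO
/* the additional row-trimming step mentioned below is omitted from this sketch */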

This seems to make a fair amount of sense. Querying the INVENTDIM synonym produces results just as would be expected. Notice that there is one additional step in the script which I did not mention. That step removes unnecessary rows from the INVENTDIM table based on an actual inventory item or barcode for the particular dimension variant related to the item. This helps to trim the table to specific rows related to the retail store available for purchase there. In addition, it serves as a failsafe to get the data down to less than 10GB in case of failure with the synonym.

Testing from within SQL Server proved very optimistic. The synonym was working exactly as desired. Next up was to test the change by performing various transactions through the POS.

The solution not only failed, it failed consistently and dramatically. It didn’t even come close. How is this possible? What is Dynamics AX doing that could possibly subvert the synonym implementation? Time to start troubleshooting.

I checked through the logs. Nothing to be found. I checked and validated permissions. No Dice! I checked the ownership chaining. Still no dice! What in the world is causing this failure?

What if I switch to use a view instead of a synonym? I created a view with cross database ownership chains intact. Test the application again and still failed. What if I use the synonym pointed to a table in the same database? Test from the application and all of a sudden we have success. Now the head-scratching gets a little more intense.

It is time to get serious. What exactly is the Dynamics AX POS application doing that is leading to failure that does not happen when we query directly from within Management Studio? The means to get serious is to now implement that awesome tool I alluded to previously – Extended Events (XE or XEvents).

With no clues being available from any of the usual sources (including application error messages), XE or profiler is about the only thing left to try and capture the root cause of this failure. Since this happens to be a SQL Server 2014 implementation (yeah I omitted that fact), the only real option in my opinion was to use XE. Truth be told, even on SQL Server 2008 R2, my go to tool is XE. In this case, here is what I configured to try and catch the problem:
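
A hedged sketch of that session (the session name and target path are assumptions; see the note at the end of the article about 2008 vs 2012 differences):

CREATE EVENT SESSION TrapErrors ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.username, sqlserver.client_app_name,
        sqlserver.database_name, sqlserver.sql_text)
    WHERE severity >= 16)   /* only errors likely to break the app */
ADD TARGET package0.event_file (
    SET filename = N'C:\Database\XE\TrapErrors.xel')
/* Note: on SQL Server 2008/R2 the file target is named
   package0.asynchronous_file_target instead of package0.event_file */
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION TrapErrors ON SERVER STATE = START;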

With the session running, I had the POS tests begin again. Bang! It failed again, but I expected it and wanted it to fail again. This time around, finding the problem turned out to be really easy. As soon as the error hit, I was able to check the trapped events and see what it was that had been missing and ultimately causing this string of failures.


Using the GUI (yeah rare occasion for me with XE), I filtered the events down for display purposes only to make it easier to see what was found by running these tests that was pertinent to the problem. Here is the highlighted text a little larger and easier to see:

Snapshot isolation transaction failed accessing database ‘AxRetailDIM’ because snapshot isolation is not allowed in this database. Use ALTER DATABASE to allow snapshot isolation.

Wow! Light bulb shines bright and the clue finally clicks. The POS databases for this client are all set to allow snapshot isolation. Since this error is coming at the time when the failure occurs in the application, it stands to reason that this is the root cause. Time to test by changing the snapshot isolation setting.

That is a quick change and easy enough to test again. With the XE Session still running, and the change in effect, it was time to test via the POS application again. This time, the application worked. This is good news! Time to test again and again and again to make sure it wasn’t a fluke that it worked and that it was not only going to work just the once.

Not a single failure after the change to allow snapshot isolation. One small change with such a big impact and so few clues to be found except in that super Awesome Super Hero feature of SQL Server called Extended Events!

Being able to quickly find the root cause of so much pain is why I enjoy working with the Extended Events feature. It is an efficient way to find a ton of information while causing little overhead to the server.

The bonus here is that XE allowed us to pinpoint a problem with the proposed solution to help save costs while extending a database beyond the 10GB limitation of SQL Express.

Note: I left some notes in the XE session script. These notes help to point out differences between implementing this particular session on SQL Server 2012 (or later) and SQL Server 2008 (or R2).
