Tricks and Treats with XE

Halloween is a great time of year. It is unfortunate that it is just one day of the year. That said, I do like to think of the month of October as Halloween Month. I have written several posts over the years that geek out over the cross-over between Halloween and SQL Server.

With the proximity to Halloween, it’s not hard (knowing me) to figure out why I originally decided to terminate this series on the Eve of Halloween day. Then again, as it turns out, I decided to bring the series back to life on the other side of Halloween. In short, it was to be killed on Halloween Eve and then implicitly converted to some living dead creature. You pick whether it is a zombie (which is far better suited to SQL Server) or a vampire.

If you are interested in the previous Halloween posts, here is a list of a few of them:

XE Related

Ghosts in your Database I

Ghosts in your Database II

Ghosts – an eXtrasensory Experience

DB and Fun Related

All Halloween posts

That list is my Halloween treat this year. Now for the trick with a very strong warning. Because of this warning, I am not posting any code showing how to perform the trick.



I have debated seriously over even publishing this “trick” for Halloween because of how deliciously evil it is. I will try and paint the picture in broad strokes. I will leave it up to you to connect the dots.


A third party vendor has provided an application along with some database requirements. Among these requirements is that the application account must use the ‘sa’ login. You have a strict policy that ‘sa’ must never be used for any applications or by the end-users. This is an extremely protected account by policy. The dilemma you have been presented is that the CEO insists that this application must be used (never happens right?) and the vendor insists the application will not function properly without the use of ‘sa’ (I can hear you chortle at that).

Now you, as the DBA, are stuck between a rock and an even harder place. Being smart (and somewhat sadistic), you insist that the ‘sa’ login should not be used because it will break SQL Server (in reality we know the login does not break SQL Server, but something inevitably will break due to a misstep by the person using it a) when they shouldn’t, and b) because they lack proper training). Inside you are grinning from ear to ear because you have some devilish ideas, some of which you learned by attending a session by Rob Volk (twitter) where he shows some devilish methods to deter the use of ‘sa’.

For the sake of the scenario, let’s just say you have a policy preventing the implementation of logon triggers (as suggested by Rob) due to a couple of mishaps a few months back. Somebody implemented a logon trigger that wasn’t properly configured, and it happened to prevent all users from accessing the server (including the DBAs). Ruh roh!

And then…

Later in the week, after reading about the power of Extended Events, you decide to play around and do a little discovery. You come across this blog post that shows you how to find all of the possible events within the XEvent Engine. So you run the script that you found, shown here:
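The referenced script is not reproduced in this copy of the post, but a discovery query along these lines lists every event registered in the XEvent engine (a sketch; the script in the referenced post may differ slightly):

```sql
-- Enumerate all events registered in the XEvent engine, with descriptions
SELECT p.name AS package_name,
       o.name AS event_name,
       o.description
FROM sys.dm_xe_objects AS o
    INNER JOIN sys.dm_xe_packages AS p
        ON o.package_guid = p.guid
WHERE o.object_type = 'event'
ORDER BY p.name, o.name;
```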

And while looking through the events and descriptions you discover that there is an event to log all of the successful logins. Not thinking anything of the third party vendor (because it just makes your blood boil) you begin to dream of the potential for auditing all successful logins (established connections) for all users and documenting who may be doing what and when.

After taking a few notes about the potential for the login event and getting access and logins under better control, you continue along your journey through Extended Events by reading this article about Actions. Feeling comfortable enough from the article, you do what any good data professional, trying to learn a new topic, would do – you start exploring using the provided query:
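The provided query from that article is not shown here, but exploring the available actions follows the same discovery pattern, simply swapping the object_type filter (a sketch, not the article's exact script):

```sql
-- Enumerate all available actions in the XEvent engine
SELECT p.name AS package_name,
       o.name AS action_name,
       o.description
FROM sys.dm_xe_objects AS o
    INNER JOIN sys.dm_xe_packages AS p
        ON o.package_guid = p.guid
WHERE o.object_type = 'action'
ORDER BY p.name, o.name;
```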

While looking through the available actions, you see one in particular that causes you to raise an eyebrow (not both, just one). There is an action called “debug_break”. You squirm and ponder for quite a bit at the name and definition of this particular object. You wonder out loud “why would anybody ever want that in there?”

Your co-worker interrupts with “Do what?”

To which you reply “Oh nothing important. It was just something I read.” After which you dismiss it, realize the time of day, pack up and head home for the evening. Hours later, after the work day has long since been finished, you awaken in a cold sweat with an “Aha!” that startles your slumbering neighbor’s dog. Immediately you pull out your laptop, connect to your local sandbox instance and get to work with a wry smile and devious giggle.

Upon returning to work the next day, you call the vendor and work out a solution to build them a sandbox server to provide a proof of concept. You grant them the use of sa to use for their application login. Sitting down together and connected to the new sandbox, the vendor attempts to login and the following happens:


Slack-jawed and speechless the vendor pales in complexion. Deep down inside you giggle like an elementary grade school girl – content with yourself. BATTLE WON!

After the vendor departs with a promise to rework the security requirements, you restart the service on the sandbox and go about the rest of your work week with a silly grin from ear to ear and nobody the wiser.

That concludes the trick. In the tradition of Halloween, you knocked on my virtual door of Extended Events and I, in accordance with the traditions of Halloween, gave you a trick and a treat.

I hope this helps you to understand the power of Extended Events. Thinking through a problem and reaching a solution is what XEvents can help to provide. I hope this Halloween XE article was enjoyable to you. Stay tuned as the XE 60 Day series goes full Zombie starting next week.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Default Sessions

So I have blasted through a lot of technical articles about Extended Events. There is a lot of groundwork that has been covered, with layers of building blocks that have been stacked. Now it’s time to take a step back as I near the end of the original goal for this series.

I say original goal, but that really only means the goal pertaining to the length of the series. I had a goal for content that has yet to be reached. Due to that, I have pushed the series into overtime. So, really, that means this is merely a step back to a lighter topic today as I jump into half-time (or something like that) of the series.

Default Sessions

In the previous article I mentioned some of the default sessions for Azure SQL Database. I will not be going into any further detail about those sessions at this time. Discussion on those default sessions will have to be reserved for a later date.

The focus of this article will be to introduce the default sessions deployed with SQL Server. Some of these are internal and “private” while others are more “public” and like a user defined session.


system_health session

The system_health default session is the most common of all of the default sessions. This session is akin to a true black box recorder in SQL Server. There is a ton of information that can be trapped via this event session. One caveat to this session is that if you are looking at the session in SQL Server 2008/R2, the session really should be modified to include a file target.

sp_server_diagnostics session

The name of this session almost gives it away. This is an internal private session that executes a stored procedure by the same name: sp_server_diagnostics. Since it is a private internal session, I do not have all of the details on the internals. But since it is related to the sp_server_diagnostics stored procedure, it isn’t hard to make the leap at what it does. In addition, there are some events with a similar name which helps to better understand what is happening here.

The sp_server_diagnostics stored procedure captures diagnostic and health information about SQL Server to help alert to potential failures. This procedure runs continuously in repeat mode. Here are some of the results from my laptop for this procedure:
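Those results can be reproduced by executing the procedure directly. With no interval specified it runs a single pass rather than in repeat mode:

```sql
-- Capture a single snapshot of server diagnostic data
-- (the default @repeat_interval of 0 means run once and return)
EXEC sys.sp_server_diagnostics;
```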


Notice there are five different domains of interest in this result set. If I look at the events by the same namesake, I will see the following possible data points:
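Finding the events by the same namesake uses the standard event-discovery query, filtered on the naming convention (a sketch):

```sql
-- Find events sharing the sp_server_diagnostics naming convention
SELECT p.name AS package_name,
       o.name AS event_name,
       o.description
FROM sys.dm_xe_objects AS o
    INNER JOIN sys.dm_xe_packages AS p
        ON o.package_guid = p.guid
WHERE o.object_type = 'event'
      AND o.name LIKE 'sp_server_diagnostics%';
```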


Wait, there’s more! Diving in deeper into the event, I can see the following possible component_names / domains for the maps in this event:
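Those component_names / domains come from the maps. A query along these lines exposes them (the LIKE patterns here are assumptions; adjust them to match the map names on your instance):

```sql
-- Inspect the map values tied to the sp_server_diagnostics events
SELECT mv.name AS map_name,
       mv.map_key,
       mv.map_value
FROM sys.dm_xe_map_values AS mv
WHERE mv.name LIKE '%component%'
      OR mv.name LIKE 'sp_server_diagnostics%'
ORDER BY mv.name, mv.map_key;
```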


In this last result set, there are multiple data points that map back directly to the stored procedure. In addition to the direct mapping, this result demonstrates the additional domains of “AGS” and “LAST”. I don’t have any availability groups created on this machine, so no stats would be recorded for that. I don’t know what “LAST” represents. Then the sp_server_diagnostics_state maps directly to the “state” seen in the stored procedure.

Having this same information divulged via Extended Events doesn’t end here. The system_health session collects the sp_server_diagnostics_component_result event by default. This all ties back to the “sp_server_diagnostics session”, which appears to be the manifestation of the “repeat mode” of the sp_server_diagnostics stored procedure. The proc is continuously running and gathering data for the event to trap the information and subsequently send it along to the event session targets.


hkenginexesession

The internals of this session are the hardest to track down. This is an internal private session. The name of the session does reveal a fair amount of information about the session that can be used to make decent educated guesses at what is being done here. First, this session is used for monitoring In-Memory OLTP / hekaton / hk (for short) tables and natively compiled stored procedures.

Taking the knowledge that this is used to monitor the “hekaton” engine, one can also deduce that the possible list of events comes from the following (probably others involved too):


A more complete list of events that seem to fit for this session can be retrieved via the following query:
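The query itself is not preserved in this copy, but filtering the event discovery query on the XTP package naming convention yields a plausible list (the package-name filter is an assumption based on that convention):

```sql
-- Events from the In-Memory OLTP (XTP / "hekaton") packages
SELECT p.name AS package_name,
       o.name AS event_name,
       o.description
FROM sys.dm_xe_objects AS o
    INNER JOIN sys.dm_xe_packages AS p
        ON o.package_guid = p.guid
WHERE o.object_type = 'event'
      AND p.name LIKE 'Xtp%'
ORDER BY p.name, o.name;
```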

The final piece of interesting information is that this session appears to be tied to the following dll: hkengine.dll. This is a dll that is loaded in SQL Server and it does bear the same naming convention. Here is another query that will show if this dll is loaded:
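A check along these lines against the loaded-modules DMV will confirm whether the dll is present (a sketch):

```sql
-- Check whether the hkengine dll is loaded in this instance
SELECT name, description
FROM sys.dm_os_loaded_modules
WHERE name LIKE '%hkengine%';
```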

This session is only available as of SQL Server 2014.


AlwaysOn_health session

The AlwaysOn_health session is very much like the system_health default session. This is a default session that can be scripted from within SQL Server Management Studio. Think of this session as a black box recorder for your Availability Groups. This session will capture events related to the state of the AG replica, state changes, any errors that occur, and failovers that occur.

To get a grasp of what this particular session does (try and figure out a query to accomplish this before proceeding), one can run the following query:
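One possible query (built on the session-metadata catalog views covered earlier in the series) looks like this:

```sql
-- List the events deployed in the AlwaysOn_health session
SELECT es.name AS session_name,
       sse.package AS package_name,
       sse.name AS event_name
FROM sys.server_event_sessions AS es
    INNER JOIN sys.server_event_session_events AS sse
        ON es.event_session_id = sse.event_session_id
WHERE es.name = 'AlwaysOn_health';
```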

And the results would be:


Bonus points if you recognized this could be done based on the foundations pieces laid out in previous articles.

This session became available as of SQL Server 2012.

The default sessions in SQL Server cover a lot of points of interest when looking to monitor a database environment. Between them, these sessions make for quite a significant black box recorder for SQL Server. The quantity of events these sessions already monitor doesn’t mean that you can’t add more events of your own. That is the power of Extended Events. It is also important to remember that these sessions are default sessions. What do we like to do with most defaults? That’s right, we tweak them to be more specific to our own environments.

I hope you enjoyed the article, even without the dive into the default sessions of Azure SQL Database. Stay tuned because those sessions may be covered in some future article.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Azure SQL Database

So far I have strictly focused on Extended Events for SQL Server installations. This means I have been looking at Extended Events from the “physical” or on-premises perspective. Installing SQL Server into an Azure VM falls into this category as well.

What hasn’t received any TLC is Extended Events from the Azure SQL Database perspective. You may be wondering why it matters, since I just said that Extended Events is the same in SQL Server in an Azure VM as it is on-premises. That is Azure and that is the cloud, right?

Well, not entirely. Azure SQL Database works quite a bit differently. Azure SQL Database introduces some things into Extended Events that basically turn your XEvent world upside down. Sort of. If you understand that Azure SQL Database is a contained database (in concept), then some of this world flipping I mentioned is merely a tilt and not terribly earth shattering. That said, it does require at least a little bit of attention. Today, I will merely introduce some of the differences and then leave the internals for a later time.

Azure SQL Database


Azure SQL Database uses some of the same dynamic management views (DMVs) and catalog views as SQL Server. I covered those DMVs and catalog views in previous articles here and here – respectively. For the most part, though, Azure SQL Database uses a different set of views. Some of the views of interest for querying metadata are listed in the following tables:

Running Sessions

DMV Description
sys.dm_xe_database_session_event_actions Info about actions within active event sessions
sys.dm_xe_database_session_events Info about events within active event sessions
sys.dm_xe_database_session_object_columns Info about event payload for events in an event session
sys.dm_xe_database_session_targets Info about targets within active event sessions
sys.dm_xe_database_sessions Returns a row for each active database scoped event session.

Looking at the names in the list, they should be pretty familiar at this point. The real difference is that these are scoped to the database, whereas the SQL Server equivalents are scoped to the server. This is an essential distinction due to the way Azure SQL Database works.

Deployed Sessions

Catalog View Description
sys.database_event_session_actions Info about actions within deployed event sessions
sys.database_event_session_events Info about events within deployed event sessions
sys.database_event_session_fields Returns a row for each customize-able column that was explicitly set on events and targets.
sys.database_event_session_targets Info about targets within deployed event sessions
sys.database_event_sessions Returns a row for each event session in the SQL Database database.

These particular catalog views should also seem somewhat familiar. The major difference in name is that they are scoped to the database instead of the server, just like the DMVs.

XE Internals Metadata

sys.dm_xe_map_values Returns mappings of internal keys to text
sys.dm_xe_object_columns Metadata for object data definitions
sys.dm_xe_objects Metadata for different available objects within XE Engine
sys.dm_xe_packages Listing of packages registered within XE Engine

And lastly, these are all the same between SQL Server and Azure SQL Database. If you have been following along with the series, these views really should come as no big surprise and there should be a base familiarity with them. That familiarity is part of the reasoning for skipping a deep dive at the moment.


Azure SQL Database still provides the ability to store event data in multiple types of targets, though not all of the standard public targets are available for user defined sessions. One can currently use the ring_buffer, event_counter, and event_file targets. This provides two memory targets and one file target. The file target does not come without a cost, though. Using the file target requires access to an Azure Storage container and will take some extra effort to properly configure.

Default Sessions

Azure SQL Database also comes with a set of default sessions automatically configured. Just as in SQL Server, not all of them are running, but they are deployed. Even though several sessions are deployed by default, there is a limitation to how many can be running at a time (subject to change). The default sessions are (subject to change):

  • azure_xe_activity_tracking
  • azure_xe_errors_warnings
  • azure_xe_object_ddl
  • azure_xe_post_query_detail
  • azure_xe_pre_query_detail
  • azure_xe_query
  • azure_xe_query_batch
  • azure_xe_query_detail
  • azure_xe_query_execution
  • azure_xe_query_waits
  • azure_xe_query_waits_detail
  • azure_xe_waits


Despite these differences, there is a fair amount of consistency. Discovery of objects and metadata still remains the same. Being able to query the metadata for running or deployed sessions is also in line with the previous articles I have written in this series. In short, despite the slim differences (subject to change) and the platform differences between SQL Server and Azure SQL Database, there are still consistencies and common building block practices that I have shown still apply.


Even though there is a great amount of consistency throughout the extended events engine and between these two disparate platforms (SQL Server and Azure SQL Database), there is one glaring difference that must be discussed. I showed that the catalog views and DMVs are scoped to the database instead of server. Guess what that means for creating the actual session? You got it! An Extended Event Session is scoped/created at the database level rather than the server level. This is done easily as follows:
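The original statement is not preserved here; a minimal sketch of a database scoped session looks like the following (the session and event names are illustrative):

```sql
-- Note the ON DATABASE scope instead of ON SERVER
CREATE EVENT SESSION azure_demo_session
ON DATABASE
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.ring_buffer;
GO

-- Starting the session is also scoped to the database
ALTER EVENT SESSION azure_demo_session
ON DATABASE
STATE = START;
```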

As I have said, I am not going to go deep into the details and inner working of Extended Events in Azure SQL Database at the moment. I will save those deeper dives for a later time. What I have done at this point is to introduce the basic differences and introduce the realm of XE in Azure SQL Database to you. Don’t be afraid to use Azure SQL Database based off a previous limitation around Extended Events. Things are changing in the cloud and they are changing at a rapid rate.


I hope you enjoyed the article.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Exposure to Internals

For me, some of the most fun stuff with SQL Server is diving deep into the internals. Diving into the internals is not always an easy adventure. Sometimes it requires a lot of trial and error. There are times when a dive into the internals can get me “into the weeds” or “down a rabbit hole” on a long, windy path with no end in sight. Granted, the end usually comes, but you get the idea.

That said, today I am not taking you on one of those journeys. The premise of the article today is to get a little exposure to some of the internals that may cause a bit of curiosity for you. I will share a means to expose some of the settings/stages/steps of various internal processes. These are the sorts of things you may have heard about, but without Extended Events, you might have to dig harder and further to find them.

Exposure to Internals

There are numerous events within the Extended Event engine that expose internals and internal operations. I will not be exploring the internals via actual events. That would be too easy! I want to share the internals available to you not via events, but rather other objects from the engine. Today, all of the internals will be exposed via the “custom” data types. More in-line with the official terminology is that these will be exposed via “Maps”.

The beauty of the maps is that these are free-floating maps. None of them are currently tied to an event. So trying to find an event that will help track these internals is just not going to happen. /* Disclaimer: This is only relevant to current versions. This is something that could change in future versions. */

Knowing those pieces of information, the dive into the internals becomes far easier now. I know (based on prior articles in the series) that maps are exposed via sys.dm_xe_map_values. Since they are not tied to events, I also know that there are currently no fields using the maps within sys.dm_xe_object_columns. Now I want to look at all the possible internals that are tied to maps. This can be done via the following query:
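The query shown in the original is not preserved in this copy; combining those two facts, a query along these lines finds the maps not tied to any event column (a sketch):

```sql
-- Maps that are not currently tied to any event column
SELECT p.name AS package_name,
       o.name AS map_name,
       o.description
FROM sys.dm_xe_objects AS o
    INNER JOIN sys.dm_xe_packages AS p
        ON o.package_guid = p.guid
WHERE o.object_type = 'map'
      AND NOT EXISTS (SELECT 1
                      FROM sys.dm_xe_object_columns AS oc
                      WHERE oc.type_name = o.name
                            AND oc.type_package_guid = o.package_guid)
ORDER BY p.name, o.name;
```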

This will produce the following sample of results:


That is just a sample of the internals maps available through the XEvent engine. That is pretty cool. Looking closer at the results, I have highlighted (in green) some really interesting and cool maps. Wow! I can look at the different components of the query_optimizer_tree_id. That seems like a worthwhile look! Let’s look closer at query_optimizer_tree_id now.

I added a few more pieces of information to this query so I can get a bigger picture of the query_optimizer_tree_id map. I want to know the over-arching map name, the package name, the source dll, and I want to know what this map does. The results are as follows:
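A query along these lines produces that bigger picture, joining through the loaded modules to reveal the source dll (a sketch; the original query may differ):

```sql
-- Map values for query_optimizer_tree_id, plus package and source dll
SELECT o.name AS map_name,
       p.name AS package_name,
       m.name AS dll_name,
       o.description,
       mv.map_key,
       mv.map_value
FROM sys.dm_xe_map_values AS mv
    INNER JOIN sys.dm_xe_objects AS o
        ON mv.name = o.name
           AND mv.object_package_guid = o.package_guid
    INNER JOIN sys.dm_xe_packages AS p
        ON o.package_guid = p.guid
    INNER JOIN sys.dm_os_loaded_modules AS m
        ON p.module_address = m.base_address
WHERE o.name = 'query_optimizer_tree_id'
ORDER BY mv.map_key;
```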


What a wealth of information into the internal workings of the query optimizer! I can see that the sqllang.dll is the source for the optimizer tree. I can see that this map reveals the different stages during optimization. And I can see that the Package is the sqlserver package in XEvents. This is fantastic! Of course, I may be a little geeky about it, but it is a cool way to get to know some of the internal operations of SQL Server.

What if I want to explore a different map? Well, I can pick one from the first query and then dive in and look just the same way I did for the query_optimizer_tree_id. In fact, let’s repeat that process with a different map, and with a slightly enhanced discovery query:
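The enhanced query can parameterize the map name, so any map from the first query can be dropped in (the database_state map name here is an assumption matching the discussion that follows):

```sql
-- Swap the map name to explore any map from the first discovery query
DECLARE @map_name sysname = N'database_state';

SELECT o.name AS map_name,
       p.name AS package_name,
       m.name AS dll_name,
       o.description,
       mv.map_key,
       mv.map_value
FROM sys.dm_xe_map_values AS mv
    INNER JOIN sys.dm_xe_objects AS o
        ON mv.name = o.name
           AND mv.object_package_guid = o.package_guid
    INNER JOIN sys.dm_xe_packages AS p
        ON o.package_guid = p.guid
    INNER JOIN sys.dm_os_loaded_modules AS m
        ON p.module_address = m.base_address
WHERE o.name = @map_name
ORDER BY mv.map_key;
```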

And the results:


Nothing really earth shattering there. It is just the database state after all, and it should be rather familiar to the data professional.

Some of the internals maps will definitely be far more interesting than others. And the interest level will definitely depend on the person browsing the data. For me, learning more about the optimizer and internals in general is a lot of fun. Consequently, I am drawn more to the maps that can give more information about the internal workings of the optimization tree or even the different operators (logical and physical) within a query plan.

This article demonstrated a quick means to explore various stages of SQL Server internal operations. Accessing this kind of information is just scratching the surface for internals and really just a scratch on the surface of Extended Events. There is a lot of power with the little bit of information that XEvents yields. Explore and enjoy!

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Short Circuit Events

I introduced the importance of predicate order previously. I recommend either reading that article for the first time or reading it again if you have already read it. You can find that article – here.

The predicate order is critical. Having the wrong order can result in not trapping the desired events. I cover the critical nature of predicates in the aforementioned article. At least in principle. Today, I want to reinforce the topic through the use of examples.

Short Circuit Events

Housekeeping: For the examples, I will use two different Extended Event sessions. The only difference between the sessions will be the order of the predicates that I will use.

Setting the stage: I have a specific set of queries that are executing within the Adventureworks2014 database. I want to capture just the second query to be executed from that database. Since I want to capture just the second query to be executed, I will be using the package0.counter predicate.

This is an extremely simple scenario for a reason. The simple scenario helps to simplify the creation of the session. And of course, it also makes it much easier to demonstrate the effects of the predicate order.

Before getting into the weeds with the session, here are the basic query statements that will be used across the trials.
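The original trial statements are not preserved in this copy; queries along these lines illustrate the pattern (the specific statements are illustrative; what matters is which connection runs them):

```sql
/* Connection 1: run while connected to AdventureWorks2014 */
SELECT TOP (10) p.FirstName, p.LastName
FROM Person.Person AS p;

/* Connection 2: run while connected to tempdb */
SELECT TOP (10) o.name
FROM sys.objects AS o;
```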

As you can see, I will be running the same set of queries multiple times with just the slightest of variation. I will run the first four for the first XEvent session I create which will have the counter listed first in the predicate. Then I will run the second four statements for the session configured with the database filter first in the predicate order.

Without further ado, here are the definitions of the two XEvent sessions:
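The session definitions were stripped from this copy; a sketch matching the described behavior follows (the event and target choices are assumptions, but the names and predicate ordering mirror the text):

```sql
-- Counter evaluated first: the counter increments for every statement,
-- regardless of which database it came from
CREATE EVENT SESSION pred_ordercountfirst ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE [package0].[counter] = 2
          AND [sqlserver].[database_name] = N'AdventureWorks2014')
ADD TARGET package0.event_file (SET filename = N'pred_ordercountfirst');
GO

-- Database filter evaluated first: the counter only increments for
-- statements that already passed the database check
CREATE EVENT SESSION pred_orderdbfirst ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE [sqlserver].[database_name] = N'AdventureWorks2014'
          AND [package0].[counter] = 2)
ADD TARGET package0.event_file (SET filename = N'pred_orderdbfirst');
```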

With these sessions, I will only have one turned on at a time. Running both at the same time can result in invalid results. The process is simple: from separate connections, alternate through the list of queries and then check the session data to find the result. The queries are written to help indicate which query window each should be run. For the results to be valid, I will be sticking to that pattern. In this test case, the pattern will alternate between two connections where one is connected to the tempdb database and the other is connected to the AdventureWorks2014 database.

Running through the trials for the pred_ordercountfirst session, I will receive the following results:


I have cycled through each of the four scripts with the pred_ordercountfirst session in the running state. Notice that I have no event_data? This is because no event has been trapped. When the counter reaches a value of two, the second predicate condition fires. The second query to have been executed was actually run from the tempdb database. Because the second condition fails the check, no data will be captured for this first session. Recapping each iteration would look like the following:


Each query executed caused the counter value to increment as shown by the green box. In the red box is the second iteration which was performed against a connection to tempdb. And in the yellow box, I show that no statements have been trapped in this session.

Here is a query to help validate the session results.
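A validation query along these lines shows the session's events and the current state of the predicate, including the counter value (a sketch):

```sql
-- Check the running session and its predicate (counter state included)
SELECT s.name AS session_name,
       e.event_name,
       e.event_predicate
FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_events AS e
        ON s.address = e.event_session_address
WHERE s.name LIKE 'pred_order%';
```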

I have not included a means to parse into the session data because there is nothing to parse. That will change with the next trial. I will drop the pred_ordercountfirst and then create the pred_orderdbfirst session. Then I will run the trial in the same fashion as I did for the pred_ordercountfirst session.


It’s a bit small, but the data is there. Looking down into the data a little closer, I will see the following:


From the session data, I can see that the overall third query to execute was the query trapped by the session. This is the correct result based on the order of executions and the requirements defined in the beginning. See the big difference between the configurations of the predicates in these two sessions? Predicate order is important and critical to a proper running event session.

To recap the results by iteration, I have the following:


Similar to the previous session, I have used green to highlight the counter value after each iteration. Since the first query executed was against the Adventureworks2014 database, the counter value incremented to 1. In the next iteration, it remained at one because the source database was tempdb and not AdventureWorks2014. Finally, the counter incremented to two on the next iteration when the source database was AdventureWorks2014. At this point, since both criteria were met, an event was recorded and sent to the event_file target (recall the event processing steps shown here).

To parse all of this data, I used the following statements:
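The parsing statements are not preserved here; a sketch that reads and shreds the event_file data looks like this (adjust the file path to wherever the .xel files were written):

```sql
-- Read the event_file target and shred the XML payload
SELECT xed.event_data.value('(@timestamp)[1]', 'datetime2') AS event_time,
       xed.event_data.value('(action[@name="database_name"]/value)[1]', 'sysname') AS database_name,
       xed.event_data.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
FROM (SELECT CONVERT(xml, t.event_data) AS event_data
      FROM sys.fn_xe_file_target_read_file(N'pred_orderdbfirst*.xel',
                                           NULL, NULL, NULL) AS t) AS src
    CROSS APPLY src.event_data.nodes('/event') AS xed(event_data);
```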

In this article I have just shown how the configuration of predicates for each event will affect the events that are trapped. In T-SQL, both of these predicates would produce the same results. In Extended Events, the order of the predicates can cause a significant difference between the event that is captured and what is expected to be captured. Furthermore, this processing order can correlate to a difference in observer overhead, since more (or fewer) events may be captured than expected. The more data that has to flow through all levels of filters, the more impact that will be caused by the configuration of the event session and predicates.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Counting Events

After reading through the articles about the pair_matching target (here) and the histogram target (here), this one is going to seem very basic. To be honest, I would consider this to be basic even if it had been discussed prior to the two just mentioned. This article will be diving into the shallow depths of the event_counter target.

There is no real surprise to what the function of this target is – to count. The configuration will be very basic in nature. Reading the target will be very basic in nature.

The real use here would be to count event occurrences at a step below the histogram target. Unlike the histogram, this target does not group events. It just counts. You may be asking why use this particular target. That would be a valid question.

Imagine that you just want to know how many times an event has fired but don’t want to add the full overhead of event collection. If an event fires frequently, you can get an idea as to the workload. In addition, you could determine a focus area for troubleshooting at some future point. That could come in handy!

Counting Events

Following the established pattern, before I proceed with trying to configure a session to use this target, I really need to know more about the target. I need to know what I can configure for this target. So, like in previous articles, I just need to query the metadata to find any configurable settings for the event_counter target. The following script will help me do that:
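The script follows the same metadata pattern used for the other targets in the series (a sketch):

```sql
-- Look for customizable columns on the event_counter target
SELECT o.name AS target_name,
       oc.name AS column_name,
       oc.description
FROM sys.dm_xe_objects AS o
    INNER JOIN sys.dm_xe_object_columns AS oc
        ON o.name = oc.object_name
           AND o.package_guid = oc.object_package_guid
WHERE o.object_type = 'target'
      AND o.name = 'event_counter'
      AND oc.column_type = 'customizable';
```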

From this query, I now know that I have absolutely nothing to configure that is specific to the target. This reinforces the basic nature of the target.

Now comes the hard part. To effectively use this target, I really should know just what I want to count. Once I get a list of events I want to count, then I need to throw those together into a session. With that in mind, I have the following session with a handful of events that I want to start counting:
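The session definition is not preserved in this copy; a sketch with an illustrative session name and event list (including a deadlock event, which figures into the results later) might look like this:

```sql
-- A handful of events to count via the event_counter target
-- (session name and event list are illustrative choices)
CREATE EVENT SESSION EventCounts ON SERVER
ADD EVENT sqlserver.lock_deadlock,
ADD EVENT sqlserver.sql_statement_completed,
ADD EVENT sqlserver.sp_statement_completed,
ADD EVENT sqlserver.sql_batch_completed
ADD TARGET package0.event_counter;
GO

ALTER EVENT SESSION EventCounts ON SERVER STATE = START;
```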

After creating that session and letting it run for a little bit in my sandbox, then I am ready to take a peek at the session data. The query to peek into the session data, at a basic level, is just like the query that has been used over the past few articles.
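That peek query pulls the raw target data from the session DMVs (a sketch; swap in your session name):

```sql
-- Pull the raw XML payload from the event_counter target
SELECT s.name AS session_name,
       t.target_name,
       CONVERT(xml, t.target_data) AS target_data
FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_targets AS t
        ON s.address = t.event_session_address
WHERE s.name = N'EventCounts'
      AND t.target_name = N'event_counter';
```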

And the results of that query would look something like the following:


Similar to the simplicity in configuration for the target, the data is stored in a very simple manner. In the target, I will only see a count for the event if the event has fired during the session. Otherwise, the entire event node for the event in question will be missing and just the parent package of the event will exist in the data. Notice that there is a node for every package within the Extended Events engine. Most have no events in this session. Had an event for the package been in the session and not fired, I may see something like the following (which happens to be this same session just prior to a deadlock occurrence):


This is handy information and seems to make sense. If an event never fires, why have a node for the data for that event in the session? Sadly, the logic for the packages is a bit reversed: all packages are included regardless. That said, having a placeholder already there can make it a little more efficient for a firing event to be recorded into the target.

With a solid (maybe semi-solid) understanding of what the structure of the target data is, it is time to start parsing the target data into something a tad easier to read and understand.
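A sketch of such a parse, shredding the CounterTarget XML per package and per event (the node and attribute names follow the structure shown above; the session name is the one assumed earlier):

```sql
IF OBJECT_ID('tempdb..#xmlprocess') IS NOT NULL
    DROP TABLE #xmlprocess;

-- Dump the target data once so the XML is not re-read on every parse attempt
SELECT CAST(t.target_data AS XML) AS targetdata
INTO #xmlprocess
FROM sys.dm_xe_sessions s
INNER JOIN sys.dm_xe_session_targets t
    ON s.address = t.event_session_address
WHERE s.name = 'CounterSession'
  AND t.target_name = 'event_counter';

-- One row per package; OUTER APPLY keeps packages that fired no events
SELECT pkg.value('@name', 'varchar(128)') AS PackageName
     , evt.value('@name', 'varchar(128)') AS EventName
     , evt.value('@count', 'bigint') AS OccurrenceCount
FROM #xmlprocess
CROSS APPLY targetdata.nodes('CounterTarget/Packages/Package') AS p(pkg)
OUTER APPLY pkg.nodes('Event') AS e(evt);
```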

This will give me the following results in my sandbox:


As the output shows, I have a count for each of the events specified in the session. These are the same events and event counts seen in the XML, just in a friendlier format. Most packages have no data because no events from those packages were included in the session.

For this particular server, the only thing that may be of interest is the deadlock occurrence. But since that was caused by a controlled experiment that forced the deadlock, I won’t worry about it. Had this been a production environment, the deadlock may need to be investigated.

Looking a little closer at the query to parse the session data, one could see that it seems rather kludgy. There is probably a better way of parsing the XML, but this was the method I could get to work given my limited XML / XQuery ability. If you know of a better way to get that data properly correlated via XQuery, I would welcome it in the comments.

In this article I have just shown how to use and configure the event_counter target. In addition, I explained how to get to the data trapped into this target. Lastly, I demonstrated how to find the pertinent information for the target configurations.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Histograms and Events

The last few articles covered the pair_matching target and some uses and details that go along with it. For a recap, you can read here.

The pair_matching target was just one of the more advanced targets available for use via Extended Events. In this segment, I will be covering another of the more advanced targets – the histogram target.

The histogram target provides grouping on a specific event within the event session. The grouping can be configured on a data point within the event payload or on an action that has been attached to the event. That said, careful consideration should be given to what the grouping configuration will be.

Histograms and Events

In principle, this is fine and well. As is often the case with Extended Events, it is often more clear to see how this works. The first step to seeing this in action is to figure out what the configurable options are for the histogram target.
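That calls for the same metadata query used for the other targets, pointed at the histogram object (aliases are my own):

```sql
-- Customizable settings exposed for the histogram target
SELECT oc.name AS ConfigOption
     , oc.column_value AS DefaultValue
     , oc.capabilities_desc
     , oc.description
FROM sys.dm_xe_object_columns oc
WHERE oc.object_name = 'histogram'
  AND oc.column_type = 'customizable'
ORDER BY oc.column_id;
```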

From this query, which I have shown a few times now, I can see the following results:


The histogram target has four configurable options. Note that none of the configurations is mandatory and two have default values assigned. Even though none of the configurations is mandatory, I would say that the source is somewhat mandatory. This is the field (or action) on which to group the data. In addition, I would also say that the filtering_event_name is a mandatory field, unless there is only one event in the session. Even at that, I would recommend getting into the habit of setting a value for this configuration.

In addition to the lack of a mandatory configuration, I want to make particular note of the source_type configuration. This configuration has a default setting of 1. This value correlates to “action_name” as a grouping field type. Keep that value in mind for later recall. In the interim, I have a script that will create an XEvent Session to track deadlocks occurring on the instance. This will be used for the remainder of the demos.
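A sketch of such a session follows. The session name and the file path are illustrative; the key point is source_type = 0, which directs the histogram to group on an event field (database_id) rather than on an action:

```sql
-- Illustrative names; source_type = 0 groups on an event field,
-- source_type = 1 (the documented default) groups on an action
CREATE EVENT SESSION [Deadlock] ON SERVER
    ADD EVENT sqlserver.lock_deadlock
    ADD TARGET package0.event_file
        (SET filename = N'C:\Database\XE\Deadlock.xel'),
    ADD TARGET package0.histogram
        (SET filtering_event_name = N'sqlserver.lock_deadlock'
           , source_type = 0
           , source = N'database_id')
WITH (STARTUP_STATE = OFF);
GO
ALTER EVENT SESSION [Deadlock] ON SERVER STATE = START;
GO
```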

In this session, I have decided I want to trap occurrences of deadlocks to two different targets; the event_file target and the histogram target. On the histogram target, I have configured the filtering_event_name, source_type and source settings. Now, I want to create a second session that I will not start. This additional session will be strictly to show the differences between settings values.
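The second session might look like the following sketch. It carries the same histogram configuration but deliberately omits source_type, purely so the stored metadata of the two sessions can be compared:

```sql
-- Same histogram configuration with source_type deliberately omitted;
-- this session is never started - it exists only to compare metadata
CREATE EVENT SESSION [Deadlocksrc] ON SERVER
    ADD EVENT sqlserver.lock_deadlock
    ADD TARGET package0.histogram
        (SET filtering_event_name = N'sqlserver.lock_deadlock'
           , source = N'database_id')
WITH (STARTUP_STATE = OFF);
GO
```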

Now with both sessions deployed, I will query the sys.server_event_session_fields view to find what configuration settings are in effect for these particular sessions.
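A query along these lines surfaces those persisted settings; the join to sys.server_event_session_targets (an assumption on my part about how the object_id maps back) ties each field to the target it configures:

```sql
-- Settings persisted for each session's targets
SELECT es.name AS SessionName
     , est.name AS TargetName
     , esf.name AS ConfigName
     , esf.value AS ConfigValue
FROM sys.server_event_sessions es
INNER JOIN sys.server_event_session_fields esf
    ON es.event_session_id = esf.event_session_id
LEFT JOIN sys.server_event_session_targets est
    ON esf.event_session_id = est.event_session_id
   AND esf.object_id = est.target_id
WHERE es.name IN (N'Deadlock', N'Deadlocksrc');
```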


Despite the default value for source_type according to the metadata, it really seems there is no default value based on these results. Keep that in mind when building this target into a session. Much like the filtering_event_name, I recommend just setting the source_type to remove all doubt of the value being used for this configuration.

Now that that is understood, go back and clean up the Deadlocksrc session if you executed that script. I will just be using the Deadlock session from this point.

To ensure data has loaded into the target for the Deadlock session, I have created deadlocks a few times in a few different databases. It is important that the deadlocks have been created in different databases due to the configuration of the XEvent Session and the configuration of the histogram target in this session.

I have configured the grouping field to be database_id. In addition, it’s just not as much fun to show a single data point. This session is designed to help me understand which databases are hotspots for deadlocks. Then I can focus my attention on the hotspots and tune from there.

So how do I query the target data? It really is very much like the previous targets. I will once again start by querying the target with a very basic query.
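The basic query is the same shape as before, aimed at the histogram target (session name assumed from the earlier sketch):

```sql
-- Raw XML from the histogram target
SELECT s.name AS SessionName
     , t.target_name
     , CAST(t.target_data AS XML) AS target_data
FROM sys.dm_xe_sessions s
INNER JOIN sys.dm_xe_session_targets t
    ON s.address = t.event_session_address
WHERE s.name = 'Deadlock'
  AND t.target_name = 'histogram';
```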

Looking at just the XML, I will see a structure similar to the following (values will differ):


I have highlighted a couple of areas of interest. In blue, I have highlighted the count. The count is maintained for each grouping. This brings me to the section highlighted in green. This is the value – the database_id that I specified in the “source” configuration for this target. Now all I need to do is query the target and get this data into something a little more friendly. Here is how I have done just that:
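A sketch of that progression, shredding the Slot nodes of the HistogramTarget XML (node names follow the structure shown above; session name assumed):

```sql
IF OBJECT_ID('tempdb..#xmlprocess') IS NOT NULL
    DROP TABLE #xmlprocess;

-- Dump the target data once for performance, then parse from the copy
SELECT CAST(t.target_data AS XML) AS targetdata
INTO #xmlprocess
FROM sys.dm_xe_sessions s
INNER JOIN sys.dm_xe_session_targets t
    ON s.address = t.event_session_address
WHERE s.name = 'Deadlock'
  AND t.target_name = 'histogram';

-- First pass: raw database_id and count per histogram slot
SELECT slot.value('(value)[1]', 'int') AS database_id
     , slot.value('@count', 'bigint') AS DeadlockCount
FROM #xmlprocess
CROSS APPLY targetdata.nodes('HistogramTarget/Slot') AS x(slot);

-- Second pass: resolve the id to a database name in a single query
SELECT d.name AS DatabaseName
     , slot.value('@count', 'bigint') AS DeadlockCount
FROM #xmlprocess
CROSS APPLY targetdata.nodes('HistogramTarget/Slot') AS x(slot)
INNER JOIN sys.databases d
    ON d.database_id = slot.value('(value)[1]', 'int')
ORDER BY DeadlockCount DESC;
```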

I have a few options demonstrating the progression of steps for this particular query. First, as I have shown previously, I have dumped the target data into a temp table for improved performance. Then I query the temp table and parse the XML from there.

In the first shot at parsing the data, all I see is a database_id and a count. This is not very helpful unless the mapping of database id to database name is already known (or one queries that information manually). Thus the need for the second query, where the data is joined to sys.databases to return the name directly in a single query.

For my particular setup, here are my results:


There is an alternative to this method. Rather than group on the event field called database_id, I could group on the action called database_name. Then I would only need to query the session details. I will leave that choice to you to make.

Back to the results (as contrived as they may be), I can see that I have two databases that seem to be the sources of deadlocks on a pretty regular basis. Well, at least in comparison to the remaining 100 databases in my instance. Now, I can filter down the deadlocks I want to pay attention to by querying the event_file and filtering just for deadlocks that occur in either of these two databases. I could also create a session to trap deadlocks specifically for those particular databases rather than monitoring all databases. Again, this is one of those things that just gives me several options where there may be more than one correct choice.

In this article I have just shown how to use and configure the histogram target. In addition, I explained how to get to the data trapped into this target. Lastly, I demonstrated how to find the pertinent information for the target configurations.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Parsing Matched Events

I just covered the topic of matching events from definition on down through highlighting a common use for this type of target. With that in mind, it is now time to dive into how to extract the data from the session.

If you need a recap of what exactly a pair_matching target is, or even how to configure it, you can read the article here.

Parsing Matched Events

In fact, it is a good idea to revisit that article and go over the topics presented. Of particular interest should be the script that shows how to find all of the configurable settings for the target.

The other really important take-away from that article is the demo script. That script is repeated here for ease of access.
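A sketch of that session follows. The session name, dispatch latency, and action list are my own choices; the essential part is that the *_matching_actions lists use the same keys (session_id plus tsql_stack) in the same order on both sides:

```sql
-- Illustrative names; the matching keys make a begin/end pair unique
CREATE EVENT SESSION [QueryTimeouts] ON SERVER
    ADD EVENT sqlserver.sql_statement_starting
        (ACTION (sqlserver.session_id, sqlserver.database_id
               , sqlserver.username, sqlserver.sql_text
               , sqlserver.tsql_stack)),
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.session_id, sqlserver.database_id
               , sqlserver.username, sqlserver.sql_text
               , sqlserver.tsql_stack))
    ADD TARGET package0.pair_matching
        (SET begin_event = N'sqlserver.sql_statement_starting'
           , begin_matching_actions = N'sqlserver.session_id, sqlserver.tsql_stack'
           , end_event = N'sqlserver.sql_statement_completed'
           , end_matching_actions = N'sqlserver.session_id, sqlserver.tsql_stack'
           , respond_to_memory_pressure = 0)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, TRACK_CAUSALITY = ON);
GO
ALTER EVENT SESSION [QueryTimeouts] ON SERVER STATE = START;
GO
```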

With that session running on the test server, I can run the following query to explore the structure of the target of the session data. Exploring the structure is a fruitful exercise because one can learn some interesting facts.
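The structure-exploration query is the familiar raw dump, pointed at the pair_matching target (session name assumed):

```sql
-- Raw XML from the pair_matching target
SELECT s.name AS SessionName
     , t.target_name
     , CAST(t.target_data AS XML) AS target_data
FROM sys.dm_xe_sessions s
INNER JOIN sys.dm_xe_session_targets t
    ON s.address = t.event_session_address
WHERE s.name = 'QueryTimeouts'
  AND t.target_name = 'pair_matching';
```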

The first thing I want to point out is at the end of each event node. There is an action (actually two of them) attached to the session that was not specified in either of the events added to this session. This is shown in the following image:


The area of interest is highlighted in green. With track_causality enabled for this target type, both of these private actions are attached to all events in the session automatically. Disabling track_causality will remove both of them from the session. Of note: in sessions with different target types, track_causality will only enable the attach_activity_id action for the session.

Looking back near the beginning of the session data, the following can be seen:


The interesting difference between this target data structure and the ring_buffer (or even file_target) target is in the properties of the target. This target has a few extra data points called truncated, orphanCount, matchedCount, and droppedCount. These are basically target statistics to give a high level overview of what is happening with the session and how the target configurations may have affected the session data.

The next really big ticket item is that the data in this target will only show the event defined for the begin_event configuration setting. Since sql_statement_starting is the event defined as the begin_event in this particular session, it will be the only event to show up in the target. The events have been paired together where possible and the event of record will be the begin_event. One can surmise that this would be due to the potential for the end_event to never fire.

Being aware of the data structure within the pair_matching target is essential to parsing the data from the target. Fortunately, most of the structure is the same with the major differences being in the root node name and then the statistics available via that root node.

So, how does one query the session data? I will resort back to the following building block query for starters:
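The building block is the same basic shape used against the earlier targets, aimed at the pair_matching target (session name assumed from the script above):

```sql
SELECT s.name AS SessionName
     , t.target_name
     , CAST(t.target_data AS XML) AS target_data
FROM sys.dm_xe_sessions s
INNER JOIN sys.dm_xe_session_targets t
    ON s.address = t.event_session_address
WHERE s.name = 'QueryTimeouts'
  AND t.target_name = 'pair_matching';
```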

Nothing fancy with this query and it should be fairly common place at this point. If I take that query to the next level and dump the target data into a temp table, I can query the target more quickly and avoid the cost of re-querying the target each time I wish to change between gathering stats and gathering event details. In the following example, I show just how to do this:
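A sketch of that approach, with the stats segment reading the attributes off the PairingTarget root node:

```sql
IF OBJECT_ID('tempdb..#xmlprocess') IS NOT NULL
    DROP TABLE #xmlprocess;

-- Dump the target data once; parse stats and event detail from the copy
SELECT CAST(t.target_data AS XML) AS targetdata
INTO #xmlprocess
FROM sys.dm_xe_sessions s
INNER JOIN sys.dm_xe_session_targets t
    ON s.address = t.event_session_address
WHERE s.name = 'QueryTimeouts'
  AND t.target_name = 'pair_matching';

-- Target-level statistics exposed on the PairingTarget root node
SELECT targetdata.value('(PairingTarget/@truncated)[1]', 'int') AS truncated
     , targetdata.value('(PairingTarget/@orphanCount)[1]', 'bigint') AS orphanCount
     , targetdata.value('(PairingTarget/@matchedCount)[1]', 'bigint') AS matchedCount
     , targetdata.value('(PairingTarget/@droppedCount)[1]', 'bigint') AS droppedCount
FROM #xmlprocess;
```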

In the second segment of that script, I have queried the properties of the target PairingTarget root node. By doing this, I can validate how well my session is performing. This will produce the following results in my test case:


If I start to see an increase in orphans, or if I see any at all, then I can query further into the session data to figure out what is happening.
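The drill-down might look like the following sketch, pulling the unmatched begin events along with the actions attached in the session above (action names assume the session definition used earlier):

```sql
-- Unmatched begin events left in the target, with their attached actions
SELECT ev.value('@name', 'varchar(128)') AS EventName
     , ev.value('@timestamp', 'varchar(30)') AS EventTime
     , ev.value('(action[@name="session_id"]/value)[1]', 'int') AS session_id
     , ev.value('(action[@name="database_id"]/value)[1]', 'int') AS database_id
     , ev.value('(action[@name="username"]/value)[1]', 'varchar(128)') AS username
     , ev.value('(action[@name="sql_text"]/value)[1]', 'varchar(max)') AS sql_text
     , ev.value('(action[@name="tsql_stack"]/value)[1]', 'varchar(max)') AS tsql_stack
FROM (
    SELECT CAST(t.target_data AS XML) AS targetdata
    FROM sys.dm_xe_sessions s
    INNER JOIN sys.dm_xe_session_targets t
        ON s.address = t.event_session_address
    WHERE s.name = 'QueryTimeouts'
      AND t.target_name = 'pair_matching'
) AS src
CROSS APPLY targetdata.nodes('PairingTarget/event') AS x(ev);
```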

And looking at a snip of the results, I would see something like the following:


From these results, I can see the query(ies) that seem to be having problems. I can also investigate the TSQL stack. An interesting note about the TSQL stack is that I can retrieve the beginning offset and the ending offset for the query(ies) involved. Since I am testing with the sql_statement* events, this is a bit of redundant data but it is there nonetheless and can be useful. In addition I can see which database is the source of the query timeouts and I can see the user/logon that executed the query. Both of these would be helpful in determining where to focus the attention and effort first. Who knows, maybe it is an end user that needs a little more training.

I have just shown how to parse the data a couple of different ways for the pair_matching target. With a little effort, this target can be configured to work very well and be a great asset in your troubleshooting efforts.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Matching Events into Pairs

In the last article I showed how to discover all of the settings that are available to a particular Target. I also mentioned the reason for that being that I would be discussing some of the other Target types since I had already shown various concepts in regards to the ring_buffer and event_file targets.

In this article I will be taking you into the world of the pair_matching target. As the name connotes, this target works by matching related events into pairs. Very much like a pair of socks contains two parts, one for each foot, the pair_matching target creates pairs of events that go together. Each paired event typically represents one half of the equation – or in simpler terms, they are pairs because they are opposites, just like the left shoe is the match to the right shoe.

Matching Events into Pairs

What if I have more than one pair of shoes though? Well, that is part of the equation with this target. Just as I need some sort of indicator to match my shoes properly into pairs, I need some sort of indicator on the events firing in SQL Server that helps me figure out how they are related and whether they belong in the same pair.

The equation to figure out which events are pairs is made available through the implementation of settings. To properly configure the pair_matching target requires: first, an understanding of what the possible configuration settings are, and second, an understanding of what it is that is to be paired.

To figure out what the possible configuration settings are, I will adapt a query from the previous article and use it here:
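The adapted query is the same metadata lookup, pointed at the pair_matching object:

```sql
-- Customizable settings exposed for the pair_matching target
SELECT oc.name AS ConfigOption
     , oc.column_value AS DefaultValue
     , oc.capabilities_desc
     , oc.description
FROM sys.dm_xe_object_columns oc
WHERE oc.object_name = 'pair_matching'
  AND oc.column_type = 'customizable'
ORDER BY oc.column_id;
```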

This would produce the following results:


The previous image shows the results I get for this Target in SQL Server 2014. I am going to start with the obvious question. It was so obvious that I marked the problem area with the seemingly suitable question marks. This is a pairing target, yet the begin_event is not a mandatory setting – at least not according to the metadata. At least the end_event is mandatory! That said, it should seem pretty basic that a begin_event is a necessary part of the equation. Otherwise, what would be matched to the end_event?

Of particular interest to me are the default values for the begin_event, end_event, respond_to_memory_pressure, and max_orphans settings. Despite the empty string for the begin_event, I believe it should be NULL just like the end_event. It makes sense to me that the default value is NULL in this case because these should be defined in the session and should not have a default value.

The next setting of interest is respond_to_memory_pressure. The default is false, meaning that a large number of unmatched events can build up, cause memory pressure, and potentially have an adverse effect. Enabling this setting allows unpaired events to be removed from the buffer under pressure, which in turn can lead to a higher number of orphaned events.

Last up is the default setting for the max_orphans. This is another optimization that could be enabled to help reduce the memory effects of the session.  If this limit is reached, then unpaired events will start to be removed from the buffer in a FIFO fashion.

The remaining columns wouldn’t make sense to have default values. Nor does it make sense to be set to NULL since they are more or less optional. That said, the remaining configuration options are crucial to designing the formula to efficiently match the events. If a begin_matching_actions setting is added, then the end_matching_actions should also be set. I do, however, want to note right now that the names of these settings are plural. The reason for the setting name to be in plural form is that the configuration accepts a comma delimited list of columns or actions. Just remember to keep the columns and actions in the same order for both the begin and end configurations.

In Action

That was quite a mouthful there. Sometimes it is easier to show how this works with an example. To demonstrate, I want to work with what seems like the easiest example to fit the pair_matching model – query timeouts.
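A session along these lines captures the idea. The names and the action list are my own; what matters is that the *_matching_actions lists stay in the same order on both sides:

```sql
-- Illustrative names; the matching keys make a begin/end pair unique
CREATE EVENT SESSION [QueryTimeouts] ON SERVER
    ADD EVENT sqlserver.sql_statement_starting
        (ACTION (sqlserver.session_id, sqlserver.database_id
               , sqlserver.username, sqlserver.sql_text
               , sqlserver.tsql_stack)),
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.session_id, sqlserver.database_id
               , sqlserver.username, sqlserver.sql_text
               , sqlserver.tsql_stack))
    ADD TARGET package0.pair_matching
        (SET begin_event = N'sqlserver.sql_statement_starting'
           , begin_matching_actions = N'sqlserver.session_id, sqlserver.tsql_stack'
           , end_event = N'sqlserver.sql_statement_completed'
           , end_matching_actions = N'sqlserver.session_id, sqlserver.tsql_stack'
           , respond_to_memory_pressure = 0)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, TRACK_CAUSALITY = ON);
GO
```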

Here, I have the same session with points of interest highlighted.


I have aligned the like begin and end statements together so it is easier to see that these settings accept a comma delimited list and the items are listed in the same order. And, of course, the *_matching_actions settings have actions listed that have been attached to each of the Events that are to be paired. The *_matching_columns map to payload data points within the begin and end events. If using this particular setting, the columns of interest should exist in both events being compared.

These settings, with the proper attention to detail, can lead to an efficient and useful Session to match events. When attempting to throw this kind of session together without a little thought and prep-work, one could end up with a poorly performing session and potentially memory pressure issues (remember this is a memory target type with all the bliss that comes with a memory target).

Speaking of memory, you will need to remember to tune in for the next article where I will explore how to parse the event data for this session.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Know Before You GO – Target Settings

The last few articles have covered various topics for the event_file and ring_buffer targets. The topics range from basic queries on how to extract the session data up through best practices. The fun doesn’t end there!

I will be covering more on the different types of targets over the next several articles. Before diving into the remaining Targets, it would be very beneficial to take a look at yet another core concept. Rather, it is time to return to a core concept and look at it from a different angle.

I would like to revisit the base principle around the Set Operations or configurable fields (recall that some settings are stored in the *_fields object). The remaining targets have a handful of settings to configure. Some of these settings are required while others should be required.

Know Before You GO – Target Settings

When adding a Target to a session, a configurable setting may or may not be obviously available for use with the Target. In and of itself, this can be mildly frustrating if the wrong settings are tried with the wrong Target. More frustrating is that some settings are required. While the required setting may be logically deduced, it does not always work out that way.

This frustration can be mitigated through the use of the GUI to create the session. This introduces its own set of problems as was discussed previously. If the session is to be created through script, then the settings available to the Target won’t be presented in a nice friendly fashion like the GUI. So, what can one do?

The answer lies within the metadata for Extended Events. I have shown through several articles how to expose the metadata for working with Extended Events. Exposing the available settings for each of the Targets is not much different from the previous explorations into the metadata.

The first step is to recall that the Targets are objects within Extended Events. Building from that base, the metadata for the Target settings has to be stored somewhere. Given the clue that each setting is stored as a field (or column) once the session is created, I can safely presume that the settings are exposed through the sys.dm_xe_object_columns DMV. With that in mind, my query would look like the following:
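A sketch of that query, joining to sys.dm_xe_objects to restrict the output to targets (aliases are my own):

```sql
-- Customizable settings for every target registered in the XE engine
SELECT oc.object_name AS TargetName
     , oc.name AS ConfigOption
     , oc.capabilities_desc
     , oc.description
FROM sys.dm_xe_object_columns oc
INNER JOIN sys.dm_xe_objects o
    ON oc.object_name = o.name
   AND oc.object_package_guid = o.package_guid
WHERE o.object_type = 'target'
  AND oc.column_type = 'customizable'
ORDER BY oc.object_name, oc.column_id;
```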

This would produce the following results:


A couple of notes on the results and the query are necessary now. By querying the metadata from script, a description of the setting can be exposed. This leads to a better understanding of the intent and purpose of the setting. Think about the occurrence_number field for a moment. Just going by the name, it would be difficult to understand what this setting actually does. As it turns out, it is a directive to the XE Engine to store only the specified number of event occurrences for each event. In addition to a better definition of the setting, I can expose which fields are mandatory. The data for the mandatory fields is contained within the capabilities_desc column.

While this is great and useful information, it does not resolve all of the hair tugging just yet. It could be presumed that since the setting exists it has (or might have) a default value. Especially taking into consideration that there are very few mandatory settings. The presumption on the defaults is quite accurate because there are default values for these settings. Looking through the DMV, there is no column that appears to be the source for these “defaults”. Unless of course you recall from prior articles that there is the column_value column which will hold the default value for these settings. With that, I can now adjust my query to also show me the default settings as follows:
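The adjusted query simply adds column_value to the select list:

```sql
-- Adjusted to also surface the default stored in column_value
SELECT oc.object_name AS TargetName
     , oc.name AS ConfigOption
     , oc.column_value AS DefaultValue
     , oc.capabilities_desc
     , oc.description
FROM sys.dm_xe_object_columns oc
INNER JOIN sys.dm_xe_objects o
    ON oc.object_name = o.name
   AND oc.object_package_guid = o.package_guid
WHERE o.object_type = 'target'
  AND oc.column_type = 'customizable'
ORDER BY oc.object_name, oc.column_id;
```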

This will show me a result set very similar to the prior results. The main difference being that I am now including the default value. I want to highlight a couple of these defaults:


Anybody familiar with SQL Server defaults should know that a default value of 0 usually equates to “unlimited”. If you decide to use the ring_buffer target and do not specify a value for max_memory, you have just given free rein to the XE Engine to use as much memory as it can get on the server.

Leaving the max_memory setting at its default happens to be dangerous enough that it could cause undue memory pressure and, in extreme cases, cause SQL Server to grind to a halt. This underscores the need to understand what the configuration options are and what they represent. The default values may be fine for many workloads, but really should be evaluated just as should be done with all defaults within SQL Server.

As I mentioned, this is an important topic to discuss at this point prior to more discussions on the different Targets. I will use this type of query throughout the next few articles to help illustrate the settings for the Targets as well as discuss how to get to the target data.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.

