Counting Events

After reading through the articles about the pair_matching target (here) and the article on the histogram target (here), this one is going to seem very basic. To be honest, I would consider this to be basic even if it were discussed prior to the two just mentioned. This article will be diving into the shallow depths of the event_counter target.

There is no real surprise as to the function of this target – it counts. Both configuring the target and reading from it are very basic in nature.

The real use here would be to count event occurrences at a step below the histogram target. Unlike the histogram, this target does not group events. It just counts. You may be asking why use this particular target. That would be a valid question.

Imagine that you just want to know how many times an event has fired but don’t want to add the full overhead of event collection. If an event fires frequently, you can get an idea as to the workload. In addition, you could determine a focus area for troubleshooting at some future point. That could come in handy!

Counting Events

Following the established pattern, before I proceed with trying to configure a session to use this target, I really need to know more about the target. I need to know what I can configure for this target. So, like in previous articles, I just need to query the metadata to find any configurable settings for the event_counter target. The following script will help me do that:
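
Something along these lines does the trick (a sketch of the metadata query pattern used throughout this series; the column list is simply my preference):

-- List any configurable settings exposed for the event_counter target.
SELECT oc.object_name AS TargetName
    , oc.name AS SettingName
    , oc.type_name
    , oc.column_value AS DefaultValue
    , oc.capabilities_desc
    , oc.description
FROM sys.dm_xe_object_columns oc
WHERE oc.object_name = 'event_counter'
    AND oc.column_type = 'customizable';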

From this query, I now know that I have absolutely nothing to configure that is specific to the target. This reinforces the basic nature of the target.

Now comes the hard part. To effectively use this target, I really should know just what I want to count. Once I get a list of events I want to count, then I need to throw those together into a session. With that in mind, I have the following session with a handful of events that I want to start counting:
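
A sketch of the kind of session I have in mind follows; the session name (CountingEvents) and the specific events are illustrative choices rather than a prescription:

-- Illustrative counting session: a handful of events, nothing collected beyond the counter.
CREATE EVENT SESSION CountingEvents
ON SERVER
    ADD EVENT sqlserver.sql_statement_completed,
    ADD EVENT sqlserver.sql_batch_completed,
    ADD EVENT sqlserver.sp_statement_completed,
    ADD EVENT sqlserver.lock_deadlock
    ADD TARGET package0.event_counter
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON);
GO

ALTER EVENT SESSION CountingEvents
ON SERVER
STATE = START;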

After creating that session and letting it run for a little bit in my sandbox, I am ready to take a peek at the session data. The query to peek into the session data, at a basic level, is just like the query that has been used over the past few articles.
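
As a sketch (assuming the illustrative CountingEvents session from above):

-- Pull the raw XML from the event_counter target.
SELECT CAST(t.target_data AS XML) AS TargetData
FROM sys.dm_xe_sessions s
    INNER JOIN sys.dm_xe_session_targets t
        ON s.address = t.event_session_address
WHERE s.name = 'CountingEvents'
    AND t.target_name = 'event_counter';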

And the results of that query would look something like the following:


Similar to the simplicity of the target's configuration, the data is stored in a very simple manner. In the target, I will only see a count for an event if that event has fired during the session. Otherwise, the entire event node for the event in question will be missing and only the parent package of the event will exist in the data. Notice that there is a node for every package within the Extended Events engine, even though most have no events in this session. Had an event for a package been in the session but not fired, I might see something like the following (which happens to be this same session just prior to a deadlock occurrence):


This is handy information and seems to make sense. If an event never fires, why have a node for that event's data in the session? Sadly, the logic for the packages is a bit reversed – all packages are included regardless. That said, having a placeholder already in place can make it a little more efficient for a firing event to be recorded into the target.

With a solid (maybe semi-solid) understanding of what the structure of the target data is, it is time to start parsing the target data into something a tad easier to read and understand.
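
Here is one way to approach it; this sketch assumes the Package and Event node layout (with name and count attributes) seen in the screen shots above, and it is certainly not the only way to shred the XML:

IF OBJECT_ID('tempdb..#TargetData') IS NOT NULL
    DROP TABLE #TargetData;

-- Dump the target data once so the XML is not re-queried on every parse attempt.
SELECT CAST(t.target_data AS XML) AS TargetData
INTO #TargetData
FROM sys.dm_xe_sessions s
    INNER JOIN sys.dm_xe_session_targets t
        ON s.address = t.event_session_address
WHERE s.name = 'CountingEvents'
    AND t.target_name = 'event_counter';

-- One row per package; OUTER APPLY keeps the packages that have no events in the session.
SELECT pkg.value('@name', 'varchar(128)') AS PackageName
    , evt.value('@name', 'varchar(128)') AS EventName
    , evt.value('@count', 'bigint') AS EventCount
FROM #TargetData td
    CROSS APPLY td.TargetData.nodes('//Package') AS p(pkg)
    OUTER APPLY pkg.nodes('./Event') AS e(evt);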

This will give me the following results in my sandbox:


As is shown in the results, I have results for each of the events specified in the Session. These are the same events and event counts seen in the XML, just in a more friendly format. Most packages have no data due to a lack of events from that package being included in the session.

For this particular server, the only thing that may be of interest is the deadlock occurrence. But since that was caused by a controlled experiment that forced the deadlock, I won’t worry about it. Had this been a production environment, the deadlock may need to be investigated.

Looking a little closer at the query used to parse the session data, one could see that it seems rather kludgy. There is probably a better way of parsing the XML, but this was the method I could get to work given my limited XML / XQuery ability. If you happen to know of a better way to get that data properly correlated via XQuery, I would welcome it in the comments.

In this article I have just shown how to use and configure the event_counter target. In addition, I explained how to get to the data trapped into this target. Lastly, I demonstrated how to find the pertinent information for the target configurations.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Histograms and Events

The last few articles covered the pair_matching target and some uses and details that go along with it. For a recap, you can read here.

The pair_matching target was just one of the more advanced targets available for use via Extended Events. In this segment, I will be covering another of the more advanced targets – the histogram target.

The histogram target is a target that provides grouping for a specific event within the event session. The grouping can be configured on a data point within the payload of the event or on an action that has been attached to the event. That said, careful consideration should be given to what the grouping configuration will be.

Histograms and Events

In principle, this is all fine and well. As is often the case with Extended Events, though, it is easier to understand by seeing how it works. The first step to seeing this in action is to figure out what the configurable options are for the histogram target.
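
The same metadata query pattern from earlier in the series, pointed at the histogram target, looks something like this:

SELECT oc.object_name AS TargetName
    , oc.name AS SettingName
    , oc.type_name
    , oc.column_value AS DefaultValue
    , oc.capabilities_desc
    , oc.description
FROM sys.dm_xe_object_columns oc
WHERE oc.object_name = 'histogram'
    AND oc.column_type = 'customizable';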

From this query, which I have shown a few times now, I can see the following results:


The histogram target has four configurable options. Note that none of the configurations is mandatory and two have default values assigned. Even though none of the configurations is mandatory, I would say that the source is somewhat mandatory. This is the field (or action) on which to group the data. In addition, I would also say that the filtering_event_name is a mandatory field, unless there is only one event in the session. Even at that, I would recommend getting into the habit of setting a value for this configuration.

In addition to the lack of a mandatory configuration, I want to make particular note of the source_type configuration. This configuration has a default setting of 1. This value correlates to “action_name” as a grouping field type. Keep that value in mind for later recall. In the interim, I have a script that will create an XEvent Session to track deadlocks occurring on the instance. This will be used for the remainder of the demos.
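
A sketch of such a session follows; the session name (Deadlock), the file path, and the exact event choice are illustrative. I am using lock_deadlock here because its payload carries a database_id field, which matters for the histogram configuration discussed below:

CREATE EVENT SESSION Deadlock
ON SERVER
    ADD EVENT sqlserver.lock_deadlock
    ADD TARGET package0.event_file
        (SET filename = N'C:\Database\XE\Deadlock.xel'),    -- illustrative path
    ADD TARGET package0.histogram
        (SET filtering_event_name = N'sqlserver.lock_deadlock'
            , source_type = 0          -- 0 = event field, 1 = action
            , source = N'database_id')
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION Deadlock
ON SERVER
STATE = START;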

In this session, I have decided I want to trap occurrences of deadlocks in two different targets: the event_file target and the histogram target. On the histogram target, I have configured the filtering_event_name, source_type and source settings. Now, I want to create a second session that I will not start. This additional session will be strictly to show the differences between settings values.
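
A guess at what that comparison session might look like (named Deadlocksrc to match the cleanup note below, created but not started, and deliberately left without a source_type value):

CREATE EVENT SESSION Deadlocksrc
ON SERVER
    ADD EVENT sqlserver.lock_deadlock
    ADD TARGET package0.histogram
        (SET filtering_event_name = N'sqlserver.lock_deadlock'
            , source = N'database_id')     -- source_type intentionally omitted
WITH (STARTUP_STATE = OFF);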

Now with both sessions deployed, I will query the sys.server_event_session_fields view to find what configuration settings are in effect for these particular sessions.
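
A sketch of that query, assuming the two session names used above:

SELECT es.name AS SessionName
    , est.name AS TargetName
    , esf.name AS SettingName
    , esf.value AS SettingValue
FROM sys.server_event_sessions es
    INNER JOIN sys.server_event_session_targets est
        ON es.event_session_id = est.event_session_id
    INNER JOIN sys.server_event_session_fields esf
        ON est.event_session_id = esf.event_session_id
        AND est.target_id = esf.object_id
WHERE es.name IN ('Deadlock', 'Deadlocksrc')
    AND est.name = 'histogram';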


Despite the metadata showing a default value for source_type, these results suggest that no default is actually applied when the session is created. Keep that in mind when building this target into a session. Much like the filtering_event_name, I recommend explicitly setting the source_type to remove all doubt about the value being used for this configuration.

Now that that is understood, go back and clean up the Deadlocksrc session if you executed that script. I will just be using the Deadlock session from this point.

To ensure data has loaded into the target for the Deadlock session, I have created deadlocks a few times in a few different databases. It is important that the deadlocks have been created in different databases due to the configuration of the XEvent Session and the configuration of the histogram target in this session.

I have configured the grouping field to be database_id. In addition, it’s just not as much fun to show a single data point. This session is designed to help me understand which databases are hotspots for deadlocks. Then I can focus my attention on the hotspots and tune from there.

So how do I query the target data? It really is very much like the previous targets. I will once again start by querying the target with a very basic query.
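
The familiar base query, this time pointed at the histogram target of the illustrative Deadlock session:

SELECT CAST(t.target_data AS XML) AS TargetData
FROM sys.dm_xe_sessions s
    INNER JOIN sys.dm_xe_session_targets t
        ON s.address = t.event_session_address
WHERE s.name = 'Deadlock'
    AND t.target_name = 'histogram';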

Looking at just the XML, I will see a structure similar to the following (values will differ):


I have highlighted a couple of areas of interest. In blue, I have highlighted the count; the count is maintained for each grouping value. This brings me to the section highlighted in green. This is the value – the database_id – that I specified in the “source” configuration for this target. Now all I need to do is query the target and get this data into something a little more friendly. Here is how I have done just that:
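
A sketch of that progression follows; the node names assume the standard Slot/value structure of the histogram XML shown above:

IF OBJECT_ID('tempdb..#TargetData') IS NOT NULL
    DROP TABLE #TargetData;

-- Dump the target data into a temp table for improved performance on repeated parsing.
SELECT CAST(t.target_data AS XML) AS TargetData
INTO #TargetData
FROM sys.dm_xe_sessions s
    INNER JOIN sys.dm_xe_session_targets t
        ON s.address = t.event_session_address
WHERE s.name = 'Deadlock'
    AND t.target_name = 'histogram';

-- First pass: the raw grouping value (database_id) and the count per slot.
SELECT slot.value('(value)[1]', 'int') AS database_id
    , slot.value('@count', 'bigint') AS DeadlockCount
FROM #TargetData td
    CROSS APPLY td.TargetData.nodes('//Slot') AS h(slot);

-- Second pass: resolve the database_id to a name via sys.databases.
WITH histo AS (
    SELECT slot.value('(value)[1]', 'int') AS database_id
        , slot.value('@count', 'bigint') AS DeadlockCount
    FROM #TargetData td
        CROSS APPLY td.TargetData.nodes('//Slot') AS h(slot)
)
SELECT db.name AS DatabaseName
    , histo.DeadlockCount
FROM histo
    INNER JOIN sys.databases db
        ON db.database_id = histo.database_id
ORDER BY histo.DeadlockCount DESC;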

I have a few options demonstrating the progression of steps for this particular query. First, as I have shown previously, I have dumped the target data into a temp table for improved performance. Then I query the temp table and parse the XML from there.

In the first shot at parsing the data, all I see is a database_id and a count. This is not very helpful unless the mapping of database id to name is already known (or one looks that information up manually). Thus the need for the second query, where the data is joined to sys.databases to get the database name directly in a single query.

For my particular setup, here are my results:


There is an alternative to this method. Rather than group on the event field called database_id, I could group on the action called database_name. Then I would only need to query the session details. I will leave that choice to you to make.

Back to the results (as contrived as they may be), I can see that I have two databases that seem to be the sources of deadlocks on a pretty regular basis – well, at least in comparison to the remaining 100 databases on my instance. Now I can filter down the deadlocks I want to pay attention to by querying the event_file and filtering just for deadlocks that occur in either of these two databases. I could also create a session to trap deadlocks specifically for those particular databases rather than monitoring all databases. Again, this is one of those things that gives me several options where there may be more than one correct choice.

In this article I have just shown how to use and configure the histogram target. In addition, I explained how to get to the data trapped into this target. Lastly, I demonstrated how to find the pertinent information for the target configurations.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Parsing Matched Events

I just covered the topic of matching events, from its definition down through a common use for this type of target. With that in mind, it is now time to dive into how to extract the data from the session.

If you need a recap of what exactly a pair_matching target is, or even how to configure it, you can read the article here.

Parsing Matched Events

In fact, it is a good idea to revisit that article and go over the topics presented. Of particular interest should be the script that shows how to find all of the configurable settings for the target.

The other really important take-away from that article is the demo script. That script is repeated here for ease of access.
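
A reconstruction of that demo script follows; the session name (QueryTimeouts) and the extra actions are illustrative, while the begin/end matching settings follow the common query-timeout pattern for this target:

CREATE EVENT SESSION QueryTimeouts
ON SERVER
    ADD EVENT sqlserver.sql_statement_starting
        (ACTION (sqlserver.session_id, sqlserver.tsql_stack, sqlserver.database_name, sqlserver.username)),
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.session_id, sqlserver.tsql_stack, sqlserver.database_name, sqlserver.username))
    ADD TARGET package0.pair_matching
        (SET begin_event = N'sqlserver.sql_statement_starting'
            , begin_matching_actions = N'sqlserver.session_id, sqlserver.tsql_stack'
            , end_event = N'sqlserver.sql_statement_completed'
            , end_matching_actions = N'sqlserver.session_id, sqlserver.tsql_stack'
            , respond_to_memory_pressure = 0)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, TRACK_CAUSALITY = ON, STARTUP_STATE = ON);
GO

ALTER EVENT SESSION QueryTimeouts
ON SERVER
STATE = START;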

With that session running on the test server, I can run the following query to explore the structure of the session data in the target. Exploring the structure is a fruitful exercise because one can learn some interesting facts.
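
The query is the same base pattern used throughout the series (assuming the illustrative QueryTimeouts session name):

SELECT CAST(t.target_data AS XML) AS TargetData
FROM sys.dm_xe_sessions s
    INNER JOIN sys.dm_xe_session_targets t
        ON s.address = t.event_session_address
WHERE s.name = 'QueryTimeouts'
    AND t.target_name = 'pair_matching';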

The first thing I want to point out is at the end of each event node. There are two actions attached to each event that were not specified for either of the events added to this session. This is shown in the following image:


The area of interest is highlighted in green. Because track_causality is enabled for this session, both of these private actions have been attached to every event in the session automatically. Disabling track_causality will remove both of them from the session. Of interesting note is that in sessions with different target types, track_causality appears to enable only the attach_activity_id action for the session.

Looking back near the beginning of the session data, the following can be seen:


The interesting difference between this target data structure and the ring_buffer (or even file_target) target is in the properties of the target. This target has a few extra data points called truncated, orphanCount, matchedCount, and droppedCount. These are basically target statistics to give a high level overview of what is happening with the session and how the target configurations may have affected the session data.

The next really big ticket item is that the data in this target will only show the event defined for the begin_event configuration setting. Since sql_statement_starting is the event defined as the begin_event in this particular session, it will be the only event to show up in the target. The events have been paired together where possible and the event of record will be the begin_event. One can surmise that this would be due to the potential for the end_event to never fire.

Being aware of the data structure within the pair_matching target is essential to parsing the data from the target. Fortunately, most of the structure is the same with the major differences being in the root node name and then the statistics available via that root node.

So, how does one query the session data? I will resort back to the following building block query for starters:
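
As a sketch (same assumptions as above):

SELECT CAST(t.target_data AS XML) AS TargetData
FROM sys.dm_xe_sessions s
    INNER JOIN sys.dm_xe_session_targets t
        ON s.address = t.event_session_address
WHERE s.name = 'QueryTimeouts'
    AND t.target_name = 'pair_matching';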

Nothing fancy with this query, and it should be fairly commonplace at this point. If I take that query to the next level and dump the target data into a temp table, I can query the target more quickly and avoid the cost of re-querying the target each time I switch between gathering stats and gathering event details. In the following example, I show just how to do this:
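
A sketch of that two-segment script (temp table and session names are illustrative):

/* Segment 1: dump the target data once so it is not re-queried on every parse. */
IF OBJECT_ID('tempdb..#TargetData') IS NOT NULL
    DROP TABLE #TargetData;

SELECT CAST(t.target_data AS XML) AS TargetData
INTO #TargetData
FROM sys.dm_xe_sessions s
    INNER JOIN sys.dm_xe_session_targets t
        ON s.address = t.event_session_address
WHERE s.name = 'QueryTimeouts'
    AND t.target_name = 'pair_matching';

/* Segment 2: the statistics exposed on the PairingTarget root node. */
SELECT td.TargetData.value('(PairingTarget/@truncated)[1]', 'int') AS Truncated
    , td.TargetData.value('(PairingTarget/@orphanCount)[1]', 'bigint') AS OrphanCount
    , td.TargetData.value('(PairingTarget/@matchedCount)[1]', 'bigint') AS MatchedCount
    , td.TargetData.value('(PairingTarget/@droppedCount)[1]', 'bigint') AS DroppedCount
FROM #TargetData td;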

In the second segment of that script, I have queried the properties of the target PairingTarget root node. By doing this, I can validate how well my session is performing. This will produce the following results in my test case:


If I start to see an increase in orphans, or if I see any at all, then I can query further into the session data to figure out what is happening.
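
One way to dig further is to shred the event nodes themselves, reusing the temp table from the script above; this sketch assumes the statement field of the begin event and the actions attached in the illustrative session:

SELECT evt.value('@name', 'varchar(128)') AS EventName
    , evt.value('@timestamp', 'datetime2') AS EventTime
    , evt.value('(data[@name="statement"]/value)[1]', 'nvarchar(max)') AS SqlStatement
    , evt.value('(action[@name="database_name"]/value)[1]', 'nvarchar(256)') AS DatabaseName
    , evt.value('(action[@name="username"]/value)[1]', 'nvarchar(256)') AS UserName
    , evt.query('(action[@name="tsql_stack"]/value)[1]') AS TsqlStack
FROM #TargetData td
    CROSS APPLY td.TargetData.nodes('PairingTarget/event') AS x(evt);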

And looking at a snip of the results, I would see something like the following:


From these results, I can see the query(ies) that seem to be having problems. I can also investigate the TSQL stack. An interesting note about the TSQL stack is that I can retrieve the beginning offset and the ending offset for the query(ies) involved. Since I am testing with the sql_statement* events, this is a bit of redundant data but it is there nonetheless and can be useful. In addition I can see which database is the source of the query timeouts and I can see the user/logon that executed the query. Both of these would be helpful in determining where to focus the attention and effort first. Who knows, maybe it is an end user that needs a little more training.

I have just shown how to parse the data a couple of different ways for the pair_matching target. With a little effort, this target can be configured to work very well and be a great asset in your troubleshooting efforts.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Matching Events into Pairs

In the last article I showed how to discover all of the settings that are available to a particular Target. I also mentioned the reason for doing so: I would be discussing some of the other Target types, since I had already shown various concepts with regards to the ring_buffer and event_file targets.

In this article I will be taking you into the world of the pair_matching target. As the name connotes, this target works by matching related events into pairs. Very much like a pair of shoes contains two parts, one for each foot, the pair_matching target creates pairs of events that go together. Each paired event represents one half of the equation – or in simpler terms, they are pairs because they are opposites, just like the left shoe is the match to the right shoe.

Matching Events into Pairs

What if I have more than one pair of shoes though? Well, that is part of the equation with this target. Just as I need some sort of indicator that helps me match my shoes properly into pairs, I need some sort of indicator on the events firing in SQL Server that can help me figure out how they are related and whether they belong in the same pair.

The equation for figuring out which events form a pair is built through the target's settings. Properly configuring the pair_matching target requires, first, an understanding of what the possible configuration settings are, and second, an understanding of what it is that is to be paired.

To figure out what the possible configuration settings are, I will adapt a query from the previous article and use it here:
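
Adapted, the query would look something like this (the same metadata pattern, filtered to the pair_matching target):

SELECT oc.object_name AS TargetName
    , oc.name AS SettingName
    , oc.type_name
    , oc.column_value AS DefaultValue
    , oc.capabilities_desc
    , oc.description
FROM sys.dm_xe_object_columns oc
WHERE oc.object_name = 'pair_matching'
    AND oc.column_type = 'customizable';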

This would produce the following results:


The previous image shows the results that I get for this Target in SQL Server 2014. I am going to start with the obvious question – so obvious that I marked the problem area with the seemingly suitable question marks. This is a pairing target, yet the begin_event is not a mandatory setting – at least not according to the metadata. At least the end_event is mandatory! That said, it should seem pretty basic that a begin_event is necessary as part of the equation; otherwise, what would be matched to the end_event?

Of particular interest to me in these results are the default values for the begin_event, end_event, respond_to_memory_pressure, and max_orphans settings. Despite the empty string for the begin_event, I believe it should also be NULL, just like the end_event. It makes sense to me that the default value is NULL in this case because these should be defined in the session and should not have a default value.

The next setting of interest is the respond_to_memory_pressure setting. The default for this is false, which means that if enough unmatched events accumulate, they could cause memory pressure and potentially have an adverse effect. By enabling this setting, unpaired events can be removed from the buffer, which could lead to a higher number of orphaned events.

Last up is the default setting for max_orphans. This is another optimization that can be used to help reduce the memory effects of the session. If this limit is reached, then unpaired events will start to be removed from the buffer in a FIFO fashion.

The remaining settings wouldn't make sense with default values, nor does it make sense for them to default to NULL, since they are more or less optional. That said, these remaining configuration options are crucial to designing the formula to efficiently match the events. If the begin_matching_actions setting is added, then the end_matching_actions setting should also be set. I do, however, want to note right now that the names of these settings are plural. The reason is that each accepts a comma delimited list of columns or actions. Just remember to keep the columns and actions in the same order for both the begin and end configurations.

In Action

That was quite a mouthful there. Sometimes it is easier to show how this works with an example. To demonstrate, I want to work with what seems like the easiest example to fit the pair_matching model – query timeouts.

Here, I have the same session with points of interest highlighted.
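
For reference, a sketch of that session, with the begin_* and end_* settings grouped together (the session name and the action list are illustrative; the matching pattern follows the common query-timeout example for this target):

CREATE EVENT SESSION QueryTimeouts
ON SERVER
    ADD EVENT sqlserver.sql_statement_starting
        (ACTION (sqlserver.session_id, sqlserver.tsql_stack)),
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.session_id, sqlserver.tsql_stack))
    ADD TARGET package0.pair_matching
        (SET begin_event = N'sqlserver.sql_statement_starting'
            , begin_matching_actions = N'sqlserver.session_id, sqlserver.tsql_stack'
            , end_event = N'sqlserver.sql_statement_completed'
            , end_matching_actions = N'sqlserver.session_id, sqlserver.tsql_stack'
            , respond_to_memory_pressure = 0)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, TRACK_CAUSALITY = ON, STARTUP_STATE = ON);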


I have aligned the matching begin and end settings together so it is easier to see that these settings accept a comma delimited list and that the items are listed in the same order. And, of course, the *_matching_actions settings list actions that have been attached to each of the Events that are to be paired. The *_matching_columns map to payload data points within the begin and end events. If using this particular setting, the columns of interest should exist in both events being compared.

These settings, with the proper attention to detail, can lead to an efficient and useful Session to match events. When attempting to throw this kind of session together without a little thought and prep-work, one could end up with a poorly performing session and potentially memory pressure issues (remember this is a memory target type with all the bliss that comes with a memory target).

Speaking of memory, you will need to remember to tune in for the next article where I will explore how to parse the event data for this session.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Know Before You GO – Target Settings

The last few articles have covered various topics for the event_file and ring_buffer targets. The topics range from basic queries showing how to extract the session data on up through best practices. The fun doesn’t end there!

I will be covering more on the different types of targets over the next several articles. Before diving into the remaining Targets, it would be very beneficial to take a look at yet another core concept. Rather, it is time to return to a core concept and look at it from a different angle.

I would like to revisit the base principle around the Set Operations or configurable fields (recall that some settings are stored in the *_fields object). The remaining targets have a handful of settings to configure. Some of these settings are required while others should be required.

Know Before You GO – Target Settings

When adding a Target to a session, a configurable setting may or may not be obviously available for use with the Target. In and of itself, this can be mildly frustrating if the wrong settings are tried with the wrong Target. More frustrating is that some settings are required. While the required setting may be logically deduced, it does not always work out that way.

This frustration can be mitigated through the use of the GUI to create the session. This introduces its own set of problems as was discussed previously. If the session is to be created through script, then the settings available to the Target won’t be presented in a nice friendly fashion like the GUI. So, what can one do?

The answer lies within the metadata for Extended Events. I have shown through several articles how to expose the metadata for working with Extended Events. Exposing the available settings for each of the Targets is not much different from the previous explorations into the metadata.

The first step is to recall that the Targets are objects within Extended Events. Building from that base, the metadata for the Target settings has to be stored somewhere. Given the clue that each setting is stored as a Field (or Column) once the session is created, I can safely presume that the settings would be exposed through the sys.dm_xe_object_columns DMV. With that in mind, my query would look like the following:
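
Something like the following (a sketch; the column list is a matter of preference):

SELECT o.name AS TargetName
    , oc.name AS SettingName
    , oc.type_name
    , oc.capabilities_desc
    , oc.description
FROM sys.dm_xe_objects o
    INNER JOIN sys.dm_xe_object_columns oc
        ON o.name = oc.object_name
        AND o.package_guid = oc.object_package_guid
WHERE o.object_type = 'target'
    AND oc.column_type = 'customizable'
ORDER BY o.name, oc.name;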

This would produce the following results:


A couple of notes on the results and the query are necessary now. By querying the metadata from script, a description of the setting can be exposed. This leads to a better understanding of the intent and purpose of the setting. Think about the occurrence_number field for a moment. Just going by the name, it would be difficult to understand what this setting actually does. As it turns out, it is a directive to the XE Engine to store only the specified number of event occurrences for each event. In addition to a better definition of the setting, I can expose which fields are mandatory. The data for the mandatory fields is contained within the capabilities_desc column.

While this is great and useful information, it does not resolve all of the hair tugging just yet. It could be presumed that, since the setting exists, it has (or might have) a default value – especially taking into consideration that there are very few mandatory settings. That presumption is quite accurate: there are default values for these settings. Looking through the DMV, no column appears at first glance to be the source of these “defaults” – unless, of course, you recall from prior articles that the column_value column holds the default value for these settings. With that, I can now adjust my query to also show me the default settings as follows:
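
The adjusted sketch, now including column_value as the default:

SELECT o.name AS TargetName
    , oc.name AS SettingName
    , oc.type_name
    , oc.column_value AS DefaultValue
    , oc.capabilities_desc
    , oc.description
FROM sys.dm_xe_objects o
    INNER JOIN sys.dm_xe_object_columns oc
        ON o.name = oc.object_name
        AND o.package_guid = oc.object_package_guid
WHERE o.object_type = 'target'
    AND oc.column_type = 'customizable'
ORDER BY o.name, oc.name;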

This will show me a result set very similar to the prior results, the main difference being that I am now including the default value. I want to highlight a couple of these defaults:


Anybody familiar with SQL Server defaults should know that a default value of 0 typically equates to “unlimited”. If you decide to use the ring_buffer target and do not specify a value for max_memory, you have just given free rein to the XE Engine to use as much memory as physically possible on the server.

Leaving the max_memory setting at its default happens to be dangerous enough that it could cause undue memory pressure and, in extreme cases, cause SQL Server to grind to a halt. This underscores the need to understand what the configuration options are and what they represent. The default values may be fine for many workloads, but really should be evaluated just as should be done with all defaults within SQL Server.

As I mentioned, this is an important topic to discuss at this point prior to more discussions on the different Targets. I will use this type of query throughout the next few articles to help illustrate the settings for the Targets as well as discuss how to get to the target data.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.

