Alter Event Session

Ready for a change of pace? Good! Today is a good time to start looking into one of the more basic concepts of Extended Event session management.

Consider the occasional need to change a trace. Maybe you added the wrong event to the trace, or maybe you neglected to include the event in the first place. There is also the possibility that all of the events are correct but the actions or predicates may not be completely ideal for your trace session. What if it is just as simple as a missing target or adding another target to the session? All of these are potential changes to an XEvent session that might be needed.

Today, I will cover how easy it is to modify these sessions without dropping the entire session, as many examples on the internet do. Each of the configuration changes just mentioned can be handled through some pretty simple scripts (and yes, through the GUI as well).

Altered States

 

There is no getting around it. To demonstrate how to change event sessions, an event session must first exist. Let’s use the following as the demo session.
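A sketch of such a session follows. The session name, event, database, and setting values here are hypothetical stand-ins that match the description below:

```sql
-- Hypothetical demo session: a single event with a few actions, a predicate
-- on database_name, a ring_buffer target, and several session-level settings.
CREATE EVENT SESSION [DemoSession] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.client_hostname, sqlserver.session_id)
    WHERE (sqlserver.database_name = N'AdventureWorks2014')
)
ADD TARGET package0.ring_buffer
WITH (
    MAX_MEMORY = 4096 KB,
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY = 30 SECONDS,
    TRACK_CAUSALITY = ON,
    STARTUP_STATE = OFF
);
GO
-- Start the session so it begins collecting events.
ALTER EVENT SESSION [DemoSession] ON SERVER STATE = START;
```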

This session starts with a ring_buffer target. There is a single event in the session. This event has a few actions applied to it along with a predicate on the database_name field. I also have several of the settings for this session defined (whether at the custom field level or at the session level).

After deploying this session, I quickly realize that I flubbed it up a bit. I forgot a few things. Since those items are missing, I need to get them added without negatively impacting the already gathered data. Let’s start with the missing target. I don’t know how I missed it, but I completely forgot to add a file target to this session. The file target is a must have, so let’s get it added.
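Adding a target does not require dropping the session; a single ALTER handles it while the session keeps running (session name and file path here are illustrative):

```sql
-- Add an event_file target to the running session; already-captured
-- ring_buffer data is unaffected.
ALTER EVENT SESSION [DemoSession] ON SERVER
ADD TARGET package0.event_file (
    SET filename = N'C:\Database\XE\DemoSession.xel',
        max_file_size = (50),       -- MB per file
        max_rollover_files = (4)
);
```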

Phew, that problem has been easily rectified. Further review of the session shows the next problem. My manager requires that the dispatch latency be no more than 4 seconds and that the event memory not exceed 4090 kb. My manager is a little intense about some of these settings. To ensure I adhere to his requirements, I need to change my session settings now.
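Those session-level options can also be changed with ALTER. If your version refuses to change a particular option while the session is running, stop the session, alter it, and start it again:

```sql
ALTER EVENT SESSION [DemoSession] ON SERVER
WITH (
    MAX_DISPATCH_LATENCY = 4 SECONDS,  -- manager's latency requirement
    MAX_MEMORY = 4090 KB               -- manager's memory ceiling
);
```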

Good! We are rolling right along here. I fixed a couple of problems real quick with that session and can continue with other tasks for the day. Shortly after lunch, the manager stops in and asks who changed various database settings. I look into the session that was just deployed to capture some of those events and get that sinking feeling when I realize I completely missed that requirement. D’oh! That’s OK for now because I hadn’t turned off the default trace, but I had better get the proper event added to the session.
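Adding the missing event is another one-line ALTER. The object_altered event fires for DDL changes, which includes changes to database settings:

```sql
ALTER EVENT SESSION [DemoSession] ON SERVER
ADD EVENT sqlserver.object_altered;
```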

The next time something changes with the database, it will now be captured. Sadly, too many things are crossing the desk and I am trying to move too quickly. I forgot to enable the custom field to collect the database name, so I had better make that adjustment.
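Changing a customizable column requires dropping the event and re-adding it with the new setting. The collect_database_name field name used here is an assumption; verify the exact name for your version in sys.dm_xe_object_columns:

```sql
-- The event must be dropped and re-added to change a customizable column.
ALTER EVENT SESSION [DemoSession] ON SERVER
DROP EVENT sqlserver.object_altered;

ALTER EVENT SESSION [DemoSession] ON SERVER
ADD EVENT sqlserver.object_altered (
    SET collect_database_name = (1)
);
```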

Nice, I have that adjustment made. Unfortunately this is where it does become a bit more complicated. Notice that I had to drop the event from the session first before making that change to the custom field setting for that event? This should be minimal in overall impact since it does not remove the already captured events from the session target. But it is an extra step that must be remembered when making a change that affects an Event within the session.

So far so good. I can capture the events that relate to a database change. I can figure out who made the change and the statement made for the change, right? Nope. I missed that during the setup. I guess I better add that in so I have better information for auditing the changes made to the database. I can start capturing that with the addition of the correct actions.
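Actions are attached at the event level, so this is another drop-and-re-add. Something along these lines captures who issued the change and the statement used (the action names are assumptions; check sys.dm_xe_objects for the exact names on your version):

```sql
ALTER EVENT SESSION [DemoSession] ON SERVER
DROP EVENT sqlserver.object_altered;

ALTER EVENT SESSION [DemoSession] ON SERVER
ADD EVENT sqlserver.object_altered (
    SET collect_database_name = (1)
    ACTION (sqlserver.username, sqlserver.sql_text)
);
```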

Everything is now settled in with that session. I go about my merry way for a couple of weeks. Then, one day, I find I need to review the logs to determine who has been changing database settings again. When looking at the log data I discover there are all sorts of object changes being logged to the event session log files. In review of the session definition I figure out the problem. I missed a predicate limiting the captured events to only those that are database type changes. I might as well get that added in.
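One more drop-and-re-add attaches the predicate. Comparing object_type against the map value 'DATABASE' is the assumed predicate form; adjust if your version exposes the field differently:

```sql
ALTER EVENT SESSION [DemoSession] ON SERVER
DROP EVENT sqlserver.object_altered;

ALTER EVENT SESSION [DemoSession] ON SERVER
ADD EVENT sqlserver.object_altered (
    SET collect_database_name = (1)
    ACTION (sqlserver.username, sqlserver.sql_text)
    WHERE ([object_type] = 'DATABASE')   -- only database-level changes
);
```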

This will now change the event session so the object_altered event will only capture “database” type events. These are the types of events that include settings changes at the database level.

Making changes to an event session without needing to drop and recreate the entire session is a pretty powerful tool. If I script that session as it is now configured, I would get the following:
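Assuming the hypothetical session used throughout, the scripted result would resemble this:

```sql
CREATE EVENT SESSION [DemoSession] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.client_hostname, sqlserver.session_id)
    WHERE (sqlserver.database_name = N'AdventureWorks2014')
),
ADD EVENT sqlserver.object_altered (
    SET collect_database_name = (1)
    ACTION (sqlserver.username, sqlserver.sql_text)
    WHERE ([object_type] = 'DATABASE')
)
ADD TARGET package0.ring_buffer,
ADD TARGET package0.event_file (
    SET filename = N'C:\Database\XE\DemoSession.xel',
        max_file_size = (50),
        max_rollover_files = (4)
)
WITH (
    MAX_MEMORY = 4090 KB,
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY = 4 SECONDS,
    TRACK_CAUSALITY = ON,
    STARTUP_STATE = OFF
);
```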

Compare this to the starting session, and it is apparent that all of the changes implemented are now intact.

Altering an Extended Event session is not necessarily a difficult task, but it is a very useful ability. Having this basic skill in your wheelhouse can be very positive and useful for you!

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.

Waiting, is it a Bad Thing?

Despite the desire to get away from the GUI talk in these articles about Extended Events, I have so far been unable to do it. Each recent article has had something more to do with the user interface. Let’s see what we can do with the GUI today.

One of the more useful troubleshooting tools (granted, when used properly and not with a knee-jerk approach) is waits. There are waits in SQL Server that are very specific to Extended Events. Not all waits are bad; some are innocuous. But with a shoot-from-the-hip approach, these waits can cause many DBAs to focus on the wrong thing.

In this article, I will show one particular wait for Extended Events. As a matter of fact, if you were paying attention to the last article, you will have already seen this wait in passing. To get a quick glimpse or to recall what was discussed, please read the article about the live stream target here.

Patience, Padawan

The first thing I want to do is clear my wait stats. Here is a quick script for doing that. Understand that this clears out the wait stats and resets the counters to 0. If you track your waits on a regular basis, this may raise an eyebrow among your teammates.
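A sketch of that reset (again: it zeroes the cumulative counters instance-wide):

```sql
-- Reset the cumulative wait statistics for the instance.
DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR);
```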

After clearing my waits, I can check for a baseline. When checking for this baseline, it is important to note that I am not currently reading from any Extended Event target. I will start that after getting my baseline. Here is what my waits look like prior to working with the target data from any XEvent session.
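A simple baseline query along these lines will do; the exclusion filters are a minimal illustration, and most shops maintain a longer list of benign waits to ignore:

```sql
SELECT TOP (15)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT LIKE N'SLEEP%'    -- trim a few benign waits
  AND wait_type NOT LIKE N'BROKER%'
ORDER BY wait_time_ms DESC;
```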

[Image: baseline]

This is pretty vanilla prior to working with the targets, which is what I want: it confirms the baseline is a solid starting point. Now, similar to what was shown in the live stream article previously mentioned, I am going to open a live stream viewer for the system_health session. At this point, you could wait a minute or three and then re-query the waits. This additional step helps show that the XE wait has not yet introduced itself.

[Image: descendintostream]

Perfect. Now I have a live stream viewer open for the system_health session. I have a good baseline. Now I just need to watch the viewer for a bit. I am doing this to ensure enough time has passed by that my waits have incremented. After a few events pop into the system_health session, I will re-query my waits.

[Image: xe_waitingabit]

Look at how that wait has zoomed clear to the top! This wait is huge! This wait does not appear until the “Watch Live Data” option is being used to tap into the streaming target (really, it should be anything tapping into the live stream target via the GUI or some other program). An example of “some other program” could be as simple as somebody querying the sys.fn_MSxe_read_event_stream function from Management Studio and trying to return the live stream data (as was discussed in the previously mentioned article).

Not understanding what causes the XE_LIVE_TARGET_TVF wait type can cause a data professional, or two, to chase their tail on something that may not be an issue overall. I have seen this happen on more than one occasion where somebody has spent hours trying to chase down the problem that this wait indicates. It doesn’t necessarily indicate a problem (unless you are a shoot from the hip gun-slinging troubleshooter type). It just means that the process accessing the live stream is waiting for more data to come through. That said, if this wait is high, maybe it is time to look into who might be tapping into the Live stream target.

Pretty straightforward and short today. I hope this helps avoid some wasted time on something that can be ignored most of the time.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.

 

A day in the Stream

Enough talk of the GUI with Extended Events (see previous articles in the series if you want to revisit the stickiness of the GUI – here). It is time for a bit of a diversion and something quick.

While it may be a quick traipse today, it won’t be the last article on the subject. It is merely an interesting bird walk into a corner of the Extended Events garden that is seldom visited. Today I will explore the little function called sys.fn_MSxe_read_event_stream.

This function, as the name implies, is used to read the event stream and is most commonly seen when watching the live stream of events for a session. I will show a little bit of that.

Gradually Descending into the Stream

First, let’s set the stage so you can confirm that I am not entirely off my rocker here.

[Image: descendintostream]

By selecting “Watch Live Data” from the context menu after right-clicking the “system_health” session, I will be given a window just like the window discussed in the past few articles about the GUI. After launching that window, a quick query to your favorite requests and sessions DMVs will reveal a little bit about the function we are diving into today.
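Something like this (filtering out our own session) surfaces the viewer's query:

```sql
SELECT r.session_id, r.status, r.wait_type, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;
```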

[Image: activestreamquery]

If I click on the query text in that previous result set, I can see the following to be the query that is executing for the “live data” stream.

Cleaning it up a bit and giving some useful values to the parameters, I might have something like this:
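Something like the following, with the caveat that this function is undocumented and the parameter values shown are my assumptions:

```sql
-- Source type 0: read from the live stream target of a running session.
-- This call blocks and streams; it will not return until cancelled.
SELECT type, data
FROM sys.fn_MSxe_read_event_stream(N'system_health', 0);

-- Source type 1: read from the file target; the first parameter becomes
-- a file path/pattern and must agree with the source type.
SELECT type, data
FROM sys.fn_MSxe_read_event_stream(N'system_health*.xel', 1);
```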

Running that particular query from a management studio window would be rather pointless. It doesn’t ever return unless you cancel the query. The key to this one though is the second parameter. The second parameter tells us what kind of source we want to use for the stream of data. There are two values (that I have been able to find) that can be used: 0 and 1. A value of 0 pulls from the live stream target. A value of 1 pulls from the file target. If using a value of 1, then the first parameter needs to be adjusted accordingly as well. If the two parameters do not match, then an error is thrown.

As it happens, I have several log files in the default directory for the system_health and the HKEngine sessions that are deployed and running on my instance of SQL Server. Here is a sample of those files:

[Image: event_files]

So far so good, right? Let’s go ahead and execute that second query (that reads from the file target) and see what happens:

[Image: stream_file1]

This is far from useful as of yet. But there are a couple of rabbit holes to dig into from here. The first is the different types that we can see here. Let’s refashion the query to restrict the types being returned and see what happens:
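A hedged sketch of that refashioned query, filtering on the type column of the file-source call:

```sql
SELECT type, data
FROM sys.fn_MSxe_read_event_stream(N'system_health*.xel', 1)
WHERE type IN (1, 2);  -- inspect one type at a time to compare the entries
```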

[Image: stream_filesrabbit1]

First, in blue we see that type 2 is exactly the same every single time. In my environment I have exactly four of that type. If I look on the file system, I can see that I have four files as well. This seems to indicate (not yet validated in my plucking at it) that this represents a file. Then type 1 is identical up until the point where I have it highlighted in orange. Just an interesting side note there is all.

If I perform the same thing for the HKEngine session, I see something like the following:

[Image: stream_filesrabbit2]

Notice the difference here? Now I have two type 1 entries for each file that has been created. In addition, one of the type 1 entries is exactly the same for all files created.

But without being able to translate the data returned, this is really just a fun exercise into the live stream. That said, I give you one last interesting tidbit for today while leaving the second rabbit hole (translating the data into human readable text) for another time (besides just using the event file function to read it).

Recall from an earlier result that my spid for the live data viewer was spid 53. If I run an output buffer for that spid, I will see something like the following:
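The output buffer check is a one-liner; spid 53 is specific to my run, so substitute the viewer's spid from your own requests query:

```sql
DBCC OUTPUTBUFFER (53);
```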

[Image: outputbuffer]

Not the friendliest of outputs to peruse and try to figure out, but it does give you a sense of what is going on there. If you know how to convert the data output from sys.fn_MSxe_read_event_stream, I would be interested in seeing what you do for it.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.

 

Hidden GUI Gems

The GUI for Extended Events offers some power to help you get started with Extended Events. As I have shown over the past few articles, some of this power comes at a bit of a cost and may in fact not be as powerful as just using a script. You can flip back through the previous articles via this link.

Today, I have a few gems that are available to you in the GUI. These gems should help solidify your understanding of the tools available through the GUI for working with Extended Events. So far I haven’t hidden my preference for using a script over the GUI. That said, the GUI can be useful for a thing or two. I will explain in better detail further into the article.

Playing in the Mud

To show these gems, a little setup is required first. So I am going to start with the following sample session (started from the GUI):

[Image: debugwhat]

The setup thus far is rather simple, I have selected two optional settings – “start the event session immediately after session creation” and “track how events are related to one another.” The next thing to do within the GUI is to make my way through the events and select the events I need. The events I want to select need to help me track information related to the SQL process stopping (exiting) and any info related to when a memory dump is created.

To try and find the appropriate events, I will go to the Events tab, click the drop down shown in the green box and select “Event names and descriptions” from the menu.

[Image: filtering_eventselect]

After making that selection, then I can type a keyword within the text box under “Event library”. Typing in the term “dump” will produce no results. This means either I have no events that will meet my requirements or I need to try a different term, right?

Well, just to verify the GUI results, I want to confirm what I have seen by using a script. Let’s execute the following script:
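A sketch of such a script, searching event names and descriptions for the keyword just as the GUI search does:

```sql
-- Search event names and descriptions for 'dump', mirroring the GUI search.
SELECT p.name AS package_name,
       o.name AS event_name,
       o.description
FROM sys.dm_xe_objects AS o
INNER JOIN sys.dm_xe_packages AS p
        ON o.package_guid = p.guid
WHERE o.object_type = N'event'
  AND (o.name LIKE N'%dump%' OR o.description LIKE N'%dump%')
ORDER BY o.name;
```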

This confirms my suspicion. The script returns several results with the term “dump” when querying both the descriptions and event names just as I did with the GUI. There is an interesting development however. All of the results show as being in the debug channel.

[Image: filtering_eventselect_script]

So why do these results not display in the GUI? Well, that is one of the hidden gems. The debug channel is not shown in the GUI results by default, which prevents any events from that channel from being displayed. This can be changed by checking the box displayed in the following image:

[Image: channelfilter]

Consider the name of the channel for a moment. These events may be hidden from search results by default for a good reason. Some of these events can have a significant performance impact on the instance. That said, there is the occasional good reason to use the events from this channel to troubleshoot specific issues on the server. You just have to dig harder at it.

Having resolved that issue, I have settled on using the stack_trace and sql_exit_invoked events. So I select both of the events and then move on to the Data Storage tab to configure the target. The target I have chosen is the event_file target. Now that I have selected the events and configured a target, I can click the Script button. I recommend always using the Script button rather than committing the session directly from the GUI. Clicking the Script button here will yield the next gem.

[Image: Xe_debugwhat]

I have scripted the session I configured in the GUI. Overlaying the General page of the session with the produced script shows the gem. While the track_causality setting does get scripted, the option to start the session immediately after creation does not get scripted. This is an interesting problem. While the impact is small, it is one to bear in mind. If you need the session started, make sure to start it manually or script the start of the session to confirm it has been done.
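To illustrate the gem (the session name and package prefixes are my stand-ins), the scripted output resembles the following – note the absence of any START despite the checkbox:

```sql
-- Roughly what the Script button yields: TRACK_CAUSALITY survives,
-- but "start the event session immediately" does not get scripted.
CREATE EVENT SESSION [debugwhat] ON SERVER
ADD EVENT sqlserver.sql_exit_invoked,
ADD EVENT sqlserver.stack_trace
ADD TARGET package0.event_file (SET filename = N'debugwhat')
WITH (TRACK_CAUSALITY = ON, STARTUP_STATE = OFF);
GO
-- So remember to start it yourself:
ALTER EVENT SESSION [debugwhat] ON SERVER STATE = START;
```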

A third gem is one that I have not shown here but one to play with in your free time. If you change the name of the session and then script it, see what happens.

These are just a few more gems in the GUI tools for Extended Events. I recommend using a script where possible (which is just about everywhere), and I have not hidden that fact. While useful to a small degree, I do not recommend using the GUI for most tasks (again, with a recommendation to use a script instead). Using a script has numerous benefits above and beyond the use of a GUI.

If for some reason you are not using Extended Events simply because you wanted to use a graphical interface, I recommend reconsidering. The series of posts I have on Extended Events has plenty of useful scripts that will provide the functionality and ease of use for Extended Events to mitigate that internal lust for the GUI.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.

 

Filtering Logged Data

Over the course of the past couple of articles, I have shown some interesting, if not useful, functionality in the user interface for working with Extended Event log files. Those articles on merging files and customizing the view can be read here and here.

Today, I am going to explore how to further customize the view of the GUI for working with Extended Event Log files or XEL files. This further customization does include working with filters. Prior to proceeding, it would be beneficial to review the aforementioned articles – they do outline some building blocks for this article.

Sifting through it all

Even with a well-tuned XE session, sometimes there is just a deluge of data. A deluge of data can make it difficult to find the most likely candidate for the cause of the problem currently at hand (whatever that may be).

When dealing with a large quantity of data, it becomes necessary to filter that data down. Filtering it down will make it significantly easier to handle – when an appropriate filter is utilized. The GUI permits the implementation of filters in a couple of different ways to help with these dives into the XE logged data.

When convinced to use the GUI to peruse the log data, there are a few possibilities in how to create a filter (and yes it would be better to do it with a script). The first of these methods is actually quite simple. Let’s start with a grid display of the same session used in the previous couple of articles:

[Image: starter_grid]

From here, if I right click one of the cells as shown below, I will be prompted with a context menu:

[Image: grid_filter]

By right clicking a value in the grid, an option to “Filter by this value” will be given in the context menu. Selecting that option will open a new window allowing for further configuration of the filter or to just accept the filter as-is.

[Image: filter_window2]

From the “Filters” window, you can see there is the option to set a time based filter, to create a filter on values for the fields within the trace log or to combine the two. This is pretty straight forward here. That said, recall that filters configured here can be saved in the viewsetting file discussed in the prior article (mentioned at the beginning of this article).

Further Analysis

After filtering down the results, what if you wish to perform more complex analysis? Maybe the analysis is for trending purposes, or maybe something else. Regardless of the purpose, the need is there and you wish to know how to do it (but for some reason have a serious allergy to doing the work via a script). Luckily, there just happens to be a magic button within the GUI that helps perform this task.

Within the GUI, we are given the opportunity to group and aggregate data. The aggregations cannot be performed without first performing a grouping (very similar to the requirements via script). The grouping can contain multiple fields or just a single field. Let’s take a quick peek.

[Image: groups_xe]

Clicking on the Grouping button, the following window opens, permitting the configuration of the groups.

[Image: group_wind]

This looks pretty familiar and standard. It is rather simple in design and function. Move columns from left to right to add to the grouping. To remove from the grouping, move from right to left.

[Image: aggregs_xe]

With the group established, then comes the fun part to help with analysis. This is where aggregation comes into play. Within an aggregation, we can perform counts, max, min, avg, and sum. Depending on the field, the aggregations that can be performed may be restricted. Clicking the aggregation button, one will see the following window:

[Image: agg_wind]

Given the session being evaluated and the columns that have been chosen previously, I am restricted to the columns shown in the preceding image. Aggregations can only be performed on columns selected in the “Column Chooser”. The rest of the configuration of the aggregation is merely an exercise in plug and play to determine which fields or aggregation will serve your needs the best.

For this exercise and the chosen grouping, I am going to go with Average on the duration field and then sort the aggregation by that same field in a descending order. For your purposes, feel free to choose something more meaningful or appropriate to your needs.

Cautionary Tale

If there happens to be a rather large number of events within the session file(s) being evaluated, don’t expect this to work without a hitch or three. Trying to load a 1GB trace file (read that as a default size on an XE trace file) with over 600,000 events, SSMS regularly crashed on me while trying to apply a filter, a group, and even the aggregation. None of this has been an issue while trying to perform similar types of groupings and aggregations with large trace files via script.

If you are going to attempt to manipulate aggregations, groupings, filters etc. via the GUI with large trace files, understand that you may run into errors and crashes when using the graphical interface rather than a script. The choice is yours!

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.

 

Customize the XEvent Log Display

In the previous article I showed a couple of methods to merge Extended Event log files. In that article I mentioned the ability to customize the display in the GUI for the merged log files.

Today, I am going to explore how to configure the GUI with filters and different column sets or orders. Proceeding will be easier with a refresher via the prior article. If you need a primer on how to merge the XEL files, or how to open the XEL files from within the GUI, I recommend taking the time now to read that prior article.

Getting Sticky Again

This should be a rather quick foray into the world of the GUI and how to customize. Just bear in mind that it is a small building block.

Taking up from the prior article, I will continue to use the same sample session and sample data.

Let’s start with a screenshot where we left off the last time:

[Image: merge_gui_display]

By itself, this is not the prettiest of layouts. Sure, there is good information, but the layout is rather simple and leaves one wanting more. The “more” comes easily enough after some familiarity with the GUI. Suppose you desire to change the order of the columns presented in this view, or even to select different columns; there are two methods to achieve that goal. The first option is shown here:

[Image: method1_xe_gui_options]

By right clicking an empty space in the title bar, a context menu will be made available. There are several options here that can be a bit fun to peruse. I will just focus on the “Choose Columns” option for now.

The alternative for this method is to look in the toolbar. There will be an XE specific toolbar available when looking at an XE log file. In this toolbar there is a “Choose Columns” button. This method will get us to the same end result as the first.

[Image: method2_xe_gui_options]

Selecting the “Choose Columns” from either method just presented will open a new window such as the following:

[Image: choosecolumns]

This kind of screen should be fairly familiar and easily discernible. On the left is a set of available columns that can be added to the display. On the right is the list of columns currently being displayed, in tabular format. In between is a set of buttons to add, remove, add all, or remove all columns. In the top right is a means of re-ordering the columns as presented in the display table. Pick a few and sort the columns to a suitable display mode and you are all set.

Recall from the image showing method one that there is an option to remove the column. This same function can be performed from the context menu or from this “Choose Columns” window.

For the order of the columns, there is another means to do that besides the “Choose Columns” window. Columns can be dragged and dropped from the initial screen to change the presentation order easily enough. Sometimes it is far easier to use the “Choose Columns” window to do it. Pick your poison here and run with it.

Super Sticky

This is fantastic if the manipulation of the display settings is a one-off adventure. What if you need to do this multiple times? Maybe, just maybe, this needs to be done for 100 servers. The process I have just shown can become exceedingly tedious. In addition, every time an XEL file is opened, it defaults back to the same settings I have already shown. What do you do in cases like that?

Beyond using a script (the recommended method by the way), there is an alternative.

[Image: displaysett_xe_gui_options]

The alternative is found in the toolbar discussed earlier in this article. The option is “Display Settings.” After picking the columns and the order and getting the display dialed in just right, the view can be saved. The view settings will be saved as an XML file with the extension of viewsetting. This view can then be used to immediately apply to a freshly opened log file. Sure it is still an extra step or two to open that view and can still become a bit tedious, but it is far better than resetting the view every time. And a bonus is that the saved viewsetting file will also save merged columns, groupings, aggregations, column order, and filters defined in the view.

Then again, all of this is easily achieved through a T-SQL script, which is far more scalable when dealing with multiple servers or the need to review the data more than once. The choice is yours when using XE.

Though I mentioned the creation of a filter for the displayed log data, I will not be delving into that topic today. It would be a very suitable exercise for the reader to figure out how to apply a filter through this tool. Or wait until the next time when that topic will be covered along with aggregations and groups. In the meantime, I recommend trying to figure it out with a little picking and plucking through the GUI-ness.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.

 

Log Files from Different Sources

Well, it has been quite a pause between articles in the series. It is well past time to continue the series. With such a delay, it is prudent to remind all that there is a series on Extended Events I had done over the course of two months. The complete table of contents for this series can be found here. Truth be told, I had intended to start this series again as my usual 12 Days of Christmas series. Sadly, I am a bit behind on that series. Maybe there will be multiple posts a day to help catch up.

As was the case throughout the course of the series, I intend over the next few articles to continue to produce building blocks that may be helpful for later articles, or concepts. With that in mind, the chosen topic today is very basic in nature. While basic, it is a rather important topic.

While working with Extended Events, or with any sort of logging or tracing utility, it becomes critical to consume data from disparate sources, or even to consume data generated on a different server. The question becomes: how do we consume that data?

Getting Sticky

When it comes to working with log files from different sources, or most tasks within SQL Server for that matter, there are two general methods. The first method is of the type frequently frowned upon by the more senior of database professionals, sysadmins and so forth. This first method is to use the graphical user interface or GUI. This is more of a point and click method. This is also a method that I will generally shy away from for most tasks. That said, this method is easily performed through the GUI and is reasonably safe.

The first step is illustrated with the following image:

[Image: XE_merge]

From within Management Studio, navigate the file menu as shown in the preceding image. After clicking on “Merge Extended Event Files”, a dialog will open as shown (in the background) of the next image:

[Image: XE_merge_opendialog]

Clicking add from the “Merge Extended Event Files” window will open a dialog that permits you to navigate to the location of the XEL files that need to be opened and merged. Once located, a single file or multiple files can be selected. One should proceed carefully here to ensure against opening too many files or files that may be too large. Opening too many files or files that are too large can cause various unwanted effects.

Once the file(s) are open, a grid-like window will open in SSMS that looks like the following:

[Image: merge_gui_display]

From here, one can manipulate the display to something more suitable to his/her display preferences (e.g. filtering or column layout). I will not cover that here but will be saving it (the display customization tutorial) for another time. That aside, I have just shown how easy it is to look into an Extended Event Log file through the use of the GUI.

Not So Sticky

For the less sticky (less GUI) method, I have shown a similar routine in the past that lays the groundwork for how to do this via script. You can read a little about that here. Building on that method, there is a tweak to be made – we must specifically name the path to the XEL files to be merged rather than try to build it dynamically.

Using the same session files as shown in the GUI examples, I will show how to proceed with the merge via script.
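A sketch of that load step (the paths and table name are illustrative; the XEM parameter applies only to files from SQL 2008/R2 – pass NULL for later versions):

```sql
-- Load the merged XEL data into a table for querying.
SELECT CONVERT(XML, event_data) AS event_data_xml
INTO dbo.XEMergedLogs
FROM sys.fn_xe_file_target_read_file(
       N'C:\Database\XE\DemoSession*.xel',   -- path to the copied XEL files
       N'C:\Database\XE\DemoSession*.xem',   -- XEM metadata (2008/R2 only)
       NULL,
       NULL);
```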

This script lays out a rather simple process. That process being to load the log data into a table (in a database of your choosing) and then once loaded, query the data to better understand what has been captured. In the preceding script I left a few notes. It is important to understand that if trying to pull in the logs from a SQL 2008 or R2 instance then the XEM file must be included (as I have done for this particular example).

To parse the data into a friendly format that I can use for analysis, I would do something like the following:
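A sketch of that parsing step follows, assuming the dbo.XELStage staging table from the load script. The actions shredded here (database_name, sql_text) are illustrative and should match whatever the session actually collected.

```sql
-- Shred the raw event XML into a friendlier tabular format for review.
SELECT  ed.event_data_xml.value('(event/@name)[1]', 'varchar(128)')      AS event_name,
        ed.event_data_xml.value('(event/@timestamp)[1]', 'datetime2(3)') AS event_time_utc,
        ed.event_data_xml.value('(event/action[@name="database_name"]/value)[1]',
                                'varchar(128)')                          AS database_name,
        ed.event_data_xml.value('(event/action[@name="sql_text"]/value)[1]',
                                'nvarchar(max)')                         AS sql_text
FROM dbo.XELStage AS ed
ORDER BY event_time_utc;
```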

From here it is really easy to add or remove columns, or to re-order the columns into a friendlier format for the person reviewing the data. This is incredibly easy – especially given the appropriate recipe / script / process. I would also venture that this method requires fewer resources and leaves less chance for error than the GUI route.

Enjoy working with evaluating and merging these XEL files from different sources. It should be fun!

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.

 

Tricks and Treats with XE

Halloween is a great time of year. It is unfortunate that it is just one day of the year. That said, I do like to think of the month of October as Halloween Month. I have several posts over the years that geek out over the cross-over between Halloween and SQL Server.

With the proximity to Halloween, it’s not hard (knowing me) to figure out why I originally decided to terminate this series on the Eve of Halloween day. Then again, as it turns out, I decided to bring the series back to life on the other side of Halloween. In short, it was to be killed on Halloween Eve and then implicitly converted to some living dead creature. You pick whether it is a zombie (which is far better suited to SQL Server) or a vampire.

If you are interested in the previous Halloween posts, here is a list of a few of them:

XE Related

Ghosts in your Database I

Ghosts in your Database II

Ghosts – an eXtrasensory Experience

DB and Fun Related

All Halloween posts

That list is my Halloween treat this year. Now for the trick with a very strong warning. Because of this warning, I am not posting any code showing how to perform the trick.

Warning

DO NOT ATTEMPT THIS ON ANY PRODUCTION SERVER. I BEAR NO RESPONSIBILITY FOR ANY PRODUCTION SERVER CHANGES DUE TO THE READING OF THIS ARTICLE (OR ANY OTHER ARTICLE I HAVE WRITTEN). SOLE RESPONSIBILITY OF CHANGES OR ACTIONS TAKEN BELONG TO THE PERSON PERFORMING THE ACT OR CHANGE. IN OTHER WORDS, YOU ARE SOLELY RESPONSIBLE FOR BREAKING YOUR PRODUCTION ENVIRONMENT IF YOU IMPLEMENT THIS IN PRODUCTION!!!

I have debated seriously over even publishing this “trick” for Halloween because of how deliciously evil it is. I will try to paint the picture in broad strokes. I will leave it up to you to connect the dots.

Problem

A third party vendor has provided an application along with some database requirements. Among these requirements is that the application account must use the ‘sa’ login. You have a strict policy that ‘sa’ must never be used for any applications or by the end-users. This is an extremely protected account by policy. The dilemma you have been presented is that the CEO insists that this application must be used (never happens right?) and the vendor insists the application will not function properly without the use of ‘sa’ (I can hear you chortle at that).

Now you, as the DBA, are stuck between a rock and an even harder place. Being smart (and somewhat sadistic), you insist that the ‘sa’ login should not be used because it will break SQL Server (in reality we know the login does not break SQL Server, but something inevitably will break due to a mis-step by the person using it, a) when they shouldn’t, and b) because they lack proper training). Inside you are grinning from ear to ear because you have some devilish ideas, some of which you learned by attending a session by Rob Volk (twitter) where he shows some devilish methods to deter the use of ‘sa’.

For the sake of the scenario, let’s just say you have a policy preventing the implementation of logon triggers (as suggested by Rob) due to a couple of mishaps a few months back. Somebody implemented a logon trigger that wasn’t properly configured, and it happened to prevent all users from accessing the server (including the DBAs). Ruh roh!

And then…

Later in the week, after reading about the power of Extended Events, you decide to play around and do a little discovery. You come across this blog post that shows you how to find all of the possible events within the XEvent Engine. So you run the script that you found, shown here:
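A sketch of that discovery script, built from the metadata views covered earlier in the series:

```sql
-- List every event registered in the XEvent engine, with its description.
SELECT  p.name        AS package_name,
        o.name        AS event_name,
        o.description AS event_description
FROM sys.dm_xe_objects AS o
    INNER JOIN sys.dm_xe_packages AS p
        ON o.package_guid = p.guid
WHERE o.object_type = 'event'
ORDER BY p.name, o.name;
```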

And while looking through the events and descriptions you discover that there is an event to log all of the successful logins. Not thinking anything of the third party vendor (because it just makes your blood boil) you begin to dream of the potential for auditing all successful logins (established connections) for all users and documenting who may be doing what and when.

After taking a few notes about the potential for the login event and getting access and logins under better control, you continue along your journey through Extended Events by reading this article about Actions. Feeling comfortable enough from the article, you do what any good data professional, trying to learn a new topic, would do – you start exploring using the provided query:
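The exploration query follows the same pattern as the event discovery script, filtered to actions instead; a sketch:

```sql
-- List every action available in the XEvent engine, with its description.
SELECT  p.name        AS package_name,
        o.name        AS action_name,
        o.description AS action_description
FROM sys.dm_xe_objects AS o
    INNER JOIN sys.dm_xe_packages AS p
        ON o.package_guid = p.guid
WHERE o.object_type = 'action'
ORDER BY p.name, o.name;
```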

While looking through the available actions, you see one in particular that causes you to raise an eyebrow (not both, just one). There is an action called “debug_break”. You squirm and ponder for quite a bit at the name and definition of this particular object. You wonder out loud “why would anybody ever want that in there?”

Your co-worker interrupts with “Do what?”

To which you reply “Oh nothing important. It was just something I read.” After which you dismiss it, realize the time of day, pack up and head home for the evening. Hours later, after the work day has long since finished, you awaken in a cold sweat with an “Aha!” that startles your slumbering neighbor’s dog. Immediately you pull out your laptop, connect to your local sandbox instance and get to work with a wry smile and devious giggle.

Upon returning to work the next day, you call the vendor and work out a solution to build them a sandbox server as a proof of concept. You grant them the ‘sa’ login for the application. Sitting down together and connected to the new sandbox, the vendor attempts to login and the following happens:

[Image: DEBUG_broke – the vendor’s login attempt on the sandbox]

Slack-jawed and speechless, the vendor pales in complexion. Deep down inside you giggle like a grade-school girl – content with yourself. BATTLE WON!

After the vendor departs with a promise to rework the security requirements, you restart the service on the sandbox and go about the rest of your work week with a silly grin from ear to ear and nobody the wiser.

That concludes the trick. In the tradition of Halloween, you knocked on my virtual door of Extended Events and I, in accordance with the traditions of Halloween, gave you a trick and a treat.

I hope this helps you to understand the power of Extended Events. Thinking through a problem and reaching a solution is what XEvents can help to provide. I hope this Halloween XE article was enjoyable to you. Stay tuned as the XE 60 Day series goes full Zombie starting next week.


 

Default Sessions

So I have blasted through a lot of technical articles about Extended Events. There is a lot of groundwork that has been covered with layers of building blocks that have been stacked. Now it’s time to take a step back as I near the end of the original goal for this series.

I say original goal, but that really only means the goal pertaining to the length of the series. I had a goal for content that has yet to be reached. Due to that, I have pushed the series into overtime. So, really, that means this is merely a step back to a lighter topic today as I jump into half-time (or something like that) of the series.

Default Sessions

In the previous article I mentioned some of the default sessions for Azure SQL Database. I will not be going into any further detail about those sessions at this time. Discussion on those default sessions will have to be reserved for a later date.

The focus of this article will be to introduce the default sessions deployed with SQL Server. Some of these are internal and “private” while others are more “public” and behave like a user-defined session.

system_health

The system_health default session is the most common of all of the default sessions. This session is akin to a true black box recorder in SQL Server. There is a ton of information that can be trapped via this event session. One caveat to this session is that if you are looking at the session in SQL Server 2008/R2 then the session really should be modified to include a file target.
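A hedged sketch of that modification on 2008/R2 follows. The target there is named package0.asynchronous_file_target (renamed event_file in 2012 and later), and the file paths are placeholders:

```sql
-- Add a file target to system_health so events survive service restarts.
ALTER EVENT SESSION system_health ON SERVER
ADD TARGET package0.asynchronous_file_target
    (SET filename     = N'C:\Database\XE\system_health.xel',  -- placeholder path
         metadatafile = N'C:\Database\XE\system_health.xem'); -- XEM required on 2008/R2
```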

sp_server_diagnostics session

The name of this session almost gives it away. This is an internal private session that executes a stored procedure by the same name: sp_server_diagnostics. Since it is a private internal session, I do not have all of the details on the internals. But since it is related to the sp_server_diagnostics stored procedure, it isn’t hard to make the leap to what it does. In addition, there are some events with a similar name which help to better understand what is happening here.

The sp_server_diagnostics stored procedure captures diagnostic and health information about SQL Server to help alert to potential failures. This procedure runs continuously in repeat mode. Here are some of the results from my laptop for this procedure:

[Image: sp_server_diagnostics – sample result set from the procedure]
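For reference, the procedure can be run directly. A single snapshot needs no parameters, while repeat mode takes an interval in seconds (the interval shown is illustrative):

```sql
-- One-time snapshot of the server health data:
EXEC sys.sp_server_diagnostics;

-- Repeat mode, emitting a result set every 5 seconds until cancelled:
-- EXEC sys.sp_server_diagnostics @repeat_interval = 5;
```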

Notice there are five different domains of interest in this result set. If I look at the events by the same namesake, I will see the following possible data points:

[Image: sp_server_diagnostics_result_xe – data points for the event of the same name]

Wait, there’s more! Diving in deeper into the event, I can see the following possible component_names / domains for the maps in this event:

[Image: sp_server_diagnostics_result_xemap – component_names / domains mapped in the event]

In this last result set, there are multiple data points that map back directly to the stored procedure. In addition to the direct mapping, this result demonstrates the additional domains of “AGS” and “LAST”. I don’t have any AGS created on this machine so no stats would be recorded for that. I don’t know what “LAST” represents. Then the sp_server_diagnostics_state maps directly to the “state” seen in the stored procedure.

Having this same information divulged via extended events doesn’t end here. The system_health session collects the sp_server_diagnostics_component_result event by default. This all ties together back to the “sp_server_diagnostics session” which appears to be the manifestation of the “repeat mode” of the sp_server_diagnostics extended stored procedure. The proc is continuously running and gathering data for the event to trap the information and subsequently send it along to the event session targets.

hkenginexesession

The internals of this session are the hardest to track down. This is an internal private session. The name of the session does reveal a fair amount of information that can be used to make decent educated guesses at what is being done here. First, this session is used for monitoring In-Memory OLTP / hekaton / hk (for short) tables and natively compiled stored procedures.

Taking the knowledge that this is used to monitor the “hekaton” engine, one can also deduce that the possible list of events comes from the following (probably others involved too):

[Image: hekaton_events – candidate events for the hkenginexesession session]

A more complete list of events that seem to fit for this session can be retrieved via the following query:
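A sketch of such a query follows. The name/description filter is an educated guess based on naming conventions, not an official list of the session’s events:

```sql
-- Events that appear related to the In-Memory OLTP (hekaton) engine.
SELECT  p.name        AS package_name,
        o.name        AS event_name,
        o.description AS event_description
FROM sys.dm_xe_objects AS o
    INNER JOIN sys.dm_xe_packages AS p
        ON o.package_guid = p.guid
WHERE o.object_type = 'event'
    AND (o.name LIKE '%xtp%' OR o.description LIKE '%hekaton%')
ORDER BY o.name;
```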

The final piece of interesting information is that this session appears to be tied to the following dll: hkengine.dll. This is a dll that is loaded in SQL Server and it does bear the same naming convention. Here is another query that will show if this dll is loaded:
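A simple check against the loaded-modules DMV works here:

```sql
-- Show whether the hekaton engine dll is loaded in this instance.
SELECT name, description
FROM sys.dm_os_loaded_modules
WHERE name LIKE N'%hkengine.dll%';
```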

This session is only available as of SQL Server 2014.

AlwaysOn_health

The AlwaysOn_health session is very much like the system_health default session. This is a default session that can be scripted from within SQL Server Management Studio. Think of this session as a black box recorder for your Availability Groups. This session will capture events related to the state of the AG replica, if there is a state change, any errors that occur, and failovers that occur.

To get a grasp of what this particular session does (try and figure out a query to accomplish this before proceeding), one can run the following query:
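One way to answer it is to query the server-scoped catalog views for the deployed session definition; a sketch:

```sql
-- List the events configured in the deployed AlwaysOn_health session.
SELECT  es.name     AS session_name,
        ese.package AS package_name,
        ese.name    AS event_name
FROM sys.server_event_sessions AS es
    INNER JOIN sys.server_event_session_events AS ese
        ON es.event_session_id = ese.event_session_id
WHERE es.name = 'AlwaysOn_health';
```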

And the results would be:

[Image: alwayson_events – events configured in the AlwaysOn_health session]

Bonus points if you recognized this could be done based on the foundations pieces laid out in previous articles.

This session became available as of SQL Server 2012.

The default sessions in SQL Server cover a lot of points of interest when looking to monitor a database environment. Together, these sessions form quite a significant black box recorder for SQL Server. Despite the quantity of events that these sessions combine to monitor, it doesn’t mean that you can’t add more events to monitor. That is the power of Extended Events. It is also important to remember that these sessions are default sessions. What do we like to do with most defaults? That’s right, we tweak them to be more specific to our own environments.

I hope you enjoyed the article, even without the dive into the default sessions of Azure SQL Database. Stay tuned because those sessions may be covered in some future article.


 

Azure SQL Database

So far I have strictly focused on Extended Events for SQL Server installations. This means I have been looking at Extended Events from the “physical” or on-premises perspective. Installing SQL Server into an Azure VM falls into this category as well.

What hasn’t received any TLC is Extended Events from the Azure SQL Database perspective. You may be wondering why it matters, since I just said that Extended Events is the same in SQL Server in an Azure VM as it is on-premises. That is Azure and that is the cloud, right?

Well, not entirely. Azure SQL Database works quite a bit differently. Azure SQL Database introduces some things into Extended Events that basically turn your XEvent world upside down. Sort of. If you understand that Azure SQL Database is a contained database (in concept), then some of this world flipping I mentioned is merely a tilt and not terribly earth shattering. That said, it does require at least a little bit of attention. Today, I will merely introduce some of the differences and then leave the internals for a later time.

Azure SQL Database

Metadata

Azure SQL Database uses some of the same dynamic management views (DMVs) and catalog views as SQL Server. I covered those DMVs and catalog views in previous articles here and here, respectively. For the most part, though, Azure SQL Database uses a different set of views. Some of the views of interest for querying metadata are listed in the following tables:

Running Sessions

  • sys.dm_xe_database_session_event_actions – Info about actions within active event sessions
  • sys.dm_xe_database_session_events – Info about events within active event sessions
  • sys.dm_xe_database_session_object_columns – Info about the event payload for events in an event session
  • sys.dm_xe_database_session_targets – Info about targets within active event sessions
  • sys.dm_xe_database_sessions – Returns a row for each active database-scoped event session

Looking at the names in the list, they should be pretty familiar at this point. The real difference is that these are scoped to the database, whereas the SQL Server equivalents are scoped to the server. This is an essential distinction due to the way Azure SQL Database works.

Deployed Sessions

  • sys.database_event_session_actions – Info about actions within deployed event sessions
  • sys.database_event_session_events – Info about events within deployed event sessions
  • sys.database_event_session_fields – Returns a row for each customizable column that was explicitly set on events and targets
  • sys.database_event_session_targets – Info about targets within deployed event sessions
  • sys.database_event_sessions – Returns a row for each event session in the database

These particular catalog views should also seem somewhat familiar. The major naming difference is that they are scoped to the database instead of the server, just like the DMVs.

XE Internals Metadata

  • sys.dm_xe_map_values – Returns mappings of internal keys to text
  • sys.dm_xe_object_columns – Metadata for object data definitions
  • sys.dm_xe_objects – Metadata for the different objects available within the XE engine
  • sys.dm_xe_packages – Listing of packages registered within the XE engine

And lastly, these are all the same between SQL Server and Azure SQL Database. If you have been following along with the series, these views really should come as no big surprise and there should be a base familiarity with them. That familiarity is part of the reasoning for skipping a deep dive at the moment.

Storage

Azure SQL Database still gives the possibility of storing event data in multiple different types of targets. Not all standard public targets are available for user-defined sessions; one can currently use the ring_buffer, event_counter, and event_file targets. This provides two memory-resident targets and one file target. The file target does not come without a cost, though: using it requires access to an Azure Storage container and takes some extra effort to configure properly.
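Broadly, configuring the event_file target here means creating a database scoped credential for the storage container and then pointing the target at a blob URL. The sketch below uses placeholder account, container, session, and SAS values throughout:

```sql
-- Credential named after the container URL, holding a SAS token.
-- (A database master key may need to exist before creating the credential.)
CREATE DATABASE SCOPED CREDENTIAL [https://myaccount.blob.core.windows.net/xecontainer]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = 'sv=2015-04-05&sig=...';  -- SAS token for the container (placeholder)
GO

CREATE EVENT SESSION azure_file_demo
ON DATABASE
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file
    (SET filename = 'https://myaccount.blob.core.windows.net/xecontainer/demo.xel');
```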

Default Sessions

Azure SQL Database also comes with a set of default sessions automatically configured. Just like in SQL Server, not all of them are running, but they are deployed. Even though several sessions are deployed by default, there is a limitation on how many can be running at a time (subject to change). The default sessions are (subject to change):

  • azure_xe_activity_tracking
  • azure_xe_errors_warnings
  • azure_xe_object_ddl
  • azure_xe_post_query_detail
  • azure_xe_pre_query_detail
  • azure_xe_query
  • azure_xe_query_batch
  • azure_xe_query_detail
  • azure_xe_query_execution
  • azure_xe_query_waits
  • azure_xe_query_waits_detail
  • azure_xe_waits

Consistency

Despite these differences, there is a fair amount of consistency. Discovery of objects and metadata still remains the same. Being able to query the metadata for running or deployed sessions is also in line with the previous articles I have written in this series. In short, despite the slim differences (subject to change) and the platform differences between SQL Server and Azure SQL Database, there are still consistencies and common building block practices that I have shown still apply.

Scoping

Even though there is a great amount of consistency throughout the extended events engine and between these two disparate platforms (SQL Server and Azure SQL Database), there is one glaring difference that must be discussed. I showed that the catalog views and DMVs are scoped to the database instead of server. Guess what that means for creating the actual session? You got it! An Extended Event Session is scoped/created at the database level rather than the server level. This is done easily as follows:
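A minimal sketch, with illustrative session, event, and target choices:

```sql
-- Database-scoped session; ON SERVER is not valid in Azure SQL Database.
CREATE EVENT SESSION demo_db_session
ON DATABASE
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.ring_buffer;
GO

ALTER EVENT SESSION demo_db_session ON DATABASE
    STATE = START;
```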

As I have said, I am not going to go deep into the details and inner working of Extended Events in Azure SQL Database at the moment. I will save those deeper dives for a later time. What I have done at this point is to introduce the basic differences and introduce the realm of XE in Azure SQL Database to you. Don’t be afraid to use Azure SQL Database based off a previous limitation around Extended Events. Things are changing in the cloud and they are changing at a rapid rate.

 

I hope you enjoyed the article.


 
