SQL Agent Job Schedules

Many moons ago, I posted a script to help report on the human-friendly scheduling information for SSRS subscriptions. You can read all about that here.

In a very similar fashion, today I want to share how easy it is to turn the internal schedules for SQL Agent jobs into something that is more human friendly. SQL Agent job schedules should be easy for humans to read, if for no other reason than that you have to provide this information to the auditors from time to time. We all know they can't properly interpret the numeric job schedules and determine what they mean.

The job schedules work quite well with the current design – which is not human friendly. The use of numbers is a streamlined and far more efficient approach. It would be a bit more useful to us, though, if some sort of translation table were stored alongside them. A stored translation table would make it terribly easy to figure out what is happening within the schedules for the SQL Agent jobs. We don't have that, so we often have to do a little trial and error. Or we rely on people who have done that trial and error – with a little verification on our part. After all, these scripts are posted on the internet.

SQL Agent Job Schedules

I am not going to break down the script. Given the similarity to the aforementioned SSRS script, I recommend looking there for a bit of an explanation. Here is the script to retrieve SQL Agent Job schedules from the msdb database on your instance.
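
Since the translation has to be assembled by hand, a minimal sketch of such a query follows. The schedule decodes come straight from the documented msdb.dbo.sysschedules values; the category names used to exclude the SSRS and SSIS maintenance jobs ('Report Server' and 'Database Maintenance') are assumptions you may need to adjust for your environment.

/* Sketch: decode SQL Agent job schedules into human-friendly terms. */
SELECT j.name AS job_name
    , s.name AS schedule_name
    , CASE s.freq_type
        WHEN 1 THEN 'Once'
        WHEN 4 THEN 'Daily'
        WHEN 8 THEN 'Weekly'
        WHEN 16 THEN 'Monthly'
        WHEN 32 THEN 'Monthly (relative)'
        WHEN 64 THEN 'When SQL Agent starts'
        WHEN 128 THEN 'When the server is idle'
      END AS frequency
    , CASE s.freq_subday_type
        WHEN 1 THEN 'At the scheduled time'
        WHEN 2 THEN 'Every ' + CAST(s.freq_subday_interval AS varchar(10)) + ' second(s)'
        WHEN 4 THEN 'Every ' + CAST(s.freq_subday_interval AS varchar(10)) + ' minute(s)'
        WHEN 8 THEN 'Every ' + CAST(s.freq_subday_interval AS varchar(10)) + ' hour(s)'
      END AS subday_frequency
    /* active_start_time is stored as an integer in HHMMSS form */
    , STUFF(STUFF(RIGHT('000000' + CAST(s.active_start_time AS varchar(6)), 6), 5, 0, ':'), 3, 0, ':') AS start_time
FROM msdb.dbo.sysjobs j
    INNER JOIN msdb.dbo.sysjobschedules js ON j.job_id = js.job_id
    INNER JOIN msdb.dbo.sysschedules s ON js.schedule_id = s.schedule_id
    INNER JOIN msdb.dbo.syscategories c ON j.category_id = c.category_id
WHERE c.name NOT IN ('Report Server', 'Database Maintenance')  -- assumed category names
ORDER BY j.name;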

This script will only retrieve the SQL Agent Job schedules that are not SSRS related and that are not SSIS maintenance package related. The SSRS stuff has already been introduced, and a quick eval of those blog posts will show the differences between grabbing SSRS information vs. grabbing just SQL Agent Jobs.

Stay tuned. I will post a script for the SSIS related scheduling information.

Now, go forth and audit your agent jobs. Document the schedules in an easy to read format. Give that report / document to the auditors and your manager and look like a rock star for the day!

Index Cannot Be Reorganized…

Working diligently as any good DBA might, you have established maintenance routines for each of the SQL Servers under your purview.

As a part of this maintenance you have scripted solutions to intelligently manage and maintain the fragmentation levels for each of the indexes within each database on each instance.

To further show how diligent you are as a DBA, the outcomes of each maintenance run are logged and you review the logs each morning. This routine helps keep you on top of everything that is happening within the environment.

For months, maybe even years, things are running smoothly. Never a failure. Never an error. Just routine log review day after day. Then one day it happens – there is an error. The index maintenance script failed one night.

Index Cannot be Reorganized…

You receive the error message similar to the following:

Msg 2552, Level 16, State 1, Line 1
The index "blah" (partition 1) on table "blah_blah_blah" cannot be reorganized because page level locking is disabled.

Immediately, you start double-checking yourself and verifying that it worked the previous night. You even go so far as to confirm that the same index was previously reorganized. How is it possible that it is failing now on this index? What has changed? Has something changed?

Time for a little digging and investigating, so the dirty work begins. On one side of the coin you are relieved to be able to do something different. On the other side of that coin, you are rather annoyed that something seems to have changed. These feelings are perfectly normal!

First things first – you investigate the indexes in question to confirm what the error is saying. This is easily done with a query such as the following:
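
A minimal sketch of such a query against sys.indexes:

SELECT OBJECT_NAME(si.object_id) AS table_name
    , si.name AS index_name
    , si.allow_page_locks
    , si.allow_row_locks
FROM sys.indexes si
WHERE si.allow_page_locks = 0;  -- these indexes cannot be reorganized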

Scrolling through the results, you notice (eventually) that the IX_SpecialOfferProduct_ProductID index in the AdventureWorks2014 database has page locks disabled. You are absolutely certain that this index was created allowing page locks. Pondering the problem for a moment, you recall having read about the default trace (there are several articles on the default trace – here) and the thought occurs to try and see if there is a breadcrumb there about the change. Using the query from that default trace article, a picture starts to unscramble. Here is that query reposted and a snippet of the results:
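
Reconstructed roughly, that default trace query looks like this (a sketch using the standard fn_trace_gettable pattern):

DECLARE @path nvarchar(260);

SELECT @path = [path]
FROM sys.traces
WHERE is_default = 1;

SELECT gt.StartTime
    , te.name AS EventName
    , gt.HostName
    , gt.ApplicationName
    , gt.LoginName
    , gt.DatabaseName
    , gt.ObjectName
FROM sys.fn_trace_gettable(@path, DEFAULT) gt
    INNER JOIN sys.trace_events te ON gt.EventClass = te.trace_event_id
WHERE te.name = 'Object:Altered'
ORDER BY gt.StartTime DESC;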

[Image: index_deftrace_audit]

This is a great start. Not seen in the results is the timestamp showing when it was done – which was due solely to snipping. Also not shown is the text of the statement that was run for those three events. So close, yet so far away. This is not quite enough to have the smoking gun evidence to show Jason (me) that I did something wrong and unauthorized. What to do now?

All is not lost yet! Your stunning memory kicks in and you recall several articles about using Extended Events to audit server and database changes. Better yet, you recall that you deployed an XE session to the server where this error occurred. And yes, you are slightly embarrassed that you failed to remove the XE session after fiddling with it. We won’t tell anybody that you deployed a test XE session to a production server as if it were your sandbox. The session currently deployed is trapping all object changes unlike the following session that has it filtered down to just objects that are indexes.
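
That filtered session would look something like this sketch (the session and file names are illustrative; the string comparison against the mapped object_type field is the piece to note):

CREATE EVENT SESSION [AuditIndexChange] ON SERVER
ADD EVENT sqlserver.object_altered (
    ACTION (sqlserver.server_principal_name
        , sqlserver.client_hostname
        , sqlserver.sql_text)
    WHERE (object_type = 'INDEX'))  -- only index changes
ADD TARGET package0.event_file (SET filename = N'AuditIndexChange')
WITH (STARTUP_STATE = ON);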

You query the trace file with a query like this:
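
A sketch of that query, shredding the event_file output (the file pattern is illustrative and may need a full path):

SELECT CONVERT(xml, event_data) AS event_data
INTO #IndexAudit
FROM sys.fn_xe_file_target_read_file(N'AuditIndexChange*.xel', NULL, NULL, NULL);

SELECT ed.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time
    , ed.event_data.value('(event/data[@name="ddl_phase"]/text)[1]', 'nvarchar(60)') AS ddl_phase
    , ed.event_data.value('(event/data[@name="object_name"]/value)[1]', 'nvarchar(256)') AS object_name
    , ed.event_data.value('(event/action[@name="server_principal_name"]/value)[1]', 'nvarchar(256)') AS login_name
    , ed.event_data.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
FROM #IndexAudit ed;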

Wow! Look at the results! There is a smoking gun finally.

[Image: index_xe_audit]

Two results happen to pin the root cause of the change squarely on my shoulders. Yup, Jason changed that index to disallow page locks. I highlighted three different areas of interest in the results. The yellow and green indicate the DDL phase. One row for the start of the statement, and another row for the commit of that statement. The red highlight shows me the exact change that was made to this index. This is all very good info!

What Now?

It really is great to have the smoking gun. If something is broken and it worked previously, it is essential to find the root cause. With a root cause under the belt, what needs to be done to fix the failure? That is a little bit easier than finding the root cause. That is, unless there is a technical reason for the index to no longer allow page locks (maybe that smoking gun is less of a smoking gun and more like baby spittle after all).

Here is how you fix the problem:
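
Re-enable page locks on the offending index:

USE AdventureWorks2014;
GO

ALTER INDEX IX_SpecialOfferProduct_ProductID
    ON Sales.SpecialOfferProduct
    SET (ALLOW_PAGE_LOCKS = ON);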

But, But, But…

The Extended Events session would be very noisy and capture every alter index statement, right? It should capture statements like the following, right?
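
Statements along these lines, all run against the same index used above:

ALTER INDEX IX_SpecialOfferProduct_ProductID ON Sales.SpecialOfferProduct REORGANIZE;

ALTER INDEX IX_SpecialOfferProduct_ProductID ON Sales.SpecialOfferProduct REBUILD;

ALTER INDEX IX_SpecialOfferProduct_ProductID ON Sales.SpecialOfferProduct DISABLE;

-- a rebuild also re-enables the index disabled above
ALTER INDEX IX_SpecialOfferProduct_ProductID ON Sales.SpecialOfferProduct REBUILD WITH (FILLFACTOR = 90);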

The answer to that question is: Yes, Yes, Yes. These statements are all captured due to the use of the ALTER statement. Here are the results of the XE session with all of these scripts having been executed:

[Image: index_xe_audit2]

If you want to audit when the indexes are changing and how they are changing, this will do just the trick. If there are a ton of changes, then be prepared for a deluge of information.

The Virginia Tour 2016

Long Time Coming

With less than a week before the Richmond SQLSaturday event, I am finally getting this post done – maybe.

This is not just a one day event for me. No no! I will be presenting for three consecutive days on many things SQL Server related. The Virginia Tour 2016 really is more of a Richmond Tour 2016 (March 16-19) with all three days being presentations in the Richmond area.

The tour starts off with a visit to the Richmond User Group. You can find details on the evening and event on their website – here.

Here is a quick recap of the NSFW (Naked SQL: Fundamentals to Wisdom) session:

Ever feel like there are just too many layers with SQL Server? You peel off a layer and behold there is yet another layer. SQL Server is rather complex. The complexities can be rather frustrating from time to time.

Join this NSFW (not safe for whining) session as we strip SQL down to the bare bones. I will help you undress the complexities of a few of the basic SQL Server concepts. These concepts will include locks, principals, query directives and deadlocks.

In this session I will disrobe misunderstandings related to the aforementioned topics. I will show the different types of principals in SQL Server. I will also show various scenarios involving locks and deadlocks along with some of their effects with and without query directives. The goal is to better understand these topics and to hopefully stop using and perpetuating common mis-uses of terminology and common misconceptions about the technology.

This should be a light-hearted and fun session with a bit of NSFW learning. Just hope the censors don’t get to the slide deck first!

After the user group presentation to kick off the week, there will be an all-day encore on Friday. Friday is officially a part of the SQLSaturday event. You are welcome to come and participate in my all day workshop – Monitoring SQL Server with Extended Events. There is an event page with details for registering that you can visit – here!

Here is some of what I will be covering in day 2 of this Richmond tour:

Every good enterprising DBA has the prime directive of monitoring the SQL Servers under his/her jurisdiction. The last thing a DBA wants is to hear from an end-user that there is a problem with a database under his/her purview.

SQL Server has progressed over time to give better and better information to the DBA.  Much of this information can be used in a home grown solution to monitor SQL Server. With the constant increase in surface area from Extended Events, the power to build a home grown solution becomes more and more feasible while being limited only by the imagination of the enterprising DBA (or developer or data professional). This full day workshop will introduce you to an Extended Events based solution to achieve that home-grown monitoring goal.

This full day session on Extended Events will help prepare you to put this tool to immediate use as you walk back to your daily duties.  This workshop will teach you about Extended Events starting with the basics and moving through how to create XE sessions that will get the right data for you, while doing so with minimal impact.

You will be exposed to advanced troubleshooting techniques as we work through complex issues that are made easier through the use of XE.  Take advantage of this opportunity to dive into the world of Extended Events and learn how you can make best use of this tool in your SQL 2008+ environment.

This will be a fun day if for nothing else than it is about Extended Events and how to glean great information from your SQL Servers. That and we will keep it light. Just remember, the more you are involved in the workshop the better it will be!

After a brief break to grab some zzzz's, I will be wrapping up this tour with the grand finale – SQL Saturday 486!!

Not only will I be presenting at this event, I have also volunteered.

I will have a short introductory style presentation about Extended Events. I intend to show a little profiler and a little Extended Events. As usual, I intend to keep the session light, fun and engaging. Here is a little more info on that session:

Extended Events were introduced in SQL Server 2008. With SQL 2014, we have seen a significant upgrade to this feature. Join me for a little adventure into the realm of extended events. We will discuss how to use extended events to aid in performance tuning and in day to day administration. We will also explore some background and the architecture of extended events. This is a fast paced session and will be a lot of fun. Attendees will be able to take back several sample XE sessions to help troubleshoot different issues and get to more intimately know their data/environment.

Then to cap the day, I will be assisting with the Wheel of Doom.

The name of the session sounds much worse than it really is. The premise is simple. Speakers and attendees are invited to submit a lightning talk style session with slide deck. Then attendees of the session can volunteer to spin the wheel and present whichever session pops up.

This session is to help people get over the nerve of presenting. It will be very light-hearted and entertaining. No judgement, no shaming, just have fun with it!


Alter Event Session

Ready for a change of pace? Good! Today is a really good time to start looking into one of the more basic concepts related to Extended Event Session management.

Consider the occasional need to change a trace. Maybe you added the wrong event to the trace, or maybe you neglected to include the event in the first place. There is also the possibility that all of the events are correct but the actions or predicates may not be completely ideal for your trace session. What if it is just as simple as a missing target or adding another target to the session? All of these are potential changes to an XEvent session that might be needed.

Today, I will cover how easy it is to modify these sessions without dropping the entire session as many examples on the internet show. Each of the configuration changes just mentioned can be handled through some pretty simple scripts (and yes through the GUI as well).

Altered States


There is no getting around it. To demonstrate how to change event sessions, an event session must first exist. Let’s use the following as the demo session.
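
A sketch of that demo session (the event chosen here, sql_statement_completed, and the database name in the predicate are assumptions; the ring_buffer target, actions, predicate, and explicit settings match the description below):

CREATE EVENT SESSION [DemoSession] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.session_id
        , sqlserver.server_principal_name
        , sqlserver.sql_text)
    WHERE (sqlserver.database_name = N'AdventureWorks2014'))
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY = 4096 KB
    , EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS
    , MAX_DISPATCH_LATENCY = 30 SECONDS
    , STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [DemoSession] ON SERVER STATE = START;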

This session starts with a ring_buffer target. There is a single event in the session. This event has a few actions applied to it along with a predicate on the database_name field. I also have several of the settings for this session defined (whether at the custom field level or at the session level).

After deploying this session, I quickly realize that I flubbed it up a bit. I forgot a few things. Since those items are missing, I need to get them added without negatively impacting the already gathered data. Let’s start with the missing target. I don’t know how I missed it, but I completely forgot to add a file target to this session. The file target is a must have, so let’s get it added.
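
Adding a target requires nothing more than a single ALTER:

ALTER EVENT SESSION [DemoSession] ON SERVER
ADD TARGET package0.event_file (
    SET filename = N'DemoSession'  -- resolves to the default log directory
      , max_file_size = (50));     -- MB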

Phew, that problem has been easily rectified. Further review of the session shows the next problem. My manager requires that the dispatch latency be no more than 4 seconds and that the event memory not exceed 4090 kb. My manager is a little intense about some of these settings. To ensure I adhere to his requirements, I need to change my session settings now.
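
Session-level options can be altered in place as well (note that some options may require the session to be stopped first):

ALTER EVENT SESSION [DemoSession] ON SERVER
WITH (MAX_DISPATCH_LATENCY = 4 SECONDS
    , MAX_MEMORY = 4090 KB);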

Good! We are rolling right along here. Fixed a couple of problems real quick with that session and I can continue on with other tasks for the day. Shortly after lunch the manager stops in and asks who changed various database settings. Looking into this session that was just deployed to try and capture some of those events, I get that sinking feeling when I realize that I completely missed that requirement. D’oh! That’s ok for now because I hadn’t turned off the default trace, but I better get the proper event added to the session.
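
Adding the missing event is just as painless:

ALTER EVENT SESSION [DemoSession] ON SERVER
ADD EVENT sqlserver.object_altered;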

The next time something changes with the database, it will now be captured. Sadly, too many things are crossing over the desk and I am trying to move too quickly. I forgot to enable the custom field to collect the database name, so I better make that adjustment.
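
This is where the drop-and-re-add dance (described below) comes in. The custom field name shown here, collect_database_name, is an assumption; verify it against sys.dm_xe_object_columns for your version.

ALTER EVENT SESSION [DemoSession] ON SERVER
DROP EVENT sqlserver.object_altered;

ALTER EVENT SESSION [DemoSession] ON SERVER
ADD EVENT sqlserver.object_altered (
    SET collect_database_name = (1));  -- assumed custom field name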

Nice, I have that adjustment made. Unfortunately this is where it does become a bit more complicated. Notice that I had to drop the event from the session first before making that change to the custom field setting for that event? This should be minimal in overall impact since it does not remove the already captured events from the session target. But it is an extra step that must be remembered when making a change that affects an Event within the session.

So far so good. I can capture the events that relate to a database change. I can figure out who made the change and the statement made for the change, right? Nope. I missed that during the setup. I guess I better add that in so I have better information for auditing the changes made to the database. I can start capturing that with the addition of the correct actions.
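
Again the event is dropped and re-added, this time with the desired actions attached:

ALTER EVENT SESSION [DemoSession] ON SERVER
DROP EVENT sqlserver.object_altered;

ALTER EVENT SESSION [DemoSession] ON SERVER
ADD EVENT sqlserver.object_altered (
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.sql_text));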

Everything is now settled in with that session. I go about my merry way for a couple of weeks. Then, one day, I find I need to review the logs to determine who has been changing database settings again. When looking at the log data I discover there are all sorts of object changes being logged to the event session log files. In review of the session definition I figure out the problem. I missed a predicate limiting the captured events to only those that are database type changes. I might as well get that added in.
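
One more drop and re-add, now including the predicate on the mapped object_type field:

ALTER EVENT SESSION [DemoSession] ON SERVER
DROP EVENT sqlserver.object_altered;

ALTER EVENT SESSION [DemoSession] ON SERVER
ADD EVENT sqlserver.object_altered (
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.sql_text)
    WHERE (object_type = 'DATABASE'));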

This will now change the event session so the object_altered event will only capture “database” type events. These are the types of events that include settings changes at the database level.

Making changes to an event session without needing to drop and recreate the entire session is a pretty powerful tool. If I script that session as it is now configured, I would get the following:
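
Something along these lines – a sketch assembled from the changes made above:

CREATE EVENT SESSION [DemoSession] ON SERVER
ADD EVENT sqlserver.object_altered (
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.sql_text)
    WHERE (object_type = 'DATABASE')),
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.session_id
        , sqlserver.server_principal_name
        , sqlserver.sql_text)
    WHERE (sqlserver.database_name = N'AdventureWorks2014'))
ADD TARGET package0.event_file (
    SET filename = N'DemoSession', max_file_size = (50)),
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY = 4090 KB
    , EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS
    , MAX_DISPATCH_LATENCY = 4 SECONDS
    , STARTUP_STATE = ON);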

Compare this to the starting session, and it is apparent that all of the changes implemented are now intact.

Altering an Extended Event session is not necessarily a difficult task, but it is a very useful ability. Having this basic skill in your wheelhouse can be very positive and useful for you!

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.

Waiting, is it a Bad Thing?

Despite the desire to get away from the GUI talk in these articles about Extended Events, I have so far been unable to do it. Each article of late has something more to deal with the user interface. Let's see what we can do with the GUI today.

One of the more useful troubleshooting tools (granted, when used properly and not with a knee jerk approach) is waits. There are waits in SQL Server that are very specific to Extended Events. Not all waits are bad. Some are innocuous. But with a shoot-from-the-hip approach, these waits can cause many DBAs to focus on the wrong thing.

In this article, I will show one particular wait for Extended Events. As a matter of fact, if you were paying attention to the last article, you will have already seen this wait in passing. To get a quick glimpse or to recall what was discussed, please read the article about the live stream target here.

Patience, Padawan

The first thing I want to do is clear my wait stats. Here is a quick way of doing that. Understand that this clears out the wait stats and resets the counters to 0. If you track your waits on a regular basis, this may cause a raised eyebrow from your teammates.
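
The one-liner in question:

DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR);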

After clearing my waits, I can check for a baseline. When checking for this baseline it is important to note that I have nothing reading from an Extended Event target currently. I will start that after getting my baseline. Here is what my waits look like prior to working with the target data from any XEvent session.
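
A simple wait stats query such as this will serve as the baseline:

SELECT TOP (10) wait_type
    , waiting_tasks_count
    , wait_time_ms
    , max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE waiting_tasks_count > 0
ORDER BY wait_time_ms DESC;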

[Image: baseline]

This is pretty vanilla prior to working with the targets. That is a good thing for now. This gives me a good sense that the baseline is a good starting point. Now, similar to what was shown in the live stream article previously mentioned, I am going to open a live stream viewer for the system_health session. At this point, you could wait for a minute or three and then re-query the waits. This additional step would be to help show that the XE wait has not yet introduced itself.

[Image: descendintostream]

Perfect. Now I have a live stream viewer open for the system_health session. I have a good baseline. Now I just need to watch the viewer for a bit. I am doing this to ensure enough time has passed by that my waits have incremented. After a few events pop into the system_health session, I will re-query my waits.

[Image: xe_waitingabit]

Look at how that wait has zoomed clear to the top! This wait is huge! This wait does not appear until the "Watch Live Data" option is being used to tap into the streaming target (really, it should be anything that is tapping into the live stream target via the GUI or via some other program). An example of "some other program" could be as simple as somebody querying the sys.fn_MSxe_read_event_stream function from Management Studio and trying to return the live stream data (as was discussed in the previously mentioned article).

Not understanding what causes the XE_LIVE_TARGET_TVF wait type can cause a data professional, or two, to chase their tail on something that may not be an issue overall. I have seen this happen on more than one occasion where somebody has spent hours trying to chase down the problem that this wait indicates. It doesn't necessarily indicate a problem (unless you are a shoot-from-the-hip, gun-slinging troubleshooter type). It just means that the process accessing the live stream is waiting for more data to come through. That said, if this wait is high, maybe it is time to look into who might be tapping into the live stream target.

Pretty straightforward and short today. I hope this helps you avoid wasting time on something that can be ignored most of the time.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


A day in the Stream

Enough talk of the GUI with Extended Events (see previous articles in the series if you want to revisit the stickiness of the GUI – here). It is time for a bit of a diversion and something quick.

While it may be a quick traipse today, it won’t be the last article on the subject. It is merely an interesting bird walk into a corner of the Extended Events garden that is seldom visited. Today I will explore the little function called sys.fn_MSxe_read_event_stream.

This function, as the name implies, is used to read the event stream and is most commonly seen when watching the live stream of events for a session. I will show a little bit of that.

Gradually Descending into the Stream

First, let’s set the stage so you can confirm that I am not entirely off my rocker here.

[Image: descendintostream]

By selecting “Watch Live Data” from the context menu after right clicking on the “system_health” session, I will be given a window just like the window discussed in the past few articles when discussing the GUI. After launching that window, a quick query to your favorite requests and sessions DMVs will reveal a little bit about this function that we are diving into today.

[Image: activestreamquery]

If I click on the query text in that previous result set, I can see the following to be the query that is executing for the “live data” stream.

Cleaning it up a bit and giving some useful values to the parameters, I might have something like this:
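
Something like the following. Keep in mind that sys.fn_MSxe_read_event_stream is undocumented, so the parameter shapes shown here are inferred from the query SSMS issues rather than from any official reference:

-- Second parameter 0 = live stream target; the first parameter is the session name.
SELECT type, data
FROM sys.fn_MSxe_read_event_stream(N'system_health', 0);

-- Second parameter 1 = file target; the first parameter becomes the XEL file pattern.
SELECT type, data
FROM sys.fn_MSxe_read_event_stream(N'system_health*.xel', 1);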

Running that particular query from a Management Studio window would be rather pointless. It doesn't ever return unless you cancel the query. The key to this one though is the second parameter. The second parameter tells us what kind of source we want to use for the stream of data. There are two values (that I have been able to find) that can be used: 0 and 1. A value of 0 pulls from the live stream target. A value of 1 pulls from the file target. If using a value of 1, then the first parameter needs to be adjusted accordingly as well. If the two parameters do not match, then an error is thrown.

As it happens, I have several log files in the default directory for the system_health and the HKEngine sessions that are deployed and running on my instance of SQL Server. Here is a sample of those files:

[Image: event_files]

So far so good, right? Let’s go ahead and execute that second query (that reads from the file target) and see what happens:

[Image: stream_file1]

This, to me, is far from useful as of yet. But there are a couple of rabbit holes to dig into from here. The first is the different types that we can see here. Let's refashion the query to restrict the types being returned and see what happens:
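
Restricting to the types observed is just a WHERE clause on the same sketch:

SELECT type, data
FROM sys.fn_MSxe_read_event_stream(N'system_health*.xel', 1)
WHERE type IN (1, 2);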

[Image: stream_filesrabbit1]

First, in blue we see that type 2 is exactly the same every single time. In my environment I have exactly four of that type. If I look on the file system, I can see that I have four files as well. This seems to indicate (not yet validated in my plucking at it) that this represents a file. Then type 1 is identical up until the point where I have it highlighted in orange. Just an interesting side note there is all.

If I perform the same thing for the HKEngine session, I see something like the following:

[Image: stream_filesrabbit2]

Notice the difference here? Now I have two type 1 entries for each file that has been created. In addition, one of the type 1 entries is exactly the same for all files created.

But without being able to translate the data returned, this is really just a fun exercise into the live stream. That said, I give you one last interesting tidbit for today while leaving the second rabbit hole (translating the data into human readable text) for another time (besides just using the event file function to read it).

Recall from an earlier result that my spid for the live data viewer was spid 53. If I run an output buffer for that spid, I will see something like the following:
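
That is a plain DBCC command:

DBCC OUTPUTBUFFER (53);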

[Image: outputbuffer]

Not the friendliest of outputs to peruse and try to figure out, but it does give you a sense of what is going on there. If you know how to convert the data output from sys.fn_MSxe_read_event_stream, I would be interested in seeing what you do for it.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Hidden GUI Gems

The GUI for Extended Events offers some power to help you get started with trying to work with Extended Events. As I have shown over the past few articles, some of this power comes with a bit of cost and may in fact not be as powerful as just using a script. You can flip back through the previous articles via this link.

Today, I have a few gems that are available to you in the GUI. These gems should help solidify your understanding of the tools available through the GUI to help work with Extended Events. So far I haven't hidden my preference for using a script over the GUI. That said, the GUI can be useful for a thing or two. I will explain in better detail further into the article.

Playing in the Mud

To show these gems, a little setup is required first. So I am going to start with the following sample session (started from the GUI):

[Image: debugwhat]

The setup thus far is rather simple: I have selected two optional settings – "start the event session immediately after session creation" and "track how events are related to one another." The next thing to do within the GUI is to make my way through the events and select the events I need. The events I want to select need to help me track information related to the SQL process stopping (exiting) and any info related to when a memory dump is created.

To try and find the appropriate events, I will go to the Events tab, click the drop down shown in the green box and select “Event names and descriptions” from the menu.

[Image: filtering_eventselect]

After making that selection, I can type a keyword within the text box under "Event library". Typing in the term "dump" will produce no results. This means either I have no events that will meet my requirements or I need to try a different term, right?

Well, just to verify the results, I want to confirm what I have seen by using a script. Executing the following script:
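
The script searches both event names and descriptions, pulling the channel from sys.dm_xe_object_columns (a sketch):

SELECT xo.name AS event_name
    , xo.description
    , xoc.column_value AS channel
FROM sys.dm_xe_objects xo
    INNER JOIN sys.dm_xe_object_columns xoc
        ON xo.name = xoc.object_name
        AND xo.package_guid = xoc.object_package_guid
WHERE xo.object_type = 'event'
    AND xoc.name = 'CHANNEL'
    AND (xo.name LIKE '%dump%'
        OR xo.description LIKE '%dump%');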

This confirms my suspicion. The script returns several results containing the term "dump" when querying both the descriptions and event names, just as I did with the GUI. There is an interesting development, however: all of the results show as being in the debug channel.

[Image: filtering_eventselect_script]

So why do these results not display from the GUI? Well that is one of the hidden gems. The debug channel is not shown in the GUI results by default. This will prevent any events from that channel from being displayed. This can be changed by checking the box displayed in the following image:

[Image: channelfilter]

Consider here the name of the Channel for a moment. These events may be obfuscated from search results by default for a good reason. Some of these events can have a significant performance impact on the instance. That said, there is the occasional good reason to need to use the events from this channel in order to troubleshoot specific issues on the server. You just have to dig harder at it.

Having resolved that issue, I have settled on using the stack_trace and sql_exit_invoked events. So I select both of the events and then move on to the Data Storage tab so I can configure the target. The target I have chosen to use is the event_file target. Now that I have selected the events and configured a target, I can click the script button. I recommend always using the script button rather than committing the session directly from the GUI. Clicking the script button here will yield the next gem.

[Image: Xe_debugwhat]

I have scripted the session I configured in the GUI. Overlaying the general page of the session with the produced script shows the gem. While the track_causality setting does get scripted, the option to start the session after creating it does not get scripted. This is an interesting problem. While this produces a small negative impact, it is one to bear in mind. If you need to have the session started, then make sure to manually start it or script the start of the session to confirm it has been done.

A third gem is one that I have not shown here but one to play with in your free time. If you change the name of the session and then script it, see what happens.

These are just a few more gems available in the GUI tools for Extended Events. I recommend using a script where possible (which is just about everywhere), and I have not hidden that fact. While useful to a small degree, I do not recommend using the GUI for most tasks (again with a recommendation to use a script instead). Using a script has numerous benefits above and beyond the use of a GUI.

If for some reason you are not using Extended Events simply because you wanted to use a graphical interface, I recommend reconsidering. The series of posts I have on Extended Events has plenty of useful scripts that will provide the functionality and ease of use for Extended Events to mitigate that internal lust for the GUI.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Filtering Logged Data

Over the course of the past couple of articles, I have shown some interesting if not useful functionality in the user interface for working with Extended Event log files. Those articles on merging files and customizing the view can be read here and here.

Today, I am going to explore how to further customize the view of the GUI for working with Extended Event Log files or XEL files. This further customization does include working with filters. Prior to proceeding, it would be beneficial to review the aforementioned articles – they do outline some building blocks for this article.

Sifting through it all

Even with a well tuned XE session, sometimes there is just a deluge of data. A deluge of data can make it difficult, from time to time, to find the most likely candidate for the cause of the problem currently at hand (whatever that may be).

When dealing with a large quantity of data, it becomes necessary to filter that data down. Filtering it down will make it significantly easier to handle – when an appropriate filter is utilized. The GUI permits the implementation of filters in a couple of different ways to help with these dives into the XE logged data.

When convinced to use the GUI to peruse the log data, there are a few possibilities in how to create a filter (and yes it would be better to do it with a script). The first of these methods is actually quite simple. Let’s start with a grid display of the same session used in the previous couple of articles:

[Image: starter_grid]

From here, if I right click one of the cells as shown below, I will be prompted with a context menu:

[Image: grid_filter]

By right clicking a value in the grid, an option to “Filter by this value” will be given in the context menu. Selecting that option will open a new window allowing for further configuration of the filter or to just accept the filter as-is.

[Image: filter_window2]

From the “Filters” window, you can see there is the option to set a time based filter, to create a filter on values for the fields within the trace log or to combine the two. This is pretty straight forward here. That said, recall that filters configured here can be saved in the viewsetting file discussed in the prior article (mentioned at the beginning of this article).

Further Analysis

After filtering down the results, what if you wish to perform more complex analysis? Maybe the analysis is to be done for trending purposes, or maybe something else. Regardless of the purpose, the need is there and you wish to know how to do it (but for some reason have a serious allergy to doing the work via a script). Luckily there just happens to be a magic button within the GUI that helps to perform this task.

Within the GUI, we are given the opportunity to group and aggregate data. The aggregations cannot be performed without first performing a grouping (very similar to the requirements via script). The grouping can contain multiple fields or just a single field. Let’s take a quick peek.

[Image: groups_xe]

Clicking on the Grouping button, the following window opens, permitting the configuration of the groups.

[Image: group_wind]

This looks pretty familiar and standard. It is rather simple in design and function. Move columns from left to right to add to the grouping. To remove from the grouping, move from right to left.

[Image: aggregs_xe]

With the group established, then comes the fun part to help with analysis. This is where aggregations come into play. Within an aggregation, we can perform counts, max, min, avg, and sum. Depending on the field, the aggregations that can be performed may be restricted. Clicking the aggregation button, one will see the following window:

[Image: agg_wind]

Given the session being evaluated and the columns that have been chosen previously, I am restricted to the columns shown in the preceding image. Aggregations can only be performed on columns selected in the “Column Chooser”. The rest of the configuration of the aggregation is merely an exercise in plug and play to determine which fields or aggregation will serve your needs the best.

For this exercise and the chosen grouping, I am going to go with Average on the duration field and then sort the aggregation by that same field in a descending order. For your purposes, feel free to choose something more meaningful or appropriate to your needs.

Cautionary Tale

If there happens to be a rather large number of events within the session file(s) being evaluated, don't expect this to work without a hitch or three. Trying to load a 1GB trace file (read that as the default size for an XE trace file) with over 600,000 events, SSMS regularly crashed on me while trying to apply a filter, a group, and even the aggregation. None of this has been an issue while trying to perform similar types of groupings and aggregations with large trace files via script.

If you are going to attempt to manipulate aggregations, groupings, filters etc. via the GUI with large trace files, understand that you may run into errors and crashes when using the graphical interface rather than a script. The choice is yours!

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Customize the XEvent Log Display

In the previous article I showed a couple of methods to merge Extended Event Log files. In that article I mentioned the ability to customize the display in the GUI for the merged log files.

Today, I am going to explore how to configure the GUI with filters and different column sets or orders. To proceed, it will require a refresher course via the prior article. If you need a primer in how to merge the XEL files, or to open the XEL files from within the GUI, I recommend taking the time now to read that prior article.

Getting Sticky Again

This should be a rather quick foray into the world of the GUI and how to customize. Just bear in mind that it is a small building block.

Taking up from the prior article, I will continue to use the same sample session and sample data.

Let’s start with a screenshot where we left off the last time:

[Image: merge_gui_display]

By itself, this is not the prettiest of layouts. Sure, there is good information, but the layout is rather simple and leaves one wanting more. The "more" comes easily enough after some familiarity with the GUI. Suppose you desire to change the order of the columns presented in this view, or even to select different columns; there are two methods to achieve that goal. The first option would be to do as shown here:

[Image: method1_xe_gui_options]

By right clicking an empty space in the title bar, a context menu will be made available. There are several options here that can be a bit fun to peruse. I will just focus on the “Choose Columns” option for now.

The alternative for this method is to look in the toolbar. There will be an XE specific toolbar available when looking at an XE log file. In this toolbar there is a “Choose Columns” button. This method will get us to the same end result as the first.

[Image: method2_xe_gui_options]

Selecting the “Choose Columns” from either method just presented will open a new window such as the following:

[Image: choosecolumns]

This kind of screen should be fairly familiar and easily discernible. On the left is a set of available columns that can be added to the display. On the right is the list of columns currently being displayed in tabular format. In between is a set of buttons to add, remove, add all, or remove all columns. In the top right is a means of re-ordering the columns as presented in the display table. Pick a few and sort the columns into a suitable display mode and you are all set.

Recall from the image showing method one that there is an option to remove the column. This same function can be performed from the context menu or from this “Choose Columns” window.

For the order of the columns, there is another fancy means of doing that besides the "Choose Columns" window. Columns can be dragged and dropped on the initial screen to change the presentation order easily enough. Sometimes it is far easier to use the "Choose Columns" window to do that. Pick your poison here and run with it.

Super Sticky

This is fantastic if the manipulation of the display settings is a one-off adventure. What if you need to do this multiple times? Maybe, just maybe, this needs to be done for 100 servers. The process I have just shown can become exceedingly tedious. In addition, every time an XEL file is opened, it defaults back to the same settings I have already shown. What do you do in cases like that?

Beyond using a script (the recommended method by the way), there is an alternative.

[Image: displaysett_xe_gui_options]

The alternative is found in the toolbar discussed earlier in this article. The option is “Display Settings.” After picking the columns and the order and getting the display dialed in just right, the view can be saved. The view settings will be saved as an XML file with the extension of viewsetting. This view can then be used to immediately apply to a freshly opened log file. Sure it is still an extra step or two to open that view and can still become a bit tedious, but it is far better than resetting the view every time. And a bonus is that the saved viewsetting file will also save merged columns, groupings, aggregations, column order, and filters defined in the view.

Then again, all of this is easily achieved through a T-SQL script, which is far more scalable when dealing with multiple servers or the need to review the data more than once. The choice is yours when using XE.

Though I mentioned the creation of a filter for the displayed log data, I will not be delving into that topic today. It would be a very suitable exercise for the reader to figure out how to apply a filter through this tool. Or wait until the next time when that topic will be covered along with aggregations and groups. In the meantime, I recommend trying to figure it out with a little picking and plucking through the GUI-ness.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.


Log Files from Different Sources

Well, it has been quite a pause between articles in the series. It is well past time to continue the series. With such a delay in the series, it is prudent to remind all that there is a series on Extended Events I had done over the course of two months. The complete table of contents for this series can be found here. Truth be told, I had intended to start this series again as my usual 12 Days of Christmas series. Sadly, I am a bit behind on that series. Maybe there will be multiple posts a day on some days to help catch up.

As was the case throughout the course of the series, I intend over the next few articles to continue to produce building blocks that may be helpful for later articles, or concepts. With that in mind, the chosen topic today is very basic in nature. While basic, it is a rather important topic.

While working with Extended Events, or with any sort of logging or tracing utility, it becomes critical to consume the data from disparate sources, or even to consume data generated on one server from a different server. The question becomes: how to consume that data?

Getting Sticky

When it comes to working with log files from different sources, or most tasks within SQL Server for that matter, there are two general methods. The first method is of the type frequently frowned upon by the more senior of database professionals, sysadmins and so forth. This first method is to use the graphical user interface or GUI. This is more of a point and click method. This is also a method that I will generally shy away from for most tasks. That said, this method is easily performed through the GUI and is reasonably safe.

The first step is illustrated with the following image:

[Image: XE_merge]

From within Management Studio, navigate the file menu as shown in the preceding image. After clicking on “Merge Extended Event Files”, a dialog will open as shown (in the background) of the next image:

[Image: XE_merge_opendialog]

Clicking add from the “Merge Extended Event Files” window will open a dialog that permits you to navigate to the location of the XEL files that need to be opened and merged. Once located, a single file or multiple files can be selected. One should proceed carefully here to ensure against opening too many files or files that may be too large. Opening too many files or files that are too large can cause various unwanted effects.

Once the file(s) are open, a grid-like window will open in SSMS that looks like the following:

[Image: merge_gui_display]

From here, one can manipulate the display to something more suitable to his/her display preferences (e.g. filtering or column layout). I will not cover that here but will be saving it (the display customization tutorial) for another time. That aside, I have just shown how easy it is to look into an Extended Event Log file through the use of the GUI.

Not So Sticky

For the less sticky (less GUI) method, I have shown a similar routine in the past that lays the groundwork for how to do this via script. You can read a little about that here. Building on that method, there is a tweak to be made – we must specifically name the path to the XEL files to be merged rather than try to build it dynamically.

Using the same session files as shown in the GUI examples, I will show how to proceed with the merge via script.
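
A sketch of that script follows. The paths and table name are hypothetical, and the second parameter (the XEM metadata file) only matters when the files came from a SQL 2008 or 2008 R2 source:

SELECT CONVERT(xml, event_data) AS event_data
INTO dbo.MergedXELogs  -- in a database of your choosing
FROM sys.fn_xe_file_target_read_file(
      N'C:\Database\XE\MySession*.xel'  -- location of the copied XEL files
    , N'C:\Database\XE\MySession*.xem'  -- XEM file; pass NULL for 2012+ sources
    , NULL
    , NULL);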

This script lays out a rather simple process. That process being to load the log data into a table (in a database of your choosing) and then once loaded, query the data to better understand what has been captured. In the preceding script I left a few notes. It is important to understand that if trying to pull in the logs from a SQL 2008 or R2 instance then the XEM file must be included (as I have done for this particular example).

To parse the data into a friendly format that I can use for analysis, I would do something like the following:
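
A sketch of that shredding query (the action names assume the session captured database_name and sql_text):

SELECT ed.event_data.value('(event/@name)[1]', 'nvarchar(128)') AS event_name
    , ed.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time
    , ed.event_data.value('(event/action[@name="database_name"]/value)[1]', 'nvarchar(128)') AS database_name
    , ed.event_data.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
FROM dbo.MergedXELogs ed;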

From here it is really easy to add/remove columns or re-order the columns into a more friendly format for the person reviewing the data. This is incredibly easy – especially given the appropriate recipe / script / process. I would also venture that this method will require fewer resources and lead to less chance of error.

Enjoy working with evaluating and merging these XEL files from different sources. It should be fun!

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.

