Finding Installed Event Sessions

As a DBA, it is not uncommon to feel like you are marooned on some tiny little island, trying to figure things out for yourself. It may even be very common to feel the icy pain of the development cold shoulder (intentional or not) as you go about your duties to protect the data and secure high-octane performance out of the database.

Not only is this commonplace due to the nature of the job, but it can also feel that way when a new product is released. Suddenly, that icy island drifts further north into the Arctic Ocean and you are struggling to gasp for a bit of air in the frigid cold.

I am certain this may be some of what you feel as you try to take a look at Extended Events. It is a fancy new technology and there is not a ton of time to get up to speed while trying to warm yourself against the development chill. As you stew about the situation, you begin to ponder: how am I even supposed to know what an XE is, let alone which XE sessions are running or deployed on my server? Don’t worry about the terminology just yet. What can we do to figure out what might be out there?

Well, that is where trusty Google has come to help you. Thanks to a better-than-adequate ability to search the Internet, you landed on this page, which has a script to help you explore what might be running as far as Extended Events on your server. You may be surprised to learn that you have more Extended Events sessions running than you fathomed. Yes, SQL Server comes with some default sessions. Yes, some of these default sessions are more public than others. And yes, some of these sessions are “hidden” from you. That is, until now.

Come on in out of the cold. Get off that arctic island and warm up with a little XE fun. This script is quite simple but will show you what you may have out there on your servers. Once you see what is there, hopefully your curiosity will be piqued and you will want to learn a little more (there is more of that here on this site too).

And now, for that script:
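What follows is a sketch of that kind of script: it compares the deployed sessions in sys.server_event_sessions against the running sessions in sys.dm_xe_sessions, and the @SessionName filter lets you query for a single session or all of them.

DECLARE @SessionName sysname = NULL; -- NULL returns all sessions

/* Deployed (public) sessions, with their run state */
SELECT es.name AS SessionName
    , CASE WHEN xe.name IS NULL THEN 'Stopped' ELSE 'Running' END AS RunState
    , 'Public' AS Visibility
  FROM sys.server_event_sessions es
  LEFT OUTER JOIN sys.dm_xe_sessions xe
    ON es.name = xe.name
 WHERE es.name = ISNULL(@SessionName, es.name)
UNION ALL
/* Running sessions missing from the catalog view: the "private" ones
   (database audits, the sp_server_diagnostics session, etc.) */
SELECT xe.name
    , 'Running'
    , 'Private'
  FROM sys.dm_xe_sessions xe
 WHERE NOT EXISTS (SELECT 1 FROM sys.server_event_sessions es WHERE es.name = xe.name)
   AND xe.name = ISNULL(@SessionName, xe.name);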

Quick and simple, and full of that warming XE sensation. This script shows you which sessions are private versus public. What is a private session, you ask? That happens to be all of those database audits you have deployed to your server. It also includes little things like the “sp_server_diagnostics session” session (yes, it sounds redundant, but that is because “session” is also part of the name of that session).

This script will also show you what is deployed to the server versus what is running on the server (not all sessions have to be running). And to top it all off, I set it up so you can query for a specific session or for all sessions.

Try it out on one or more of your servers and let me know how it goes.

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in seeing the power of XE over Profiler? Check this one out!

This has been the seventh article in the 2018 “12 Days of Christmas” series. For a full listing of the articles, visit this page.

SQL's Black Box Recorder – sp_server_diagnostics

Yesterday I shared the first article in a three-part series about the SQL Black Box Recorder, which you can read here, with a promise of at least two more posts. The first article in the series laid the groundwork about the black box recorder in SQL Server. Furthermore, there was a brief introduction to the three components of the black box recorder. The three components as laid out in that article are:

  • Default Trace
  • system_health Extended Event Session
  • sp_server_diagnostics procedure

The first article went into deeper detail about the first leg of this three-legged black box recorder – the default trace – and the previous article covered the second leg – the system_health Extended Event Session. If by some measure you are in the dark about what Extended Events is, then I recommend you read my personal blog series on the subject. There are numerous articles stepping through the subject in easy-to-digest fashion. You can find many of the articles in the following table of contents – here.

The focus of this article will be the third piece of the black box recorder, sp_server_diagnostics. I will show how this procedure is critical in gathering pertinent information, crucial to an overall picture of the system—from a default black box recorder perspective.

SQL Black Box Recorder

What is the sp_server_diagnostics process?

Beyond being a component of the black box for SQL Server, what exactly is this diagnostics process? sp_server_diagnostics is much as the name implies—it is a “diagnostics” service that attempts to gather information about various events that may affect the overall health of the SQL Server instance.

The diagnostics process will trap various server-related health (diagnostics) information for the SQL Server instance in an effort to detect potential failures and errors. This diagnostics session/process traps information for five different categories by default. There is a sixth category of information for those special servers that happen to be running an Availability Group.

The Six

  • system: overall health of the server, including information about spinlocks, severe processing conditions, non-yielding tasks, page faults, and CPU usage.
  • resource: health of resources, specifically including physical and virtual memory, buffer pools, pages, cache, and other memory objects.
  • query_processing: captures query-processing data about worker threads, tasks, wait types, CPU-intensive sessions, and blocking tasks.
  • io_subsystem: health data related to IO.
  • events: errors and events of interest recorded by the server, including details about ring buffer exceptions, ring buffer events about memory broker, out of memory, scheduler monitor, buffer pool, spinlocks, security, and connectivity.
  • AvailabilityGroup: basic information about the Availability Group, if one is enabled on the server.

This is fantastic introductory information. As luck would have it, you can also read more about it from my blog – here.

How to use this process?

What good is any level of information if we don’t quite know how to use or consume that data? Do we consume it via trace? Do we execute the stored procedure? Do we capture the information from some other means? The answer is yes! Wait, yes to what? Yes to all of it! Let’s explore how to consume this data.

Option 1

The first step to learning how to consume this data is to start at probably the most basic of levels. As you have already noted, this is a stored procedure. It was probably the sp_ in the name that gave it away, right? Since it comes in the form of a stored procedure, it makes it pretty easy to figure out the first means to get the data from it—just execute it. Actually, let me clarify that just a bit: sp_server_diagnostics is a CLR stored procedure. Nonetheless, all that needs to be done at this level is to execute it to get to the data.
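In its most basic form, that is a one-liner (by default the procedure runs once and returns a single point-in-time result set):

EXEC sp_server_diagnostics;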

Look at that! Pretty slick! I execute the stored procedure and I can see a point-in-time result of the “health check” from this component of the black box recorder. Notice here that I only have five of the six categories that I had mentioned. This is due to the lack of a configured availability group (AlwaysOn) on this particular instance. Another point of interest is that the data column is not in a very friendly format for DBA consumption. This is flat text that just goes on and on. It is painful to consume in this manner.

Tidy that Code
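Here is a sketch of what the tidied code might look like: dump the procedure output into a temp table and convert the data column to XML (the column list follows the documented output of sp_server_diagnostics; the temp table name is illustrative).

IF OBJECT_ID('tempdb..#diag') IS NOT NULL
    DROP TABLE #diag;

CREATE TABLE #diag
    ( create_time datetime
    , component_type sysname
    , component_name sysname
    , [state] int
    , state_desc sysname
    , data varchar(max) );

INSERT INTO #diag
EXEC sp_server_diagnostics;

SELECT create_time
    , component_name
    , state_desc
    , CONVERT(xml, data) AS component_data -- clickable XML in SSMS
  FROM #diag;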

Looking at the code, you can quickly see the difference. I have changed the data column to be XML-based to better match the data being output from the stored procedure. Let’s see that in action:

That is much better. Now I can simply click one of the data results and see a nicely formatted, easy-to-read XML output of that component. Here is an example from the query_processing component:

Option 2

Cool! I have just shown one means of exploring this facet of the black box recorder. Now, let’s take a look at other means to get at this particular data, beyond just executing the stored procedure. Did you know that this procedure is very closely related to Extended Events? (I am sure that you picked up on that nugget due to reading the previous article in the series or possibly the other article I mentioned here). With such a close relationship to Extended Events, it makes sense that sp_server_diagnostics also has a couple of events and is consumed by multiple sessions.

SQL Server comes with various default Extended Event sessions; of those, sp_server_diagnostics is consumed by the system_health session as well as the hidden SQLDiag session that is used by Windows Clustering and Availability Groups to determine server health. You have already heard about system_health (from the previous article), but few have heard about the hidden session. I will take a look at the events as well as the sessions that consume the sp_server_diagnostics data.

Events

First, I will share the details for each of the events related to the sp_server_diagnostics process.
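Here is a sketch of the metadata query behind those details. The CHANNEL and KEYWORD readonly columns are the pieces that matter for the discussion that follows:

SELECT xp.name AS package_name
    , xo.name AS event_name
    , oc.name AS column_name
    , oc.column_value
  FROM sys.dm_xe_objects xo
  INNER JOIN sys.dm_xe_packages xp
    ON xo.package_guid = xp.guid
  INNER JOIN sys.dm_xe_object_columns oc
    ON xo.name = oc.object_name
   AND xo.package_guid = oc.object_package_guid
 WHERE xo.object_type = 'event'
   AND xo.name LIKE 'sp_server_diagnostics%'
   AND oc.name IN ('CHANNEL', 'KEYWORD');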

As I have shown here, you can see that these events are a part of the debug channel. In the next couple of images, I want to show some of the payload attached to these events.

These events show what I have shown previously with regard to the output of the stored procedure. There is a consistency in the output of the diagnostics process. What I have yet to show in this data is one more piece of information that could be helpful down the road, and it pertains specifically to the Extended Events data: the keyword associated with each of these events. When looking for related events, I need you to understand that the sp_server_diagnostics events are a part of the scheduling classification. There, I have said it. I just gave you a big clue to help you in your troubleshooting endeavors and the use of the diagnostics processes/events.

Which Sessions then?

Now that I have shown which events are related to diagnostics, I can finally show which sessions are dependent upon the diagnostics data. To show that kind of information, I have this next query to help:
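Something along these lines, built on the catalog views:

SELECT ses.name AS session_name
    , sese.name AS event_name
  FROM sys.server_event_sessions ses
  INNER JOIN sys.server_event_session_events sese
    ON ses.event_session_id = sese.event_session_id
 WHERE sese.name LIKE 'sp_server_diagnostics%';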

When I execute the statements in the preceding query, I will receive a single result—system_health. That said, do not believe everything I just showed you with that query. SQL Server does a decent job at hiding some of the details. When I run the next query, I will see a little bit more information.
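This next one is a sketch against the running-session DMVs rather than the catalog views:

SELECT xs.name AS session_name
    , xse.event_name
  FROM sys.dm_xe_sessions xs
  INNER JOIN sys.dm_xe_session_events xse
    ON xs.address = xse.event_session_address
 WHERE xse.event_name LIKE 'sp_server_diagnostics%';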

When I run the preceding query, I will see the following two sessions (given that I have not changed the default settings):

Take note of the names of those two sessions. I showed that the system_health session is reliant upon the diagnostics events. Now, I am showing that there is another event session called “sp_server_diagnostics session”. Wait, there is more! SQL Server continues to hide stuff from us.

Hidden Sessions

When you have AlwaysOn configured, there is an additional hidden session that is not visible within the sessions lists inside of SQL Server. To view this hidden session, you must follow the directions shown in this article on MSDN. Once you have navigated to the folder containing the hidden XEL files, you can open them in Management Studio and view the data. It is while viewing the data that you will be able to see the relationship between sp_server_diagnostics and this newly discovered trace file.

The payload from this image demonstrates the relationship between sp_server_diagnostics and the hidden trace file. I challenge you to take some time to explore this hidden trace file on one of your servers where AlwaysOn is configured. I know you will be thankful for the investment at some point down the road.

Conclusion

I have just shown a ton of info about the third component of the black box recorder—sp_server_diagnostics. This procedure may be lesser known than the first two components, but it is highly integral to the overall monitoring and health of your SQL Server instance. In this article, I have not only shown how to use sp_server_diagnostics (albeit quickly), but I have also shown that the diagnostics are heavily relied upon by other components of the black box recorder. In addition, I have shown how this component is integrated into other hidden traces that are essential in the monitoring and troubleshooting of your instance.

The black box recorder is real. You need to invest time in getting to know the black box recorder components and how they can be of help to you in becoming a superhero DBA.

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in seeing the power of XE over Profiler? Check this one out!

This has been the sixth article in the 2018 “12 Days of Christmas” series. For a full listing of the articles, visit this page.

SQL Server's Black Box Recorder – system_health

Yesterday I shared the first article in a three-part series about the SQL Black Box Recorder, which you can read here, with a promise of at least two more posts. The first article in the series laid the groundwork about the black box recorder in SQL Server. Furthermore, there was a brief introduction to the three components of the black box recorder. The three components as laid out in that article are:

  • Default Trace
  • system_health Extended Event Session
  • sp_server_diagnostics procedure

The first article went into deeper detail about the first leg of this three-legged black box recorder – the default trace. In this article, the focus will be directed to the next leg of the black box recorder – the system_health Extended Event Session. If by some measure you are in the dark about what Extended Events is, then I recommend you read my personal blog series on the subject. There are numerous articles stepping through the subject in easy-to-digest fashion. You can find many of the articles in the following table of contents – here.

The focus of this article will not be to introduce the topic of Extended Events. Rather, it will be to introduce the system_health session and dive into it a fair bit.

SQL Black Box Recorder

What is the system_health session?

Beyond being a component of the black box for SQL Server, what exactly is this event session? The system_health session is much as the name implies – it is a “trace” that attempts to gather information about various events that may affect the overall health of the SQL Server instance.

The event session will trap various events related to deadlocks, waits, CLR, memory, schedulers, and reported errors. To get a better grasp of this, let’s take a look at the event session makeup based on the available metadata in the DMVs and catalog views.
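A sketch of that metadata query: the session definition from the catalog views, with each event joined to its actions.

SELECT ses.name AS session_name
    , sese.package AS event_package
    , sese.name AS event_name
    , sesa.package AS action_package
    , sesa.name AS action_name
  FROM sys.server_event_sessions ses
  INNER JOIN sys.server_event_session_events sese
    ON ses.event_session_id = sese.event_session_id
  LEFT OUTER JOIN sys.server_event_session_actions sesa
    ON sese.event_session_id = sesa.event_session_id
   AND sese.event_id = sesa.event_id
 WHERE ses.name = 'system_health'
 ORDER BY sese.name, sesa.name;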

In addition to the types of events I mentioned, there are also a few more interesting things to discover from the results of this particular query. For instance, the actions that are being employed help to gather even more information such as the callstack, tsql_callstack, database id, and sql text of the query that was executing at the moment of the event that is trapped.

One very peculiar event that is trapped is the sp_server_diagnostics_component_result event. Remember from earlier in this article that I noted the third leg of the black box recorder happens to be the very similarly named sp_server_diagnostics procedure. When I cover that third leg, it will make a little more sense. Understand, for now, that your suspicion of it being a bit more integral to the black box (than merely 1/3 of the overall black box) is confirmed.

This is a good set of information that can be trapped. Furthermore, this is a good set of information which can give a fairly good diagnosis of several different potential problems within your SQL Server Instance! (Yes it may seem repetitive, but I wanted to underscore the importance of the data that can be trapped.) That said, compare the types of events to those of the default trace. Take note of how many events actually overlap between the two components of the black box recorder.

For ease of comparison, let’s simplify that previous query to get a set of events to read without the duplication due to the actions on each event.
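A simplified sketch along those lines:

SELECT sese.name AS event_name
  FROM sys.server_event_sessions ses
  INNER JOIN sys.server_event_session_events sese
    ON ses.event_session_id = sese.event_session_id
 WHERE ses.name = 'system_health'
 ORDER BY sese.name;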

And since I am feeling particularly generous, I have lumped the events from the two traces together in the following query:
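A sketch of that combined query might look like this:

SELECT 'system_health' AS source
    , sese.name AS event_name
  FROM sys.server_event_sessions ses
  INNER JOIN sys.server_event_session_events sese
    ON ses.event_session_id = sese.event_session_id
 WHERE ses.name = 'system_health'
UNION ALL
SELECT DISTINCT 'default trace'
    , te.name
  FROM sys.traces t
 CROSS APPLY sys.fn_trace_geteventinfo(t.id) tei
  INNER JOIN sys.trace_events te
    ON tei.eventid = te.trace_event_id
 WHERE t.is_default = 1
ORDER BY source, event_name;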

That should help to compare the 17 events from the system_health session and the 34 from the default trace with a little more ease and speed. The events from the two sessions are rather complementary to each other as they help to construct the SQL Server Black Box recorder.

How to use this session?

With the events trapped by this session, there are a few ways to use the trapped data that should stand out pretty easily. To me, the quick application of the data from this session can come from the review of deadlocks, reported errors, or even waits analysis. Using the deadlock as probably the most common problem people will want to troubleshoot, this information is suddenly much more valuable and easy to access (no more need for that trace flag). Since the system_health is now trapping the deadlock by default, one could quickly access the deadlock graph from the event data and proceed to troubleshooting. More information on getting that deadlock graph can be found here and here.
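As a quick taste, here is a sketch that pulls the deadlock graphs straight out of the system_health file target (the relative file name is assumed to resolve to the instance's LOG directory; supply a full path if it does not):

SELECT xed.event_data.query('(event/data/value/deadlock)[1]') AS deadlock_graph
  FROM ( SELECT CONVERT(xml, event_data) AS event_data
           FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL)
          WHERE object_name = 'xml_deadlock_report' ) xed;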

Learn to use the data made available through this member of the black box recorder. Becoming familiar with this tool (and additionally with Extended Events) will make you a rock-star in the office. The information needed to troubleshoot most problems is right at your fingertips through the use of the different pieces of the black box recorder. If it is not available there, then more than likely it can be made available through a bit of tinkering with Extended Events. Get to know the tools and be that rock-star DBA.

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in seeing the power of XE over Profiler? Check this one out!

This has been the fifth article in the 2018 “12 Days of Christmas” series. For a full listing of the articles, visit this page.

SQL Server's Black Box Recorder – Def Trace

Across many professions and industries there is often a need to have some sort of device that “audits” everything that happens with a device or process. We are probably all quite familiar with the infamous black box recorders used by the passenger airline and train industries. It is also quite possibly fairly common knowledge that many pacemakers perform the same sort of functionality – on a smaller scale. Various metrics and data points are registered and recorded somewhere. Whether the data goes to a local storage device or the device phones home to some other remote location, it is recorded.

We often pontificate about the what-ifs for a black box recorder within SQL Server, right? We wish and dream and go about creating our own little recorder to suit the requirements we deem necessary for successful monitoring of the server for the just-in-case scenario. Well, the truth of the matter is that we really don’t need to go to such great lengths to create a “black box recorder” because Microsoft has already done all of that for us.

Wait, what? Yes, that’s right! Truth be told this shouldn’t be much news for most Senior Level data professionals working with SQL Server for the past few years. But if you are new to the product, this might be good news for you. Then again, it might just be bad news depending on your take.

Very much like what you may find with the airline industry, the black box recorder in SQL Server is not just a single method (device) implemented to capture all of the desired data points. On a passenger jet, you may find that there are three or more such devices that contribute to the capture and recording of the in-flight data. In SQL Server, there are three major processes that help capture our in-flight data. Over the next few articles I will discuss each of these processes. These processes include:

  • Default Trace
  • system_health Extended Event Session
  • sp_server_diagnostics procedure

This multi-tiered approach does provide a sort of fail-safe system. If one should be disabled, there still remain up to two more processes that may be running. That said, each of these can be disabled, but it does take a bit of an overt effort. And since it does require somebody to put out effort to try to disable each of the black box components, you could potentially capture the culprit via audits from the other components or via an explicit audit that you create for these types of purposes.

Since this will be something to discuss over a few articles, I will break out each process into an individual article. For today, I will discuss the default trace.

Default Trace

The default trace by itself can be turned off via a configuration option. There may be good reason to disable the default trace. Before disabling the default trace, please consider what can be captured by it. I will use a query to demonstrate the events and categories that are configured for capture in the default trace.
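Here is a sketch of such a query (with a few extra data points on purpose):

SELECT DISTINCT t.id AS trace_id
    , t.path
    , tc.name AS category_name
    , te.name AS event_name
  FROM sys.traces t
 CROSS APPLY sys.fn_trace_geteventinfo(t.id) tei
  INNER JOIN sys.trace_events te
    ON tei.eventid = te.trace_event_id
  INNER JOIN sys.trace_categories tc
    ON te.category_id = tc.category_id
 WHERE t.is_default = 1
 ORDER BY tc.name, te.name;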

In this query, I have requested a few more data points than necessary. That is to help illustrate an additional point: the default trace isn’t filtering out any data for these events. If the event fires (in this trace), it is recorded. Let’s divert right back to the events and categories for now. The results of that query produce the following sample list of events for me on SQL Server 2014:

That is quite a range of events covered by this particular recording device. From changes to objects down to security related events and even errors and warnings. If somebody drops an object, the default trace can catch it. If one of the various DBCC statements is executed, it will be trapped in this trace.

There is one thing captured by this trace that is not overly obvious. In fact, it is rather annoying in how it is recorded, in my opinion. Server configuration settings such as “cost threshold for parallelism” or the lesser-utilized “user options” are not registered as an “Object:Alter” event but rather as an ErrorLog event. I wouldn’t call this type of “change” an error and don’t necessarily like seeing these registered with other legitimate errors. It would be nice to see these logged differently (they are currently logged the same way with Extended Events). So, if somebody is dorking around with server configurations, it becomes a bit more convoluted to figure it out, but we can get there. Let’s see how that works:
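A sketch of how to dig those configuration changes out of the default trace:

DECLARE @path nvarchar(260);

SELECT @path = t.path
  FROM sys.traces t
 WHERE t.is_default = 1;

SELECT gt.StartTime
    , te.name AS event_name
    , gt.TextData
    , gt.HostName
    , gt.LoginName
  FROM sys.fn_trace_gettable(@path, DEFAULT) gt
  INNER JOIN sys.trace_events te
    ON gt.EventClass = te.trace_event_id
 WHERE te.name = 'ErrorLog'
   AND gt.TextData LIKE '%Configuration option%'
 ORDER BY gt.StartTime DESC;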

Looking at my system, I get the following sample results:

You can see there are plenty of settings that I have been playing with captured by the default trace. If you happen to have a junior DBA or maybe a cowboy DBA that loves to shoot from the hip and make changes, this will help you find all the necessary details to bring to that DBA for discussion.

The default trace comprises one third of the black box recorder. I have shown a quick use for the default trace and have also shown some of the various events that are captured from this trace. I recommend getting to know your default trace just a little bit better. You never know when you may need to resort to the flight recorder data held in the default trace. Being familiar with the default trace before you need to use it will help improve your comfort level when under the stress of trying to figure out what happened just before the server went belly up!

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in seeing the power of XE over Profiler? Check this one out!

This has been the fourth article in the 2018 “12 Days of Christmas” series. For a full listing of the articles, visit this page.

How to: File Target use in Extended Events

I have been doing a lot of posts of late about Extended Events but have failed to share some of the essential building blocks. I am not looking to do a series of definitions, just to share a couple of quick tips and examples here and there that will make the use of Extended Events a little easier to undertake.

The first one to tackle is the use of the file target. Using a file target is not difficult from a creation point of view (event session creation). But some basics are critical, or you could end up with fewer follicles.

Let’s take a quick peek at some of these tips.

File System Prep

This first little tip comes from a painful experience. It is common sense to only try to create files in a directory that exists, but sometimes that directory has to be different on different systems. Then comes a little copy and paste of the last code that worked. You think you are golden but forgot that one little tweak for the directory to be used. Oops.

Take this example session. When you run this code, you will not see an error (don’t change anything).
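Here is a sketch of such a demo session. The session name matches the error messages below, and the C:\Database\XEs path is assumed not to exist on your system:

CREATE EVENT SESSION FileTargetDemo
ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file
    ( SET filename = N'C:\Database\XEs\FileTargetDemo.xel' );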

However, if you try to start the session, you will receive a pretty little error. Here is the code to try to start the session, along with its associated error.
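-- start the demo session created above
ALTER EVENT SESSION FileTargetDemo
ON SERVER
STATE = START;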

Msg 25602, Level 17, State 22, Line 25
The target, “5B2DA06D-898A-43C8-9309-39BBBE93EBBD.package0.event_file”, encountered a configuration error during initialization. Object cannot be added to the event session. The operating system returned error 3: ‘The system cannot find the path specified.’ while creating the file ‘C:\Database\XEs\FileTargetDemo_0_130670484928250000.xel’.

If you double check in the SSMS GUI, you will see that the session is there. It is just stopped. You can also check that the session exists if you recall my query from a previous post where I show the status of the session.

If you take it a step further and try to read the log file that should be created, you will be greeted once more with a reminder that there is a problem. Try this (sans change again).
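A sketch of that read attempt, pointed at the same (nonexistent) path:

SELECT CONVERT(xml, event_data) AS event_data
  FROM sys.fn_xe_file_target_read_file(N'C:\Database\XEs\FileTargetDemo*.xel', NULL, NULL, NULL);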

And you will receive the following message:

Msg 25718, Level 16, State 3, Line 31
The log file name “C:\Database\XEs\FileTargetDemo*.xel” is invalid. Verify that the file exists and that the SQL Server service account has access to it.

The beauty here is that all is not lost. These are easy problems to get around and resolve. All it takes is to adjust the file path to be the correct path or a path that exists (to which you have permissions).

Read the Log File

If you have been reading this article, then this might seem like a bit of redundancy. That is by design. DBAs love redundancy—well at least when it means there is a means to recovery due to the redundancy.

Unlike reading from the ring_buffer, when you read from the file target, you will use different code. The file target is stored in a binary representation on disk, which a function can render in XML format for human readability. So, to read a file target, you would need to do something like the following.
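A sketch of that read: sys.fn_xe_file_target_read_file shreds the binary file, and a CONVERT turns each event into XML you can shred further.

SELECT x.event_data.value('(event/@name)[1]', 'varchar(128)') AS event_name
    , x.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time
    , x.event_data
  FROM ( SELECT CONVERT(xml, event_data) AS event_data
           FROM sys.fn_xe_file_target_read_file(N'C:\Database\XEs\FileTargetDemo*.xel', NULL, NULL, NULL) ) x;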

And you definitely do not want to try the following with a file target. That is, unless of course you are looking to go bald at an earlier-than-expected age in this profession.
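That is the ring_buffer style of query pointed at a file target, something like this:

SELECT st.target_data
  FROM sys.dm_xe_sessions s
  INNER JOIN sys.dm_xe_session_targets st
    ON s.address = st.event_session_address
 WHERE s.name = 'FileTargetDemo'
   AND st.target_name = 'event_file';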

If you find yourself doing this version, all you will get for results is a short XML string similar to the following:

This is not what you would be looking for when trying to read the event session data. That said, it at least provides some information that could be of use. You will see that buffers are tracked and the file name is tracked. For this last demo, I corrected the FileTargetDemo session to use a directory on the file system that existed and to which I had the appropriate level of permissions.

I just covered two quick tips to help make life a little bit easier when dealing with file targets and extended event sessions. I will try to continue little tips like this over the coming months. I hope you have found this useful and that you will indeed be able to put it to use.

 

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in an article demonstrating the power of XE over Profiler? Check this one out!

This has been the third article in the 2018 “12 Days of Christmas” series. For a full listing of the articles, visit this page.

Upgrading From SQL Server Profiler

I just shared an article describing how to use Extended Events to perform your SQL Server Profiler duties. If you missed it, you can read all about it here.

In that article, I showed one common use-case for SQL Server Profiler and how to achieve the same result via Extended Events. What I didn’t show in that article was how to correlate all of your favorite trace/profiler options into the shiny new XE equivalents. Is there even a way to do that?

As luck would have it, there is a means to correlate trace/profiler events to Extended Event events (yeah that sounds weird).  With the release of SQL Server 2012, Microsoft introduced a couple of catalog views to help with this correlation effort. Those views have a pretty straightforward naming convention. For example, one of the views is named sys.trace_xe_event_map while the other is sys.trace_xe_action_map. For this article, I will be focusing on the former.

Making the Upgrade

When looking to finally make the upgrade away from trace/profiler, a big key is to figure out if your favorite trace events are even an option at the next level. I have talked about finding events in XE in previous articles by trying to search for a specific topic that might apply to your current condition.

But if you are already familiar with specific trace events, you may just want/need to know what the new name is in XE. This is where those catalog views come into play. And in support of that, here is a query that can help in that upgrade effort:
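Here is a sketch of such a query, walking sys.trace_events into sys.trace_xe_event_map (the LEFT JOIN keeps the unmapped trace events visible):

SELECT te.trace_event_id
    , te.name AS trace_event
    , txem.package_name AS xe_package
    , txem.xe_event_name AS xe_event
  FROM sys.trace_events te
  LEFT OUTER JOIN sys.trace_xe_event_map txem
    ON te.trace_event_id = txem.trace_event_id
 ORDER BY te.name;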

Looking through the results, one would notice that not every trace event maps to an Extended Event event. The events that don’t map, in general, deal with audits (which are actually driven by Extended Events).

This is a really good start to getting you on your way to that much needed upgrade away from profiler/trace. What if there are several traces that are already in use or scripted for various reasons in the environment?

Decision Time

If you happen to have traces that are already deployed in the environment or in script form, it can be a bit of a tedious pain to convert those to Extended Events. Do you manually recreate the traces as XE sessions? Do you abandon the upgrade due to the effort and annoyance it will create? Or do you find some automated means of performing the conversion?

Of those three options, only two are valid. Those options involve performing the conversion of the traces to XE sessions. There are pros and cons for each. You may opt to take your time and learn more about Extended Events by performing the manual upgrade, or you may choose to save time by using an automated routine.

Should you decide to try the automated routine, there is one already out and available to help you on your way. Jonathan Kehayias wrote the script and you can download it here.

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in another article demonstrating the power of XE over Profiler? Check this one out!

This has been the second article in the 2018 “12 Days of Christmas” series. For a full listing of the articles, visit this page.

How To: XEvents as Profiler

A common excuse for not delving into extended events seems to be the jump from Profiler to extended events.  There appears to be an inherent fear with how to use extended events.  In addition to that fear, is the fear that extended events does not do what Profiler can do so easily.

Today, I would like to share a short tutorial on how to use extended events as if it were a profiling session.  I am only going to show one of the many possible means to profile via extended events.  Just understand that this means you have flexibility and a way to do more with this simple example than what I am going to share.

Requirement

You need to capture TSQL queries that are occurring against your instance in order to try and determine the code that is being passed to SQL Server from the application.

How-To

Rather than jump to Profiler, you are going to need to jump to a fresh, clean query window in SSMS.  Just like in Profiler, you will want to capture certain “statement” oriented events.  For this session, I want to start with sp_statement_starting and sql_statement_starting.  I am not going to use the _completed forms of those events because I want to capture as much as I can – without capturing too much noise.  Sometimes, there may be a _starting without a coordinated _completed and that could throw a wrench in the works.

With the basic information in hand, we are ready to implement some of the things learned in previous XEvents posts (here and here).  Let’s go ahead and start setting up the session that will capture the information we seek.
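Here is a sketch of such a session. The predicates and the file path are purely illustrative; swap in filters and a directory that fit your environment.

CREATE EVENT SESSION XEProfilerDemo
ON SERVER
ADD EVENT sqlserver.sp_statement_starting
    ( SET collect_object_name = 1
      ACTION ( sqlserver.client_app_name, sqlserver.client_hostname
             , sqlserver.database_name, sqlserver.sql_text, sqlserver.username )
      WHERE ( sqlserver.database_name = N'AdventureWorks2014' ) ),
ADD EVENT sqlserver.sql_statement_starting
    ( ACTION ( sqlserver.client_app_name, sqlserver.client_hostname
             , sqlserver.database_name, sqlserver.sql_text, sqlserver.username )
      WHERE ( sqlserver.client_app_name <> N'Microsoft SQL Server Management Studio' ) )
ADD TARGET package0.event_file
    ( SET filename = N'C:\Database\XE\XEProfilerDemo.xel' )
WITH ( MAX_DISPATCH_LATENCY = 5 SECONDS );

ALTER EVENT SESSION XEProfilerDemo
ON SERVER
STATE = START;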

One caveat to consider: in the sp_statement_starting event, I have specified a specific set option (collect_object_name) be used.

This particular option is disabled by default.  If you want to know the object_name of code that is being used (triggers, stored procedures, etc), then this option must be enabled.

I have specified the same actions to be used for each of the events I am trapping, but this is purely up to you.  If you do not wish to capture these actions, then you can remove them or add different actions as necessary.  I have also specified a different predicate for each of the events.  This is something that works so much better in XEvents than Profiler – the filtering that can be applied is much more flexible.

Once the session is started, then it is merely a matter of figuring out how to look at the data.  I generally use TSQL to parse the data from the file, but you could also fiddle with the GUI as well.  You can access that by right-clicking the event session in management studio and then selecting “Watch Live Data”.  If everything is configured properly, then you will start to see data after the first event is triggered.  From within the GUI, you can pick and choose which columns to display in your view.  You can pause the display of the session data, or you can even “erase” the data from the current view (just like in Profiler).  You can even filter, aggregate or group the data in the view from the GUI.

The use of the GUI is up to you.  I still prefer to use the script route.  With that route, here is a sample of what you may need to write in order to display the data from the session data file that has been captured.
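A sketch of that script, assuming the file path used in the session above:

SELECT x.event_data.value('(event/@name)[1]', 'varchar(128)') AS event_name
    , x.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time
    , x.event_data.value('(event/data[@name="object_name"]/value)[1]', 'nvarchar(256)') AS object_name
    , x.event_data.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
  FROM ( SELECT CONVERT(xml, event_data) AS event_data
           FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\XEProfilerDemo*.xel', NULL, NULL, NULL) ) x;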

It really is that simple.  Now you have a means to run a Profiler-like trace on your server without the impact of Profiler.  XEvents provides the means to run a more evolved Profiler session on your data to capture things like queries from an application.  XEvents does all of this without the severe penalty of Profiler and with many additional bonuses.  Try it out and enjoy!

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in another article demonstrating the power of XE over Profiler? Check this one out!

This is the first article in the 2018 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Quick and Easy XE for Azure DB

The Cloud

It has been a minute since I gave much love or attention to Extended Events in Azure SQL DB. Things have changed a touch since then. We will see about some of those changes in a future article. As for this time, let’s focus on how to get a session in Azure SQL DB up and running really quick and easy.

Create a Session

I am going to keep this as easy as possible for creating a session and thus I will demonstrate how to do it from the GUI. And for those more in tune with their scripting side, there is a future article on that as well as a glimpse of a script near the end of this article.

Recall from the previous article that things in Azure SQL DB are different for Extended Events. XE is database scoped rather than server scoped (as is the case with your on-premises servers). Due to this change, finding the GUI for XE is a little different.

In order to find the GUI for XE, you must drill down into the database and then you will see “Extended Events.” (Side note, there is no XE Profiler for Azure SQL DB as of this writing.) If you right-click Sessions, you will get a menu with the option to create a New Session.

After clicking “New Session…” the familiar window for a new session will appear.

Here, you see a session that I started creating already. For this session, I opted to go with the template called “Query Detail Tracking” (more on templates for Azure SQL DB in a future article). I also like to enable “Causality tracking” so that checkbox is ticked in this example. The template has the events I wish to capture already and I am only going to use the ring buffer (file target has some additional requirements for an Azure SQL DB session and should be treated in an entirely separate article) so I will skip those screens straight to the advanced screen before completing this session.

From the advanced screen, all I want to do for now is to decrease the dispatch latency from the default 30 seconds down to 10 seconds. For this session, I just want to capture the events quickly in the target and be able to review them as soon as possible. I won’t be able to utilize the “Watch Live data” option for this session (as shown below), so a quick dispatch is essential.

After I have selected all of the configurations I desire for this session, I click the script button at the top of the window. From there, I select “New Query Editor Window.” Once scripted, I will receive the following script for this particular session.
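The scripted session should look something like this sketch (the event list from the Query Detail Tracking template is abbreviated here, and the session name is illustrative):

CREATE EVENT SESSION [QueryDetailDemo]
ON DATABASE
ADD EVENT sqlserver.error_reported
    ( ACTION ( sqlserver.client_app_name, sqlserver.sql_text ) ),
ADD EVENT sqlserver.rpc_completed
    ( ACTION ( sqlserver.client_app_name, sqlserver.sql_text ) ),
ADD EVENT sqlserver.sql_batch_completed
    ( ACTION ( sqlserver.client_app_name, sqlserver.sql_text ) )
ADD TARGET package0.ring_buffer
WITH ( MAX_DISPATCH_LATENCY = 10 SECONDS, TRACK_CAUSALITY = ON );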

Notice that there is nothing in the script about starting the session; scripting the session does not start it. Starting it is a separate statement:
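-- start the session; note "on database" rather than "on server"
ALTER EVENT SESSION [QueryDetailDemo]
ON DATABASE
STATE = START;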

Note that “on database” has been specified instead of the traditional “on server” for the session state command. Stopping the session is just as easy:
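ALTER EVENT SESSION [QueryDetailDemo]
ON DATABASE
STATE = STOP;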

Conclusion

This has been a very simple introduction into the creation of an Extended Event session using a template for Azure SQL DB. I demonstrated the use of the GUI to configure the session quickly and then to subsequently script that configuration before creating the session.

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in seeing the power of XE over Profiler? Check this one out!

This is a quick precursor to the 2018 “12 Days of Christmas” series. The series will begin on December 25th (the first day of Christmas) and you can easily follow it via this page.

File Maintenance – Cleaning Up Old Files

Published on: December 21, 2018

Using SSIS to Maintain the File System

We have all run into a need or a desire to clean up old stale files from the file system, whether it be to remove old backup files or flat files that are created from one process or another.  And based on this need/desire, we have all come up with a method to help achieve that goal.

Some of the methods might include a flag in a maintenance plan.  Other methods may use a SQL script employing xp_cmdshell and delete statements.  Yet another may utilize the sp_oa stored procs and DMO.  And still others may have ventured into powershell to accomplish the same task.  The point is, there are many methods.

I am adding yet another method to the mix.  Why?  I didn’t much like the option of using the sp_oa method or the xp_cmdshell route.  I am very novice with powershell and it would take a lot more tinkering to get the script working properly.  Also, I felt pretty comfortable with SSIS and had approval to try to get this done using that method.  And just because I am a novice with powershell does not mean that I will not circle back around to try to accomplish this task via that means.

Note: This article was originally written in 2011 and got stuck in an unpublished state. Things have changed since then, so I will definitely be circling back around for a powershell version.

Requirements

The method employed needs to be able to do the following:

  1. Remove multiple file types
  2. Be configurable
  3. Clean out files from numerous directories
  4. Remove files older than a specified number of days.

Setup

The solution I chose utilizes SSIS.  It also requires that there be a table in a database that helps to drive the package.

The table looks like the following.
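A sketch of that table; the dbo.FilePaths name and column sizes are illustrative:

CREATE TABLE dbo.FilePaths
    ( FilePathID int IDENTITY(1,1) NOT NULL PRIMARY KEY
    , FilePath varchar(256) NOT NULL
    , Process varchar(64) NOT NULL );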

The filepath column holds the FileSystem Path for each directory that needs to be cleaned.  Paths that are supported are local (e.g., C:\temp) and UNC paths (e.g., \\machine\c$\temp).  I set this attribute to a length of 256, but if you have a longer path, you will want to adjust the length.

The Process column will hold a value describing what that path relates to, such as MaintainDirectory.  In my example, I am using MaintainDirectory to control which directories hold files that potentially need to be deleted.

Here is an example of the contents of that table I am using currently.
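Illustrative rows might look like this:

INSERT INTO dbo.FilePaths (FilePath, Process)
VALUES ('C:\Database\Backups', 'MaintainDirectory')
     , ('\\SomeServer\d$\exports', 'MaintainDirectory');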

The last piece of the setup before we start working on the SSIS package is the need for a string splitting function.  Pick the string splitter of your liking.  I have one that I like and am sure you have one that you prefer.  The SSIS package relies on the return field from the splitter being named “Item.”  If it is named something else, please make the adjustments in the package as necessary.

The Package

The package I created has been created in SSIS 2008.  To meet the requirements already set forth, I utilized the following objects: ADO.Net Data Source, 2 Execute SQL Tasks, 2 ForEach Loop Containers, a Script Task, and 8 variables.  Let’s take a look at these starting with the variables.

Variables

  • SQLServerName – The value held here is used in an Expression for the Data Source.  This will overwrite the ServerName value in the Data Source.
  • DatabaseName – Used alongside the SQLServerName variable in an Expression for the Data Source.  This value will overwrite the InitialCatalog value in the Data Source.  This should be the name of the database where the FilePaths table and String Split function exist.
  • DaysToKeep – This value is the cutoff point for which files to keep and which files will be deleted.  This variable is used as a ReadOnly variable in the Script Task.
  • obj_FileExtension – This object variable is used to store the result set from one of the Execute SQL tasks and the results of the string split function from the FileExtensionList variable.
  • FileExtensionList – This is a delimited list of file extensions that need to be evaluated for deletion.  It is important to note that the file extensions that are to be processed are case sensitive.  The extension must appear in this list as it appears in the file system.
  • FileExtension – to be used in one of the ForEach loops.  This variable will receive the FileExtension from the obj_FileExtension variable one at a time.
  • obj_ListOfDirectories – This variable will receive the result set of an Execute SQL Task to be later consumed by one of the ForEach loops.
  • DirectoryToMaintain – receives, one at a time, the directory to process for file deletion.  The ForEach loop stores a value from obj_ListOfDirectories in this variable for processing.

Execute SQL Tasks

The two Execute SQL Tasks are simple in function.  One is to get the list of directories to maintain from the FilePaths table.  The other is strictly to split the string for the FileExtensionList variable.

The first is named “Get Directory List” and should receive the Full Result Set from the following query.
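A sketch of that query, against the driver table from the setup:

SELECT FilePath
  FROM dbo.FilePaths
 WHERE Process = 'MaintainDirectory';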

The Result Set tab of this task also needs to be modified.  It should look like this.

From this task, we flow to the next Execute SQL Task named “Split FileList.”  The setup of this task is very much like the previous task.  We want to receive the full result set.  We have a configuration to make on the result set tab.  We also need to map a parameter.  Let’s take a look at those real quick.

Parameter Mapping

Result Set

And this is the query that we will be executing.
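Something like the following, where dbo.fn_Split is a stand-in for your preferred splitter and @FileExtensionList is the parameter named on the Parameter Mapping tab:

SELECT Item
  FROM dbo.fn_Split(@FileExtensionList, ',');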

Notice that the Parameter we named in the Parameter Mapping tab is being used in the function call.  I chose this method because I could see and understand how it works better.

ForEach Loops

The next stop in the flow is the ForEach Loop – Directory object.  As the name implies, this ForEach Loop is designed to work with the obj_ListOfDirectories variable/array.

With this first Loop container, we have two tabs that need to be configured in the properties.  Both Loop containers are similar in that they need the same tabs to be configured.  First, let’s talk about the Collection tab.

On the Collection tab, we need to set the Enumerator option to “ForEach ADO Enumerator.”  Then we need to select the obj_ListOfDirectories from the drop down labeled “ADO Source Object Variable.”  Your screen should look like the following image.

With this tab configured, we can focus our attention to the quick changes that need to be made on the Variable Mappings tab.  On this tab, we are telling the enumerator how to handle the data from the object variable.  We are mapping columns from the result set to variables for further consumption.  When configured, it should look like the following.

Inside of this ForEach loop container, we have another ForEach loop container.  This second ForEach loop container handles the file extensions that we listed out in delimited fashion in the FileExtensionList variable.  I have called this container “ForEach Loop – FileExtension” (just keeping it simple).

The collection tab follows the same configuration setup.  The difference of course being that this container will use the obj_FileExtension object from the source variable dropdown menu.

The variable mapping tab is also slightly different.  We will be mapping column 0 of the object to the FileExtension variable.  The explanation for the different number between the two loop container variable mappings is simple.  In obj_ListOfDirectories, we have multiple columns being returned.  In obj_FileExtension, we have but one single column being returned.

This inner Loop container will loop through each of the extensions for each of the directories that have been returned to the outer loop container.  The inner loop container has the remainder of the workload in it via the Script Task.

Script Task

It is via the script task that we actually get to start deleting files.  This was the most difficult piece of the entire package – though the script is not very large.

For the script task, I chose to implement it via the Visual Basic option (instead of C#).  I have three ReadOnlyVariables employed by the script.  Those variables are: User::DaysToKeep, User::DirectoryToMaintain, and User::FileExtension.

Once you have set those on the script tab, the next step is to click the Edit Script… button where we need to place the following script.

An important note of interest is the need for the Try…Catch block.  Without it, you could run into an issue where a file (such as those pesky temp files) may be in use by some process and cause the package to error.  The Try…Catch will move past that nasty error and delete the files that it can.

Inside this script, you will see that I am comparing the LastWriteTime to the PurgeDays and ensuring that the file extension matches one in the list.  Then we move into the Try…Catch and either delete the file that matches those criteria or catch the exception and move on to the next file.

When all is said and done, your package should look something like this.

You should also have a variable list that looks like this.

Each variable that is not an Object has a value assigned to it at this point.  These values will be overwritten where applicable.

Next Steps

Having this package is a good start.  But unless you are prepared to manually run this on a daily basis, it needs to be added to a job and scheduled.  There are two ways to go about scheduling this package.

The first option is to configure the FileExtensionList and DaysToKeep variables and save the package with those values.  Then run this package through SQL Agent with those values every time.  The drawback to this method is that if you need to add or remove a file extension (as an example) then you need to edit the package and re-save it.

The alternative option is pass the values through the job to overwrite those variables as the job runs.  Should you need to remove or add a file extension, it would just be done at the job definition level.

Let’s take a look at this second option.  I will skip past how to create the job as an SSIS job in SQL Server and we will look directly how to modify those variables from the job properties.

To configure these variables directly from the SQL Agent job, open the Job properties and click on the Set Values tab (assuming you have defined this job as an SSIS job type).  You should get a screen similar to this (devoid of the set values shown in the pic).  Just add the parameters (variables) we have discussed to this point with appropriate values to fit your needs/environment.

I have chosen to only include the four variables shown above since the remaining variables are either objects or get overwritten in the ForEach loops during processing.  The only thing remaining now is to set the schedule for the job.  Once set, the job (and package) will take care of the rest.

Conclusion

I have now shown you how to maintain some of the directories on your system through the use of SSIS and SQL server.  There are many methods to accomplish this goal, it is up to each of us to choose the best method for our environment and comfort level (by means of supporting the chosen solution).

If you would like to read more interesting stuff concerning SSIS, you might want to check any of these articles: lost password, expected range errors, and synonyms extending SSIS.

Encrypting a Database Never Finishes

Categories: News, Professional, Security, SSC
Published on: December 20, 2018

There is plenty of legislation and regulation in place these days that strongly suggest the encryption of data within a database.  In SQL Server, we have the ability to comply with these regulations in a couple of different ways.  We can choose to encrypt at the column level or we can choose to encrypt the entire database using TDE (transparent data encryption).

Given these opportunities and methods of encrypting a database and protecting data, some clients will want to comply with the regulation and move forward with encryption.  Every now and again, the client might run into something they can enable themselves.  TDE happens to be one of those things.  Enabling TDE is pretty easy.  Sometimes things run very smoothly and sometimes there is a problem.  Unfortunately, the problem may not always be immediately evident.

This is one of those times.  If you enable a setting in a database, you would expect it to be in effect and to be working.  Sometimes, with TDE, that change doesn’t take effect. What actually happens is the switch is thrown, the database tries to comply, but something prevents the database from completing the directive it was just given.  As luck would have it, I had the opportunity to assist recently with such a case.

Infinitely Encrypting a Database

Before diving too far down into the problem, let’s take a look at how we can gauge and measure the encryption progress.  Microsoft has provided a dynamic management view to see and track the status and progress of encryption that we have tried to enable.  This view is sys.dm_database_encryption_keys.  Within that view there are a few important and useful fields: the encryptor thumbprint, the encryption state, and the percent complete.

With a little tsql from the toolbelt, we can pull back the essential information to monitor the encryption progress.
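A sketch of that monitoring query (the CASE decodes encryption_state per the documentation):

SELECT DB_NAME(dek.database_id) AS DatabaseName
    , CASE dek.encryption_state
        WHEN 0 THEN 'No database encryption key present, no encryption'
        WHEN 1 THEN 'Unencrypted'
        WHEN 2 THEN 'Encryption in progress'
        WHEN 3 THEN 'Encrypted'
        WHEN 4 THEN 'Key change in progress'
        WHEN 5 THEN 'Decryption in progress'
        WHEN 6 THEN 'Protection change in progress'
      END AS encryption_state_desc
    , dek.percent_complete
    , dek.encryptor_thumbprint
  FROM sys.dm_database_encryption_keys dek;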

You’ll notice that I have a case statement in there to transform the encryption_state values into a human friendly form.  Sadly, the human friendly values are not stored in the database and are only documented in MSDN / BOL.

Now on to the problem at hand—the encryption that starts but never ends.  Given that I have a database named “Published”, I can enable encryption with the following script.
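Something along these lines; this assumes a database master key and a server certificate (TDECert is an assumed name) are already in place:

USE [Published];
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE [Published] SET ENCRYPTION ON;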

After issuing the encryption command, I can then pull up my “status” script and see how things are rolling.

In this example, and for the sake of this article, this database will remain at 0% complete for days and months if allowed to remain that way.  Notice that it still thinks it is progressing, but nothing is actually getting done.  This kind of result can also be seen after a database reaches 99% complete.  At some point, something occurred that caused grief for the encrypting process.  As soon as that happens, the report will revert back to 0% complete and just stall indefinitely.  The types of things that can cause this vary from corruption to index rebuilds occurring at the same time as the attempt at encryption.

When a stall such as this occurs, the fix could be as simple as restarting the encryption with the statement previously used.  Here that is again (bearing in mind to substitute the appropriate database name for the one I used in this demo):
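ALTER DATABASE [Published] SET ENCRYPTION ON;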

In some cases, that won’t work.  And sometimes you just may want to proceed very cautiously.  In those events, you can get your database back on track to being encrypted by using the following:
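A sketch of that approach. Trace flag 5004 is the commonly cited flag for stopping the TDE encryption scan; test carefully before relying on it in production:

DBCC TRACEON (5004);
GO
DBCC TRACEOFF (5004);
GO
ALTER DATABASE [Published] SET ENCRYPTION ON;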

The use of this trace flag forces the page scan, used to encrypt the database, to stop (even though it already appears to have stopped).

If I check my encryption status again, I should see the percentages growing until 100% is achieved and eventually the state that declares the database as being encrypted.  This is demonstrated in the following three screenshots.

And finally we have an encrypted database.

Conclusion

Transparent Data Encryption is useful in helping get a database into compliance with several regulations.  It is also fairly easy to implement.  Occasionally, it may only appear to have completed successfully.  When the SQL statement completes, TDE is not entirely in place.  Though it is “enabled”, it may still be waiting for the encryption phase to complete.  Once the statement to encrypt the database has been issued, it is important to follow up with a bit of monitoring to ensure the database progresses to the encrypted state.  If that doesn’t happen, you could end up in a state where you think the database is encrypted and it actually just stalled.

If you enjoy reading about security and encryption, check out these articles related to security and audits.

