SQL Server's Black Box Recorder – sp_server_diagnostics

Previously I shared the first article in a three-part series about the SQL Black Box Recorder (you can read it here), with a promise of at least two more posts. The first article in the series laid the groundwork about the black box recorder in SQL Server. Furthermore, there was a brief introduction into the three components of the black box recorder. The three components as laid out in that article are:

  • Default Trace
  • system_health Extended Event Session
  • sp_server_diagnostics procedure

The first article went into deeper detail about the first leg of this three-legged black box recorder – the default trace – and the second article covered the system_health Extended Event Session. If by some measure you are in the dark about what Extended Events is, then I recommend you read my personal blog series on the subject. There are numerous articles stepping through the subject in easy-to-digest fashion. You can find many of the articles in the following table of contents – here.

The focus of this article is the third piece of the black box recorder, sp_server_diagnostics. I will show how this procedure gathers pertinent information that is crucial to building an overall picture of the system—from a default black box recorder perspective.

SQL Black Box Recorder

What is the sp_server_diagnostics process?

Beyond being a component of the black box for SQL Server, what exactly is this diagnostics process? The sp_server_diagnostics procedure is much as the name implies—it is a “diagnostics” service that attempts to gather information about various events that may affect the overall health of the SQL Server instance.

The diagnostics process traps various server health (diagnostics) information for the SQL Server instance in an effort to detect potential failures and errors. This diagnostics session/process traps information for five different categories by default. There is a sixth category of information for those special servers that happen to be running an Availability Group.

The Six

  • system: overall health of the server including information about spinlocks, severe processing conditions, non-yielding tasks, page faults, and CPU usage.
  • resource: health of resources, specifically including physical and virtual memory, buffer pools, pages, cache and other memory objects
  • query_processing: captures query processing data about worker threads, tasks, wait types, CPU intensive sessions, and blocking tasks.
  • io_subsystem: health-related data for the IO subsystem.
  • events: errors and events of interest recorded by the server, including details about ring buffer exceptions, ring buffer events about memory broker, out of memory, scheduler monitor, buffer pool, spinlocks, security, and connectivity.
  • AvailabilityGroup: basic information about the Availability Group, if one is enabled on the server.

This is fantastic introductory information. As luck would have it, you can also read more about it from my blog – here.

How to use this process?

What good is any level of information if we don’t quite know how to use or consume that data? Do we consume it via trace? Do we execute the stored procedure? Do we capture the information from some other means? The answer is yes! Wait, yes to what? Yes to all of it! Let’s explore how to consume this data.

Option 1

The first step to learning how to consume this data is to start at probably the most basic of levels. As you have already noted, this is a stored procedure. It was probably the sp_ in the name that gave it away, right? Since it comes in the form of a stored procedure, it makes it pretty easy to figure out the first means to get the data from it—just execute it. Actually, let me clarify that just a bit. sp_server_diagnostics is a CLR stored procedure. Nonetheless, at this level, all that needs to be done to get to the data is to execute it.
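For a bare-bones look, something like the following is all it takes (a minimal sketch; as I recall, VIEW SERVER STATE permission is required to see the results):

```sql
-- Execute the diagnostics procedure once and return a point-in-time health snapshot.
-- Without the @repeat_interval parameter it runs a single pass and returns immediately.
EXEC sp_server_diagnostics;
```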

Look at that! Pretty slick! I execute the stored procedure and I can see a point-in-time result of the “health check” from this component of the black box recorder. Notice here that I only have five of the six categories that I had mentioned. This is due to the lack of a configured availability group (AlwaysOn) on this particular instance. Another point of interest is that the data column is not in a very friendly format for DBA consumption. This is flat text that just goes on and on. It is painful to consume in this manner.

Tidy that Code

Looking at the code, you can quickly see the difference. I have changed the data column to be XML-based to better match the data being output from the stored procedure. Let’s see that in action:
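Here is a minimal sketch of what I mean (the table variable and column list are my own; the key piece is landing the procedure output in an XML-typed data column):

```sql
-- Capture the procedure output so the data column can be treated as XML.
DECLARE @diag TABLE (
    create_time     DATETIME,
    component_type  SYSNAME,
    component_name  SYSNAME,
    [state]         INT,
    state_desc      SYSNAME,
    data            XML
);

INSERT INTO @diag
EXEC sp_server_diagnostics;

-- Click the data column in the results grid to open the formatted XML.
SELECT create_time, component_name, state_desc, data
FROM @diag;
```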

That is much better. Now I can simply click one of the data results and see a nicely formatted, easy-to-read XML output of that component. Here is an example from the query_processing component:

Option 2

Cool! I have just shown one means of exploring this facet of the black box recorder. Now, let’s take a look at other means to get at this particular data, beyond just executing the stored procedure. Did you know that this procedure is very closely related to Extended Events? (I am sure that you picked up on that nugget due to reading the previous article in the series or possibly the other article I mentioned here). With such a close relationship to Extended Events, it makes sense that sp_server_diagnostics also has a couple of events and is consumed by multiple sessions.

SQL Server comes with various default Extended Event sessions; of those, sp_server_diagnostics is consumed by the system_health session as well as the hidden session for SQLDiag that is used by Windows Clustering and Availability Groups to determine server health. You already heard about the system_health session (from the previous article), but few have heard about the hidden session. I will take a look at the events as well as the sessions that consume the sp_server_diagnostics data.

Events

First, I will share the details for each of the events related to the sp_server_diagnostics process.
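A query along these lines (a sketch against the XE metadata views) returns the events and the channel/keyword attributes I am describing:

```sql
-- List the Extended Events whose names reference sp_server_diagnostics along with
-- their channel and keyword attributes from the event metadata.
SELECT  o.name          AS event_name,
        c.name          AS attribute_name,
        c.column_value  AS attribute_value
FROM    sys.dm_xe_objects AS o
        INNER JOIN sys.dm_xe_object_columns AS c
            ON  o.name = c.object_name
            AND o.package_guid = c.object_package_guid
WHERE   o.object_type = 'event'
        AND o.name LIKE 'sp_server_diagnostics%'
        AND c.name IN ('CHANNEL', 'KEYWORD');
```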

As I have shown here, you can see that these events are a part of the debug channel. In the next couple of images, I want to show some of the payload attached to these events.

These events show what I have shown previously with regards to output of the stored procedure. There is a consistency in the output of the diagnostics process. What I have yet to show in this data is one more piece of information that could be helpful down the road and that pertains specifically to the extended events data. That piece of information I need to share is the keyword associated with each of these events. When looking for related events, I need you to understand that the sp_server_diagnostics events are a part of the scheduling classification. There, I have said it. I just gave you a big clue to help you in your troubleshooting endeavors and the use of the diagnostics processes/events.

Which Sessions then?

Now that I have shown which events are related to diagnostics, I can finally show which sessions are dependent upon the diagnostics data. To show that kind of information, I have this next query to help:
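Here is a sketch of that query, built on the event session catalog views:

```sql
-- Which deployed event sessions reference the sp_server_diagnostics events?
SELECT  es.name  AS session_name,
        ese.name AS event_name
FROM    sys.server_event_sessions AS es
        INNER JOIN sys.server_event_session_events AS ese
            ON es.event_session_id = ese.event_session_id
WHERE   ese.name LIKE 'sp_server_diagnostics%';
```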

When I execute the statements in the preceding query, I will receive a single result—system_health. That said, do not believe everything I just showed you with that query. SQL Server does a decent job at hiding some of the details. When I run the next query, I will see a little bit more information.
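This time the sketch goes after the running sessions via the DMVs rather than the catalog views:

```sql
-- Which *running* sessions (including ones not exposed in the catalog views)
-- are consuming the sp_server_diagnostics events?
SELECT  s.name AS running_session_name,
        e.event_name
FROM    sys.dm_xe_sessions AS s
        INNER JOIN sys.dm_xe_session_events AS e
            ON s.address = e.event_session_address
WHERE   e.event_name LIKE 'sp_server_diagnostics%';
```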

When I run the preceding query, I will see the following two sessions (given that I have not changed the default settings):

Take note of the names of those two sessions. I showed that the system_health session is reliant upon the diagnostics events. Now, I am showing that there is another event session called “sp_server_diagnostics session”. Wait, there is more! SQL Server continues to hide stuff from us.

Hidden Sessions

When you have AlwaysOn configured, there is an additional hidden session that is not visible within the sessions lists inside of SQL Server. To view this hidden session, you must follow the directions shown in this article on msdn. Once you have navigated to the folder containing the hidden XEL files, you can open them in Management Studio and view the data. It is while viewing the data that you will be able to see the relationship between sp_server_diagnostics and this newly discovered trace file.

The payload from this image demonstrates the relationship between sp_server_diagnostics and the hidden trace file. I challenge you to take some time to explore this hidden trace file on one of your servers where AlwaysOn is configured. I know you will be thankful for the investment at some point down the road.

Conclusion

I have just shown a ton of info about the third component of the black box recorder—sp_server_diagnostics. This procedure may be lesser known than the first two components, but it is highly integral to the overall monitoring and health of your SQL Server instance. In this article, I have not only shown how to use sp_server_diagnostics (albeit quickly), but I have also shown that the diagnostics are heavily relied upon by other components of the black box recorder. In addition, I have shown how this component is integrated into other hidden traces that are essential in the monitoring and troubleshooting of your instance.

The black box recorder is real. You need to invest time in getting to know the black box recorder components and how they can be of help to you in becoming a superhero DBA.

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in seeing the power of XE over Profiler? Check this one out!

This has been the sixth article in the 2018 “12 Days of Christmas” series. For a full listing of the articles, visit this page.

SQL Server's Black Box Recorder – system_health

Yesterday I shared the first article in a three-part series about the SQL Black Box Recorder (you can read it here), with a promise of at least two more posts. The first article in the series laid the groundwork about the black box recorder in SQL Server. Furthermore, there was a brief introduction into the three components of the black box recorder. The three components as laid out in that article are:

  • Default Trace
  • system_health Extended Event Session
  • sp_server_diagnostics procedure

The first article went into deeper detail about the first leg of this three-legged black box recorder – the default trace. In this article, the focus will be directed to the next leg of the black box recorder – the system_health Extended Event Session. If by some measure you are in the dark about what Extended Events is, then I recommend you read my personal blog series on the subject. There are numerous articles stepping through the subject in easy-to-digest fashion. You can find many of the articles in the following table of contents – here.

The focus of this article will not be to introduce the topic of Extended Events. Rather, it will be to introduce the system_health session and dive into it a fair bit.

SQL Black Box Recorder

What is the system_health session?

Beyond being a component of the black box for SQL Server, what exactly is this event session? The system_health session is much as the name implies – it is a “trace” that attempts to gather information about various events that may affect the overall health of the SQL Server instance.

The event session will trap various events related to deadlocks, waits, CLR, memory, schedulers, and reported errors. To get a better grasp of this, let’s take a look at the event session makeup based on the available metadata in the DMVs and catalog views.
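The following is a sketch of the sort of query I mean; it joins the event session catalog views to show each event in the system_health session along with the actions attached to it:

```sql
-- Events and their attached actions for the system_health session.
SELECT  es.name     AS session_name,
        ese.package AS event_package,
        ese.name    AS event_name,
        esa.name    AS action_name
FROM    sys.server_event_sessions AS es
        INNER JOIN sys.server_event_session_events AS ese
            ON es.event_session_id = ese.event_session_id
        LEFT JOIN sys.server_event_session_actions AS esa
            ON  ese.event_session_id = esa.event_session_id
            AND ese.event_id = esa.event_id
WHERE   es.name = 'system_health'
ORDER BY ese.name, esa.name;
```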

In addition to the types of events I mentioned, there are also a few more interesting things to discover from the results of this particular query. For instance, the actions that are being employed help to gather even more information such as the callstack, tsql_callstack, database id, and sql text of the query that was executing at the moment of the event that is trapped.

One very peculiar event that is trapped is the sp_server_diagnostics_component_result event. Remember from earlier in this article that I noted the third leg of the black box recorder happens to be a very similarly named process – the “sp_server_diagnostics procedure”. When I cover this third leg, it will make a little more sense. Understand, for now, that your suspicion of it being a bit more integral to the black box (than merely 1/3 of the overall black box) is confirmed.

This is a good set of information that can be trapped. Furthermore, this is a good set of information which can give a fairly good diagnosis of several different potential problems within your SQL Server Instance! (Yes it may seem repetitive, but I wanted to underscore the importance of the data that can be trapped.) That said, compare the types of events to those of the default trace. Take note of how many events actually overlap between the two components of the black box recorder.

For ease of comparison, let’s simplify that previous query to get a set of events to read without the duplication due to the actions on each event.
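A simplified sketch, dropping the actions so each event appears only once:

```sql
-- Just the distinct events in the system_health session.
SELECT DISTINCT ese.name AS event_name
FROM    sys.server_event_sessions AS es
        INNER JOIN sys.server_event_session_events AS ese
            ON es.event_session_id = ese.event_session_id
WHERE   es.name = 'system_health'
ORDER BY ese.name;
```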

And since I am feeling particularly generous, I have lumped the events from the two traces together in the following query:
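Something along these lines will stitch the two lists together (a sketch; the default trace half leans on the trace metadata functions):

```sql
-- Events captured by the system_health session and by the default trace, side by side.
SELECT  'system_health' AS source,
        ese.name        AS event_name
FROM    sys.server_event_sessions AS es
        INNER JOIN sys.server_event_session_events AS ese
            ON es.event_session_id = ese.event_session_id
WHERE   es.name = 'system_health'
UNION ALL
SELECT DISTINCT
        'default trace',
        te.name
FROM    sys.traces AS t
        CROSS APPLY sys.fn_trace_geteventinfo(t.id) AS ei
        INNER JOIN sys.trace_events AS te
            ON ei.eventid = te.trace_event_id
WHERE   t.is_default = 1
ORDER BY source, event_name;
```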

That should help to compare the 17 events from the system_health session and the 34 from the default trace with a little more ease and speed. The events from the two sessions are rather complementary to each other as they help to construct the SQL Server Black Box recorder.

How to use this session?

With the events trapped by this session, there are a few ways to use the trapped data that should stand out pretty easily. To me, the quick application of the data from this session can come from the review of deadlocks, reported errors, or even waits analysis. Using the deadlock as probably the most common problem people will want to troubleshoot, this information is suddenly much more valuable and easy to access (no more need for that trace flag). Since the system_health is now trapping the deadlock by default, one could quickly access the deadlock graph from the event data and proceed to troubleshooting. More information on getting that deadlock graph can be found here and here.

Learn to use the data made available through this member of the black box recorder. Becoming familiar with this tool (and additionally with Extended Events) will make you a rock-star in the office. The information needed to troubleshoot most problems is right at your fingertips through the use of the different pieces of the black box recorder. If it is not available there, then more than likely it can be made available through a bit of tinkering with Extended Events. Get to know the tools and be that rock-star DBA.

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in seeing the power of XE over Profiler? Check this one out!

This has been the fifth article in the 2018 “12 Days of Christmas” series. For a full listing of the articles, visit this page.

SQL Server's Black Box Recorder – Def Trace

Across many professions and industries there is often a need to have some sort of device that “audits” everything that happens with a device or process. We are probably all quite familiar with the infamous black box recorders used by the passenger airline and train industries. It is also quite possibly fairly common knowledge that many pacemakers perform the same sort of functionality – on a smaller scale. Various metrics and data points are registered and recorded somewhere. Whether to a local storage device or whether the device phones home to some other remote location, it is recorded.

We often pontificate about the what-ifs for a black box recorder within SQL Server, right? We wish and dream and go about creating our own little recorder to suit the requirements we deem necessary for successful monitoring of the server for the just-in-case scenario. Well, the truth of the matter is that we really don’t need to go to such great lengths to create a “black box recorder” because Microsoft has already done all of that for us.

Wait, what? Yes, that’s right! Truth be told this shouldn’t be much news for most Senior Level data professionals working with SQL Server for the past few years. But if you are new to the product, this might be good news for you. Then again, it might just be bad news depending on your take.

Very much like what you may find with the airline industry, the black box recorder in SQL Server is not just a single method (device) implemented to capture all of the desired data points. On a passenger jet, you may find that there are three or more such devices that contribute to the capture and recording of the in-flight data. In SQL Server, there are three major processes that help capture our in-flight data. Over the next few articles I will discuss each of these processes. These processes include:

  • Default Trace
  • system_health Extended Event Session
  • sp_server_diagnostics procedure

This multi-tiered approach does provide a sort of fail-safe system. If one should be disabled, there still remain up to two more processes that may be running. That said, each of these can be disabled but it does take a bit of an overt effort. And since it does require somebody to put out effort to try and disable each of the black box components, you could potentially capture the culprit via audits from the other components or via an explicit audit that you create for these types of purposes.

Since this will be something to discuss over a few articles, I will break out each process into an individual article. For today, I will discuss the default trace.

Default Trace

The default trace by itself is something that can be turned off via configuration option. There may be good reason to disable the default trace, but before doing so, please consider what can be captured via it. I will use a query to demonstrate the events and categories that are configured for capture in the default trace.
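Here is a sketch of such a query; it pulls the event, category, and column list for the default trace from the trace metadata:

```sql
-- Events, categories, and captured columns for the default trace.
SELECT  t.id        AS trace_id,
        tc.name     AS category_name,
        te.name     AS event_name,
        tcol.name   AS column_name
FROM    sys.traces AS t
        CROSS APPLY sys.fn_trace_geteventinfo(t.id) AS ei
        INNER JOIN sys.trace_events AS te
            ON ei.eventid = te.trace_event_id
        INNER JOIN sys.trace_categories AS tc
            ON te.category_id = tc.category_id
        INNER JOIN sys.trace_columns AS tcol
            ON ei.columnid = tcol.trace_column_id
WHERE   t.is_default = 1
ORDER BY tc.name, te.name, tcol.name;
```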

In this query, I have requested a few more data points than necessary. That helps illustrate an additional point: the default trace isn’t filtering out any data for these events. If the event fires (in this trace), it is recorded. Let’s divert right back to the events and categories for now. The results of that query will produce the following sample list of events for me on SQL Server 2014:

That is quite a range of events covered by this particular recording device. From changes to objects down to security related events and even errors and warnings. If somebody drops an object, the default trace can catch it. If one of the various DBCC statements is executed, it will be trapped in this trace.

There is one thing that is captured by this trace that is not overly obvious. In fact, it is rather annoying in how it is recorded, in my opinion. Server configuration settings such as “cost threshold for parallelism” or the lesser utilized “user options” are not registered as an “Object:Alter” event but rather as an ErrorLog event. I wouldn’t call this type of “change” an error and don’t necessarily like seeing these registered with other legitimate errors. It would be nice to see these logged differently (they are currently logged the same way with Extended Events). So, if somebody is dorking around with server configurations, it becomes a bit more convoluted to figure it out, but we can get there. Let’s see how that works:
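A sketch of how to dig those configuration changes out of the default trace files (the text pattern on TextData is the key assumption here):

```sql
-- Read the default trace files and return the ErrorLog entries that record
-- server configuration option changes.
DECLARE @tracepath NVARCHAR(260);

SELECT  @tracepath = REVERSE(SUBSTRING(REVERSE(path),
                        CHARINDEX('\', REVERSE(path)), 260)) + N'log.trc'
FROM    sys.traces
WHERE   is_default = 1;

SELECT  gt.StartTime,
        gt.HostName,
        gt.ApplicationName,
        gt.LoginName,
        gt.TextData
FROM    sys.fn_trace_gettable(@tracepath, DEFAULT) AS gt
WHERE   gt.TextData LIKE 'Configuration option%'
ORDER BY gt.StartTime DESC;
```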

Looking at my system, I get the following sample results:

You can see there are plenty of settings that I have been playing with captured by the default trace. If you happen to have a junior DBA or maybe a cowboy DBA that loves to shoot from the hip and make changes, this will help you find all the necessary details to bring to that DBA for discussion.

The default trace comprises one third of the black box recorder. I have shown a quick use for the default trace and have also shown some of the various events that are captured from this trace. I recommend getting to know your default trace just a little bit better. You never know when you may need to resort to the flight recorder data held in the default trace. Being familiar with the default trace before you need to use it will help improve your comfort level when under the stress of trying to figure out what happened just before the server went belly up!

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in seeing the power of XE over Profiler? Check this one out!

This has been the fourth article in the 2018 “12 Days of Christmas” series. For a full listing of the articles, visit this page.

How to: File Target use in Extended Events

I have been doing a lot of posts of late about extended events but have failed to share some of the essential building blocks. I am not looking to do a series of definitions, but just share a couple of quick tips and examples here and there that will make the use of extended events a little easier to undertake.

The first one to tackle is the use of the file target. Using a file target is not difficult from a creation point of view (event session creation). But, some basics are critical or you could end up with fewer follicles.

Let’s take a quick peek at some of these tips.

File System Prep

This first little tip comes from a painful experience. It is common sense to only try and create files in a directory that exists, but sometimes that directory has to be different on different systems. Then comes a little copy and paste of the last code used that worked. You think you are golden but forgot that one little tweak for the directory to be used. Oops.

Take this example session. When you run this code, you will not see an error (don’t change anything).
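Here is a sketch of such a demo session (the event choice is incidental; the important part is the file target pointing at a folder – C:\Database\XEs – that does not exist on this box):

```sql
-- Session creation succeeds even though the target folder does not exist.
CREATE EVENT SESSION FileTargetDemo
ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file
    (SET filename = N'C:\Database\XEs\FileTargetDemo.xel');
GO
```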

However, if you try to start the session, you will receive a pretty little error. Here is the code to try and start the session, along with its associated error.
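Starting the demo session is a one-liner:

```sql
-- Attempt to start the session; this is where the missing folder bites.
ALTER EVENT SESSION FileTargetDemo
ON SERVER
STATE = START;
```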

Msg 25602, Level 17, State 22, Line 25
The target, “5B2DA06D-898A-43C8-9309-39BBBE93EBBD.package0.event_file”, encountered a configuration error during initialization. Object cannot be added to the event session. The operating system returned error 3: ‘The system cannot find the path specified.’ while creating the file ‘C:\Database\XEs\FileTargetDemo_0_130670484928250000.xel’.

If you double check in the SSMS GUI, you will see that the session is there. It is just stopped. You can also check that the session exists if you recall my query from a previous post where I show the status of the session.

If you take it a step further and try to read the log file that should be created, you will be greeted once more with a reminder that there is a problem. Try this (sans change again).
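A sketch of that read attempt:

```sql
-- Try to read the event file that was never created.
SELECT *
FROM sys.fn_xe_file_target_read_file(N'C:\Database\XEs\FileTargetDemo*.xel',
                                     NULL, NULL, NULL);
```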

And you will receive the following message:

Msg 25718, Level 16, State 3, Line 31
The log file name “C:\Database\XEs\FileTargetDemo*.xel” is invalid. Verify that the file exists and that the SQL Server service account has access to it.

The beauty here is that all is not lost. These are easy problems to get around and resolve. All it takes is to adjust the file path to be the correct path or a path that exists (to which you have permissions).

Read the Log File

If you have been reading this article, then this might seem like a bit of redundancy. That is by design. DBAs love redundancy—well at least when it means there is a means to recovery due to the redundancy.

Unlike reading from the ring_buffer, when you read from the file target, you will use different code. The file target is stored in a binary representation on disk, which a function can render as XML for human readability. So to read a file target, you would need to do something like the following.
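A minimal sketch of that pattern (adjust the path to wherever your session writes its files):

```sql
-- Cast each event_data row to XML, then shred a couple of values out of it.
SELECT  xed.event_xml.value('(event/@name)[1]', 'varchar(128)')      AS event_name,
        xed.event_xml.value('(event/@timestamp)[1]', 'datetime2(3)') AS event_time,
        xed.event_xml
FROM (
        SELECT CAST(event_data AS XML) AS event_xml
        FROM sys.fn_xe_file_target_read_file(N'C:\Database\XEs\FileTargetDemo*.xel',
                                             NULL, NULL, NULL)
     ) AS xed;
```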

And you definitely do not want to try the following with a file target. That is unless of course you are looking to go bald at an earlier than expected age in this profession.
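Here is the sort of thing I mean – querying the session target DMV as if the file target were a ring_buffer:

```sql
-- This only returns the target metadata (buffer counts and the file name),
-- not the captured events.
SELECT  CAST(t.target_data AS XML) AS target_data
FROM    sys.dm_xe_sessions AS s
        INNER JOIN sys.dm_xe_session_targets AS t
            ON s.address = t.event_session_address
WHERE   s.name = 'FileTargetDemo'
        AND t.target_name = 'event_file';
```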

If you find yourself doing this version, all you will get for results is a short xml string similar to the following:

This is not what you would be looking for when trying to read the event session data. That said, it at least provides some information that could be of use. You will see that buffers are tracked and the file name is tracked. For this last demo, I corrected the FileTargetDemo session to use a directory on the file system that existed and to which I had the appropriate level of permissions.

I just covered two quick tips to help make life a little bit easier when dealing with file targets and extended event sessions. I will try to continue little tips like this over the coming months. I hope you have found this useful and that you will indeed be able to put it to use.

 

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in an article demonstrating the power of XE over Profiler? Check this one out!

This has been the third article in the 2018 “12 Days of Christmas” series. For a full listing of the articles, visit this page.

Upgrading From SQL Server Profiler

I just shared an article describing how to use Extended Events to perform your SQL Server Profiler duties. If you have not read it yet, you can read all about it here.

In that article, I showed one common use-case for SQL Server Profiler and how to achieve the same result via Extended Events. What I didn’t show in that article was how to correlate all of your favorite trace/profiler options into the shiny new XE equivalents. Is there even a way to do that?

As luck would have it, there is a means to correlate trace/profiler events to Extended Event events (yeah that sounds weird).  With the release of SQL Server 2012, Microsoft introduced a couple of catalog views to help with this correlation effort. Those views have a pretty straightforward naming convention. For example, one of the views is named sys.trace_xe_event_map while the other is sys.trace_xe_action_map. For this article, I will be focusing on the former.

Making the Upgrade

When looking to finally make the upgrade away from trace/profiler, a big key is to figure out if your favorite trace events are even an option at the next level. I have talked about finding events in XE in previous articles by trying to search for a specific topic that might apply to your current condition.

But if you are already familiar with specific trace events, you may just want/need to know what the new name is in XE. This is where those catalog views come into play. And in support of that, here is a query that can help in that upgrade effort:
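Here is a sketch of that query; the LEFT JOIN keeps the trace events that have no XE counterpart visible in the results:

```sql
-- Map each trace/Profiler event to its Extended Events equivalent (where one exists).
SELECT  te.trace_event_id,
        te.name          AS trace_event_name,
        txm.package_name AS xe_package_name,
        txm.xe_event_name
FROM    sys.trace_events AS te
        LEFT JOIN sys.trace_xe_event_map AS txm
            ON te.trace_event_id = txm.trace_event_id
ORDER BY te.name;
```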

Looking through the results, one would notice that not every trace event maps to an Extended Event event. The events that don’t map, in general, deal with audits (which are actually driven by Extended Events).

This is a really good start to getting you on your way to that much needed upgrade away from profiler/trace. What if there are several traces that are already in use or scripted for various reasons in the environment?

Decision Time

If you happen to have traces that are already deployed in the environment or in script form, it can be a bit of a tedious pain to convert those to Extended Events. Do you manually recreate the traces as XE sessions? Do you abandon the upgrade due to the effort and annoyance it will create? Or do you find some automated means of performing the conversion?

Of those three options, only two are valid options. Those options involve performing the conversion of the traces to XE sessions. There are pros and cons for each. You may opt to take your time and learn more about Extended Events by performing the manual upgrade, or you may choose to save time by using an automated routine.

Should you decide to try the automated routine, there is one already out and available to help you on your way. Jonathan Kehayias wrote the script and you can download it here.

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in another article demonstrating the power of XE over Profiler? Check this one out!

This has been the second article in the 2018 “12 Days of Christmas” series. For a full listing of the articles, visit this page.

How To: XEvents as Profiler

A common excuse for not delving into extended events seems to be the jump from Profiler to extended events.  There appears to be an inherent fear with how to use extended events.  In addition to that fear is the fear that extended events does not do what Profiler can do so easily.

Today, I would like to share a short tutorial on how to use extended events as if it were a profiling session.  I am only going to show one of the many possible means to profile via extended events.  Just understand that this means you have flexibility and a way to do more with this simple example than what I am going to share.

Requirement

You need to capture TSQL queries that are occurring against your instance in order to try and determine the code that is being passed to SQL Server from the application.

How-To

Rather than jump to Profiler, you are going to need to jump to a fresh, clean query window in SSMS.  Just like in Profiler, you will want to capture certain “statement” oriented events.  For this session, I want to start with sp_statement_starting and sql_statement_starting.  I am not going to use the _completed forms of those events because I want to capture as much as I can – without capturing too much noise.  Sometimes, there may be a _starting without a coordinated _completed and that could throw a wrench in the works.

With the basic information in hand, we are ready to implement some of the things learned in previous XEvents posts (here and here).  Let’s go ahead and start setting up the session that will capture the information we seek.
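Here is a sketch of such a session. The session name, file path, and the predicates are illustrative assumptions (filter on whatever database or application makes sense in your environment); the collect_object_name set option is the piece called out below.

```sql
CREATE EVENT SESSION TSQLQueryCapture
ON SERVER
ADD EVENT sqlserver.sp_statement_starting
(
    SET collect_object_name = 1
    ACTION (sqlserver.client_app_name, sqlserver.client_hostname,
            sqlserver.database_id, sqlserver.sql_text)
    WHERE sqlserver.database_name = N'AdventureWorks2014'   -- assumed demo database
),
ADD EVENT sqlserver.sql_statement_starting
(
    ACTION (sqlserver.client_app_name, sqlserver.client_hostname,
            sqlserver.database_id, sqlserver.sql_text)
    WHERE sqlserver.client_app_name <> N'Microsoft SQL Server Management Studio'
)
ADD TARGET package0.event_file
    (SET filename = N'C:\Database\XE\TSQLQueryCapture.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
GO

ALTER EVENT SESSION TSQLQueryCapture ON SERVER STATE = START;
```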

One caveat to consider: in the sp_statement_starting event, I have specified that a specific set option be used.

This particular option is disabled by default.  If you want to know the object_name of code that is being used (triggers, stored procedures, etc), then this option must be enabled.

I have specified the same actions to be used for each of the events I am trapping, but this is purely up to you.  If you do not wish to capture these actions, then you can remove them or add different actions as necessary.  I have also specified a different predicate for each of the events.  This is something that works so much better in XEvents than Profiler – the filtering that can be applied is much more flexible.

Once the session is started, then it is merely a matter of figuring out how to look at the data.  I generally use TSQL to parse the data from the file, but you could also fiddle with the GUI as well.  You can access that by right-clicking the event session in management studio and then selecting “Watch Live Data”.  If everything is configured properly, then you will start to see data after the first event is triggered.  From within the GUI, you can pick and choose which columns to display in your view.  You can pause the display of the session data, or you can even “erase” the data from the current view (just like in Profiler).  You can even filter, aggregate or group the data in the view from the GUI.

The use of the GUI is up to you.  I still prefer to use the script route.  With that route, here is a sample of what you may need to write in order to display the data from the session data file that has been captured.
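A sketch of reading that session file back with TSQL (the same general pattern works for any event_file target; the path and session name follow the assumptions above):

```sql
-- Shred the captured statements out of the session's event file.
SELECT  xed.event_xml.value('(event/@name)[1]', 'varchar(128)')      AS event_name,
        xed.event_xml.value('(event/@timestamp)[1]', 'datetime2(3)') AS event_time,
        xed.event_xml.value('(event/action[@name="sql_text"]/value)[1]',
                            'nvarchar(max)')                         AS sql_text
FROM (
        SELECT CAST(event_data AS XML) AS event_xml
        FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\TSQLQueryCapture*.xel',
                                             NULL, NULL, NULL)
     ) AS xed;
```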

It really is that simple.  Now you have a means to run a Profiler-like trace on your server without the impact of Profiler.  XEvents provides the means to run a more evolved Profiler session on your data to capture things like queries from an application.  XEvents does all of this without the severe penalty of Profiler and with many additional bonuses.  Try it out and enjoy!

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in another article demonstrating the power of XE over Profiler? Check this one out!

This is the first article in the 2018 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Quick and Easy XE for Azure DB

The Cloud

It has been a minute since I gave much love or attention to Extended Events in Azure SQL DB. Things have changed a touch since then. We will see about some of those changes in a future article. As for this time, let’s focus on how to get a session in Azure SQL DB up and running really quick and easy.

Create a Session

I am going to keep this as easy as possible for creating a session and thus I will demonstrate how to do it from the GUI. And for those more in tune with their scripting side, there is a future article on that as well as a glimpse of a script near the end of this article.

Recall from the previous article that things in Azure SQL DB are different for Extended Events. XE is database scoped rather than server scoped (as is the case with your on-premises servers). Due to this change, finding the GUI for XE is a little different.

In order to find the GUI for XE, you must drill down into the database and then you will see “Extended Events.” (Side note, there is no XE Profiler for Azure SQL DB as of this writing.) If you right-click Sessions, you will get a menu with the option to create a New Session.

After clicking “New Session…” the familiar window for a new session will appear.

Here, you see a session that I started creating already. For this session, I opted to go with the template called “Query Detail Tracking” (more on templates for Azure SQL DB in a future article). I also like to enable “Causality tracking” so that checkbox is ticked in this example. The template has the events I wish to capture already and I am only going to use the ring buffer (file target has some additional requirements for an Azure SQL DB session and should be treated in an entirely separate article) so I will skip those screens straight to the advanced screen before completing this session.

From the advanced screen, all I want to do for now is to decrease the dispatch latency from the default 30 seconds down to 10 seconds. For this session, I just want to capture the events quickly in the target and be able to review them as soon as possible. I won’t be able to utilize the “Watch Live data” option for this session (as shown below), so a quick dispatch is essential.

After I have selected all of the configurations I desire for this session, then I click the script button at the top of the window. From there, I will select “New Query Editor Window.” Once scripted, I will receive the following script for this particular session.
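The script comes out looking something like the following. Treat this as a sketch: the event list is what I recall the “Query Detail Tracking” template including, and the session name is simply whatever was typed into the dialog.

```sql
CREATE EVENT SESSION [QueryDetailTracking]
ON DATABASE
ADD EVENT sqlserver.error_reported
    (ACTION (sqlserver.sql_text)),
ADD EVENT sqlserver.rpc_completed
    (ACTION (sqlserver.sql_text)),
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.sql_text)),
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text))
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 10 SECONDS, TRACK_CAUSALITY = ON);
```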

Notice that there is nothing in the script to start the session. If I want the session capturing events, I need to start it explicitly, as shown below.
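Starting it looks like this (using the session name assumed above):

```sql
ALTER EVENT SESSION [QueryDetailTracking]
ON DATABASE
STATE = START;
```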

Note that “on database” has been specified instead of the traditional “on server” for the session state command. In order to stop the session, it is just as easy as follows.
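Again assuming the session name from the sketch above:

```sql
ALTER EVENT SESSION [QueryDetailTracking]
ON DATABASE
STATE = STOP;
```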

Conclusion

This has been a very simple introduction into the creation of an Extended Event session using a template for Azure SQL DB. I demonstrated the use of the GUI to configure the session quickly and then to subsequently script that configuration before creating the session.

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in seeing the power of XE over Profiler? Check this one out!

This is a quick pre-cursor to the 2018 “12 Days of Christmas” series. The series will begin on December 25th (the first day of Christmas) and you can easily follow it via this page.

File Maintenance – Cleaning Up Old Files

Published on: December 21, 2018

Using SSIS to Maintain the File System

We have all run into a need or a desire to clean up old stale files from the file system, whether it be to remove old backup files or flat files that are created from one process or another.  And based on this need/desire, we have all come up with a method to help achieve that goal.

Some of the methods might be to include a flag in a maintenance plan that may be used.  Other methods may be to use a SQL script employing xp_cmdshell and delete statements.  Yet another may utilize the sp_oa stored procs and DMO.  And still others may have ventured into powershell to accomplish the same task.  The point is, there are many methods.

I am adding yet another method to the mix.  Why?  I didn’t much like the option of using the sp_oa method or the xp_cmdshell route.  I am very novice with powershell and it would take a lot more tinkering to get the script working properly.  Also, I felt pretty comfortable with SSIS and had approval to try and get this done using that method.  And just because I am a novice with powershell does not mean that I will not circle back around to try and accomplish this task via that means.

Note: This article was originally written in 2011 and got stuck in an unpublished state. Things have changed since then so I will definitely be circling back around for a powershell version.

Requirements

The method employed needs to be able to do the following:

  1. Remove multiple file types
  2. Be configurable
  3. Clean out files from numerous directories
  4. Remove files older than a specified number of days.

Setup

The solution I chose utilizes SSIS.  It also requires that there be a table in a database that helps to drive the package.

The table looks like the following.
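A sketch of that table (I have assumed column names that line up with the description that follows):

```sql
-- Driver table for the package: one row per directory to maintain.
CREATE TABLE dbo.FilePaths
(
    FilePathID  INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    FilePath    VARCHAR(256)       NOT NULL,
    [Process]   VARCHAR(50)        NOT NULL
);
```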

The filepath column holds the FileSystem Path for each directory that needs to be cleaned.  Paths that are supported are local (e.g., C:\temp) and UNC paths (\\machine\c$\temp).  I set this attribute to a length of 256, but if you have a longer path, you will want to adjust the length.

The Process column will hold a value describing what that path relates to, such as MaintainDirectory.  In my example, I am using MaintainDirectory to control which directories hold files that potentially need to be deleted.

Here is an example of the contents of that table I am using currently.

The last piece of the setup before we start working on the SSIS package is the need for a string splitting function.  Pick the string splitter of your liking.  I have one that I like and am sure you have one that you prefer.  The SSIS package relies on the return field from the splitter being named “Item.”  If it is named something else, please make the adjustments in the package as necessary.

The Package

The package I created has been created in SSIS 2008.  To meet the requirements already set forth, I utilized the following objects: ADO.Net Data Source, 2 Execute SQL Tasks, 2 ForEach Loop Containers, a Script Task, and 8 variables.  Let’s take a look at these starting with the variables.

Variables

  • SQLServerName – The value held here is used in an Expression for the Data Source.  This will overwrite the ServerName value in the Data Source.
  • DatabaseName – Used alongside the SQLServerName variable in an Expression for the Data Source.  This value will overwrite the InitialCatalog value in the Data Source.  This should be the name of the database where the FilePaths table and String Split function exist.
  • DaysToKeep – This value is the cutoff point for which files to keep and which files will be deleted.  This variable is used as a ReadOnly variable in the Script Task.
  • obj_FileExtension – This object variable is used to store the result set from one of the Execute SQL tasks and the results of the string split function from the FileExtensionList variable.
  • FileExtensionList – This is a delimited list of file extensions that need to be evaluated for deletion.  It is important to note that the file extensions that are to be processed are case sensitive.  The extension must appear in this list as it appears in the file system.
  • FileExtension – to be used in one of the ForEach loops.  This variable will receive the FileExtension from the obj_FileExtension variable one at a time.
  • obj_ListOfDirectories – This variable will receive the result set of an Execute SQL Task to be later consumed by one of the ForEach loops.
  • DirectoryToMaintain – receives one at a time the Directory to process for file deletion.  The ForEach loop stores a value from obj_ListOfDirectories in this variable for processing.

Execute SQL Tasks

The two Execute SQL Tasks are simple in function.  One is to get the list of directories to maintain from the FilePaths table.  The other is strictly to split the string for the FileExtensionList variable.

The first is named “Get Directory List” and should receive the Full Result Set from the following query.
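Something like this (assuming the table sketched earlier):

```sql
-- Return every directory flagged for maintenance.
SELECT  FilePath
FROM    dbo.FilePaths
WHERE   [Process] = 'MaintainDirectory';
```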

The Result Set tab of this task also needs to be modified.  It should look like this.

From this task, we flow to the next Execute SQL Task named “Split FileList.”  The setup of this task is very much like the previous task.  We want to receive the full result set.  We have a configuration to make on the result set tab.  We also need to map a parameter.  Let’s take a look at those real quick.

Parameter Mapping

Result Set

And this is the query that we will be executing.
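A sketch of that statement; dbo.fn_Split is a stand-in for whatever splitter you chose, and @FileExtensionList is the parameter mapped on the previous tab:

```sql
-- Split the delimited extension list into one row per extension
-- (the returned column must be named Item).
SELECT  Item
FROM    dbo.fn_Split(@FileExtensionList, ',');
```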

Notice that the Parameter we named in the Parameter Mapping tab is being used in the function call.  I chose this method because I could see and understand how it works better.

ForEach Loops

The next stop in the flow is the ForEach Loop – Directory object.  As the name implies, this ForEach Loop is designed to work with the obj_ListOfDirectories variable/array.

With this first Loop container, we have two tabs that need to be configured in the properties.  Both Loop containers are similar in that they need the same tabs to be configured.  First, let’s talk about the Collection tab.

On the Collection tab, we need to set the Enumerator option to “ForEach ADO Enumerator.”  Then we need to select the obj_ListOfDirectories from the drop down labeled “ADO Source Object Variable.”  Your screen should look like the following image.

With this tab configured, we can focus our attention to the quick changes that need to be made on the Variable Mappings tab.  On this tab, we are telling the enumerator how to handle the data from the object variable.  We are mapping columns from the result set to variables for further consumption.  When configured, it should look like the following.

Inside of this ForEach loop container, we have another ForEach loop container.  This second ForEach loop container handles the file extensions that we listed out in delimited fashion in the FileExtensionList variable.  I have called this container “ForEach Loop – FileExtension” (just keeping it simple).

The collection tab follows the same configuration setup.  The difference of course being that this container will use the obj_FileExtension object from the source variable dropdown menu.

The variable mapping tab is also slightly different.  We will be mapping column 0 of the object to the FileExtension variable.  The explanation for the different number between the two loop container variable mappings is simple.  In obj_ListOfDirectories, we have multiple columns being returned.  In obj_FileExtension, we have but one single column being returned.

This inner Loop container will loop through each of the extensions for each of the directories that have been returned to the outer loop container.  The inner loop container has the remainder of the workload in it via the Script Task.

Script Task

It is via the script task that we actually get to start deleting files.  This was the most difficult piece of the entire package – though the script is not very large.

For the script task, I chose to implement it via the Visual Basic option (instead of C#).  I have three ReadOnlyVariables employed by the script.  Those variables are: User::DaysToKeep,User::DirectoryToMaintain, and User::FileExtension.

Once you have set those on the script tab, the next step is to click the Edit Script… button where we need to place the following script.

An important note of interest is the need for the Try…Catch.  Without this block as it is, you could run into an issue where the file (such as those pesky temp files) may be in use by some process and cause the package to error.  The Try…catch will move past that nasty error and delete the files that it can.

Inside this script, you will see that I am comparing the LastWriteTime to the PurgeDays and ensuring that the file extension matches one that is in the list.  Then we move into the try…catch and either delete the file that matches those criteria or throw an exception and move on to the next file.

When all is said and done, your package should look something like this.

You should also have a variable list that looks like this.

Each variable that is not an Object has a value assigned to it at this point.  These values will be overwritten where applicable.

Next Steps

Having this package is a good start.  But unless you are prepared to manually run this on a daily basis, it needs to be added to a job and scheduled.  There are two ways to go about scheduling this package.

The first option is to configure the FileExtensionList and DaysToKeep variables and save the package with those values.  Then run this package through SQL Agent with those values every time.  The drawback to this method is that if you need to add or remove a file extension (as an example) then you need to edit the package and re-save it.

The alternative option is to pass the values through the job to overwrite those variables as the job runs.  Should you need to remove or add a file extension, it would just be done at the job definition level.

Let’s take a look at this second option.  I will skip past how to create the job as an SSIS job in SQL Server and we will look directly how to modify those variables from the job properties.

To configure these variables directly from the SQL Agent job, open the Job properties and click on the Set Values tab (assuming you have defined this job as an SSIS Job type).  You should get a screen similar to this (void of the set values shown in the pic).  Just add the parameters (variables) we have discussed to this point with appropriate values to fit your needs/environment.

I have chosen to only include the four variables shown above since the remaining variables are either objects or get overwritten in the ForEach loops during processing.  The only thing remaining now is to set the schedule for the job.  Once set, the job (and package) will take care of the rest.

Conclusion

I have now shown you how to maintain some of the directories on your system through the use of SSIS and SQL server.  There are many methods to accomplish this goal, it is up to each of us to choose the best method for our environment and comfort level (by means of supporting the chosen solution).

If you would like to read more interesting stuff concerning SSIS, you might want to check any of these articles: lost password, expected range errors, and synonyms extending SSIS.

Encrypting a Database Never Finishes

Categories: News, Professional, Security, SSC
Published on: December 20, 2018

There is plenty of legislation and regulation in place these days that strongly suggest the encryption of data within a database.  In SQL Server, we have the ability to comply with these regulations in a couple of different ways.  We can choose to encrypt at the column level or we can choose to encrypt the entire database using TDE (transparent data encryption).

Given these opportunities and methods of encrypting a database and protecting data, some clients will want to comply with the regulation and move forward with encryption.  Every now and again, the client might run into something they can enable themselves.  TDE happens to be one of those things.  Enabling TDE is pretty easy.  Sometimes things run very smoothly and sometimes there is a problem.  Unfortunately, the problem may not always be immediately evident.

This is one of those times.  If you enable a setting in a database, you would expect it to be in effect and to be working.  Sometimes, with TDE, that change doesn’t take effect. What actually happens is the switch is thrown, the database tries to comply, but something prevents the database from completing the directive it was just given.  As luck would have it, I had the opportunity to assist recently with such a case.

Infinitely Encrypting a Database

Before diving too far down into the problem, let’s take a look at how we can gauge and measure the encryption progress.  Microsoft has provided a dynamic management view to see and track the status and progress of encryption that we have tried to enable.  This view is sys.dm_database_encryption_keys.  Within that view there are a few important and useful fields.  One field is the encryption thumbprint, another is the encryption state, and another tells us the percent that has been completed.

With a little tsql from the toolbelt, we can pull back the essential information to monitor the encryption progress.
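Here is a sketch of that status query, with the documented encryption_state values decoded inline:

```sql
-- Monitor TDE progress for every database that has an encryption key.
SELECT  DB_NAME(dek.database_id) AS database_name,
        dek.encryption_state,
        CASE dek.encryption_state
            WHEN 0 THEN 'No database encryption key present, no encryption'
            WHEN 1 THEN 'Unencrypted'
            WHEN 2 THEN 'Encryption in progress'
            WHEN 3 THEN 'Encrypted'
            WHEN 4 THEN 'Key change in progress'
            WHEN 5 THEN 'Decryption in progress'
            WHEN 6 THEN 'Protection change in progress'
        END                      AS encryption_state_desc,
        dek.percent_complete,
        dek.encryptor_thumbprint
FROM    sys.dm_database_encryption_keys AS dek;
```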

You’ll notice that I have a case statement in there to transform the encryption_state values into a human friendly form.  Sadly, the human friendly values are not stored in the database and are only documented in MSDN / BOL.

Now on to the problem at hand—the encryption that starts but never ends.  Given that I have a database named “Published”, I can enable encryption with the following script.
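A sketch of enabling TDE on that database (the password and certificate name are illustrative; use your own values and back the certificate up somewhere safe):

```sql
USE master;
GO
-- The master key and certificate only need to be created once per instance.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Use_A_Str0ng_Passphrase_Here!';
CREATE CERTIFICATE TDEDemoCert WITH SUBJECT = 'TDE certificate for Published';
GO
USE Published;
GO
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDEDemoCert;
GO
ALTER DATABASE Published SET ENCRYPTION ON;
```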

After issuing the encryption command, I can then pull up my “status” script and see how things are rolling.

In this example, and for the sake of this article, this database will remain at 0% complete for days and months if allowed to remain that way.  Notice that it still thinks it is progressing, but nothing is actually getting done.  This kind of result can also be seen after a database reaches 99% complete.  At some point, something occurred that caused grief for the encrypting process.  As soon as that happens, the report will revert back to 0% complete and just stall indefinitely.  The types of things that can cause this vary from corruption to index rebuilds occurring at the same time as the attempt at encryption.

When a stall such as this occurs, the fix could be as simple as restarting the encryption with the statement previously used.  Here that is again (bearing in mind to substitute the appropriate database name for the one I used in this demo):
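That statement, once more:

```sql
ALTER DATABASE Published SET ENCRYPTION ON;
```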

In some cases, that won’t work.  And sometimes you just may want to proceed very cautiously.  In those events, you can get your database back on track to being encrypted by using the following:
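The approach sketched below relies on trace flag 5004, which (as I understand it) pauses the TDE encryption scan so that it can then be restarted cleanly; test carefully before using it in production.

```sql
-- Pause the (stalled) encryption scan...
DBCC TRACEON (5004);
GO
-- ...turn the trace flag back off...
DBCC TRACEOFF (5004);
GO
-- ...and kick the encryption scan off again.
ALTER DATABASE Published SET ENCRYPTION ON;
```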

The use of this trace flag forces the page scan, used to encrypt the database, to stop (even though it already appears to have stopped).

If I check my encryption status again, I should see the percentages growing until 100% is achieved and eventually the state that declares the database as being encrypted.  This is demonstrated in the following three screenshots.

And finally we have an encrypted database.

Conclusion

Transparent Data Encryption is useful in helping get a database into compliance with several regulations.  It is also fairly easy to implement.  Occasionally, it may only appear to have completed successfully.  When the SQL statement completes, TDE is not entirely in place.  Though it is “enabled”, it may still be waiting for the encryption phase to complete.  Once the statement to encrypt the database has been issued, it is important to follow up with a bit of monitoring to ensure the database progresses to the encrypted state.  If that doesn’t happen, you could end up in a state where you think the database is encrypted and it actually just stalled.

If you enjoy reading about security and encryption, check out these articles related to security and audits.

T-SQL Tuesday Participation Over the Years

Published on: December 19, 2018

Loads of TSQL Tuesdays


Years ago the TSQL Tuesday party was started by Adam Machanic (b|t). The premise of the monthly event is to give people a topic each month, to ease some of the difficulty with writing (figuring out what to write) and to get them more involved in the community. The party started in December 2009 and is now embarking on its 10th year. The party is running strong and will likely continue for many years to come.

I have personally participated in quite a large number of these events. I am far from having a perfect participation record like Rob Farley (b | t), but I do participate when I can. Every now and again I think about what I might have participated in or what I have written in the past for a TSQL Tuesday. This post will serve as a table of contents for all of those articles. This is to help make it easier to find for me (as well as give me a chance to review and fix problems from really old posts). This post will provide a quick synopsis of my article (if one exists) along with the topic and links to the original invite and roundups for the monthly party.

T-SQL Tuesday #001: Date/Time Tricks (Dec. 2009)

Invite and roundup

My Article: N/A

T-SQL Tuesday #002: A Puzzling Situation (January 2010)

Invite and roundup

My Article: TSQL Tuesday – But I was late

This was the first time I participated in TSQL Tuesday. I was very new to the blogging concept – and it shows. The puzzling problem I encountered in the article was a fun little job that kept giving different results than running the same code from SSMS. Check it out!

T-SQL Tuesday #003: RELATIONSHIPS (February 2010)

Invitation and summary

My Article: Relationships

I covered all sorts of different relationships that affect SQL Server from work, to marital, to table relationships in a database. It all ties together in requiring an individual to constantly tune the different types of relationships.

T-SQL Tuesday #004: IO (March 2010)

Invitation and roundup.

My Article: IO IO IO

What if we could create a numbers table without IO?  What if we could perform several different kinds of queries without IO?  Itzik Ben Gan proposed a cascading CTE solution that does this sort of thing. This article shows that method in use to help improve certain IO conditions.

T-SQL Tuesday #005: Reporting (April 2010)

Invitation and roundup.

My Article: IP and Default Trace…T-SQL Tuesday #005

Having been introduced to a requirement to report on IP addresses of connections, I dive into a solution that will help show hostname, ip address and a few more tidbits.

When it is necessary to provide reports on activity occurring on the server, it pays to do a little prep work.  Be Prepared.  It is not an easy task to be able to go back in time and report on data that isn’t captured.  The little prep work that one may need to do is well worth the effort in the end.

My 2nd Article: BLOB Report T-SQL Tuesday #005 – Reporting

Learning how to decrypt the individual object sizes broken down into type and filegroup. Then report on the data that was retrieved from the DMOs.

T-SQL Tuesday #006: “What About BLOB?” (May 2010)

Invitation and roundup.

My Article: T-SQL Tuesday #006: A Blobbing We Will Go

Did you know that in the DMVs for SQL 2005 and SQL 2008 there is more than one place you can find information about the size of your LOB data?  And it goes without saying that there is more than one way to find information about LOBs in your database.

T-SQL Tuesday #007 – Summertime in the SQL (June 2010)

Invitation and roundup.

My Article: Data Compression

I see database compression, as offered with SQL 2008, to be more like these file compression utilities than DriveSpace.  Data compression in SQL 2008 is not an all or none implementation.  You get to pick and choose what gets compressed.  That is a big time bonus for me.

T-SQL Tuesday #008: Gettin’ Schooled

Invitation and roundup.

My Article: Gettin’ Skewled

I am learning that learning is not just formalized education in a classroom or in specific settings.  There are things to be learned from all aspects of life.  This can be learned if only a little observation is used.

T-SQL Tuesday #009: Beach Time (August 2010)

Invitation and roundup.

I hosted this particular event.

My Article: R & R

I find it completely useless to go on vacation if I am going to be checking email or project statuses every 10 minutes.  There is no rest or relaxation in doing those things while I am supposed to be doing something else.  Vacation should be fun and enjoyable.  Thus, if I am to enjoy vacation, I need to do a few extra things in the office prior to leaving.

T-SQL Tuesday #010 – Indexes (September 2010)

Invitation and roundup.

My Article: TSQL Tuesday Indexes and Blobs

How does one find which columns in the database are LOB columns?  I knew I had some past blog posts about various aspects of LOBs, but I had never broken it down to find the columns in the LOB.  Even better, I wanted to know which of those BLOB columns were in which indexes.

T-SQL Tuesday #011 – Misconceptions in SQL Server (October 2010)

Invitation and roundup.

My Article: A Haunting TSQL Tuesday Tale

I chose the myth that truncate is unrecoverable. Check it out and see how a truncate is most certainly capable of a rollback or being recoverable.

T-SQL Tuesday #012 – Why are DBA skills necessary? (November 2010)

Invitation and roundup.

My Article: T-SQL Tuesday #012 – Skills

As a DBA, we occasionally have the opportunity of attracting a new client or a new job or a new database.  It seems that more often than not, some of the skills requisite (at least they should be) of owning a database are missing. (Nunchuku skills could come in handy from time to time too!)

T-SQL Tuesday #13 – What the Business Says Is Not What the Business Wants (December 2010)

Invitation and roundup.

My Article: T-SQL Tuesday #13 – Business Requirements

I think a common area that is easily overlooked when it comes to requirements and interpretation of requirements is report creation.  A common problem is that there are no defined or written requirements for the creation of a report.

T-SQL Tuesday #014 – RESOLUTIONS (January 2011)

Invitation and roundup.

My Article: TSQL Tuesday 14: Committed

This month was all about resolutions and goals. My list of goals was pretty good and entirely profession-based.

T-SQL Tuesday #015 – Automation in SQL Server (February 2011)

Invitation and roundup.

My Article: T-SQL Tuesday #15 DBA Automaton

I shared a script that will load a sample of data from every table in every database for every column and give you the length of the pertinent columns (I have excluded obvious columns such as numeric types and certain LOB types).

T-SQL Tuesday #016 – Aggregate Functions (March 2011)

Invitation and roundup.

My Article: T-SQL Tuesday #016: Aggregates and Statistics

Super geeky article on stats, quartiles and the like. For my data analysis and trending, I wanted to find a simple distribution across quartiles.

T-SQL Tuesday #017 – APPLY Knowledge (April 2011)

Invitation and roundup – unavailable on origin site.

My Article: T-SQL Tuesday #17 – APPLY Knowledge

In this article I take an unexpected twist in my application of the APPLY operator. In that twist, I show a quick and simple script to grab role members.

T-SQL Tuesday #018 – CTEs (May 2011)

Invitation and roundup.

My Article: T-SQL Tuesday #18 – CTEs

I shared a rather complex series of CTEs that are used in the same way a hierarchy and ancestry tree would be generated – except for Foreign Key relationships. That script was a load of fun.

T-SQL Tuesday #019 – Disasters and Recovery (June 2011)

Invitation and roundup.

My Article: T-SQL Tuesday #19 – Disasters & Recovery

Who needs a single disaster when you can enjoy multiple disasters in a single sitting? I share three different types of disasters. I am sure many have had experiences with all three and probably many more types of disasters.

T-SQL Tuesday #020 – T-SQL Best Practices (July 2011)

Invitation and roundup.

My Article: N/A

I missed the announcement or something like that.

T-SQL Tuesday #021 – A Day Late and Totally Full Of It. (Aug. 2011)

Invitation and roundup.

My Article: TSQL Tuesday 21 – FAIL FAIL FAIL

I don’t hide much as I share my feelings about crap code and then show my own crap code followed by a little bit of explanation on an improved version of the same code.

T-SQL Tuesday #022 – Data Presentation (September 2011)

Invitation and roundup

My Article: T-SQL Tuesday #22 – Data Presentation

I chose to touch on several aspects of data presentation – Performance, Accuracy, Display, and Business Requirements.

T-SQL Tuesday #023 – Joins (October 2011)

Invitation and roundup.

My Article: N/A

D’oh – missed another one in 2011.

T-SQL Tuesday #024 – Prox ‘n’ Funx (November 2011)

Invitation and roundup.

My Article: T-SQL Tuesday #024: Prox ‘n’ Funx

Big takeaway from this month was the value of a very good string splitter.

T-SQL Tuesday #025 – Invitation to Share Your Tricks (Dec. 2011)

Invitation and roundup.

My Article: T-SQL Tuesday #025 – Holiday Gifts

In the spirit of the Holidays, I wanted to share some tricks and tips.  They can be my gifts to you during the holidays.  And maybe they can help you give to somebody else.

T-SQL Tuesday #026 – Second Chances (January 2012)

Invitation and roundup. (Original site toasted).

My Article: TSQL Tuesday #26 or #23 – Identity Crisis

Having missed T-SQL Tuesday #23, and with the topic being about second chances, I chose to write about JOINs, sort of fulfilling the requirements for both TSQL Tuesdays in one shot.

T-SQL Tuesday #027 – Invitation to The Big Data Valentine’s Edition (February 2012)

Invitation and roundup.

My Article: N/A

After that second chance I blew it the very next month. Luckily I have a good reason – Valentine's Day!

T-SQL Tuesday #028 – Jack of All Trades or Master of None (March 2012)

Invitation and no roundup.

My Article: T-SQL Tuesday #028 – Jack of All Trades, Master of None?

This one brings up bad memories of when as the DBA it was also required to be the janitor.

 

More updates coming soon!

What is T-SQL Tuesday?

T-SQL Tuesday is a monthly blog party hosted by a different blogger each month. This blog party was started by Adam Machanic (blog|twitter). You can take part by posting your own participating post that fits the topic of the month and follows the requirements below. Additionally, if you are interested in hosting a future T-SQL Tuesday, contact Steve Jones via the tsqltuesday website – here.

How to Participate

  • Your post must be published between 00:00 GMT Tuesday and 00:00 GMT Wednesday.
  • Your post must contain the T-SQL Tuesday logo from above and the image should link back to this blog post.
  • Trackbacks should work. But, please do add a link to your post in the comments section below so everyone can see your work.
  • Tweet about your post using the hash tag #TSQL2sDay.

