Database Corruption and IO Errors

A quick way to have your day turned upside down, with your gut twisted in knots of nerves and anxiety, is to come in one morning to find that users are panicked, applications are not working, and the HelpDesk team is curled up in the fetal position in the corner. Why? The sky is falling and everybody thinks the database has blown up.

Calmly, you settle in and check the server and eventually find your way to the error logs to see the following:

Msg 823, Level 24, State 2, Line 1

The operating system returned error 1(Incorrect function.) to SQL Server during a read at offset 0x0000104c05e000 in file ‘E:\Database\myproddb.mdf’. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

Suddenly you understand and feel the collective fear and paranoia. What do you do now that the world has seemingly come to an end for your database?

Corruption

What exactly does this error message mean? Well, typically, an 823 error is a very strong indicator that there is some sort of problem with the storage system, hardware or driver that is in the path of the I/O request.

Great! That fear is getting a little heavier knowing what the error represents. This doesn’t bode well for the database. Let’s go ahead and crack out the list of what we can do or check when a problem like this hits:

  1. Check msdb.dbo.suspect_pages
  2. Run a consistency check for all databases on the same volume
  3. Check Logs (SQL, Windows, Storage system) to see if there may be additional info (via different errors/warnings) in close proximity to the 823 error.
  4. Check your drivers
  5. Restore the database

This is where your experience, training, and preparedness come in handy. An experienced data professional will be prepared with database backups (including log backups), so there should be no concern here; all of your backups are reporting successful. Yet as you prep to pull the backups (for the past couple of days, just in case), you notice that there are no available backups in your repository. Looking closer at your backup jobs, you discover that the backups completed in mere seconds when they normally take hours for this database.

Now that your heart is racing, forehead is beading up with sweat, gut is sinking and the fear is very palpable – what do you do? Time to step through the rest of the steps and pull out your lucky charms, right?

Querying against suspect_pages, you find the table to be completely empty. You know that checkdb runs regularly but maybe it didn’t run last night. That is easy enough to check with a little query from here. Since a consistency check does not seem to have run (as confirmed by the script) and is the second item on the checklist, let’s go ahead and run it now.
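If you need a starting point, a minimal version of those checks might look like this (using the database name from the error message above):

-- Any bad pages already recorded by the engine?
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;

-- Full consistency check of the affected database
DBCC CHECKDB ('myproddb') WITH NO_INFOMSGS, ALL_ERRORMSGS;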

Msg 0, Level 11, State 0, Line 0

A severe error occurred on the current command.  The results, if any, should be discarded.

Msg 0, Level 20, State 0, Line 0

A severe error occurred on the current command.  The results, if any, should be discarded.

Crud. Blood pressure and nerves are getting a little more frazzled now. Maybe we can cycle through the database and find which table is causing the problem. Let’s try a checktable of every table in the database. Before doing the checktable, one more check against suspect_pages still shows no rows to be found.
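A quick way to cycle through every table is to generate the commands from the catalog views. Here is a sketch; run it in the suspect database, then execute the statements it prints:

-- Build a DBCC CHECKTABLE statement for every user table
SELECT 'DBCC CHECKTABLE (''' + s.name + '.' + t.name + ''') WITH NO_INFOMSGS;'
FROM sys.tables t
INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
ORDER BY s.name, t.name;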

Running the checktable, every table comes up clean except one. That one table produces the same sort of error as the checkdb and just so happens to be the largest and most critical table in the database. Blood pressure is closing in on critical now. We have a corruption issue severe enough that checktable cannot complete, we know we have 823 errors and some sort of IO issue, and we do not have a backup.

Wait…backup. Let’s try to force a backup. We can tell the backup to continue after encountering errors, and maybe that will allow us to move the database to a different server or different spindles to try and recover the data.
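The attempt looks something like this (path and database name from the scenario above):

-- Tolerate damaged pages instead of aborting at the first error
BACKUP DATABASE myproddb
TO DISK = 'E:\SQLBackups\myproddb.bak'
WITH CONTINUE_AFTER_ERROR, INIT, STATS = 10;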

Msg 3202, Level 16, State 2, Line 1

Write on “E:\SQLBackups\myproddb.bak” failed: 1(Incorrect function.)

Msg 3013, Level 16, State 1, Line 1

BACKUP DATABASE is terminating abnormally.

The situation just does not want to get any better at this point. Time for drastic measures – shut down the SQL Server services and try to xcopy the data and log files to a different server and try to re-attach from there. Anything is worth a shot, right?

Error 1: Incorrect Function

Glad to know it is not just SQL Server throwing the errors – sorta. The corruption is ever present and there is nothing more that can be done, right? All hope is lost. Time to fill out the resume and move on to something else, right? Give it one more shot. A spark of insanity hits, and you wonder if a mere query against the clustered index will work; if not, then maybe something against any of the other indexes might spare at least some of the data.

You rub your lucky horseshoe and query the table (yes, the table that checktable aborts on because of corruption). Lo and behold, you get results from this very simple query. How is that possible? On a whim, you drop all the non-clustered indexes and try a fresh backup.
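Before dropping anything, script out the index definitions so they can be recreated later. A sketch for generating the drop statements (the table name here is a stand-in for the corrupt table):

-- Generate DROP statements for every non-clustered index on the suspect table
SELECT 'DROP INDEX ' + QUOTENAME(i.name) + ' ON '
     + QUOTENAME(s.name) + '.' + QUOTENAME(o.name) + ';'
FROM sys.indexes i
INNER JOIN sys.objects o ON o.object_id = i.object_id
INNER JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE o.name = 'MyCriticalTable' -- stand-in; use the corrupt table's name
AND i.type_desc = 'NONCLUSTERED';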

Hallelujah! The backup completes without error. Time to take this backup and restore the database to a completely different server. Then, on the new server, run a consistency check to determine if it is all clear. To your relief, there is absolutely no corruption at this point, so the non-clustered indexes can be recreated (easy to do because you have a script with the index definitions handy).

Wrap

This journey from fire and brimstone and the world ending to the epiphany and then the sweet euphoric feeling of success is not a normal resolution for these types of errors. This sort of thing happened for a client that called trying to get around the 823 errors. We had absolutely no indication of where the corruption was beyond knowing we had failing disks at the time. We got lucky in that the non-clustered indexes in this case happened to be stored on the bad sectors, and dropping those indexes allowed us to recover the database and make it usable.

When everything is breaking, the unconventional thought of dropping indexes (especially without a completed checkdb to tell you which index or which pages are corrupt) may just save your bacon. It saved my client’s bacon!

As a follow-up item, it is important to regularly check the dbccLastKnownGood value for each database. That can be done by following the script in this article.
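For reference, one way to surface that value is via an undocumented command, so treat it as a convenience rather than a contract:

-- dbi_dbccLastKnownGood records the last clean DBCC CHECKDB for the database
DBCC DBINFO ('myproddb') WITH TABLERESULTS;
-- Look for the row whose Field is dbi_dbccLastKnownGood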

SQL Server Configurations – Back to Basics

One thing that SQL Server does very well is come pre-configured in a lot of ways. These pre-configured settings are called defaults. Having default settings is neither a bad thing nor necessarily a good thing.

For me, the defaults lie somewhere in the middle ground and they are kind of just there. You see, having defaults can be good for a horde of people. On the other hand, the default settings can be far from optimal for your specific conditions.

The real key with default settings is to understand what they are and how to get to them. This article is going to go through some of the basics around one group of these defaults. That group of settings will be accessible via the sp_configure system stored procedure. You may already know some of these basics, and that is ok.

I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Some Assembly Required…

Three dreaded words we all love to despise but have learned to deal with over the past several years – some assembly required. More and more we find ourselves needing to assemble our own furniture, bookcases, barbecue grills, and bathroom sinks. Still, we do occasionally want some form of set it and forget it.

The problem with set it and forget it type of settings (or defaults) is as I mentioned – they don’t always work for every environment. We do occasionally need to manually adjust settings for what is optimal for that database, server, and/or environment.

When we fail to reconfigure the defaults, we could end up with a constant firefight that we just don’t ever seem to be able to win.

So how do we find some of these settings that can help us customize our environment for the better (or worse)? Let’s start taking a crack at this cool procedure called sp_configure! Ok, so maybe I oversold that a bit – but there is some coolness to it.

Looking at MSDN for sp_configure, I can see that it is a procedure to display or change global configuration settings for the current server.

If I run sp_configure without any parameters, I will get a complete result set of the configurable options via this procedure. Let’s look at how easy that is:
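-- No parameters: return every option this procedure can configure
EXEC sys.sp_configure;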

Ok, so that was exceptionally easy. I can see that the procedure returns the configurable settings, the minimum and maximum values for each setting, the configured value, and the running value. That is basic information, right? If I want a little more detailed information, guess what? I can query a catalog view to learn even more about the configurations – sys.configurations.
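A query along these lines will do it:

-- sys.configurations adds a description plus dynamic/advanced flags
SELECT name, value, value_in_use, minimum, maximum,
       is_dynamic, is_advanced, description
FROM sys.configurations
ORDER BY name;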

That query will also show me (in addition to what I already know from sp_configure) a description for each setting, whether the setting is dynamic, and whether the setting is an advanced configuration (and thus requires “show advanced options” to be enabled). Pro-tip: the procedure just queries the catalog view anyway, which you can confirm by pulling up the proc text.
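If you want to verify that for yourself:

-- Display the definition of sp_configure and see the sys.configurations query inside
EXEC sp_helptext 'sp_configure';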

Seeing that we have some configurations that are advanced and there is this option called “show advanced options”, let’s play a little bit with how to enable or disable that setting.
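-- Enable advanced options (set the value to 0 to disable again)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;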

With the result (on my system) being:

Configuration option 'show advanced options' changed from 1 to 1. Run the RECONFIGURE statement to install.

We can see there that the configuration had no effect because I already had the setting enabled. Nonetheless, the attempt to change still succeeded. Let’s try it a different way.

I ran a whole bunch of variations there for giggles. Notice how I continue to try different words, or lengths of words, until it finally errors? All of them (except the attempt that failed) have the same net effect: they change the configuration “show advanced options”. This is because all that is required (as portrayed in the failure message) is that the term provided be unique enough to identify a single setting. A sketch of those variations follows.
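Something along these lines (the exact set of abbreviations you try is up to you):

-- Each of these succeeds because the partial term matches exactly one option
EXEC sp_configure 'show advanced options', 1;
EXEC sp_configure 'show advanced', 1;
EXEC sp_configure 'show adv', 1;
RECONFIGURE;

-- This one errors because 'max' matches several configuration options;
-- the duplicate matches are returned so you can pick one
EXEC sp_configure 'max', 0;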

Under the covers, sp_configure matches the supplied name against sys.configurations using “like” with wildcards (sp_helptext will show you the exact predicate). This is what allows us to shortcut the configuration name – as long as we use a unique term. If I select a term that is not unique, the proc will output every configuration option that matches the term I used. From the example I used, I would get this output as duplicates to the term I used.

Ah, I can see the option I need! I can now just copy and paste that option (for the sake of simplicity) into my query and just proceed along my merry way. This is a great shortcut if you can’t remember the exact full config name or if you happen to be really bad at spelling.

The Wrap

In this article I have introduced you to some basics in regards to default server settings and how to quickly see or change those settings. Not every environment is able to rely on set-it-and-forget-it type defaults. Adopting the mentality that “some assembly is required” with your environments is a good approach. At the bare minimum, it will help keep you on top of your configurations. This article will also serve as a decent foundation for some near-future articles. Stay tuned!

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Common Tempdb Trace Flags – Back to Basics

Once in a while I come across something that sounds fun or interesting and decide to dive a little deeper into it. That happened to me recently and caused me to preempt my scheduled post and work on writing up something entirely different. Why? Because this seemed like fun and it is useful.

So what is it I am yammering on about that was fun?

I think we can probably concede that there are some best practices flying around in regards to the configuration of tempdb. One of those best practices is in regards to two trace flags within SQL Server. These trace flags are 1117 and 1118. Here is a little bit of background on the trace flags and what they do.

A caveat I have now for the use of trace flags is that I err on the same side as Kendra (author of the article just mentioned). I don’t generally like to enable trace flags unless it is very warranted for a very specific condition. As Kendra mentions, TF 1117 will impact more than just the tempdb data files. So use that one with caution.

Ancient Artifacts

With the release of SQL Server 2016, these trace flags were rumored to be a thing of the past and hence completely unnecessary. That is partially true. The trace flags are unneeded, and SQL 2016 does have some different behaviors, but does that mean you have to do nothing to get the benefits of these trace flags as implemented in 2016?

As it turns out, these trace flags no longer do what they did in previous editions. SQL Server now pretty much has it baked into the product. Buuuuut, do you have to do anything slightly different to make it work? This was something I came across while reading this post and wanted to double check everything. After all, I was also under the belief that it was automatically enabled. So let’s create a script that checks these things for me.

Holy cannoli, Batman – that is more than a simple script, right? Well, it may be a bit of overkill. I wanted it to work for versions before, after, and including SQL Server 2016 (when these sweeping changes went into effect). You see, I am checking for versions where the TF was required to make the change and also for versions after the change where the TF has no effect. In 2016 and later, these settings are database scoped and the TF is unnecessary.

The database scoped settings can actually be queried in 2016 more specifically with the following query.
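-- 2016+ database-scoped settings that supersede TF 1117/1118 for tempdb
SELECT d.name AS database_name,
       d.is_mixed_page_allocation_on,
       fg.name AS filegroup_name,
       fg.is_autogrow_all_files
FROM sys.databases d
CROSS JOIN tempdb.sys.filegroups fg
WHERE d.name = 'tempdb';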

In this query, I am able to determine if mixed_page_allocations and if is_autogrow_all_files are enabled. These settings can be retrieved from sys.databases and sys.filegroups respectively. If I run this query on a server where the defaults were accepted during the install, I would see something like the following.

You can see here that the default settings on my install show something different than the reported behavior. While autogrow all files is enabled, mixed_page_allocations is disabled. This matches what we would expect from enabling Trace Flags 1117 and 1118 – for the tempdb database at least. If I look at a user database, I will find that mixed page allocation is still disabled by default, but autogrow_all_files is also disabled.

In this case, you may or may not want a user database to have all data files grow at the same time. That is a great change to have implemented in SQL Server with SQL 2016. Should you choose to enable it, you can do so on a database by database basis.
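Enabling it is a quick ALTER DATABASE; for instance (the database name here is a stand-in for your own):

-- Grow all files in the filegroup together, the 2016+ replacement for TF 1117
ALTER DATABASE MyUserDb MODIFY FILEGROUP [PRIMARY] AUTOGROW_ALL_FILES;

-- Mixed extent allocation is likewise database scoped now
-- (OFF mirrors the old TF 1118 behavior)
ALTER DATABASE MyUserDb SET MIXED_PAGE_ALLOCATION OFF;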

As for the trace flags? My query checks to see if maybe you enabled them on your instance or if you don’t have them enabled for the older versions of SQL Server. Then the script generates the appropriate action scripts and allows you to determine if you want to run the generated script or not. And since we are changing trace flags (potentially) I recommend that you also look at this article of mine that discusses how to audit the changing of trace flags. And since that is an XEvent based article, I recommend freshening up on XEvents with this series too!

The Wrap

In this article I have introduced you to some basics in regards to default behaviors and settings in tempdb, along with some best practices. It is advisable to investigate some of these recommendations from time to time and confirm what we are really being told, so we can avoid confusion and misinterpretation.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Changing Default Logs Directory – Back to Basics

Every now and then I find a topic that seems to fit perfectly into the mold of the theme of “Back to Basics”. A couple of years ago, there was a challenge to write a series of posts about basic concepts. Some of my articles in that series can be found here.

Today, my topic to discuss is in regards to altering the default logs directory location. Some may say this is no big deal and you can just use the default location used during install. Fair enough, there may not be massive need to change that location.

Maybe, just maybe, there is an overarching need to change this default. Maybe you have multiple versions of SQL Server in the enterprise and just want a consistent folder to access across all servers so you don’t have to think too much. Or possibly, you want to copy the logs from multiple servers to a common location on a central server and don’t want to have to code for a different directory on each server.

The list of reasons can go on and I am certain I would not be able to list all of the really good reasons to change this particular default. Suffice it to say, there are some really good requirements out there (and probably some really bad ones too) that mandate the changing of the default logs directory to a new standardized location.

Changes

The logs that I am referring to are not the transaction logs for the databases – oh no no no! Rather, I am referring to the error logs, the mini dumps, and the many other logs that may fall into the traditional “logs” folder during the SQL Server install. Let’s take a peek at a default log directory after the install is complete.

I picked a demo server that has a crap load of stuff available (and yeah not so fresh after install) but where the installation placed the logs by default. You can see I have traces, default XE files, some SQL logs, and some dump files. There is plenty going on with this server. A very fresh install would have similar files but not quite as many.

If I want to change the Log directory, it is a pretty easy change but it does require a service restart.

In SQL Server Configuration Manager, navigate to the services and then to “SQL Server Service”. Right click that service and select Properties. From Properties, select the “Startup Parameters” tab. Select the parameter with “-e” and errorlog in the path, modify the path to something more appropriate for your needs, and click the Update button. After doing that, click the OK button and bounce the SQL Service.

After you successfully bounce the service, you can confirm that the error logs have been migrated to the correct folder with a simple check. Note that this change impacts the errorlogs, the default Extended Events logging directory, the default trace directory, the dumps directory and many other things.
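On recent versions, one quick check is:

-- Where is the error log being written now?
SELECT SERVERPROPERTY('ErrorLogFileName') AS ErrorLogPath;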

See how easy that was? Did that move everything over for us? As it turns out, it did not. The old directory will continue to receive the SQL Agent logs. We can see this with a check of the Agent log properties like the following.

To change this, I can execute a simple stored procedure in the msdb database and then bounce the SQL Agent service.
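The procedure in question is undocumented, so the usual caveats apply; the path below is a stand-in for your own new directory:

-- Point the SQL Agent error log at the new location (undocumented procedure)
EXEC msdb.dbo.sp_set_sqlagent_properties
    @errorlog_file = N'E:\SQLLogs\SQLAGENT.OUT';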

The agent logs will then be written to the new directory, which can be verified after the agent service restart as shown here.

At this point, all that will be left in the previous folder will be the files that were written prior to the folder changes and the service restarts.

The Wrap

In this article I have introduced you to an easy method to move the logs for SQL Server and the SQL Server Agent to a custom directory that better suits your enterprise needs. This concept is a basic building block for some upcoming articles – stay tuned!

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

An Identity Crisis: Is it Profiler or is it Extended Events?

I have been working on this article for far longer than one might think is necessary for something of this nature. Truth be told, I was reluctant because so many people quickly jumped on the wagon to write about this feature as soon as it was released. With some coercing, here is the article, finally!

What is this newfangled feature? Well, that is kind of the problem. The name has already changed in the second release – be it ever so slightly. The feature is XE Profiler, or XEvents Profiler, and in theory it is supposed to give us something new in Management Studio.

Fact or Fiction

This Extended Events Profiler is a brand spanking new feature (ok, so the bouncing baby is a few months old now and no longer a newborn). Despite it being new, it seems there are a handful of “facts” already published about it that may or may not be accurate. Due to that, I want to play a little game of fact or fiction with it in this article.

Here is the short list from which we will work:

  1. Sessions (XE Profiler Sessions) are Customizable
  2. The default standard session displays all Extended Events
  3. It works with instances of SQL 2012 or greater only
  4. XE Profiler Adds ability to quickly start/stop sessions
  5. XE Profiler provides a new live view of the Event Session
  6. XE Profiler provides a GUI for Extended Events
  7. Templates were not available in XE until the release of XE Profiler
  8. Similarly, XE Profiler also brought the Profiler templates with it as part of the feature release
  9. A DBA can now do something that was not previously possible to do (quickly start a session)

That is a rather wealthy list of “facts” to be checked for validity, if you ask me. Let’s go ahead and start diving into each of these. I will not be following the provided list order; rather, I will be looking at items in whatever order seems appropriate to validate them. Rest assured though, we will get through the list.

GUI

I don’t know how this is still a misconception about Extended Events, but it is certainly still out there. XE Profiler comes along and people suddenly think that this tool has provided the GUI that everybody has been clamoring for since Extended Events was released in 2008 (yes, almost 10 years ago).

I am not going to waste much time on this because it is a complete and total piece of fiction. The GUI has been readily and easily available since SQL Server 2012. In addition to that, there was a GUI that could have been installed for the older versions that was created by Jonathan Kehayias.

From SSMS 2014, here is a sample screenshot of the GUI wizard followed by a standard GUI screen.

As you can see, there is clearly a GUI prior to the release of SSMS 17.3.

Result: Fiction

Templates

This is probably the next easiest notion to prove or disprove. I have heard from more than a single source that templates are brand new with SSMS 17.3 (which is when XE Profiler was released).

I don’t really need to go very far or work very hard to prove this notion as a complete work of fiction. I have SSMS 2014, 2016, 17.0, 17.3, and 17.4 readily available to me. I am going to start with SSMS 2014.

If I once again open the GUI for XEvents in SSMS 2014, I can easily see the available templates in that version.

It looks pretty obvious there that templates were introduced well before SSMS 17.3.

I can also see all of the templates available by browsing out to the file system.

And for good measure, here is one from a 2012 instance.

The same templates (minus possible changes within the template definitions) are available in SSMS 2012, 2014 and 2016. We will see those same templates available in SSMS 17 and above as well – in addition to a few that have been added.

I want to also add here that these are just the default templates. Just like in Profiler, you can create and save your own templates for future use.

Result: Fiction

Profiler Templates

We just saw how there are templates available prior to the release of SSMS 17.3. We can also see that the profiler templates are not available in those prior versions of SSMS. So does that also mean that the profiler templates were only released as a part of SSMS 17.3? We’d best be careful with that slippery slope of assumption. Let’s take a closer look at SSMS 17.0.

And just so we can confirm that this is indeed a version of SSMS that pre-dates 17.3…

As you can see there, this is RC3 of SSMS 17.0, so it is definitely a version that precedes 17.3.

And similarly from a peek inside the folder on disk:

Seems to me we have ample evidence that the profiler templates were made available to the XE GUI well in advance of SSMS 17.3.

Result: Fiction

Event Session Live View

One of the most widely used features of Profiler is the ability to watch the data as it streams through the trace. According to many, this ability is missing in Extended Events, and the XE Profiler feature has finally brought it to the forefront for XE.

Let’s start by taking a look at an XE Session in SSMS 2014 connected to a 2014 database instance to check the validity of this concern.

All I need to do is browse the nodes in SSMS down to the session I want to view. Right click the session and then select the option to “Watch Live Data”.

After clicking watch live data, the event session live stream will appear on the right and start streaming the data to your SSMS workspace as shown in the following image.

For this session demo, I selected a session that should be readily available to everybody – the system_health default session. That session comes installed and running for you already.

Don’t like the layout of the screen? Well, you can customize the view to your liking and it can be customized differently for each and every different running session on your server.

As it turns out, the ability to watch livestream data has been available to you for Extended Events since 2012 when the GUI was made available. If you are curious about learning more about the GUI and customizing the view, you can read various articles I have written in my 60 day series you can find here.

Result: Fiction

That brings us to the halfway point. To recap we are now sitting at a score of Fact 0 and Fiction 4. Let’s see if we can turn the tide in these last 5.

Quick Start a Session

To be quite frank, this has been a long-time argument against using Extended Events: it just isn’t possible to start a session quickly and easily. I have two methods that I show in my training sessions to address that concern. Let’s take a look at the first. I will demonstrate this from SSMS 2014.

The first thing that needs to be done is browse the SSMS tree until a session is found that you want to start. Here is an ancient cheat code – I leave many sessions on the server in the stopped state until I need to use them. Pro-tip: create your XE sessions on all of your servers and leave them stopped until you need to use them.

For this example, I have selected my AG_LeaseTimeout session, which is in the stopped state (denoted by the little red “stop” icon next to the session name).

Then right click the session name and select “Start Session” from the context menu. To help clarify, I have circled the session name in green and the menu item in blue. This is an extremely easy method to start a session. Once started, and if you really feel you must watch data stream onto the screen, you can also follow the steps shown for watching the live stream data in the previous section.

The second method to quick start a session is only a touch more difficult. The steps are just the same as shown in the “template” section. You select a template that matches your desires and give the session a name. Here are some broad strokes shown in SSMS 17.0 (same steps apply for SSMS 2014 minus the ability to select a profiler template):

Right click the “Session” node and selection “New Session” from the context menu.

Select the “Standard” template from the dropdown box. This is a profiler equivalent (and also happens to be one of the two default sessions in “XE Profiler”, so it makes sense for a good comparison). Note that the window has a warning that the session name cannot be empty, so let’s give the session a name.

With that warning cleared, we can now move on to the next configuration. Notice the two red arrows in the preceding image. Enabling these two options will start the session immediately and also give you that profiler warm and fuzzy by allowing you to immediately start watching the session without needing to lift another finger.

From here, click the OK button (I know I normally teach people that they should script it, but in this case we are going for quick and easy). After clicking “OK”, be prepared for something like the following.

After just a few seconds on a busy server, one should expect thousands of events to fire in a very profiler type manner. I want to reiterate here, that this was the same session template that feeds one of the two available sessions in XE Profiler.

Once you have determined that you have enough data, you can easily stop the session by right clicking the session name and then selecting the “Stop Session” menu item.

And that is all. It is very easy to start a session in two different manners without XE Profiler. One of the options even allows you to create the exact same session as XE profiler but without that feature even being made available on the version of SSMS being used. So if you really want that profiler type of session, you can use a template to quickly create it even if your SSMS version pre-dates 17.3 and even if you don’t have the profiler templates by default (remember you can simply add the profiler templates should you choose).

In the end, I have to call this one a piece of fiction as well. The XE Profiler does not add any new ability to quickly start a session that wasn’t already a viable option. That said, the XE Profiler does add a third alternative to start a session – so long as it is one of the two default sessions.

Result: Fiction

SQL 2012 Required

This is one of those areas that I wish were fiction. However, this has actually been a consistent theme ever since the GUI was released for XE. The GUI components only work on 2012 instances and later. Let’s take a look at a connection to a 2014 Server and a 2008R2 Server in the same SSMS window from SSMS 17.0.

We can see here that the standard Extended Events GUI is just not available for the 2008R2 instance. The 2014 instance is denoted with a Green box and arrow to help distinguish between that and the red marks for the 2008R2 instance.

Now, let’s take a look at it with SSMS 17.3. With a similar setup and similar notation, we see the following in SSMS 17.3 when connected to both a 2014 instance and a 2008R2 instance.

Once again, I will admit this is something I really wish were not the case. It would definitely be helpful if the XE GUI were made available to SQL Server instances that are 2008 and 2008R2, especially given that the tool is an “SSMS feature”. That said, it really behooves everybody to get their SQL Servers updated to current technologies (2008 is nearly 10 years old, after all).

Result: Fact

The default “Standard” session displays all Extended Events

So far, we have been easing ourselves into this XE Profiler very gently. It is time to finally get into the deeper end and actually look at some very specific XE Profiler things. When I first heard this one, I was admittedly flabbergasted. One session displays all extended events? Really? Let’s actually establish some baselines about the default sessions for XE Profiler first.

  1. There are only two default sessions for XE Profiler
  2. The session names are “Standard” and “TSQL”
  3. The default sessions are based on templates (discussed previously)
  4. The template for the “Standard” session is the xe_Profiler_Standard xml template file
  5. The template for the “TSQL” session is xe_Profiler_TSQL

If I peruse the template file located in the directory on disk (C:\Program Files (x86)\Microsoft SQL Server\140\Tools\Templates\sql\xevent) for the standard session, I will be able to see that it contains the following events:

Now, that is not the entire template file. I shortened it to include only the events and none of the other details like session configuration or the actions attached to each event. As you can see, there is a grand total of seven events in this particular session.

If I run the following statement against a SQL Server 2014 instance, I will get a few more events than just seven.
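-- Count every event registered in the XE engine on this instance
SELECT COUNT(*) AS xe_event_count
FROM sys.dm_xe_objects xo
INNER JOIN sys.dm_xe_packages xp ON xp.guid = xo.package_guid
WHERE xo.object_type = 'event';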

And the results:

And just for the sake of clarity, the results on a 2017 instance:

Obviously the math does not add up. There are multiple things wrong with the notion that a session from XE Profiler will contain all events. First, there is no sense at all in adding every event to a single event session. Second, the session is supposed to mimic a profiler default template session, and Profiler maxed out at 180 events anyway. XE is obviously way beyond that, with so much more feature support than Profiler could ever muster back in its glory days.

In case you were wondering, here is how you access the XE Profiler. It is not under the Extended Events node; it is a brand new node.

As you can see, there really are only the two options for XE Profiler and, if you recall, both of those options are derived from the templates that were made available in the traditional XE GUI. It is very easy to create a quick session via those same templates without this new node.

Whether using the XE Profiler or the previous method to create these sessions, there is just no way that these sessions could contain every event in Extended Events. It’s just a bunch of hot air with no foundation to the claim.

Result: Fiction

XE Profiler adds ability to start/stop session

Let’s start back with the last image from the previous section and then go from there. We need to access what is available to us via the XE Profiler node so we can test this theory. I will start by right clicking one of the sessions (right clicking the XE Profiler node itself will not yield anything very promising, fwiw).

As you can see here, we are given some pretty basic options to either “Stop Session” or to “Launch Session”. It also seems very obvious what they want you to do here because the option is bolded – “Launch Session”.

This menu is the same regardless of the session state. What I mean by that is even if you do not have the session created, this menu is the same as if you have the session created and running. None of the options dim and all are clickable on this menu. This holds true for SSMS 17.4 as well. Let’s take a look at a server where none of these “profiler” sessions are running.

I want to make it abundantly clear. This is the way it appears after a completely fresh install of SSMS 17.3. None of these sessions have ever run on this particular instance of SQL Server 2017 and this is even a completely fresh install of SQL Server 2017. That said, I have done nothing yet to break it – buahaha.

Knowing that no sessions have ever been “launched” for XE Profiler, it is clear that it is time to go ahead and “launch” a session. In this new feature “launch” is the new equivalent of “start”. Once you launch a session you will get the same net effect as what we saw earlier when I created a session from the profiler template and checked the box to watch live data and the box to start the session. When I click launch, I will see something like the following:

A new XE session was created under the Extended Events node and it is now called “QuickSessionStandard”. Once again, this is created from that profiler template I have mentioned a few times.

And then after a bit, I may want to try and stop the session.

You can see here that I was able to capture some quick data into the live stream view (which was a feature that was already there prior to 17.3). And after clicking the “stop session” menu item, I will see the following.

Note that the “QuickSessionStandard” event session now has a “Stop” icon instead of a “play” icon. So yay – I was able to start and stop a session quickly from XE Profiler. In addition, XE profiler launched a live stream view just like we saw earlier when creating the same session from the template in the XE GUI. Now for a pro-tip – start and stop an XE Session from the old GUI – “right click any event session and select start/stop from the context menu.”

Yes, it is that easy. In addition, only the applicable option for that session will be enabled (e.g. you cannot stop a session that is already stopped, or not running). In fact, you can even start/stop from TSQL. If you try to stop a session via TSQL when it is not running, you will get a warning message that it is already stopped. That is far more intuitive than XE Profiler, which gives no indication that the session cannot be stopped when it is already stopped. Let’s take a look at the old way.

And after the session is started, I can also quickly stop it.

You can also see from each of those images, that I have the option to watch live data for the session. Another easy access feature from the old XE GUI.

While XE Profiler has added an additional means of starting/stopping sessions quickly, the fact of the matter is that this is not a new capability brought to the table by XE Profiler. All it did was add a different way of doing it. I will also add that this new method can actually be somewhat confusing. You can only determine (visually) whether the session is running or stopped by expanding the “Extended Events” node. Stopping a session from the XE Profiler node does not close the livestream viewer, so you could easily presume the session is still running. The XE Profiler node gives you no insight into the actual state of the session.

And for giggles, this is how easy it is to start or stop a session via TSQL.
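-- Start the QuickSessionStandard session
ALTER EVENT SESSION QuickSessionStandard ON SERVER STATE = START;

-- And stop it again
ALTER EVENT SESSION QuickSessionStandard ON SERVER STATE = STOP;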

If I run the statement to stop the session when it is already stopped, I will get the following:

Msg 25704, Level 16, State 1, Line 9
The event session has already been stopped.

I have previously covered this specific topic (start/stop sessions) in my 60 Day series which can be found here.

Result: Fiction

Customizable XE Profiler Sessions

Take a gander at the previous section where I showed the context menu available in XE Profiler, and note that it is the full extent of that menu. The profiler itself does not offer any sort of customization. You have only two default sessions, and you cannot change either session within that particular feature.

Since I cannot customize anything for these sessions via the XE Profiler, let’s use the XE GUI that is tried and true. I will effectively perform the following via the XE Session Properties window:

And after I am done, that session will look like the following:

And visually, from the GUI, it will look like this:

You can see that the aforementioned events are definitely gone from this session. From here, I can absolutely start that session from the XE GUI and it will run just fine. I can hit “watch live data” from the context menu and it will open up the previous view of the live data that was used (the exact same configuration of that window, because the XE GUI has always done that). If I double check my session after it is started, I will be able to confirm that the altered session is indeed still intact, just the same as I customized it.

Let’s go ahead and stop that and then “launch” it from the XE Profiler now. After the “launch” I will re-script the session to verify that it is either the same or altered from what I had configured. In this case, since the session was already in place, I will find that the XE Profiler did not alter my session in any way and merely started the session. That is a win for XE Profiler in that it was able to determine that the session did indeed exist. However, I have to use the XE GUI and not XE Profiler in order to get a decent config on those default sessions.

Result: Fiction

Bonus time

XE Profiler only has SQL Profiler Events

This is actually an interesting statement. While the XE Profiler relies on templates based on SQL Profiler, and the events in those templates are events that are available in SQL Profiler, they are not exclusive to SQL Profiler. These events can be used in Extended Events and have been a part of the XE engine since its inception. So, I would call this one fiction as well. I think an obvious litmus test is that the events are actually running through an XE session, so they must obviously be XE events as well.

Result: Fiction

Up to this point I have not addressed any changes in the Extended Events Profiler that were made for SSMS 17.4. Here is where it seems we compound a bit of confusion for the masses. Not only do we have this Extended Events tool that we have decided to call “Profiler” which has already had many people asking me about it because the name confused them (is it profiler or is it XE??), but it seems that the identity crisis for this feature is exacerbated slightly with a sudden name change.

Using yet another very clean install of SSMS on a different instance of SQL Server to ensure that none of the XE Profiler components had been used or any of the default sessions created and launched, I have the following:

Yes, the name change is very innocuous but it can lead to some confusion for various parties involved. If a set of step by step instructions says to do something and the learner cannot find that explicit node, they will become confused.

For anybody that has been working with XE for some time, they will know that XE is synonymous with XEvent and will quickly figure it out.

Conclusion

So, in wrapping up this very long article, I want to recap the score of the Fact v. Fiction items. The final tally is Fact 1 and Fiction 9 (9 original list items plus 1 bonus entry). This means there is a whole lot of stuff floating around out there about this new feature that is just not accurate. While some may be able to derive some small use from the XE Profiler, it really does not add anything that you could not already do with either TSQL or the old XE GUI.

Some say this is a way of bridging the gap. In my opinion, that gap was already bridged with the GUI that has been available for several years. Some say that maybe this tool needs to integrate a way to shred XML faster. To that, I say there are methods already available for that, such as PowerShell, the live data viewer, the Target Data viewer, or even the tools I have provided in the 60 day series.

I would challenge those that are still unfamiliar with the XE GUI (out for nearly 6 years now) to go and read some of my articles or articles by Jonathan Kehayias about the power that is in XE as well as some of the power in the GUI.

SQL Server Haunt 2017


Halloween is a great time of year. It is unfortunate that it is just one day of the year. That said, I do like to think of the month of October as Halloween Month. I have several posts over the years that geek out over the cross-over between Halloween and SQL Server.

The undead of Halloween are now upon us. Among the hordes of data zombies roaming the streets in lab-coats and fishnet stockings, few of us are still scrambling to remove the hexes we have looming over our data.

As chance would have it, these hexadecimals, err, hexes, have a more profound effect on us than we first thought. Many may have yet to even recognize the impact of the hexes placed, along with the monsters that now lurk in the data after having been summoned via those hexes.

DB and Fun Related

Seeing as I am a really big fan of this holiday, I have written a few Halloween posts over the years. If you are interested in the previous Halloween posts, here is a list of a few of them:

All Halloween posts

That list is my Halloween treat this year. Now for a bit of a trick with a very strong warning. Because of this warning, I am not posting any code showing how to perform the trick.

Warning

The contents of this post are intended as HUMOR!

Freddy Krueger

I can hear you clamoring from half a world away right now – “There is nothing called ‘Freddy Krueger’ in SQL Server. What in the world are you talking about?”

You would be very accurate in your exclamation there. But this is not necessarily a strict exercise in feature names within SQL Server. I want you to think a little further outside the norms for a while.

Do you currently shred XML, or have you ever needed to? XML shredding via TSQL can be a monstrously bloody killer to your database performance. As it turns out, Mr. Krueger was also a monstrously bloody shredder.

Jason Voorhees

Yet another beast that is not truly in SQL Server. Or is it? A relatively new feature in SQL Server is called JSON. This feature does actually perform better than XML in some regards. That said, we do have a very common problem shared between these two features – blobs.

If you are not familiar with what that means – here you go. A blob is an overly large item being stored in the database. If you wish, you could correlate that to the other well-known Halloween beast – “The Blob”.

Over time, this blob acts like sludge and just slows down your database queries. In addition, like the creature, the blob in your database tends to continue to grow in size and is seemingly never able to be put in check.

Skeletons

When I find skeletons, I have to be honest, I don’t find them terribly frightening. When talking about skeletons in your database, I am even less frightened.

Then again, when I run into a situation like the one described recently in this post, I may get a bit of a startle and just a wee bit concerned.

Overall though, I am rarely startled or frightened by any skeletons in the database. These are really just the supporting structures of a nice secure database and are called “schemas”. See, not really all that frightening here if we think about it just a bit.

This next one however, might be a little harder and should be nearly enough to cause some heart pain.

Warlocks

Surely there are no wizarding type of people in the database, right? Warlocks? I know for absolute certainty that there is no such feature or anything remotely close to a warlock, witch or wizard within the database. That is unless my database is about mystical creatures and people.

Alas, I urge you again to expand the box of perception a little bit and become just a tiny bit imaginative. This one, truth be told, does require a fair amount of explanation and imagination though.

The problem comes in part from some magical data issues that can occur due to this particular feature. In addition, it also comes from the wonderful grammatical errors in various blog posts and forums out there misspelling “which” as “witch”. Since “manwich” is really close to “man-witch”, I am calling it a warlock.

Now, since I am calling it a warlock, that leads us to the next strong hint about the feature. “Lock” in this case is the key. Now which magical, imaginative feature might there be that is related to “lock”? That would be the “nolock” directive and all of the data quality issues it presents. Here is a really, really good recap (by Aaron Bertrand) of this feature, along with reference to it being “magic” – at this site.

Pirates

This is probably the easiest of the day by far. For all the data-loving geeks out there, SQL Server has this pirate-flavored way for you to get your drool on. This feature is called “R”. Yup – just like what a pirate says, matey.

R is a tool to be used by data scientists, or data geeks in general, to run many different flavors of statistical analysis on your data.

Split Brain

Finally (at least for this Halloween), we have a condition that is real within SQL Server. While treatable with long hours and heavy medication, it is something to be feared.

This condition is rare, but it is very real. Split brain syndrome is pretty much a multiple identity disorder for your database. If you have multiple nodes in a cluster, mirror, or availability group, it is possible for more than one of those nodes to believe it is the master node, and then for different transactions to become hardened in each of those nodes.

When this happens, you will not be able to use bleach to clean up the mess. Instead, you will be required to spend a grundle of time with your database cuddling it and nursing it back to data consistency and good mental health.

Last but not least, HAPPY HALLOWEEN!

Endpoint Owners – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

While the challenge may have been for the year 2016, and I haven’t contributed in several months, it is still a worthwhile effort. In that vein, I am adding this article to that series.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Endpoints

You may have heard the term endpoints thrown around in technical discussions and wondered, “what the heck is an endpoint?” Well, that is a good question. An endpoint, in the simplest form, is a connection or point of entry into SQL Server.

Another way of thinking about an endpoint is to look at the ends of a line. Generally speaking, there is a stop point at each end of the line. At this stopping point, the line may connect to something else and therefore be a point of entry into that “something else”.

When we deal with SQL Server, there are a handful of default endpoints and then a handful of other types of endpoints. Figuring out the default endpoints is pretty easy. The following query will expose them.
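-- Default endpoints are the ones with an endpoint_id below 65536
SELECT ep.name, ep.endpoint_id, ep.protocol_desc, ep.type_desc, ep.state_desc
FROM sys.endpoints ep
WHERE ep.endpoint_id < 65536;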

Executing that query will produce results similar to the following:

Some of those might look pretty straightforward, and then you get to that VIA endpoint. Don’t worry about that one: the VIA endpoint is on the deprecation list. All of these default endpoints will be owned by sa, and they don’t require you to do anything with them per se, other than to understand that they exist. You may have noticed that I filtered my results to endpoints with an id of less than 65536. Any endpoint id lower than that number is a default endpoint; all others are user defined endpoints.

Endpoint Owners

So far so good. This is pretty straightforward to this point. If you have implemented anything like mirroring, Availability Groups, or Service Broker, then you may be interested to know that you have also created additional endpoints on the instance. When you create an endpoint, did you know that you become the owner of that endpoint by default? It is very similar to when you restore or create a database – the default owner of that database will be the person that restored/created it.

Do you know who owns your endpoints? Did you create all of the endpoints? Do you know if you have any additional endpoints beyond the default endpoints? If you cannot answer yes to all of these questions, then you will probably want to figure out what the endpoints are and who owns those endpoints. Let’s try that with a slight modification to the previous query.
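A sketch of that modification (the VALUES list enumerates the endpoint types so that unused types still show up):

-- List every endpoint type, in use or not, along with the owner of each endpoint
SELECT ept.type_desc,
       ep.name AS endpoint_name,
       sp.name AS endpoint_owner,
       ep.state_desc
FROM (VALUES ('TSQL'), ('SERVICE_BROKER'), ('DATABASE_MIRRORING'),
             ('SOAP'), ('VIA')) AS ept(type_desc)
LEFT OUTER JOIN sys.endpoints ep ON ep.type_desc = ept.type_desc
LEFT OUTER JOIN sys.server_principals sp ON sp.principal_id = ep.principal_id;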

This will yield results similar to the following:

I took the script a step further than necessary. I wanted to illustrate potential endpoint types that may not be in use. This script covers both in use endpoint types and those not used. In my results you may note that I have a DATABASE_MIRRORING endpoint. As reality would actually have it, it is an Availability Group endpoint but those are presented as DATABASE_MIRRORING endpoints.

Note that my mirroring endpoint (aka Hadr_endpoint) is owned by “YourDomain\DBAdmin”. What if the owner of that particular endpoint was no longer present in the organization and I wanted to change it to something more sustainable? Well, I could do the following:
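Using the endpoint name from the results above:

-- Reassign the endpoint to a sustainable owner (sa here)
ALTER AUTHORIZATION ON ENDPOINT::Hadr_endpoint TO sa;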

In this case, the default endpoints are owned by sa, and it makes enough sense to assign sa as the owner of the mirroring endpoint as well. Take notice that the name of the endpoint is required in order to reassign the owner; the name follows the :: in the script. So, whatever your endpoint name happens to be, just put it where “Hadr_endpoint” follows the :: in my script.

Recap

Endpoints are a fundamental piece of the puzzle with SQL Server. Getting to know your endpoints and the owners of those endpoints is an essential component of knowing your environment. Who knows, it may come to pass that the owner of an endpoint no longer exists in your environment, or loses permissions along the way. Knowing who owns the endpoint may just save three or four grey hairs when that day comes.

SSRS Subscription Schedules – Enhanced

Reporting Services

Over the past couple of articles, I have illustrated some of the fun that can be had when dealing with the scheduling capabilities of Reporting Services (SSRS). The first article covered how to create more advanced schedules (from the SSRS point of view). In another article, I showed how to retrieve scheduling information from the ReportServer database. In that last article, I also promised a follow-up article with more in-depth scheduling details.

SSRS provides the capability to review the scheduled reports (subscriptions) in a far more detailed fashion than shown in that previous article. That ability is held within the ReportServer database. This article will dive into the source of this scheduling information within the ReportServer database.

This dive will be a bit more detailed than the first time I dove into SSRS scheduling – here. That particular dive was missing an important set of data.

Deeper Dive

My first dive into building a report of the report schedules was pretty comprehensive, and I used it quite frequently. Many others also used it regularly in their environments, so the first attempt wasn’t bad by any stretch. The main problem (at least for now) is that the script does not account for any of the custom schedules that can be built. I have to be honest: I hadn’t really considered that possibility. Time and experience change that perspective, and when the perspective changes, it is time to dive back in and add coverage for the shortcoming in the script.

When I dove back in to fetch the custom scheduling information, I realized there was a discrepancy even in the old report: I was not gathering job information pertinent to the schedule. Recall that SSRS subscriptions are executed via the SQL Agent. With that in mind, it is reasonable that the Agent job information is pertinent and germane to the report subscription and probably should be included in a report. Couple that with the methods for creating custom schedules for SSRS reports, and we have a resounding need to ensure that data is captured properly.

Due to this epiphany, I now have a more complete script that includes both the data from the SQL Agent and the data from the ReportServer database in regards to subscriptions and schedules of reports.

Script

In pulling the data together from the two sources, I opted to return two result sets. Not just two disparate result sets, but rather two result sets that each pertained to both the agent job information as well as the ReportServer scheduling data. For instance, I took all of the subscriptions in the ReportServer and joined that data to the job system to glean information from there into one result set. And I did the reverse as well. You will see when looking at the query and data. One of the reasons for doing it this way was to make this easier to assimilate into an SSRS style report.
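The full script is too long to reproduce in its entirety here, so below is a trimmed sketch of the core join between the two sources. It assumes the default ReportServer database name and relies on the fact that SSRS names each Agent job after the schedule’s GUID:

USE ReportServer;

-- Tie each subscription and schedule to its SQL Agent job
SELECT c.Name         AS ReportName,
       su.Description AS SubscriptionDescription,
       sch.Name       AS ScheduleName,
       j.name         AS AgentJobName,
       j.enabled      AS JobEnabled
FROM dbo.Subscriptions su
INNER JOIN dbo.Catalog c ON c.ItemID = su.Report_OID
INNER JOIN dbo.ReportSchedule rs ON rs.SubscriptionID = su.SubscriptionID
INNER JOIN dbo.Schedule sch ON sch.ScheduleID = rs.ScheduleID
LEFT OUTER JOIN msdb.dbo.sysjobs j
    ON j.name = CONVERT(nvarchar(128), rs.ScheduleID);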

Even from that trimmed sketch, you can see the shape of it; the full script is not short. By fair measure, it is also considerably more complex than the XML version that was recently posted (and mentioned earlier in this article). That said, it is many times more flexible and complete than the XML version as well. I continue to use bit math to figure out the schedules, as I did in the first version of the script. Done this way, I can handle the custom schedules as well as get extensive details about the schedule from both the msdb and ReportServer databases.

Recap

SSRS provides built-in mechanisms to help report on the scheduled reports that have been deployed. This version of the report will help you retrieve the data both from a job perspective and from the report scheduler perspective. Through this series of articles, you should now be confident in creating custom schedules as well as accurately reporting on any reports that have specific subscriptions/schedules.

SSRS Subscription Schedules

Reporting Services

Reporting Services (SSRS) is a tool that permits you to create and deliver feature-rich reports. The reports can be delivered in various formats and can even be scheduled for delivery at various times. I even recently wrote about creating more advanced custom schedules.

With the ability to create content delivery schedules, or to subscribe to report content delivery, comes a requirement to also know when the various reports are scheduled to be delivered.

SSRS provides the means to review the scheduled reports (subscriptions). Those means are held within the ReportServer database. This article will help uncover one of the sources of this scheduling information within the ReportServer database.

Review Schedules

When looking into the database for SSRS, I can see there are different means to review the report schedules. This article is going to cover just one of those methods. And if I am going to be entirely up front about this method, I don't like it and I recommend that it not be used.

I can hear the moans now. "If you don't like it, then why show it to us?" Well, that is a very good question and there is a very good reason for this decision. A lesson I learned a long time ago is that sometimes you need to learn the hard way, or less desirable way, of doing various things. One of my favorite Calculus teachers from years ago drilled this into my head over and over again. Why? Well, there are three good reasons that come to mind: a) it makes the more desirable method seem much easier, b) it helps you to appreciate the more desirable method all that much more, and c) if all else fails, you will have another method to fall back on just in case.

Less Desirables

The least desirable method (at least of the methods I will share) is to parse XML from a field stored in the ReportServer database. If I look into the Schedule table within the ReportServer database, I will find a column called MatchData. Up front, this field is not very intuitively named. I would not think this field actually represented the schedule, but it does.

Before we start diving into parsing XML, we need an example of what this XML may look like. The following provides just such an example.
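(This is a representative mock-up of the MatchData XML for a simple weekly schedule; the exact elements, and any namespaces, vary by SSRS version.)

<ScheduleDefinition>
  <StartDateTime>2018-01-01T04:00:00.000-07:00</StartDateTime>
  <WeeklyRecurrence>
    <WeeksInterval>1</WeeksInterval>
    <DaysOfWeek>
      <Monday>true</Monday>
      <Wednesday>true</Wednesday>
      <Friday>true</Friday>
    </DaysOfWeek>
  </WeeklyRecurrence>
</ScheduleDefinition>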

Are your gears grinding yet? The XML is not terribly difficult to follow. I am sure you have realized the problem with this format by now. If I query this to make it human readable in a tabular format (you know, DBA format), I will end up with a really wide table that is pretty ugly to look at (unless I get super creative and combine fields, etc.).

Let’s take a look at the query to parse something like the preceding XML example.
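A sketch of such a query follows. MatchData is typically stored as ntext, so it has to be converted before it can be shredded as XML, and I am assuming here that the XML carries no namespaces (if your version includes them, add a WITH XMLNAMESPACES clause). Every recurrence element and day of the week needs yet another .value() call, which is exactly the problem:

;WITH SchedXML AS (
    SELECT s.ScheduleID,
           s.Name,
           CONVERT(XML, CONVERT(NVARCHAR(MAX), s.MatchData)) AS md  -- MatchData is typically ntext
    FROM ReportServer.dbo.Schedule s
)
SELECT sx.ScheduleID,
       sx.Name,
       sx.md.value('(/ScheduleDefinition/StartDateTime)[1]', 'DATETIME')                        AS StartDateTime,
       sx.md.value('(/ScheduleDefinition/WeeklyRecurrence/WeeksInterval)[1]', 'INT')            AS WeeksInterval,
       sx.md.value('(/ScheduleDefinition/WeeklyRecurrence/DaysOfWeek/Monday)[1]', 'VARCHAR(5)') AS OnMonday,
       sx.md.value('(/ScheduleDefinition/WeeklyRecurrence/DaysOfWeek/Friday)[1]', 'VARCHAR(5)') AS OnFriday
       /* ...and so on for every other day and recurrence type... */
FROM SchedXML sx;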

And there we have that ugly query to produce a really wide, ugly result set. The query is not difficult to write; it's just extremely repetitive. In similar fashion, the results are very repetitive. In my eyes, this makes this particular method less desirable.

I haven’t even gotten to the shortcoming in scheduling reports through SSRS that I wrote about recently – here. In that article, I discussed a workaround to overcome the limitations of the SSRS scheduling options. If you employ methods such as those I discussed there, then this query will never fully cover the scheduling related to your reports. Because of that, I will be discussing the better solution in the next article.

Recap

SSRS provides built-in mechanisms to help report on the scheduled reports that have been deployed. While parsing the XML is less desirable than what I will be sharing in the near future, it is better than doing nothing at all. I recommend you start looking into the various report schedules you may already have in your environment. Also, stay tuned for the next article that will better show these schedules.

SSRS Custom Shared Schedule

Reporting Services


Reporting Services is a pretty feature-rich tool for delivering reports to various consumers. There is plenty of power within Reporting Services (SSRS), giving one the ability to create visualizations, render reports in various formats, and even schedule reports to be delivered in different formats or on different schedules.

Unfortunately, the scheduling capability within SSRS is fairly weak. While it is true that one can accomplish a varied array of schedules, the scheduling within SSRS falls far below the power of SQL Agent (for instance).

I will explore the deficiency of the scheduling tool within SSRS in a very specific case. You may even be familiar with this deficiency already. Many organizations have the need to produce end of month reports that need to run on the last day of the month. If you are familiar with the scheduling tool within SSRS, you already know that this is not possible (at least through SSRS 2014). This article will help step you through how to schedule a report subscription to run on the last day of the month.

Schedule Options

Before diving into the custom schedules, let’s take a closer look at the options available for scheduling through SSRS.

While the tool does provide various options, the granularity certainly is not fine enough to cover many legitimate scheduling needs – especially the “last day of month” requirement for many month-end type reports.

As you may be aware, SSRS subscriptions are actually run through the SQL Agent despite being set through SSRS. If I take a look at some of the scheduling options in SQL Agent, I can see the following.

Note here that there are various built-in options to schedule on a monthly basis, including the “last” option. If I wanted to look even closer at the available options I would see the following.

As you can see here, I can schedule for multiple different types of “last” options relative to the month. One of these options happens to be the last day. This helps to illustrate just how much more powerful and versatile the scheduling within SQL Agent is than what we get with the SSRS scheduler. All of this despite the fact that SSRS subscriptions are actually executed by the SQL Agent. Doesn’t that seem a bit odd?

Custom Schedule

Now that we better understand the limitations of the SSRS scheduler and given that SSRS subscriptions are executed through the SQL Agent, let’s move on to bigger, better and much more useful means of scheduling the SSRS reports.

The very first thing that you should do is to create a shared schedule. This should be a shared schedule that is created as a run-once schedule. Let the schedule run that one time and then proceed to the following steps. If you need help in creating a shared schedule, here is an msdn article. When you create the shared schedule, I recommend using a descriptive name that you can remember. This name will be useful in the next step. For the purposes of this article, my schedule is named “EndOfMonth”.

Once the schedule is created, the next thing to do is to query the ReportServer database. Make sure you know the name of your database; some people have changed the ReportServer database name from the default, and this is an important piece of information to remember. The query against the ReportServer database is predominantly just to get the schedule id of the newly created schedule.
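A minimal sketch of that query (assuming the default database name and the “EndOfMonth” schedule created above) could be as simple as:

SELECT s.ScheduleID,
       s.Name
FROM ReportServer.dbo.Schedule s
WHERE s.Name = N'EndOfMonth';  -- the shared schedule created in the previous step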

When I run that query, I receive the following results.

I now want to take that scheduleid and use it to determine which SQL Agent job is actually related to that schedule so I can fetch some info from the job. I could skip this entirely and go to the subsequent step, but this helps to understand what needs to be done in that subsequent step. So, from here, let's query the msdb database in SQL Server to fetch some info from the job system.
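A sketch of that lookup is below; the GUID is a placeholder for the scheduleid returned by the previous query:

DECLARE @ScheduleID VARCHAR(36) = '00000000-0000-0000-0000-000000000000';  -- placeholder scheduleid

SELECT j.name AS JobName,
       js.step_name,
       js.command
FROM msdb.dbo.sysjobs j
    INNER JOIN msdb.dbo.sysjobsteps js
        ON j.job_id = js.job_id
WHERE js.command LIKE '%' + @ScheduleID + '%';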

The scheduleid is used within a command within a job step. By passing the scheduleid into this query and comparing against the existing job steps, I am able to retrieve the job that is related to the SSRS shared schedule (subscription). When I run the preceding query, I receive results illustrated in the following image.

Take note of the items I highlighted in red. Within the job name, step name, and command text for the step, I can find the scheduleid for that shared schedule that I created. Yes, I could easily have changed the query to compare against the job name instead, and that would have worked just fine in this case. By querying the command, though, I can confirm that the schedule is actually being used. The most important piece of information in this result is the entire command text for the job. I will need to take this command text and use it to populate a brand new SQL Agent job. This is how I will get my custom schedule for the SSRS subscription.

From here, I just need to create a SQL Agent job that uses the options for a monthly schedule indicating last day from the two drop down menus illustrated previously in this article. Then all that is left is a sigh of relief and a boom bada bing.
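For illustration, here is a minimal sketch of what that job could look like in T-SQL. The job and schedule names are my own inventions, and the GUID in the command text is a placeholder for the actual command text captured in the query above:

DECLARE @cmd NVARCHAR(MAX) =
    N'exec [ReportServer].dbo.AddEvent @EventType=''SharedSchedule'', @EventData=''00000000-0000-0000-0000-000000000000''';  -- paste the captured command text here

EXEC msdb.dbo.sp_add_job
     @job_name = N'SSRS - EndOfMonth';            -- assumed job name
EXEC msdb.dbo.sp_add_jobstep
     @job_name = N'SSRS - EndOfMonth',
     @step_name = N'Fire EndOfMonth schedule',
     @subsystem = N'TSQL',
     @database_name = N'ReportServer',
     @command = @cmd;
EXEC msdb.dbo.sp_add_jobschedule
     @job_name = N'SSRS - EndOfMonth',
     @name = N'LastDayOfMonth',
     @freq_type = 32,                  -- monthly, relative to the month
     @freq_interval = 8,               -- "day"
     @freq_relative_interval = 16,     -- "last"
     @freq_recurrence_factor = 1,      -- every month
     @active_start_time = 60000;       -- 06:00:00
EXEC msdb.dbo.sp_add_jobserver
     @job_name = N'SSRS - EndOfMonth'; -- target the local server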

Recap

SSRS does not have the built-in capability for some of the more complex, yet regularly required, report schedules needed to meet various business requirements. By following the steps outlined in this article, you can circumvent that shortcoming and achieve the needed business requirements while looking like a hero to those that need the more advanced report schedules.
