An Identity Crisis: Is it Profiler or is it Extended Events?

I have been working on this article for far longer than one might think is necessary for something of this nature. Truth be told, I was reluctant because so many people quickly jumped on the wagon to write about this feature as soon as it was released. With some coercing, the article is finally here!

What is this newfangled feature? Well, that is kind of the problem. The name has already changed in the second release – be it ever so slightly. The feature is XE Profiler or XEvents Profiler, and in theory it is supposed to give us something new in Management Studio.

Fact or Fiction

This Extended Events Profiler is a brand spanking new feature (ok, so the bouncing baby is a few months old now and no longer a newborn). Despite it being new, it seems there are a handful of “facts” already published about it that may or may not be accurate. Due to that, I want to play a little game of fact or fiction with it in this article.

Here is the short list from which we will work:

  1. Sessions (XE Profiler Sessions) are Customizable
  2. The default standard session displays all Extended Events
  3. It works with instances of SQL 2012 or greater only
  4. XE Profiler Adds ability to quickly start/stop sessions
  5. XE Profiler provides a new live view of the Event Session
  6. XE Profiler provides a GUI for Extended Events
  7. Templates were not available in XE until the release of XE Profiler
  8. Similarly, XE Profiler also brought the Profiler templates with it as part of the feature release
  9. A DBA can now do something that was not previously possible to do (quickly start a session)

That is a rather wealthy list of “facts” to be checked for validity if you ask me. Let’s go ahead and start diving into each of these. I will not be following the provided list order. Rather, I will be looking at items that seem appropriate to validate in my own special order. Rest assured though, we will get through the list.

GUI

I don’t know how this is still a misconception about Extended Events, but it is certainly still out there. XE Profiler comes along and people suddenly think that this tool has provided the GUI that everybody has been clamoring for since Extended Events was released in 2008 (yes, almost 10 years ago).

I am not going to waste much time on this because it is a complete and total piece of fiction. The GUI has been readily and easily available since SQL Server 2012. In addition to that, there was a GUI that could have been installed for the older versions that was created by Jonathan Kehayias.

From SSMS 2014, here is a sample screenshot of the GUI wizard followed by a standard GUI screen.

As you can see, there is clearly a GUI prior to the release of SSMS 17.3.

Result: Fiction

Templates

This is probably the next easiest notion to prove or disprove. I have heard from more than a single source that templates are brand new with SSMS 17.3 (which is when XE Profiler was released).

I don’t really need to go very far or work very hard to prove this notion as a complete work of fiction. I have SSMS 2014, 2016, 17.0, 17.3, and 17.4 readily available to me. I am going to start with SSMS 2014.

If I once again open the GUI for XEvents in SSMS 2014, I can easily see the available templates in that version.

It looks pretty obvious there that templates were introduced well before SSMS 17.3.

I can also see all of the templates available by browsing out to the file system.

And for good measure, here is one from a 2012 instance.

The same templates (minus possible changes within the template definitions) are available in SSMS 2012, 2014 and 2016. We will see those same templates available in SSMS 17 and above as well – in addition to a few that have been added.

I want to also add here that these are just the default templates. Just like in Profiler, you can create and save your own templates for future use.

Result: Fiction

Profiler Templates

We just saw how there are templates available prior to the release of SSMS 17.3. We can also see that the profiler templates are not available in those prior versions of SSMS. So does that also mean that the profiler templates were only released as a part of SSMS 17.3? We best be careful with that slippery slope of assumption. Let’s take a closer look at SSMS 17.0.

And just so we can confirm that this is indeed a version of SSMS that pre-dates 17.3…

As you can see there, this is RC3 of SSMS 17.0, so it is definitely a version that precedes 17.3.

And similarly from a peek inside the folder on disk:

Seems to me we have ample evidence that the profiler templates were made available to the XE GUI well in advance of SSMS 17.3.

Result: Fiction

Event Session Live View

One of the most widely used features of profiler is the ability to watch the data as it streams through the session. According to many, this ability is missing in Extended Events and the XE Profiler feature has brought this ability to the forefront with XE now.

Let’s start by taking a look at an XE Session in SSMS 2014 connected to a 2014 database instance to check the validity of this concern.

All I need to do is browse the nodes in SSMS down to the session I want to view. Right click the session and then select the option to “Watch Live Data”.

After clicking watch live data, the event session live stream will appear on the right and start streaming the data to your SSMS workspace as shown in the following image.

For this session demo, I selected a session that should be readily available to everybody – the system_health default session. That session comes installed and running for you already.

Don’t like the layout of the screen? Well, you can customize the view to your liking and it can be customized differently for each and every different running session on your server.

As it turns out, the ability to watch livestream data has been available to you for Extended Events since 2012 when the GUI was made available. If you are curious about learning more about the GUI and customizing the view, you can read various articles I have written in my 60 day series, which you can find here.

Result: Fiction

That brings us to the halfway point. To recap we are now sitting at a score of Fact 0 and Fiction 4. Let’s see if we can turn the tide in these last 5.

Quick Start a Session

To be quite frank here, this has been a long-time argument against using Extended Events: it just isn’t possible to start a session quickly and easily. I have two methods that I have shown in my training sessions that address that concern. Let’s take a look at the first. I will demonstrate this from SSMS 2014.

The first thing that needs to be done is browse the SSMS tree until a session is found that you want to start. Here is an ancient cheat code – I leave many sessions on the server in the stopped state until I need to use them. Pro-tip: create your XE sessions on all of your servers and leave them stopped until you think you need to use them.

For this example, I have selected my AG_LeaseTimeout session, which is in the stopped state (denoted by the little red “stop” icon next to the session name).

Then right click the session name and select “Start Session” from the context menu. To help clarify, I have circled the session name in green and the menu item in blue. This is an extremely easy method to start a session. Once started, and if you really feel you must watch data stream onto the screen, then you can also follow the steps shown for watching the live stream data in the previous section.

The second method to quick start a session is only a touch more difficult. The steps are just the same as shown in the “template” section. You select a template that matches your desires and give the session a name. Here are some broad strokes shown in SSMS 17.0 (same steps apply for SSMS 2014 minus the ability to select a profiler template):

Right click the “Sessions” node and select “New Session” from the context menu.

Select the “Standard” template from the dropdown box. This is a profiler equivalent (and also happens to be one of the two default sessions in “XE Profiler”), so it makes sense for a good comparison. Note that the window has a warning that the session name cannot be empty, so let’s give the session a name.

With that warning cleared, we can now move on to the next configuration. Notice the two red arrows in the preceding image. Enabling these two options will start the session immediately and also give you that profiler warm and fuzzy by allowing you to immediately start watching the session without needing to lift another finger.

From here, click the OK button (I know I normally teach people that they should script it, but in this case we are going for quick and easy). After clicking “OK”, be prepared for something like the following.

After just a few seconds on a busy server, one should expect thousands of events to fire in a very profiler type manner. I want to reiterate here, that this was the same session template that feeds one of the two available sessions in XE Profiler.

Once you have determined that you have enough data, you can easily stop the session by right clicking the session name and then selecting the “Stop Session” menu item.

And that is all. It is very easy to start a session in two different manners without XE Profiler. One of the options even allows you to create the exact same session as XE profiler but without that feature even being made available on the version of SSMS being used. So if you really want that profiler type of session, you can use a template to quickly create it even if your SSMS version pre-dates 17.3 and even if you don’t have the profiler templates by default (remember you can simply add the profiler templates should you choose).

In the end, I have to call this one a piece of fiction as well. The XE Profiler does not add any new ability to quickly start a session that wasn’t already a viable option. That said, the XE Profiler does add a third alternative to start a session – so long as it is one of the two default sessions.

Result: Fiction

SQL 2012 Required

This is one of those areas that I wish were fiction. However, this has actually been a consistent theme ever since the GUI was released for XE. The GUI components only work on 2012 instances and later. Let’s take a look at a connection to a 2014 Server and a 2008R2 Server in the same SSMS window from SSMS 17.0.

We can see here that the standard Extended Events GUI is just not available for the 2008R2 instance. The 2014 instance is denoted with a Green box and arrow to help distinguish between that and the red marks for the 2008R2 instance.

Now, let’s take a look at it with SSMS 17.3. With a similar setup and similar notation, we see the following in SSMS 17.3 when connected to both a 2014 instance and a 2008R2 instance.

Once again, I will admit this is something I really wish were not the case. It would definitely be helpful if the XE GUI were to be made available to SQL Server instances that are 2008 and 2008R2, especially given that the tool is an “SSMS feature”. That said, it really behooves everybody to get their SQL Servers updated to current technologies (2008 is nearly 10 years old, after all).

Result: Fact

The default “Standard” session displays all Extended Events

So far, we have been easing ourselves into this XE Profiler very gently. It is time to finally get into the deeper end and actually look at some very specific XE Profiler things. When I first heard this one, I was admittedly flabbergasted. One session displays all extended events? Really? Let’s actually establish some baselines about the default sessions for XE Profiler first.

  1. There are only two default sessions for XE Profiler
  2. The session names are “Standard” and “TSQL”
  3. The default sessions are based on templates (discussed previously)
  4. The template for the “Standard” session is the xe_Profiler_Standard xml template file
  5. The template for the “TSQL” session is xe_Profiler_TSQL

If I peruse the template file located in the directory on disk (C:\Program Files (x86)\Microsoft SQL Server\140\Tools\Templates\sql\xevent) for the standard session, I will be able to see that it contains the following events:

Now, that is not the entire template file. I shortened it to only include the events and none of the other details, like session configuration or the actions attached to each event. As you can see, there is a grand total of seven events in this particular session.

If I run the following statement against a SQL Server 2014 instance, I will get a few more events than just seven.
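Here is a sketch of such a statement (a simple count of the events exposed in the XE metadata; not necessarily verbatim what I ran):

-- Count every event registered in the XEvent engine
SELECT COUNT(*) AS xe_event_count
FROM sys.dm_xe_objects xo
WHERE xo.object_type = 'event';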

And the results:

And just for the sake of clarity, the results on a 2017 instance:

Obviously the math does not add up. There are multiple things wrong with the notion that a session from XE Profiler will contain all events. First, there is no sense at all in adding every event to a single event session. Second, the session is supposed to mimic a profiler default template session, and Profiler only had 180 events max anyway. XE is obviously way beyond that, with so much more feature support than Profiler could even muster back in its glory days.

In case you were wondering, here is how you access the XE Profiler. It is not under the Extended Events node; it is a brand new node.

As you can see, there really are only the two options for XE Profiler. If you recall, both of those options are derived from the templates that were made available in the traditional XE GUI, and it is very easy to create a quick session via those same templates.

Whether using the XE Profiler or the previous method to create these sessions, there is just no way that these sessions could contain every event in Extended Events. It’s just a bunch of hot air with no foundation to the claim.

Result: Fiction

XE Profiler adds ability to start/stop session

Let’s start back with the last image from the previous section and then go from there. We need to access what is available to us via the XE Profiler node so we can test this theory. I will start by right clicking one of the sessions (right clicking the “XE Profiler” node itself will not yield anything very promising, fwiw).

As you can see here, we are given some pretty basic options to either “Stop Session” or to “Launch Session”. It also seems very obvious what they want you to do here because the option is bolded – “Launch Session”.

This menu is the same regardless of the session state. What I mean by that is even if you do not have the session created, this menu is the same as if you have the session created and running. None of the options dim and all are clickable on this menu. This holds true for SSMS 17.4 as well. Let’s take a look at a server where none of these “profiler” sessions are running.


I want to make it abundantly clear. This is the way it appears after a completely fresh install of SSMS 17.3. None of these sessions have ever run on this particular instance of SQL Server 2017 and this is even a completely fresh install of SQL Server 2017. That said, I have done nothing yet to break it – buahaha.

Knowing that no sessions have ever been “launched” for XE Profiler, it is clear that it is time to go ahead and “launch” a session. In this new feature “launch” is the new equivalent of “start”. Once you launch a session you will get the same net effect as what we saw earlier when I created a session from the profiler template and checked the box to watch live data and the box to start the session. When I click launch, I will see something like the following:

A new XE session was created under the Extended Events node and it is now called “QuickSessionStandard”. Once again, this is created from that profiler template I have mentioned a few times.

And then after a bit, I may want to try and stop the session.

You can see here that I was able to capture some quick data into the live stream view (which was a feature that was already there prior to 17.3). And after clicking the “stop session” menu item, I will see the following.

Note that the “QuickSessionStandard” event session now has a “Stop” icon instead of a “play” icon. So yay – I was able to start and stop a session quickly from XE Profiler. In addition, XE profiler launched a live stream view just like we saw earlier when creating the same session from the template in the XE GUI. Now for a pro-tip – start and stop an XE Session from the old GUI – “right click any event session and select start/stop from the context menu.”

Yes it is that easy. In addition, only the available option for that session will be enabled (e.g. you cannot stop a session that is already stopped – or not running). In fact, you can even start/stop from TSQL: if you try to stop a session that is not running, you will get a warning message that it is already stopped. That is far more intuitive than XE Profiler, which gives no indication that the session cannot be stopped when it is already stopped. Let’s take a look at the old way.

And after the session is started, I can also quickly stop it.

You can also see from each of those images, that I have the option to watch live data for the session. Another easy access feature from the old XE GUI.

While XE Profiler has added an additional means to be able to start/stop sessions quickly, the fact of the matter is that it is not a new feature brought to the table by XE Profiler. All it did was add a different way of doing it. I will also add that this new method can actually be somewhat confusing as well. You can only determine (visually) if the session is running or stopped by expanding the “Extended Events” node. Stopping a session from the XE Profiler node does not close the livestream viewer so you could easily presume it is still running. The XE Profiler node gives you no insight into the actual state of the session.

And for giggles, this is how easy it is to start or stop a session via TSQL.
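Something along these lines is all it takes (substitute your own session name for QuickSessionStandard):

-- Start the event session
ALTER EVENT SESSION QuickSessionStandard ON SERVER STATE = START;

-- And stop it again
ALTER EVENT SESSION QuickSessionStandard ON SERVER STATE = STOP;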

If I run the statement to stop the session when it is already stopped, I will get the following:

Msg 25704, Level 16, State 1, Line 9
The event session has already been stopped.

I have previously covered this specific topic (start/stop sessions) in my 60 Day series which can be found here.

Result: Fiction

Customizable XE Profiler Sessions

Let’s take a gander at the previous section where I showed the context menu available in XE Profiler, and note the extent of that menu. The profiler itself does not offer any sort of customization. You have only two default sessions. You cannot change either session within that particular feature.

Since I cannot customize anything for these sessions via the XE Profiler, let’s use the XE GUI that is tried and true. I will effectively perform the following via the XE Session Properties window:
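In T-SQL terms, the change amounts to dropping events from the session. Here is a sketch of the equivalent (the events named here are placeholders for whichever events you decide to remove):

-- Remove a couple of events from the default profiler-style session
ALTER EVENT SESSION QuickSessionStandard ON SERVER
    DROP EVENT sqlserver.existing_connection,
    DROP EVENT sqlserver.login;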

And after I am done, that session will look like the following:

And visually, from the GUI, it will look like this:

You can see that the aforementioned events are definitely gone from this session. From here, I can absolutely start that session from the XE GUI and it will run just fine. I can hit “watch live data” from the context menu and it will open up the previous view of the live data that was used (the exact same configuration of that window, because the XE GUI has always done that). If I double check my session after it is started, I will be able to confirm that the altered session is indeed still intact, just the same as I customized it.

Let’s go ahead and stop that and then “launch” it from the XE Profiler now. After the “launch” I will re-script the session to verify that it is either the same or altered from what I had configured. In this case, since the session was already in place, I will find that the XE Profiler did not alter my session in any way and merely started the session. That is a win for XE Profiler in that it was able to determine that the session did indeed exist. However, I have to use the XE GUI and not XE Profiler in order to get a decent config on those default sessions.

Result: Fiction

Bonus time

XE Profiler only has SQL Profiler Events

This is actually an interesting statement. While the XE Profiler relies on templates based on SQL Profiler and the events in those templates are events that are available in SQL Profiler, they are not exclusive to SQL Profiler. These events can be used in Extended Events and have been a part of the XE Engine since inception. So, I would call this one fiction as well. I think an obvious litmus test for that is that the events are actually running through an XE Session so they must obviously be XE events as well.

Result: Fiction

Up to this point I have not addressed any changes in the Extended Events Profiler that were made for SSMS 17.4. Here is where it seems we compound a bit of confusion for the masses. Not only do we have this Extended Events tool that we have decided to call “Profiler”, a name that has already had many people asking me about it because it confused them (is it profiler or is it XE??), but the identity crisis for this feature is exacerbated slightly with a sudden name change.

Using yet another very clean install of SSMS on a different instance of SQL Server to ensure that none of the XE Profiler components had been used or any of the default sessions created and launched, I have the following:

Yes, the name change is very innocuous but it can lead to some confusion for various parties involved. If a set of step by step instructions says to do something and the learner cannot find that explicit node, they will become confused.

For anybody that has been working with XE for some time, they will know that XE is synonymous with XEvent and will quickly figure it out.

Conclusion

So, in wrapping up this very long article, I want to recap the score of the Fact v. Fiction items. The final tally is Fact 1 and Fiction 9 (9 original list items plus 1 bonus entry). This means that there is a whole lot of stuff going on around out there about this new feature that is just not accurate. While some may be able to derive some small use from the XE Profiler, it really does not add anything that you could not already do with either TSQL or the old XE GUI.

Some say this is a way of bridging the gap. In my opinion, that gap was already bridged with the GUI that has been available for several years. Some say that maybe this tool needs to integrate a way to shred XML faster. To that, I say there are methods already available for that such as PowerShell, the live data viewer, the Target Data viewer, or even the tools I have provided in the 60 day series.

I would challenge those that are still unfamiliar with the XE GUI (out for nearly 6 years now) to go and read some of my articles or articles by Jonathan Kehayias about the power that is in XE as well as some of the power in the GUI.

SQL Server Haunt 2017

Published on: November 1, 2017

Halloween is a great time of year. It is unfortunate that it is just one day of the year. That said, I do like to think of the month of October as Halloween Month. I have several posts over the years that geek out over the cross-over between Halloween and SQL Server.

The undead of Halloween are now upon us. Among the hordes of data zombies roaming the streets in lab-coats and fishnet stockings, few of us are still scrambling to remove the hexes we have looming over our data.

As chance would have it, these hexadecimals, err hexes, have a more profound effect on us than we first thought. Many may have yet to even recognize the impact of the hexes placed along with the monsters that now lurk in the data after having been summoned via those hexes.

DB and Fun Related

Seeing as I am a really big fan of this holiday I have a few Halloween posts over the years. If you are interested in the previous Halloween posts, here is a list of a few of them:

All Halloween posts

That list is my Halloween treat this year. Now for a bit of a trick with a very strong warning. Because of this warning, I am not posting any code showing how to perform the trick.

Warning

The contents of this post are for the intent of HUMOR!

Freddy Krueger


I can hear you clamoring from half a world away right now – “There is nothing called ‘Freddy Krueger’ in SQL Server. What in the world are you talking about?”

You would be very accurate in your exclamation there. But this is not necessarily a strict exercise in feature names within SQL Server. I want you to think a little further outside the norms for a while.

Do you currently or have you ever needed to shred XML? XML shredding via TSQL can be a monstrously bloody killer to your database performance. As it turns out, Mr. Krueger was also a monstrously bloody shredder.

Jason Voorhees

Yet another beast that is not truly in SQL Server, or is it? A not so new but new feature in SQL Server is called JSON. This feature does actually perform better than XML in some regards. That said, we do have a very common problem between the two of these features – blobs.

If you are not familiar with what that means – here you go. A blob is an overly large item being stored in the database. If you wish, you could correlate that to the other well known Halloween beast – “The Blob”.

Over time, this blob acts like sludge and just slows down your database queries. In addition, like the creature, the blob in your database tends to continue to grow in size and is seemingly never able to be put in check.

Skeletons

When I find skeletons, I have to be honest, I don’t find them terribly frightening. When talking about skeletons in your database, I am even less frightened.

Then again, when I run into the situation as described recently, in this post, I may get a bit of a startle and get just a wee bit concerned.

Overall though, I am rarely startled or frightened by any skeletons in the database. These are really just the supporting structures of a nice secure database and are called “schemas”. See, not really all that frightening here if we think about it just a bit.

This next one however, might be a little harder and should be nearly enough to cause some heart pain.

Warlocks

Surely there are no wizarding type of people in the database, right? Warlocks? I know for absolute certainty that there is no such feature or anything remotely close to a warlock, witch or wizard within the database. That is unless my database is about mystical creatures and people.

Alas, I urge you again to expand the box of perception a little bit and become just a tiny bit imaginative. This one, truth be told, does require a fair amount of explanation and imagination though.

The problem comes in part from some magical data issues that can occur due to this particular feature. In addition, this also comes from the wonderful grammatical errors from various blog posts and forums out there mis-spelling “which” as “witch”. Since “manwich” is really close to “man-witch”, I am calling it a warlock.

Now, since I am calling it a warlock, that leads us to the next strong hint about the feature. “Lock” in this case is the key. Now which magical, imaginative feature might there be that is related to “lock”? That would be the “nolock” directive and all of the data quality issues that it presents. Here is a really really good recap (by Aaron Bertrand) on this feature along with reference to it being “magic” – at this site.

Pirates

This is probably the easiest of the day by far. For all the data loving geeks out there, SQL Server has this pirate flavored way for you to get your drool on. This feature is called “R”. Yup – just like what a pirate says matey.

R is a tool to be used by data scientists or data geeks in general to try and throw together many different flavors of statistical analysis about your data.

Split Brain

Finally, (at least for this Halloween) we have this condition that is real within SQL Server. While treatable with long hours and heavy medication, it is something to be feared.

This condition is something rare but it is very real. The split brain syndrome is pretty much a multiple identity personality disorder in your database. If you have multiple nodes in a cluster, mirror or availability group, it is possible for more than one of those nodes to believe it is the master node and then for different transactions to become hardened in each of those nodes.

When this happens, you will not be able to use bleach to clean up the mess. Instead, you will be required to spend a grundle of time with your database cuddling it and nursing it back to data consistency and good mental health.

Last but not least, HAPPY HALLOWEEN!

Endpoint Owners – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

While the challenge may have been for the year 2016 and I haven’t contributed in several months, it is still a worthwhile effort. In that vein, I am adding this article to that series.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Endpoints

You may have heard the term endpoints thrown around in technical discussion and wondered “what the heck is an endpoint?” Well, that is a good question. An endpoint in the simplest form is a connection or point of entry into SQL Server.

Another way of thinking about an endpoint is to look at the ends of a line. Generally speaking, there is a stop point at each end of the line. At this stopping point, the line may connect to something else and therefore be a point of entry into that “something else”.

When we deal with SQL Server there is a handful of default endpoints and then a handful of other types of endpoints. To figure out what the default endpoints are, it is pretty easy. The following query will expose the default endpoints.
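Here is a sketch of such a query:

-- Default endpoints have an id below 65536
SELECT ep.endpoint_id
    , ep.name
    , ep.protocol_desc
    , ep.type_desc
    , ep.state_desc
FROM sys.endpoints ep
WHERE ep.endpoint_id < 65536;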

Executing that query will produce results similar to the following:

Some of those might look pretty straight forward and then you get to that VIA endpoint. Don’t worry about that endpoint. The VIA endpoint is on the deprecation list. All of these default endpoints will be owned by sa and don’t require you to do anything with them per se other than to understand they exist. You may have noticed that I filtered my results by looking only at endpoints with an id of less than 65536. Any endpoint id lower than this number is a default endpoint. All others are user defined endpoints.

Endpoint Owners

So far so good. This is pretty straight forward to this point. If you have implemented anything like mirroring, Availability Groups, or Service Broker, then you may be interested to know that you have also created additional endpoints in the Instance. When you create an endpoint, did you know that you become the owner of that endpoint by default? It is very similar to when you restore a database or create a database – the default owner of that database will be the person that restored/created it.

Do you know who owns your endpoints? Did you create all of the endpoints? Do you know if you have any additional endpoints beyond the default endpoints? If you cannot answer yes to all of these questions, then you will probably want to figure out what the endpoints are and who owns those endpoints. Let’s try that with a slight modification to the previous query.
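The modification boils down to translating the owning principal_id into a login name, along these lines (the VALUES list enumerates the endpoint types so that unused types still show up):

-- Endpoint types, any endpoints of each type, and the endpoint owners
SELECT et.type_desc
    , ep.name AS endpoint_name
    , SUSER_NAME(ep.principal_id) AS endpoint_owner
    , ep.state_desc
FROM (VALUES ('TSQL'), ('SERVICE_BROKER'), ('DATABASE_MIRRORING')
           , ('SOAP'), ('VIA')) et(type_desc)
LEFT OUTER JOIN sys.endpoints ep
    ON ep.type_desc = et.type_desc;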

This will yield results similar to the following:

I took the script a step further than necessary. I wanted to illustrate potential endpoint types that may not be in use. This script covers both in use endpoint types and those not used. In my results you may note that I have a DATABASE_MIRRORING endpoint. As reality would actually have it, it is an Availability Group endpoint but those are presented as DATABASE_MIRRORING endpoints.

Note that my mirroring endpoint (aka Hadr_endpoint) is owned by “YourDomain\DBAdmin”. What if the owner of that particular endpoint was no longer present in the organization and I wanted to change it to something more sustainable? Well, I could do the following:
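The reassignment is a one-liner (shown here with the endpoint name from my results):

-- Give the mirroring endpoint to sa
ALTER AUTHORIZATION ON ENDPOINT::Hadr_endpoint TO sa;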

In this case, the default endpoints are owned by sa and it makes enough sense to assign the owner of the mirroring endpoint to sa as well. Take notice that the name of the endpoint is required in order to reassign the owner – the name follows the :: in the script. So, whatever your endpoint name happens to be, just put it in place of “Hadr_endpoint” after the :: in my script.

Recap

Endpoints are a fundamental piece of the puzzle with SQL Server.  Getting to know your endpoints and the owners of those endpoints is an essential component of knowing your environment. Who knows, it may come to pass that the owner of an endpoint may no longer exist in your environment or possibly lose permissions along the way. Knowing who owns the endpoint may just save three or four grey hairs when that day comes.

SSRS Subscription Schedules – Enhanced

Reporting Services


Over the past couple of articles I have illustrated some of the fun that can be had when dealing with the scheduling capabilities of Reporting Services (SSRS). The first article covered how to create more advanced schedules (from the SSRS point of view). In another article, I showed how to retrieve scheduling information from the ReportServer database. In that last article, I also promised a follow-up article for more in-depth scheduling details.

SSRS provides the capability to review the scheduled reports (subscriptions) in a far more detailed fashion than shown in that previous article. That ability is held within the ReportServer database. This article will dive into the source of this scheduling information within the ReportServer database.

This dive will be a bit more detailed than the first time I dove into SSRS scheduling – here. That particular dive was missing an important set of data.

Deeper Dive

My first dive into building a report of the report schedules was pretty comprehensive and I used it quite frequently. Many others also used it regularly for their environments as well. So the first attempt wasn’t bad by any stretch. The main problem (at least for now) is that the script does not account for any of the custom schedules that can be built. I have to be honest in that I hadn’t really considered that possibility. Times and experience change that perspective. When that perspective changes, it is time to dive back in and add coverage for the shortcoming in the script.

When I dove back in to fetch the custom scheduling information, I realized there was a discrepancy even in the old report in that I was not gathering Job information pertinent to the schedule. Recall that SSRS subscriptions are performed via the SQL Agent. With that in mind, it is reasonable that the Agent job information is pertinent and germane to the report subscription and probably should be included in a report. Couple that with the methods on creating custom schedules for the SSRS reports, and we have a resounding need to ensure that data is trapped properly.

Due to this epiphany, I now have a more complete script that includes both the data from SQL Agent and the data from the ReportServer database with regard to subscriptions and schedules of reports.

Script

In pulling the data together from the two sources, I opted to return two result sets. Not just two disparate result sets, but rather two result sets that each pertained to both the agent job information as well as the ReportServer scheduling data. For instance, I took all of the subscriptions in the ReportServer and joined that data to the job system to glean information from there into one result set. And I did the reverse as well. You will see when looking at the query and data. One of the reasons for doing it this way was to make this easier to assimilate into an SSRS style report.
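The full script is long, but the core of the join between the two systems looks something like this sketch (SSRS names its subscription jobs after the ScheduleID GUID, which is what makes the join possible):

-- Subscriptions and schedules from ReportServer tied to their Agent jobs
SELECT c.Path AS ReportPath
    , c.Name AS ReportName
    , su.Description AS SubscriptionDesc
    , su.LastStatus
    , su.LastRunTime
    , sch.Name AS ScheduleName
    , sj.name AS AgentJobName
    , sj.enabled AS JobEnabled
FROM ReportServer.dbo.ReportSchedule rs
INNER JOIN ReportServer.dbo.Schedule sch
    ON sch.ScheduleID = rs.ScheduleID
INNER JOIN ReportServer.dbo.Subscriptions su
    ON su.SubscriptionID = rs.SubscriptionID
INNER JOIN ReportServer.dbo.Catalog c
    ON c.ItemID = rs.ReportID
LEFT OUTER JOIN msdb.dbo.sysjobs sj
    ON sj.name = CONVERT(NVARCHAR(128), rs.ScheduleID);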

The full script is considerably more complex than the XML version that was recently posted (and mentioned earlier in this article), but it is also many times more flexible and complete. I do continue to use the bit math for figuring the schedules as I did in the first version of the script. When done this way, I can handle the custom schedules as well as get extensive details about the schedule from both the msdb and ReportServer databases.

Recap

SSRS provides built-in mechanisms to help report on the scheduled reports that have been deployed. This version of the report will help you retrieve the data from both a job perspective and from the report scheduler perspective. Through this series of articles, you should be confident in being able to now create custom schedules as well as accurately report on any reports that have specific subscriptions/schedules.

SSRS Subscription Schedules

Reporting Services

Reporting Services (SSRS) is a tool that permits you to create and deliver feature rich reports. The reports can be delivered in various formats and can even be scheduled to be delivered at various times. I even recently wrote about creating more advanced custom schedules.

With the ability to create content delivery schedules, or to subscribe to report content delivery, this imposes a requirement to also know when the various reports are scheduled to be delivered.

SSRS provides the means to be able to review the scheduled reports (subscriptions). That means is held within the ReportServer database. This article will help to uncover one of the sources of this scheduling information within the ReportServer database.

Review Schedules

When looking into the database for SSRS, I can see there are different means to be able to review the report schedules. This article is going to cover just one of those methods. And if I am going to be entirely up front about this method, I don’t like it and I recommend that it not be used.

I can hear the moans now. “If you don’t like it, then why show it to us?” Well, that is a very good question and there is a very good reason for this decision. A lesson I learned a long time ago is sometimes you need to learn the hard way, or less desirable way, to do various things. One of my favorite Calculus teachers from years ago drilled this into my head over and over again. Why? Well, there are three good reasons that come to mind: a) it makes the more desirable method seem much easier, b) it helps you to appreciate the more desirable method all that much more, and c) because if all else fails, you will have another method to fall back to just in case.

Less Disérables

The least desirable method (at least of the methods I will share) is to parse XML from a field stored in the ReportServer database. If I look into the Schedule table within the ReportServer database, I will find this column called MatchData. Up front, this field is not very intuitively named. I would not think this field actually represented the schedule, but it actually does.

Before we start diving into parsing XML, we need an example of what this XML may look like. The following will provide that very example that we need.
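Something like the following is representative (a reconstruction for illustration; the real value also carries XML namespace declarations, which I have omitted for readability):

-- A representative MatchData value for a weekly schedule
DECLARE @MatchData XML = N'<ScheduleDefinition>
  <StartDateTime>2018-01-01T02:00:00.000-07:00</StartDateTime>
  <WeeklyRecurrence>
    <WeeksInterval>1</WeeksInterval>
    <DaysOfWeek>
      <Monday>true</Monday>
      <Friday>true</Friday>
    </DaysOfWeek>
  </WeeklyRecurrence>
</ScheduleDefinition>';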

Are your gears grinding yet? The XML is not terribly difficult to follow. I am sure you have realized the problem from this format at this point. If I query this to make it human readable in a tabular format (you know DBA format), I will end up with a really wide table that is pretty ugly to look at (unless I get super creative to combine fields etc).

Let’s take a look at the query to parse something like the preceding XML example.
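Here is a sketch of the shred (namespaces omitted to match the sample above; against real data you would add a WITH XMLNAMESPACES clause, and you would repeat the value() pattern for every day of the week and every recurrence type):

SELECT sch.ScheduleID
    , sd.x.value('(StartDateTime)[1]', 'NVARCHAR(34)') AS StartDateTime
    , sd.x.value('(WeeklyRecurrence/WeeksInterval)[1]', 'INT') AS WeeksInterval
    , sd.x.value('(WeeklyRecurrence/DaysOfWeek/Sunday)[1]', 'BIT') AS OnSunday
    , sd.x.value('(WeeklyRecurrence/DaysOfWeek/Monday)[1]', 'BIT') AS OnMonday
    , sd.x.value('(WeeklyRecurrence/DaysOfWeek/Tuesday)[1]', 'BIT') AS OnTuesday
    -- ...and so on for each remaining day and each recurrence element
    , sd.x.value('(DailyRecurrence/DaysInterval)[1]', 'INT') AS DaysInterval
FROM ReportServer.dbo.Schedule sch
CROSS APPLY (SELECT CONVERT(XML, sch.MatchData) AS SchedXML) md
CROSS APPLY md.SchedXML.nodes('/ScheduleDefinition') sd(x);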

And there we have that ugly query to produce a really wide ugly result set. The query is not difficult to write. It’s just extremely repetitive. In similar fashion, the results are very repetitive. This makes, in my eyes, this particular method less desirable.

I haven’t even gotten to the part about the shortcoming in scheduling reports through SSRS that I wrote about recently – here. In that article I discussed a workaround to overcome the SSRS scheduling options. If you employ methods such as I discussed there, then this query will never fully cover the scheduling related to your reports. Because of that, I will be discussing the better solution in the next article.

Recap

SSRS provides built-in mechanisms to help report on the scheduled reports that have been deployed. While parsing the XML is less desirable than what I will be sharing in the near future, it is better than doing nothing at all. I recommend you start looking into the various report schedules you may already have in your environment. Also, stay tuned for the next article that will better show these schedules.

SSRS Custom Shared Schedule

Reporting Services


Reporting Services is a pretty feature rich tool for delivering reports to various consumers. There is plenty of power within Reporting Services (SSRS) giving one the ability to perform visualizations, render reports in various formats and even schedule reports to be delivered in different formats or on different schedules.

Unfortunately, the scheduling capability within SSRS is fairly weak. While it is true that one can accomplish a varied array of different schedules, the scheduling of SSRS is far below the power of SQL Agent (for instance).

I will explore the deficiency of the scheduling tool within SSRS in a very specific case. You may even be familiar with this deficiency already. Many organizations have the need to produce end of month reports that need to run on the last day of the month. If you are familiar with the scheduling tool within SSRS, you already know that this is not possible (at least through SSRS 2014). This article will help step you through how to schedule a report subscription to run on the last day of the month.

Schedule Options

Before diving into the custom schedules, let’s take a closer look at the options available for scheduling through SSRS.

While the tool does provide various options, the granularity certainly is not diverse enough to cover many legitimate scheduling needs – especially the “last day of month” requirement for many month end type of reports.

As you may be aware, SSRS subscriptions are actually run through the SQL Agent despite being set through SSRS. If I take a look at some of the scheduling options in SQL Agent, I can see the following.

Note here that there are various built-in options to schedule on a monthly basis, including the “last” option. If I wanted to look even closer at the available options I would see the following.

As you can see here, I can schedule for multiple different types of “last” options relative to the month. One of these options happens to be the last day. This helps to illustrate just how much more powerful and versatile the scheduling within SQL Agent is than what we get with the SSRS scheduler. All of this despite the fact that SSRS subscriptions are actually executed by the SQL Agent. Doesn’t that seem a bit odd?

Custom Schedule

Now that we better understand the limitations of the SSRS scheduler and given that SSRS subscriptions are executed through the SQL Agent, let’s move on to bigger, better and much more useful means of scheduling the SSRS reports.

The very first thing that you should do is to create a shared schedule, and it should be created as a run-once schedule. Let the schedule run that one time and then proceed on to the following steps. If you need help in creating a shared schedule, here is an msdn article. When you create the shared schedule, I recommend using a descriptive name that you can remember. This name will be useful in the next step. For the purposes of this article, my schedule is named “EndOfMonth”.

Once the schedule is created, the next thing to do is to query the ReportServer database. Make sure you know the name of your database. Some people have changed the ReportServer database name from the default. This is an important piece of information to remember. The query against the ReportServer database will be predominantly just to get the schedule id of the newly created schedule.
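The query is short and sweet (adjust the database name if your ReportServer database is not the default):

-- Fetch the id of the newly created shared schedule
SELECT s.ScheduleID, s.Name
FROM ReportServer.dbo.Schedule s
WHERE s.Name = 'EndOfMonth';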

When I run that query, I receive the following results.

I now want to take that scheduleid and then use it to determine which SQL Agent job is actually related to that schedule so I can fetch some info from the job. I could skip this entirely and go straight to the subsequent step, but it helps to understand what needs to be done there. So, from here let’s query the msdb database in SQL Server to fetch some info from the job system.
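A search of the job step commands, along these lines, will do it (paste in the ScheduleID returned by the previous query):

DECLARE @ScheduleID NVARCHAR(36) = '<your scheduleid here>';

-- Find the Agent job and step tied to the shared schedule
SELECT sj.name AS JobName
    , sjs.step_name AS StepName
    , sjs.command AS StepCommand
FROM msdb.dbo.sysjobs sj
INNER JOIN msdb.dbo.sysjobsteps sjs
    ON sjs.job_id = sj.job_id
WHERE sjs.command LIKE '%' + @ScheduleID + '%';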

The scheduleid is used within a command within a jobstep. By passing the scheduleid into this query and then comparing against the existing job steps, I am able to retrieve the job that is related to the SSRS shared schedule (subscription). When I run the preceding query, I receive results illustrated in the following image.

Take note of the items I highlighted in red. Within the job name, step name and command text for the step, I can find the scheduleid for that shared schedule that I created. Yes, I could easily have changed the query I used to compare to the job name and that would have worked just fine in this case. By querying the command, I can confirm that the schedule is actually being used. The most important piece of information in this result is the entire command text for the job. I will need to take this command text and use it to populate a brand new SQL Agent job. This is how I will get my custom schedule for the SSRS subscription.

From here, I just need to create a SQL Agent job that uses the options for a monthly schedule indicating last day from the two drop down menus illustrated previously in this article. Then all that is left is a sigh of relief and a boom bada bing.

Recap

SSRS does not have the built-in capability for some of the more complex and often times regularly required report schedules to meet various business requirements. By following the steps outlined in this article, you can circumvent that short-coming and achieve the needed business requirements while looking like a hero to those that need the more advanced report schedules.

Parse Dynamics AX Context Info

Dynamics AX and SQL Server

More and more I am seeing clients requiring assistance with the Microsoft Dynamics Suite. Each of the products in the suite comes with a different set of performance issues and gotchas. In this article I will only be discussing the AX product and an easy tweak to make troubleshooting that product much easier from the perspective of the database administrator. This tweak is to enable the context info from within the administration console.

Enable Context Info

Some may call this a critical setting that must be activated on every transaction heavy Dynamics AX AOS server. One of the most common reasons is that DAX user sessions frequently block one another. Occasionally the blocking may be uncomfortably long.

In order to enable this setting on each DAX AOS server, the following steps should be followed:

  • Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\DynamicsServer\6.01\Original (installed configuration). The last key, Original (installed configuration), is the key name for the current server configuration. If your system uses a different configuration than the original installed configuration, navigate to the currently active configuration.
  • Create a string registry value called ‘connectioncontext’ and set the value to 1.
  • Restart the AOS.

The steps to implement this change are not terribly difficult and the risk is very low. There is very minimal cost for sending this additional info; even on an extremely busy system for one client, we have yet to see a negative impact with this setting enabled.

Easy Troubleshooting for the DBA

Now that the context info is enabled within the application, this is where the pains of troubleshooting AX performance issues within the database becomes somewhat easier for the database administrator. Just enabling the setting doesn’t bring you to the promised land though. You still need to do a bit of work.

Once the setting is enabled, what actually happens is the AX application starts to send an extra chunk of data along with each connection to the database server. This chunk of data is the context info. The context info that AX decides to send along is not straight-forward to read however. The AX context info is sent to SQL Server as a varbinary.

What does it mean to be in varbinary format for you as the database administrator? This means that you still have a bit of work to do. Do you need to perform that extra work every time you look at the data? Well, the short answer is “it depends”! If you are smart about your tool-set (e.g. set of administration scripts) then you will save this extra work there. If you do not yet have a tool-set and rewrite your queries every time – you obviously fall at the far opposite end of the spectrum and will have much more work to do.

Whichever end of the spectrum you fall within, here is a script to integrate into your scripts to help make your AX DBA work just a tad bit easier.
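Here is a minimal sketch. It assumes the context info arrives as an ASCII string holding the AX session id followed by the AX user; the exact layout can vary between AX builds, so validate the substring logic against your own environment:

-- Decode the varbinary context_info into the AX session id and AX user
SELECT es.session_id AS sql_spid
    , LEFT(ctx.info, NULLIF(CHARINDEX(' ', ctx.info), 0) - 1) AS ax_session_id
    , LTRIM(SUBSTRING(ctx.info, CHARINDEX(' ', ctx.info) + 1, 128)) AS ax_user
FROM sys.dm_exec_sessions es
CROSS APPLY (SELECT REPLACE(CAST(es.context_info AS VARCHAR(128))
                  , CHAR(0), '') AS info) ctx
WHERE es.context_info IS NOT NULL
  AND es.context_info <> 0x;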

So What does it DO?

So what value does this query actually bring you? I have talked about it making life easier by enabling the context info from within AX, but I didn’t dive into any details on what it will provide.

Looking at the query I just provided, one can surmise that the context info will provide two significant pieces of information. The first bit is the Session ID. This is not the spid within SQL Server; rather, it is a session id that is a different value within AX. The second piece of information that is highly valuable is the User that is tied to that AX Session. The AX application will show as the service account for the application on all spids within SQL Server. The spid and spid user are fairly useless when trying to figure out who is causing what level of pain, since all users in SQL Server for AX will appear to be the same user. The SQL spid will be equally useless for the DAX admins because the spid will not match the DAX session id. Both of these factors will lead to an extra amount of frustration between the DBA and DAX Admin in the middle of a performance slowdown.

Being able to extract the DAX User and Session ID from the context info will significantly reduce troubleshooting time when in the trenches trying to figure out who is running what from within the application. This reduces the chances of taking a guess and gives good solid evidence that can be taken back to the business users and try to improve their processes and the overall performance of the system.

User Contains Invalid Characters – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Logins and Users

It seems appropriate to re-introduce the concept of principals (aka Logins and Users). Rather than go into depth about principals here though, I will refer you to a recent article on the topic. The article in question was another “basics” article and can be found here.

With that out of the way, it should be conceded that creating principals is a common practice and possibly a frequent requirement of the data professional. While creating those principals, there is a good chance that one will run into an absurd error every now and then. Today, I want to discuss one absurd error. The fix for the error may seem just as absurd as the error, but it is really easy to implement.

Invalid Characters

Here is the error message that is quite possible to encounter while creating principals.

Msg 15006, Level 16, State 1, Line 6
‘SomeD0main\jason’ is not a valid name because it contains invalid characters.

At first look, this error makes absolutely no sense. The error states there is an invalid character somewhere in the string “SomeD0main\jason”, yet every character in that string is supported and normal for the collation. This can be a head-scratcher for sure.

To better understand this error, let’s try to reproduce the error. First, we need to create a login.
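Something like this will do (substitute a Windows account that actually exists in your environment):

CREATE LOGIN [SomeDomain\jason] FROM WINDOWS;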

Here, I have used “SomeDomain” in lieu of my actual domain or local workstation name. This statement will complete successfully given the user exists within the domain or on the Windows workstation. Great so far!

The next step is to create a database user within the AdminDB (you can pick a database that exists in your environment) and map this user to the Login created in the previous step. This can be done with the following script:
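Here is a sketch of that script (reconstructed; read on for why it behaves the way it does):

USE AdminDB;
GO
CREATE USER [SomeDomain\jason] FOR LOGIN [SomeD0main\jason];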

Bam! Executing the script produces:

Msg 15006, Level 16, State 1, Line 6
‘SomeD0main\jason’ is not a valid name because it contains invalid characters.

This is where a close inspection of the script is required. Due to a fabulous fat finger, a 0 (zero) instead of an O (capital o) was typed in the second occurrence of “SomeDOmain”. This is easy enough to reproduce with a typo in any portion of the Windows login that already exists in SQL as a login principal.

The Fix

The fix is insanely easy once you figure out that invalid character actually means you mis-typed the Login portion of the Create User statement. The fix is to type the login correctly. Knowing is half the battle! Running into this error in the wild could cause you a few minutes trying to figure it out and prepping to throw something through the monitor.

Recap

In this article I have shown how a simple mistake can lead to a really obtuse error message that doesn’t seem to make much sense. A little care and attention to properly typing the login names will save you a bit of time and hair on the troubleshooting end of creating principals.

What Agent Job is Running – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Active Queries

If you are like me, you have had the opportunity on more than one occasion to try and figure out what is currently active on your SQL Server. The reason to figure this out is usually tied to some sort of performance issue that you have to dive in and troubleshoot.

When checking for current activity through sp_who2 or by querying the dmvs (sys.dm_exec_sessions, sys.dm_exec_requests, sys.dm_exec_connections), there is inevitably some sort of session that looks something like this:

[screenshot: a session whose program_name reads SQLAgent - TSQL JobStep (Job 0x… : Step 1)]

This is not very helpful in this format. What I have seen most people do when they see this kind of result (and they care enough to know what is running) is to open up the “Job Activity Monitor” and then try to figure out manually what job is truly running. You can imagine the nightmare this becomes if there are more than a few jobs running.

Currently Running Agent Jobs

There is a significantly easier way to find the name and step of the agent job that is currently running when using your favorite dmv query to explore current activity. Let’s start with a simple query.
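Something like this sketch works:

-- Sessions spawned by the SQL Agent
SELECT es.session_id
    , es.program_name
FROM sys.dm_exec_sessions es
WHERE es.program_name LIKE 'SQLAgent%';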

If you have jobs that are currently executing, then this query should return some results representative of the running jobs. Unfortunately, you only know that the source of the spid happens to be the SQLAgent. I am going to dirty up this simple query with quite a bit more code so it looks like the following:
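Here is a reconstruction in that same spirit (not verbatim; the substring offsets assume the standard “SQLAgent - TSQL JobStep (Job 0x… : Step N)” format of program_name):

SELECT es.session_id
    , sj.name AS job_name
    , sjs.step_id
    , sjs.step_name
    , txt.query_text
FROM sys.dm_exec_sessions es
INNER JOIN sys.dm_exec_requests er
    ON er.session_id = es.session_id
INNER JOIN msdb.dbo.sysjobs sj
    -- decode the hex jobid embedded in program_name back to the job guid
    ON sj.job_id = CONVERT(UNIQUEIDENTIFIER
                 , CONVERT(VARBINARY(16), SUBSTRING(es.program_name, 30, 34), 1))
INNER JOIN msdb.dbo.sysjobsteps sjs
    ON sjs.job_id = sj.job_id
    -- pull the step number from the tail of program_name
   AND sjs.step_id = CONVERT(INT
                 , REPLACE(SUBSTRING(es.program_name
                 , CHARINDEX(': Step ', es.program_name) + 7, 20), ')', ''))
CROSS APPLY (SELECT REPLACE(REPLACE(st.text, CHAR(13), ' ')
                  , CHAR(10), ' ') AS query_text
             FROM sys.dm_exec_sql_text(er.sql_handle) st) txt
WHERE es.program_name LIKE 'SQLAgent - TSQL JobStep%';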

There is a good reason for how much I have complicated the simple version of the query. I could have left the query significantly less complicated if not for the fact that I wanted to also see the sql text in a well-formatted manner. That accounts for the entire segment in the cross apply.

To retrieve the name of the job that is running, I actually only need this little piece of code right here:
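Isolated from the larger query, that piece is the conversion in the join to sysjobs – again a sketch assuming the standard program_name format:

SELECT j.name AS job_name
FROM sys.dm_exec_sessions es
INNER JOIN msdb.dbo.sysjobs j
        ON j.job_id = CONVERT(uniqueidentifier,
                        CONVERT(varbinary(16), SUBSTRING(es.program_name, 30, 34), 1))
WHERE es.program_name LIKE 'SQLAgent - TSQL JobStep%';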

This takes the varbinary representation of the job id (as embedded in the program name string) and converts it to the human-friendly uniqueidentifier format so we can compare it to the actual job id and finally get the job name. From there, I can then retrieve the job step to see exactly where in the process the job is presently executing.

Executing this query, I receive the following results for the job I currently have executing.

[Screenshot: decrypt_agentjob – query results showing the decoded job name, step, and statement for the running job]

If I take this code and then integrate it into my favorite query to check for running sessions while investigating issues on the server, I have become just that much more efficient as a DBA.

Recap

It is quite common to be required to investigate performance issues on the server. Even if it is not a performance issue, there are frequent needs that require us to know what queries are executing at various points in time throughout the day. A complication to this is that an Agent job presents itself only as the varbinary form of its job id in the program name during many of these spot checks.

The means to circumvent this complication is a little extra code in your favorite script du jour for investigating running sessions. I recommend adding a code segment, such as the code shown in this article, to help simplify your research tasks and help you look more like a rockstar. Of course, you could always resort to the other method touched on in this article – trial and guess through a manual process of elimination via the “Job Activity Monitor”.

Database Backups – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Database Backups

In my previous two articles, I touched on some data that is available to the data professional with regards to backups and database restores. In the article discussing the restore history data, I alluded to another topic related to restores that could have been discussed.

Today, I will be diving into that alternate direction. This direction is related to database backups. More specifically, how do you truly know if you have a successful backup? Here’s a hint: it has to do with restores.

If you haven’t heard it before, here it is in simple terms: “you do not have a backup until you have restored the backup”. If you ponder that for a minute, it is quite logical and makes plenty of sense. The only way to confirm that you have a backup is to test the purpose of the backup, and that is to recover the database back to the state represented by the backup.

Testing Backups

Creating a database backup is a pretty straightforward task. This is easy enough to do through the GUI, from a T-SQL script, or even, *shudder*, from a maintenance plan. Creating the backup is the easy part. If you are not creating backups, I hope there is a good reason for it (and yes, there are valid reasons for not creating a backup of certain databases).

This article will not explore the nuances of the ways to create a backup of a database. Suffice it to say, there are multiple options and methods. This article will focus on the second part of creating a successful backup – testing the backup. In other words, restoring the backup that was created.

So how does one go about testing a backup? The simple answer, as already stated, is to perform a restore. This means through the use of the RESTORE command. But is it really that simple?
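At its most basic, restoring a backup onto a test instance looks something like this sketch (the disk path and logical file names are placeholders that will differ per environment):

RESTORE DATABASE AdventureWorks2014_Test
FROM DISK = N'C:\Backups\AdventureWorks2014.bak'    -- placeholder path
WITH MOVE N'AdventureWorks2014_Data' TO N'C:\Data\AdventureWorks2014_Test.mdf'    -- assumed logical file names
   , MOVE N'AdventureWorks2014_Log' TO N'C:\Data\AdventureWorks2014_Test_log.ldf'
   , RECOVERY
   , STATS = 10;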

If I have the following backups available for my AdventureWorks2014 database, where should I test the validity of the backup?

[Screenshot: avail_backups – the list of backups available for the AdventureWorks2014 database]

Would it make much sense to take any of these backups and test the restore process on the same server as the source of the backup? To be honest, that is a question that may have an entirely different answer in each and every environment. It requires a little insight into the business needs, the available resources, and the procedures and policies in place for the environment. For me, I typically like to create an automated restore process that restores the previous night’s backup onto a test/stage/dev server that is not in production.

I have written previously on how I do this sort of automated restore. You can read all about it from the original article here. In that article I provide a script to assist with the restore of these backups. You are welcome to test it out and play around with it while setting up your backup validation environment. In using the script, the user assumes all risk.

I recommend an automated restore system to ensure the backups are tested on a routine basis. The benefits of doing this are more than just a few. That said, here are a few of those benefits:

  1. Confidence in the backups
  2. Quick recovery in the event of failure or disaster
  3. A usable environment for quick data comparison
  4. A passable environment for reporting

And the best benefit is that you can get a good night’s sleep knowing your backups are reliable.

Caveat

With automated restores, there may be an occasional failure. Heck, you may run into a bit of a head-scratcher here or there while trying to get them to work in the first place. It’s not technically easy the first time. With some practice, it gets much easier. This method is intended to be a cheap solution. There may be a pricier solution out there that can be bought. If that is in your budget – go with it. The main point is to do it.

After you get these restores working, I recommend using the scripts in this article to check the restore history from time to time. Take it the extra mile and generate some reports from that data.
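A minimal sketch of such a restore history check against msdb might look like this:

SELECT rh.destination_database_name
     , rh.restore_date
     , rh.restore_type
     , bs.database_name AS source_database
     , bmf.physical_device_name
FROM msdb.dbo.restorehistory rh
INNER JOIN msdb.dbo.backupset bs
        ON rh.backup_set_id = bs.backup_set_id
INNER JOIN msdb.dbo.backupmediafamily bmf
        ON bs.media_set_id = bmf.media_set_id
ORDER BY rh.restore_date DESC;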

Recap

I have provided some information on how and why to restore a database. The primary reason being that you never know how good your backup is until you have restored it. Some say you do not have a backup until you have restored it. This article offered a solution and some opinions on how to do that efficiently.
