Parse Dynamics AX Context Info

Dynamics AX and SQL Server

More and more I am seeing clients requiring assistance with the Microsoft Dynamics Suite. Each of the products in the suite comes with a different set of performance issues and gotchas. In this article I will only be discussing the AX product and an easy tweak to make troubleshooting that product much easier from the perspective of the database administrator. This tweak is to enable the context info from within the administration console.

Enable Context Info

Some may call this a critical setting that must be activated on every transaction heavy Dynamics AX AOS server. One of the most common reasons is that DAX user sessions frequently block one another. Occasionally the blocking may be uncomfortably long.

In order to enable this setting on each DAX AOS server, follow these steps:

  • Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\DynamicsServer\6.01\Original (installed configuration). The last key, Original (installed configuration), is the key name for the current server configuration. If your system uses a different configuration than the original installed configuration, navigate to the currently active configuration.
  • Create a string registry value called ‘connectioncontext’ and set the value to 1.
  • Restart the AOS.

The steps to implement this change are not terribly difficult, and the risk is very low. There is minimal cost for sending this additional info along with each connection; even on an extremely busy system for one client, we have yet to see a negative impact with this setting enabled.

Easy Troubleshooting for the DBA

Now that the context info is enabled within the application, troubleshooting AX performance issues from the database side becomes somewhat easier for the database administrator. Just enabling the setting doesn’t bring you to the promised land though. You still need to do a bit of work.

Once the setting is enabled, the AX application starts to send an extra chunk of data along with each connection to the database server. This chunk of data is the context info. The context info that AX sends along is not straightforward to read, however, because it arrives in SQL Server as a varbinary.

What does the varbinary format mean for you as the database administrator? It means that you still have a bit of work to do. Do you need to perform that extra work every time you look at the data? Well, the short answer is “it depends”! If you are smart about your tool-set (e.g. a set of administration scripts), then you do this extra work once and save it there. If you do not yet have a tool-set and rewrite your queries every time, you fall at the far opposite end of the spectrum and will have much more work to do.

Whichever end of the spectrum you fall within, here is a script to integrate into your tool-set to help make your AX DBA work just a tad bit easier.
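
(The exact script from the original post is not reproduced here; the following is a sketch of the idea. It converts the varbinary context_info to a readable string and then splits it apart. The assumption that the AX session id and user are space-delimited is purely for illustration; inspect the raw string in your environment and adjust the parsing accordingly.)

SELECT  es.session_id AS sql_spid,
        es.login_name,
        es.program_name,
        ci.ax_context_string,
        LEFT(ci.ax_context_string,
             CHARINDEX(' ', ci.ax_context_string + ' ') - 1) AS ax_session_id,
        LTRIM(SUBSTRING(ci.ax_context_string,
             CHARINDEX(' ', ci.ax_context_string + ' ') + 1, 128)) AS ax_user
FROM    sys.dm_exec_sessions es
        CROSS APPLY ( SELECT LTRIM(RTRIM(CAST(es.context_info AS VARCHAR(128))))
                    ) ci (ax_context_string)
WHERE   es.context_info <> 0x;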

So What does it DO?

So what value does this query actually bring you? I have talked about how enabling the context info from within AX makes life easier, but I didn’t dive into any details on what it will provide.

Looking at the query I just provided, one can surmise that the context info provides two significant pieces of information. The first is the Session ID. This is not the spid within SQL Server; rather, it is the session id that AX tracks, which is a different value. The second highly valuable piece of information is the user tied to that AX session. Within SQL Server, every AX spid shows the application’s service account, so the spid and spid user are fairly useless when trying to figure out who is causing what level of pain, since all AX users appear to be the same user. The SQL spid is equally useless to the DAX admins because it will not match the DAX session id. Both of these factors lead to an extra amount of frustration between the DBA and the DAX admin in the middle of a performance slowdown.

Being able to extract the DAX User and Session ID from the context info will significantly reduce troubleshooting time when you are in the trenches trying to figure out who is running what from within the application. It reduces the guesswork and gives good solid evidence that can be taken back to the business users to improve their processes and the overall performance of the system.

User Contains Invalid Characters – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Logins and Users

It seems appropriate to re-introduce the concept of principals (aka Logins and Users). Rather than go into depth about principals here though, I will refer you to a recent article on the topic. The article in question was another “basics” article and can be found here.

With that out of the way, it should be conceded that creating principals is a common practice and possibly a frequent requirement of the data professional. While creating those principals, there is a good chance that one will run into an absurd error every now and then. Today, I want to discuss one such absurd error. The fix for the error may seem just as absurd as the error, but it is really easy to implement.

Invalid Characters

Here is the error message that is quite possible to encounter while creating principals.

Msg 15006, Level 16, State 1, Line 6
‘SomeDOmain\jason’ is not a valid name because it contains invalid characters.

At first look, this error makes absolutely no sense. The error states there is an invalid character somewhere in the string “SomeDomain\jason”, yet every character in that string is supported and normal for the collation. This can be a head-scratcher for sure.

To better understand this error, let’s try to reproduce the error. First, we need to create a login.
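
(A minimal sketch of that step; “SomeDomain\jason” is a placeholder for a real domain or workstation account.)

USE [master];
GO
-- Placeholder Windows account; substitute your own domain\user
CREATE LOGIN [SomeDomain\jason] FROM WINDOWS;
GO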

Here, I have used “SomeDomain” in lieu of my actual domain or local workstation name. This statement will complete successfully given the user exists within the domain or on the Windows workstation. Great so far!

The next step is to create a database user within the AdminDB (you can pick a database that exists in your environment) and map this user to the Login created in the previous step. This can be done with the following script:
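
(A sketch of that script. AdminDB is simply the example database, and the FOR LOGIN value is deliberately fat-fingered so it does not match the login created above.)

USE [AdminDB];
GO
-- The FOR LOGIN name is mis-typed on purpose (it does not match the login principal created earlier)
CREATE USER [SomeDomain\jason] FOR LOGIN [SomeDOmain\jason];
GO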

Bam! Executing the script produces:

Msg 15006, Level 16, State 1, Line 6
‘SomeDOmain\jason’ is not a valid name because it contains invalid characters.

This is where a close inspection of the script is required. Due to a fabulous fat finger, a 0 (zero) instead of O (capital o) was typed in the second occurrence of “SomeDOmain”. This is easy enough to reproduce with a typo of any portion of the windows login that already exists in SQL as a login principal.

The Fix

The fix is insanely easy once you figure out that “invalid characters” actually means you mis-typed the login portion of the CREATE USER statement. The fix is to type the login correctly. Knowing is half the battle! Running into this error in the wild could cost you a few minutes of head-scratching and prepping to throw something through the monitor.

Recap

In this article I have shown how a simple mistake can lead to a really obtuse error message that doesn’t seem to make much sense. A little care and attention to properly typing the login names will save you a bit of time and hair on the troubleshooting end of creating principals.

What Agent Job is Running – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Active Queries

If you are like me, you have had the opportunity on more than one occasion to try and figure out what is currently active on your SQL Server. The reason to try and figure this out is usually tied to some sort of performance issue that you have to dive in and troubleshoot.

When checking for current activity through sp_who2 or by querying the dmvs (sys.dm_exec_sessions, sys.dm_exec_requests, sys.dm_exec_connections), there is inevitably some sort of session that looks something like this:

[Image: active_job]

This is not very helpful in this format. What I have seen most people do when they see this kind of result (and they care enough to know what is running) is to open up the “Job Activity Monitor” and then try to figure out manually what job is truly running. You can imagine the nightmare this becomes if there are more than a few jobs running.

Currently Running Agent Jobs

There is a significantly easier way to find the name and step of the agent job that is currently running when using your favorite dmv query to explore current activity. Let’s start with a simple query.
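
(The original query is not reproduced here; a minimal sketch of that starting point could look like this.)

SELECT  es.session_id,
        es.login_name,
        es.program_name,
        er.status,
        er.command
FROM    sys.dm_exec_sessions es
        INNER JOIN sys.dm_exec_requests er
            ON es.session_id = er.session_id
WHERE   es.program_name LIKE 'SQLAgent%';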

If you have jobs that are currently executing, then this query should return some results representative of the running jobs. Unfortunately, you only know that the source of the spid happens to be the SQLAgent. I am going to dirty up this simple query with quite a bit more code so it looks like the following:
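
(What follows is a hedged reconstruction of that idea rather than the original script: it decodes the job id embedded in the program name, looks up the job and step in msdb, and pulls the currently executing statement via a cross apply.)

SELECT  es.session_id,
        sj.name AS job_name,
        sjs.step_id,
        sjs.step_name,
        er.status,
        SUBSTRING(est.text, (er.statement_start_offset / 2) + 1,
            ((CASE er.statement_end_offset
                   WHEN -1 THEN DATALENGTH(est.text)
                   ELSE er.statement_end_offset
              END - er.statement_start_offset) / 2) + 1) AS running_statement
FROM    sys.dm_exec_sessions es
        INNER JOIN sys.dm_exec_requests er
            ON es.session_id = er.session_id
        INNER JOIN msdb.dbo.sysjobs sj
            ON sj.job_id = CAST(CONVERT(VARBINARY(16),
                   SUBSTRING(es.program_name, CHARINDEX('0x', es.program_name), 34), 1)
                   AS UNIQUEIDENTIFIER)
        LEFT OUTER JOIN msdb.dbo.sysjobsteps sjs
            ON sjs.job_id = sj.job_id
           AND sjs.step_id = CAST(SUBSTRING(es.program_name,
                   CHARINDEX(': Step ', es.program_name) + 7,
                   CHARINDEX(')', es.program_name, CHARINDEX(': Step ', es.program_name))
                     - (CHARINDEX(': Step ', es.program_name) + 7)) AS INT)
        CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) est
WHERE   es.program_name LIKE 'SQLAgent - TSQL JobStep%';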

There is a good reason for how much I have complicated the simple version of the query. I could have left the query significantly less complicated if not for the fact that I also wanted the sql text in a well-formatted manner. That accounts for the entire segment in the cross apply.

To retrieve the name of the job that is running, I actually only need this little piece of code right here:
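
(Isolated into a runnable form, and assuming the standard “SQLAgent - TSQL JobStep (Job 0x… : Step n)” program name format, the conversion boils down to this.)

SELECT  es.session_id,
        sj.name AS job_name
FROM    sys.dm_exec_sessions es
        INNER JOIN msdb.dbo.sysjobs sj
            ON sj.job_id = CAST(CONVERT(VARBINARY(16),
                   SUBSTRING(es.program_name, CHARINDEX('0x', es.program_name), 34), 1)
                   AS UNIQUEIDENTIFIER)
WHERE   es.program_name LIKE 'SQLAgent - TSQL JobStep%';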

This takes the varbinary representation of the job id (as embedded in the program name) and converts it to the human-friendly form so we can compare it to the actual job id and then finally get the job name. From there, I can then retrieve the job step to see exactly where in the process the job is presently executing.

Executing this query, I receive the following results for the job that I have executing right now.

[Image: decrypt_agentjob]

If I take this code and then integrate it into my favorite query to check for running sessions while investigating issues on the server, I have become just that much more efficient as a DBA.

Recap

It is quite common to be required to investigate performance issues on the server. Even if not a performance issue, there are frequent needs that require us to know what queries are executing at various points in time throughout the day. A complication to this is that, for Agent jobs, the program name reported during many of these spot checks shows only a varbinary representation of the job id rather than the job name.

The means to circumvent this complication is a little extra code in your favorite script du jour used to investigate running sessions. I recommend adding a code segment, such as the code I have shown in this article, to help simplify your research tasks and help you look more like a rockstar. Of course, you could always resort to the other method touched on in this article – trial and guesswork through a manual process of elimination via “Job Activity Monitor”.

Database Backups – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Database Backups

In my previous two articles, I touched on some data that is available to the data professional with regards to backups and database restores. In the article discussing the restore history data, I alluded to another topic related to restores that could have been discussed.

Today, I will be diving into that alternate direction. This direction is related to database backups. More specifically, how do you truly know if you have a successful backup? Here’s a hint: it has to do with restores.

If you haven’t heard it before, here it is in simple terms: “you do not have a backup until you have restored the backup”. If you ponder that for a minute, it is quite logical and makes plenty of sense. The only way to confirm that you have a backup is to test the purpose of the backup, and that is to recover the database back to the state represented by the backup.

Testing Backups

Creating a database backup is a pretty straight-forward task. This is easy enough to do through the GUI or from a tsql script, or even, *shudder*, from a maintenance plan. Creating the backup is the easy part. If you are not creating backups, I hope there is a good reason for it (and yes there are valid reasons for not creating a backup of certain databases).

This article will not explore the nuances of the ways to create a backup of a database. Suffice it to say, there are multiple options and methods. This article will focus on the second part of creating a successful backup – testing the backup. In other words, restoring the backup that was created.

So how does one go about testing a backup? The simple answer as already stated is to perform a restore. This means through the use of the restore command. But is it really that simple?
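
(Syntactically, a basic restore is a single statement. A bare-bones illustration follows; the database name, backup file, and file paths are all placeholders.)

RESTORE DATABASE [AdventureWorks2014_Test]
FROM DISK = N'C:\Database\Backups\AdventureWorks2014_Full.bak'
WITH MOVE N'AdventureWorks2014_Data' TO N'C:\Database\Data\AdventureWorks2014_Test.mdf',
     MOVE N'AdventureWorks2014_Log' TO N'C:\Database\Logs\AdventureWorks2014_Test.ldf',
     RECOVERY, STATS = 10;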

If I have the following backups available for my AdventureWorks2014 database, where should I test the validity of the backup?

[Image: avail_backups]

Would it make much sense to take any of these backups and test the restore process on the same server as the source of the backup? To be honest, that is a question that may have an entirely different answer in each and every environment. That is a question that requires a little insight into the business needs, available resources and procedures and policies in place for the environment. For me, I typically like to create an automated restore process that will restore the previous night’s backup onto a test/stage/dev server that is not in production.

I have written previously on how I do this sort of automated restore. You can read all about it from the original article here. In that article I provide a script to assist with the restore of these backups. You are welcome to test it out and play around with it while setting up your backup validation environment. In using the script, the user assumes all risk.

I recommend an automated restore system to ensure the backups are tested on a routine basis. The benefits of doing this are more than just a few. That said, here are some of those benefits:

  • Confidence in backups
  • Quick recovery in the event of failure or disaster
  • A usable environment for quick data comparison
  • A passable environment for reporting
  • And the best benefit of all: a good night’s sleep, knowing your backups are reliable

Caveat

With automated restores, there may be an occasional failure. Heck, you may run into a bit of a head-scratcher here or there trying to get them to work in the first place. It’s not technically easy the first time, but with some practice it gets much easier. This method is intended to be a cheap solution. There are pricier solutions out there that can be bought; if one is in your budget – go with it. The main point is to do it.

After you get these restores working, I recommend using the scripts in this article to check the restore history from time to time. Take it an extra mile and generate some reports from that data.

Recap

I have provided some information on how and why to restore a database. The primary reason is that you never know how good your backup is until you have restored it. Some say you do not have a backup until you have restored it. This article offers a solution, and some opinions, on how to do that efficiently.

Database Backup History – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Database Backup History

As a data professional, there is little to no doubt that one of the most important tasks is to ensure the data is backed up regularly. We do this for several reasons, whether it be to recover in the event of a problem or to compare yesterday’s data to how it looks today.

When doing a backup, how often do you report on the backup success? How frequently do you refer to the backup history to compare multiple runs or even generate reports for interested parties? The fact of the matter is that this history should be referenced on a routine basis to generate reports that should be read and not ignored.

Backup Reporting

Right about here you may be asking yourself why would somebody generate reports on the backups. After all, the job succeeds or fails and you respond appropriately to that, right? Does one really need to generate a report about the success of the backup job? Well, that is for you to decide. I would say it wouldn’t hurt to generate a report about the success rate of your backups. Management often likes to see reports concerning important processes. That said, that is not the reporting that I am referencing here.

The reporting I believe is useful in this case is indeed related to the historical information of your backups, but it is less about the success of the job and so forth. Of great benefit from the backup history data is the information on frequency of backup and also the size of the backup. These pieces of information can be somewhat useful when looking to forecast data growth. That is one example of a possible report when working with the backup history data that is stored.

Let’s take a look at a query that will help retrieve some of this backup history data.
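
(The author’s script uses dynamic SQL, as explained next, and is not reproduced here. As a simplified stand-in, a static sketch for SQL 2008 and later might look like this.)

DECLARE @DBName  sysname = NULL,   -- NULL returns history for all databases
        @NumDays INT     = 30;     -- how many days of history to return

SELECT  bs.database_name,
        bs.type AS backup_type,                   -- D = Full, I = Differential
        bs.backup_start_date,
        DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_seconds,
        bs.backup_size / 1048576.0 AS backup_size_mb,
        bs.compressed_backup_size / 1048576.0 AS compressed_size_mb,   -- not present on SQL 2005
        bmf.physical_device_name
FROM    msdb.dbo.backupset bs
        INNER JOIN msdb.dbo.backupmediafamily bmf
            ON bs.media_set_id = bmf.media_set_id
WHERE   bs.type IN ('D', 'I')
        AND bs.backup_start_date >= DATEADD(DAY, -@NumDays, GETDATE())
        AND (bs.database_name = @DBName OR @DBName IS NULL)
ORDER BY bs.database_name, bs.backup_start_date DESC;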

Off the top, yes the script uses dynamic sql. I use dynamic sql because I use this script on multiple versions of SQL Server. One of the data points I gather in this script is the compressed backup size. That attribute is not available in SQL 2005. Due to that, and not wanting to maintain multiple scripts, I use the dynamic sql to account for that missing column.

I have also set the script to accept a database name parameter. If a name is provided, then only the backup history for that database is returned. If the parameter is left NULL, then the backup history for all databases will be returned. Additionally, I added a number of days parameter to limit the scope of the report to a specific range of days.

Among the data points returned in this script, you will note there is the duration of the backup, the date, and even the size of the backup. All of these attributes can help me to forecast future storage requirements both for the backup storage as well as for the data volume. Additionally, by knowing the duration of the backup and the trend of that duration, I can adjust maintenance schedules accordingly.

Caveat

In this particular script, I only retrieve the backup history for two backup types: FULL and DIFFERENTIAL. A small adjustment can be made to include other backup types if necessary. The line to change is:
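
(In the sketch shown earlier, that filter is this line.)

WHERE   bs.type IN ('D', 'I')   -- add other type codes from the list below as needed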

Other possible values for backups are:

  • D = Database
  • I = Differential database
  • L = Log
  • F = File or filegroup
  • G = Differential file
  • P = Partial
  • Q = Differential partial

If you are interested in learning more about backup information, you can read this msdn article – here.

Recap

I have provided a quick example of how to retrieve pertinent backup history information. This information can be used to assist in forecasting storage requirements as well as assist in the job scheduling as backup jobs begin to run longer and longer as the data gets larger and larger.

For a related article, check out this article on how to retrieve the restore history from the msdb database.

Last Restore of a Database – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Database Restore

As a data professional, one should be somewhat cognizant of backups with regards to the data. From the point of the creation of the backup, we can go in one of two directions with this article. Due to the importance of each of the directions, I will devote a basics article to each.

The order of publication of these articles in no way denotes a level of priority for the topics. I feel each is equally important. The two directions both deal with the restore of that backup that was created. One direction is in regards to reporting and the other direction is the actual restore.

Last Restore Date

Something that I seem to be getting asked more and more frequently is how to determine when a database was last restored. Maybe it is just a hot topic and maybe it will be short lived, but it sure seems to be something that is on the tip of people’s minds of late. So, how does one determine the last time a database was restored? The answer is actually quite simple. Within the msdb database, there is a table (that maintains the history of database restores) called restorehistory.

The restorehistory table stores information that will help you better understand what kind of restore was performed, who did the restore, when it was done, and even some information about the backup that was used to perform the restore. You can read more about the specifics of this table from the msdn article – here.

A quick means to query this table is via the following script:
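
(The original script is not shown here, but a simple sketch against that table could look like the following; TestDB is just my sample database.)

USE msdb;
GO
SELECT  rh.destination_database_name,
        rh.restore_date,
        rh.restore_type,      -- D = Database, F = File, L = Log, etc.
        rh.user_name,
        bs.backup_start_date,
        bmf.physical_device_name
FROM    dbo.restorehistory rh
        LEFT OUTER JOIN dbo.backupset bs
            ON rh.backup_set_id = bs.backup_set_id
        LEFT OUTER JOIN dbo.backupmediafamily bmf
            ON bs.media_set_id = bmf.media_set_id
WHERE   rh.destination_database_name = N'TestDB'
ORDER BY rh.restore_date DESC;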

Now, that is an extremely simple query looking at my TestDB database. The results of that query on my system are as follows:

[Image: db_restores_example]

As is shown in the preceding illustration, I have multiple restores and restore types of this particular database. This is a good start. Let’s take this query up a notch. Not all production environments will have databases that are restored on the same server. What do we do to report on databases that are restored on the production server vs. the databases that exist on the production server? That is where this next query comes into play.
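
(A hedged sketch of that kind of query; the parameter names and exact shape here are illustrative rather than the original script.)

DECLARE @DBName       sysname = N'AdventureWorks2014',  -- NULL returns all databases
        @OnlyRestored BIT     = 0;                      -- 1 = only databases with restore history

SELECT  d.name AS database_name,
        rh.restore_date,
        rh.restore_type,
        rh.user_name
FROM    sys.databases d
        LEFT OUTER JOIN msdb.dbo.restorehistory rh
            ON rh.destination_database_name = d.name
WHERE   (d.name = @DBName OR @DBName IS NULL)
        AND (@OnlyRestored = 0 OR rh.restore_history_id IS NOT NULL)
ORDER BY d.name, rh.restore_date DESC;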

In this latter query, I have added a few things to help for various reporting needs. You never know when the CIO or an auditor is going to come to you with a request to filter the data differently for restored databases. In this vein, I added parameters that will help filter the results down to a specific database, return all databases, or return all databases that have a restore history on the server. In this case, I decided to use the AdventureWorks2014 database as my example database. You will want to change the parameter as needed. If you pass a null value to the database name, then all databases can be returned in the result set – depending on the values used for the remaining parameters.

Caveat

This is all fine and well if the databases are restored onto the same server as the source database, right? What about the case where the database is restored to a different server? Well, the query still works for reporting the restored databases on the server where the database is restored. The complexity comes when trying to correlate back to the original production server. My recommendation here is good documentation and an administrative linked server that is not accessible via the application accounts or any user but the DBA group.

Recap

I have provided two quick examples of how to retrieve the restore history for your databases. I recommend that the restore history be checked on a routine basis. You never know when an over-permissioned user may decide to restore a database for you and then have all of the rest of the users coming to you to determine what happened. This will also be essential when we routinely test our database backups. Stay tuned for the next article where I will explore that aspect of database restores.

Finding Deprecated Uses in SQL Server

 


How well do you know your environment? You probably know all of the jobs that are running, the frequency that indexes need to be rebuilt, and even which users have which level of access to each object in the SQL Server instance. But do you know whether your applications are accessing deprecated datatypes over 300 million times a week? What if your TSQL constructs are a bit archaic? Would you know that the code needs to be updated? Do you know how to find deprecated uses in SQL Server?

In this article, I will explore how to use Extended Events to track feature use and abuse. To be more precise, I will share how this tool can help you better understand all of the ways that your applications have been abusing your database by continuing to employ the use of deprecated features, syntax, or constructs in general. In case you are a bit behind in your exploration of XEvents, I have the perfect solution for you – my series on the topic that is continually growing. You can explore the full list of articles in the series by visiting the table of contents – here.

Audit Deprecated Uses

I would dare say that most data professionals think there is some use of deprecated constructs, datatypes or features within their environment. I would double down on that and say that most do not know just how bad it really may be. To find just how bad it really is, we need to audit for the use and abuse of these deprecation events.

Right here would be a good time to point out that your mileage may vary. Some items that are deprecated are more painful than others. Some may be deprecated and may have been on the list for 10+ years at this point. The point is, know your environment and then use good judgement to determine which items in your results need the most attention to fix and update. Why? Well, things really may break especially if you are looking to upgrade to a new version of SQL Server. Just because an item is still available in your current edition, that does not ensure it will still be available in a future release of SQL Server.

Now for the juicy stuff. As I mentioned, we need to find when and where a deprecated feature or syntax is employed, and there are a couple of neat little events within Extended Events that can help track each time that happens. How do we do that? Use the deprecation_announcement and deprecation_final_support events. To help create sessions to track these events, I have the following script to create an “audit” session to do exactly that.
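
(The original session definition is not reproduced here. A sketch along those lines for SQL 2012 and later follows; the session name, database filter, file path, and chosen actions are illustrative.)

CREATE EVENT SESSION [AuditDeprecatedUses]
ON SERVER
ADD EVENT sqlserver.deprecation_announcement
(   ACTION ( sqlserver.client_app_name, sqlserver.client_hostname,
             sqlserver.database_name, sqlserver.nt_username,
             sqlserver.server_principal_name, sqlserver.sql_text )
    WHERE ( sqlserver.database_name = N'AdventureWorks2014' ) ),
ADD EVENT sqlserver.deprecation_final_support
(   ACTION ( sqlserver.client_app_name, sqlserver.client_hostname,
             sqlserver.database_name, sqlserver.nt_username,
             sqlserver.server_principal_name, sqlserver.sql_text )
    WHERE ( sqlserver.database_name = N'AdventureWorks2014' ) )
ADD TARGET package0.event_file
(   SET filename = N'C:\Database\XE\AuditDeprecatedUses.xel' )
WITH ( MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON );
GO

ALTER EVENT SESSION [AuditDeprecatedUses] ON SERVER STATE = START;
GO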

 

And just in case you are using 2008 or 2008R2, use this version instead.
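
(A comparable sketch using the 2008/2008R2 syntax, again illustrative; note the older file target name and the database_id filter.)

CREATE EVENT SESSION [AuditDeprecatedUses]
ON SERVER
ADD EVENT sqlserver.deprecation_announcement
(   ACTION ( sqlserver.client_app_name, sqlserver.client_hostname,
             sqlserver.nt_username, sqlserver.sql_text )
    WHERE ( sqlserver.database_id = 5 ) ),       -- must use a database id on 2008/2008R2
ADD EVENT sqlserver.deprecation_final_support
(   ACTION ( sqlserver.client_app_name, sqlserver.client_hostname,
             sqlserver.nt_username, sqlserver.sql_text )
    WHERE ( sqlserver.database_id = 5 ) )
ADD TARGET package0.asynchronous_file_target
(   SET filename     = N'C:\Database\XE\AuditDeprecatedUses.xel',
        metadatafile = N'C:\Database\XE\AuditDeprecatedUses.xem' )
WITH ( MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON );
GO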

There are slight differences between these two sessions. First, in the 2008 version of the script, I have to provide database ids instead of names. That is a shortcoming of the 2008 and 2008R2 implementations of Extended Events. Additionally, the file target is different between the two (recall that the file target was renamed). And lastly, there are a few actions that I included in the 2012 version of the script that are not available in 2008 and R2.

With the session in place, I am now going to run through some sample scripts that will cause deprecation events to occur. I am sticking with my 2014 instance for this segment. That is important to note because different events may occur for different versions of SQL Server. Additionally, the parse script I will share will require a slight change for 2008 and R2 (again related to the file target name).
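
(For reference, statements along these lines are the sort of thing that will raise deprecation events; exactly which event fires, and for which construct, varies by version.)

USE AdventureWorks2014;
GO
-- Referencing a compatibility view such as sysobjects instead of sys.objects
SELECT TOP (5) name, id FROM sysobjects;
GO
-- Using deprecated large-object datatypes (text / image)
CREATE TABLE #DeprecatedTypes (SomeText TEXT, SomeImage IMAGE);
DROP TABLE #DeprecatedTypes;
GO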

Now to take a peek at the data with this next script.
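
(A hedged sketch of that shredding script; the file path matches the session sketched above, and on 2008/2008R2 the metadata file parameter would also be supplied.)

SELECT  xe.event_data.value('(event/@name)[1]', 'varchar(60)') AS event_name,
        xe.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
        xe.event_data.value('(event/data[@name="feature_id"]/value)[1]', 'int') AS feature_id,
        xe.event_data.value('(event/data[@name="feature"]/value)[1]', 'varchar(256)') AS feature,
        xe.event_data.value('(event/data[@name="message"]/value)[1]', 'varchar(max)') AS [message],
        xe.event_data.value('(event/action[@name="sql_text"]/value)[1]', 'varchar(max)') AS sql_text,
        xe.event_data.value('(event/action[@name="client_hostname"]/value)[1]', 'varchar(128)') AS client_hostname,
        xe.event_data.value('(event/action[@name="nt_username"]/value)[1]', 'varchar(128)') AS nt_username
FROM (  SELECT CAST(t.event_data AS XML) AS event_data
        FROM   sys.fn_xe_file_target_read_file(N'C:\Database\XE\AuditDeprecatedUses*.xel',
                                               NULL, NULL, NULL) t ) xe;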

Now at long last, I can see what kind of data I am generating (they are really wide so I am just posting a snip).

[Image: deprecated_features_results]


With this, I can see the feature_id along with the description and even the tsql that generated the event. What you don’t see in this is that I also trap the source machine and the user name. If there is an application name included in the connection string, I also trap that. These pieces of data can prove critical to efficiently troubleshooting and finding the source of these events.

From here, one might wish to explore all of the events generated from this session in order to ensure the environment is properly prepared for upgrade. Most tools do not evaluate the code thoroughly enough to trap all of these events. Instead they do a cursory look through stored procedures or at the data types. As we all should know, not every piece of SQL code is actually stored in the database, nor is it necessarily cached at the time of analysis. This is the type of thing that requires a long-running trace to prove that you are that rockstar DBA.

Conclusion

In the article today, I have shown how it is possible to see the deprecation alerts that may be generated in your environment. This data is what can help set you apart as a rockstar when it comes time for that migration. If you have yet to read my series on Extended Events, I highly recommend it. You can find that series here.

Database Dropped

What do you do when a developer comes to you and asks, “Where did the database go?  The database was there one minute, and the next it was not.”  Only one thing could be worse than the feeling of losing a database on your watch, and that would be losing a production database. It’s like magic—it’s there, and then it disappears. To compound the issue, when asking people if they know what might have happened, all will typically deny, deny, deny.

What do you do when you run into that missing database situation and the inevitable denial that will ensue?  This is when an audit can save the day.  Through an audit, you can discover who dropped the database and when it happened.  Then you have hard data to take back to the team to again ask what happened.  Taking the info from a previous article of mine, we can alter the script I published there and re-use it for our needs here.
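
(Here is a hedged reconstruction of that altered script rather than the original; it reads the default trace and keeps only the database create and drop events.)

DECLARE @TracePath NVARCHAR(260);

SELECT  @TracePath = REVERSE(SUBSTRING(REVERSE(path),
                     CHARINDEX(N'\', REVERSE(path)), 260)) + N'log.trc'
FROM    sys.traces
WHERE   is_default = 1;

SELECT  gt.DatabaseName,
        gt.StartTime,
        gt.LoginName,
        gt.HostName,
        gt.ApplicationName,
        te.name AS EventName
FROM    sys.fn_trace_gettable(@TracePath, DEFAULT) gt
        INNER JOIN sys.trace_events te
            ON gt.EventClass = te.trace_event_id
WHERE   gt.ObjectType = 16964                      -- 16964 = Database
        AND te.name IN (N'Object:Created', N'Object:Deleted')
ORDER BY gt.StartTime DESC;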

This script will now query the default trace to determine when a database was dropped or created.  I am limiting this result set through the use of this filter: ObjectType = 16964.  In addition to that, I have also trimmed the result-set down to just look for drop or create events.

This is the type of information that is already available within the default trace.  What if you wished to not be entirely dependent on the default trace for that information?  As luck would have it, you don’t need to be solely dependent on the default trace.  Instead you can use the robust tool called extended events.  If you would like to be able to take advantage of Extended Events to track this information, I recommend you read my follow-up article here.

This has been a republication of my original content first posted here.

Events By Feature

Within the world of SQL Server there are a few things one can be certain of – things will change. This is true of the features in SQL Server. Additionally, Extended Events is constantly evolving which underscores this constant change.

With all of this change occurring within SQL Server, sometimes it is difficult to figure out how to troubleshoot or track issues that relate to new features. Beyond figuring out how to troubleshoot the new features, there is the need to understand what tools are out there to help.

It is no big secret that Profiler cannot help you trace any of the new features. To help perform the task of tracing events related to new features one must take advantage of the power of Extended Events. Even knowing that you can use XEvents, there is still a need to know what is related to the feature.

In this article, I will show how you can discover the events related to specific features. In addition, I will provide enough base information to help you perform other discovery type queries on your own as you continue your exploration of XEvents. In case you are a bit behind in your exploration of XEvents, I have the perfect solution for you – my series on the topic that is continually growing. You can explore the full list of articles in the series by visiting the table of contents – here.

Events and Features

More and more I am being asked how to track which events belong to which features. I have also been seeing more people ask for a way to list the new features supported by XEvents. Thankfully there is adequate information within the XEvent metadata to help retrieve this type of information. The downside is that there is a bit of homework that must be done across a few versions of SQL Server to help produce the desired information. I have done that work and pulled the information into the following query to help make it easier for anybody else wishing to dive in and compare features and events across versions of SQL Server.

In the preceding query, I have provided an aggregation of the events and features across multiple editions of SQL Server. This aggregate of data is necessary to help see how the features are changing and how the events are changing within the features that remain constant from one version to the next.

Once that data is aggregated, I then provide a couple of sample queries that show what can be done with the data. I need to stress here that I have not provided an exhaustive set of queries to explore this data seven ways to Sunday. I have left those types of exercises up to the reader to explore and experiment.

The first example query shows how to pull the new features that can be “Traced” in the current version of SQL Server on which the query is being executed. The second example provides a difference count between versions of SQL Server to help illustrate the evolution of XEvents within SQL Server. The third query is a simple query to list out the number of events for each feature in your version of SQL Server.
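
(The multi-version aggregation, including the #presel temp table mentioned below, is not reproduced here. On a single instance, the core of that third query boils down to grouping events by their KEYWORD metadata, which I am using here as a stand-in for the feature name.)

SELECT  xoc.column_value AS feature_name,
        COUNT(*)         AS event_count
FROM    sys.dm_xe_objects xo
        INNER JOIN sys.dm_xe_object_columns xoc
            ON xoc.object_name = xo.name
           AND xoc.object_package_guid = xo.package_guid
WHERE   xo.object_type = 'event'
        AND xoc.name = 'KEYWORD'
GROUP BY xoc.column_value
ORDER BY event_count DESC, feature_name;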

From here, one might wish to explore all of the events that are related to a specific feature. This would be easily accomplished by querying out the data from the #presel temp table based on the feature name. Orrrr…one could query the desired feature by following the guidelines in this article.

Conclusion

In the article today, I have shown how it is possible to see the new events and how they relate to the various features within SQL Server. Being able to correlate events that can trap information about new features can and will help you evolve into that rock-star DBA you are trying to become!

Finding the Right Path

I have a fairly large backlog of articles that are in progress or that are planned for my series about Extended Events. Despite the backlog and planned articles, every now and then something else comes up that bumps things around. This article is an example of bumping the schedule around. You can see some of the backlog and the full list of articles in the series by visiting the table of contents – here.

I bring that up for a couple of reasons. First and foremost being that the topic was recently raised as a “need” by some colleagues. The second being that I see the need and how it was lacking in coverage by anything I had already written.

What is this gaping hole in the coverage of Extended Events? To be honest, it is not a very complicated topic or very difficult gap to fill. It’s just something that has been overlooked. The gap boils down to this: how does one consistently find the correct path to the Extended Event Log file (XEL file)?

Filling the Gap

The gap I will be working to fill in this article deals with consistently finding the file path for Extended Event (XE) sessions. This gap arises due to a few different things, such as the ability to define a target in different manners, being able to move the logs directory, or even the fact that a target may not be added to the session (let alone a file target). These causes can all contribute to a bit of frustration and may pose as different symptoms when trying to get the file path.

One additional complication is tied to the running state of a session. For that complication, I talked briefly about it in my article about better practices. The state of the session could have an impact and could cause frustration when trying to retrieve the file path. All of these things are considerations that must be made when trying to retrieve the file path.

To find the file path, let’s start with some basics. The default path for XEL files is in the log directory of the instance. In addition to this default behavior, metadata about running sessions as well as deployed sessions is accessible from DMVs and system catalogs.

Log Path

The first basic to tackle is the log path for the instance. I can query for the log path of the error log for the instance and rely on that as the path of my xel files. Should I choose this method, then I could execute the following query.
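
(Something along these lines would do it; SERVERPROPERTY('ErrorLogFileName') is available on SQL 2012 and later, and the second column simply strips off the file name to leave the directory.)

SELECT  SERVERPROPERTY('ErrorLogFileName') AS error_log_file,
        LEFT(CAST(SERVERPROPERTY('ErrorLogFileName') AS NVARCHAR(260)),
             LEN(CAST(SERVERPROPERTY('ErrorLogFileName') AS NVARCHAR(260)))
             - CHARINDEX(N'\',
                 REVERSE(CAST(SERVERPROPERTY('ErrorLogFileName') AS NVARCHAR(260))))) AS log_directory;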

Executing that query on my SQL 2014 instance produces the following results:

[Image: errorpath]

Now, the obvious problem with this method is that it relies upon an assumption that you have set all of your XE Sessions to use the default log path. If you have declared your sessions to use a file target and did not specify a path, then the assumption is safe. However, if you are detail oriented, you probably have been somewhat explicit in how you define your event file target. This brings us to the next topic – defining the path.

Defining The Path

While a bit of a bird-walk, it is necessary to cover this topic at this juncture. This brief discussion will help to understand some of the other issues with retrieving the path consistently.

When defining the event file for a session, there is a bit of flexibility in how one can define the file to be used. You can either declare the file as just the file name, or you can define the file as the folder path along with the file name. Both methods are completely legitimate. Unfortunately, this flexibility is what causes some of the pain with retrieving the file path consistently.

Let’s take a look at two quick, and acceptable, ways to add an event file to an XE Session. I will be re-using a session from a previous article for these examples.
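
(The first version spells out the full path for the event file. The session definition itself is an illustrative stand-in for the one from that previous article.)

CREATE EVENT SESSION [AuditSelectsFullPath]
ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(   ACTION ( sqlserver.client_app_name,
             sqlserver.server_principal_name,
             sqlserver.sql_text ) )
ADD TARGET package0.event_file
(   SET filename = N'C:\Database\XE\AuditSelectsFullPath.xel' )
WITH ( MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON );
GO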

This is a pretty standard format I use for creating my sessions. The section I want to highlight though is the set of the filename near the end of the script. I will generally define the entire path for my xel files when I create a session – just as I have done in this example. Defining this path helps me to know exactly where I am putting the session for starters. Equally as important is that this path is easier to retrieve from metadata because I have explicitly defined the path.

Take this next example of the same session but with one minor difference.
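
(The second version is identical except that only a file name is supplied for the target.)

CREATE EVENT SESSION [AuditSelectsFileNameOnly]
ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(   ACTION ( sqlserver.client_app_name,
             sqlserver.server_principal_name,
             sqlserver.sql_text ) )
ADD TARGET package0.event_file
(   SET filename = N'AuditSelectsFileNameOnly.xel' )
WITH ( MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON );
GO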

The minor difference in this example is just in how the filename was defined. It is technically accurate and acceptable to only use the filename instead of the path and filename as I did in the previous example. The problem here comes from the retrieval of the path from metadata.

Getting the Path

In the previous two examples, I showed two methods of defining the filename within a session. In the former example, this means I can more easily find the path. However, if I use the script in the beginning of the article to find my xel file paths, then I have made an inaccurate assumption. This leads to an inconsistency in how the file path is fetched. In the latter example, the assumptions concerning the default log path would be valid and could be applied here.

Since the assumptions do not prove to be consistent or accurate across the board, we need to evaluate how the data for each would look and see how to retrieve this path more consistently. First up is looking at how the data for each of these example sessions would be stored in metadata.
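
(A sketch of that metadata query, matching on the illustrative session names used above.)

SELECT  ses.name AS session_name,
        sesf.name AS field_name,
        sesf.value AS field_value
FROM    sys.server_event_sessions ses
        INNER JOIN sys.server_event_session_targets sest
            ON ses.event_session_id = sest.event_session_id
        INNER JOIN sys.server_event_session_fields sesf
            ON sest.event_session_id = sesf.event_session_id
           AND sest.target_id = sesf.object_id
WHERE   ses.name LIKE N'AuditSelect%'
        AND sesf.name = N'filename';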

In the preceding query, I am looking at the deployed session metadata for two sessions, both of which start with “AuditSelect”. In the sys.server_event_session_fields view, there is an attribute called name that contains the value “filename”. This value will only exist if the session has an event file target defined for the session. Running the query will yield the following results (on my system).

[Image: eventfile_meta]

In this result set, I can see there is one of each of the two event file definition methods I described in the previous section. In the green highlight you will see that I have the entire filepath. In the red highlight, you will only see the filename. Nothing special has been done to this data in the return as you can verify from the posted script. These are the actual stored values. So this would indeed seem like we have a bit of a problem, right?

Well, this is where we get to be a little bit imaginative and employ a script such as the following:
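
(The script below is a hedged reconstruction of that approach rather than the original: it checks that the session is deployed and has an event file target, then falls back to the instance log directory when only a file name was stored.)

DECLARE @SessionName sysname       = N'AuditSelectsFileNameOnly',
        @LogPath     NVARCHAR(260),
        @FileName    NVARCHAR(260);

/* Default log directory, derived from the error log location (SQL 2012+) */
SELECT  @LogPath = LEFT(CAST(SERVERPROPERTY('ErrorLogFileName') AS NVARCHAR(260)),
                   LEN(CAST(SERVERPROPERTY('ErrorLogFileName') AS NVARCHAR(260)))
                   - CHARINDEX(N'\',
                       REVERSE(CAST(SERVERPROPERTY('ErrorLogFileName') AS NVARCHAR(260)))));

/* Confirm the session is deployed to the server */
IF NOT EXISTS ( SELECT 1 FROM sys.server_event_sessions WHERE name = @SessionName )
BEGIN
    RAISERROR(N'Session %s is not deployed to this server.', 10, 1, @SessionName);
    RETURN;
END;

/* Confirm the session has an event_file target and grab the stored filename value */
SELECT  @FileName = CAST(sesf.value AS NVARCHAR(260))
FROM    sys.server_event_sessions ses
        INNER JOIN sys.server_event_session_targets sest
            ON ses.event_session_id = sest.event_session_id
        INNER JOIN sys.server_event_session_fields sesf
            ON sest.event_session_id = sesf.event_session_id
           AND sest.target_id = sesf.object_id
WHERE   ses.name = @SessionName
        AND sest.name = N'event_file'
        AND sesf.name = N'filename';

IF @FileName IS NULL
BEGIN
    RAISERROR(N'Session %s does not have an event file target.', 10, 1, @SessionName);
    RETURN;
END;

/* If only a file name was stored, prepend the default log directory */
SELECT  @SessionName AS session_name,
        CASE WHEN @FileName LIKE N'%\%'
             THEN @FileName
             ELSE @LogPath + N'\' + @FileName
        END AS xel_file_path;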

Wow, that is considerably longer than the first example in the article. That is true! This script accounts for a few of the conditions that cause inconsistencies (but not yet all of them) in retrieving the file path for a deployed event session. The key here is to understand that this will fetch the path whether you enter a complete path or not while creating the session. I also threw a check in there to confirm that the session has a file target, as well as a check to ensure the session is deployed to the server.

Looking at the output, I get results that are somewhat friendly and easy to understand without any of the guesswork.

[Image: xelpath_result]

What if the session has never been started and did not have the full path declared? Well, that is one of the inconsistencies I am working on still.

There is an alternative to this method as well. For the time being, this method would also be recommended in the event the session being researched happens to be one of the system sessions that is “private”. This next code chunk will show two different methods to parse the file path from running session metadata in the DMVs.
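
(Both of the following sketches read the running-session metadata, so they also work for the private system sessions. The first pulls the actual file name out of the target_data XML; the second reads the configured filename value.)

/* Method 1: parse the file name out of the event_file target_data XML */
SELECT  s.name AS session_name,
        CAST(t.target_data AS XML).value('(EventFileTarget/File/@name)[1]',
                                         'nvarchar(260)') AS current_xel_file
FROM    sys.dm_xe_sessions s
        INNER JOIN sys.dm_xe_session_targets t
            ON s.address = t.event_session_address
WHERE   t.target_name = N'event_file';

/* Method 2: read the configured filename column for the event_file target */
SELECT  s.name AS session_name,
        oc.column_value AS configured_filename
FROM    sys.dm_xe_sessions s
        INNER JOIN sys.dm_xe_session_object_columns oc
            ON s.address = oc.event_session_address
WHERE   oc.object_type = 'target'
        AND oc.object_name = N'event_file'
        AND oc.column_name = N'filename';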

Conclusion

In the article today, I have shown some of the internals to retrieving file paths for Extended Event Sessions. I dove into metadata to pull out the path for the session and discussed some concerns for some of these methods. In the end, you have a few viable options to help retrieve the file path in a more consistent fashion.
