Summiting that Technical Challenge

Comments: No Comments
Published on: January 9, 2018

Conquering Challenges

I really like the topic of the TSQLTuesday party this month. We have been instructed in no uncertain terms to share with the world a technical challenge with which we were confronted. Ideally we will have conquered that challenge. Sometimes we do not conquer the technical challenge, and we seldom like to share those stories. That said, whether or not we conquer the technical challenge, there is another equally important part of the experience – the journey.

Before getting too far astray with stories and challenges, it is important to note that the host of the party this month is Arun Sirpal (blog | twitter). Arun has posted the invite for the party here. Go ahead and check it out in all of its original glory!

Now back to the challenge at hand – conquering that technical problem and getting the metaphorical high that comes from summiting a tough task.

Summits

My technical challenge is not so much a SQL Server challenge and is only minutely related to a database issue. Let’s start with the broad brushstrokes.

My blog was being hosted on an aging platform. The platform was old enough that it was several versions behind on PHP. Certain plugins had reached the point where I could no longer use their full functionality. In addition, I just needed to ensure the platform stayed updated. Because of this, I was informed that I had to migrate to a different platform provided by the same hosting provider. This is where the fun began.

The migration to the new platform was a simple button click. Unfortunately, despite the promise of a complete site migration – it was not complete. 99% of everything migrated. Little things like security and my domain name did not migrate. Ok – easy enough. I could update the site to use the domain name for my blog, but I had to release the domain from the old platform first. Upon releasing the domain name from the old platform, the “pageok” error page showed for a while until the change fully propagated. This was a bit of a nuisance and somewhat bothersome for visitors. Believe me – it was a nuisance for me too!

After getting the site to start rendering pages properly, I soon found that the appropriate security was no longer in place for various things. As I started to go through and try to fix the security, I found I no longer had the ability to manage any of the permissions on my site. In addition, the hosting company could not make the appropriate permissions changes either. What was the recommendation at this point? Another migration to yet another platform. Commence steam shooting from the ears. As I saw it, I had two choices: go back to the old dying platform or go to this third platform. I opted for the new trendy flashy shiny platform.

Ruh Roh

At this point, I think it is a good idea to mention that all of this happened just as the new year turned. In addition, I was trying to get last minute edits made for a blog series I had running. This was all on January 2-3, 2018. What a way to kick off the New Year from a blogging point of view. Yeehaw!

I was assured I would have greater control over the new platform. In addition, I was assured it would not take very long. The process was straightforward as long as I had my backups. Believe me, I had my backups. They were even on two sites at this point.

What I did not know was that the hosting company had decided to come in behind me on my original site and make changes. These changes complicated the whole process substantially. I only discovered them after I tried to go back to the original site and access my backups. Guess what? I could no longer access the site due to the changes the hosting company had made. The steam is getting hotter now.

In the meantime, I had already started the transition of my domain name to the new platform. Given the new circumstances, this was a mistake. I should have waited to do this until the very end to ensure the site was at least accessible in the meantime.

Once the domain migration finally completed and all of my site files were transferred, users were at least greeted with something new.

Ok, this was in part due to a missing database. It took a while to get access back to the original site so I could grab my current backup (which was taken right before the migration). Once the database was finally restored, I started getting this next one.

Lucky for me, it only took a bunch of google-fu to find exactly what I could do to fix it. It turned out there was an issue in a config file, and I was able to correct the database connection string. Yay – small victory, right? Well, sort of. The blog was fine as long as no links were clicked. You see, another change (again made by the hosting company) had caused all of my permalinks to update to one of my alternate domains. Any time I clicked a link, I was left puzzling over why the link was trying to forward to the alternate domain. Because of this link issue, I was back to getting the constant 404 error from a few paragraphs back. (Pretty much straight flames shooting from the ears now.)

Again, more google-fu and I was able to find another fix. This one was part database related and part admin console. Having made the updates in all the requisite places, I was able to now click on most links and the site was at 90% functionality.

This was a full day of working on trying to fix the problem. At this point, with the site up, I was able to confirm that I could enact the security policies I wanted and needed. I was also able to confirm most links were working. I did note that some plugins were working only intermittently. That was an issue to save for the next day.

On the following day, I went back to work trying to fix various plugin issues that I or visitors had reported. In the end, I had to make some changes to tools I was using because they would no longer work. In addition, there was one specific plugin that was causing many other plugins to fail (depending on page load order). Once I removed that plugin, it was safe to say that the site was back up and running as expected.

 

The Wrap

I did not enjoy this journey much at all. Much of the experience was due to outside forces. I can’t do much to control them, but I can do something to fix the net effect of what they caused. While the journey was rather unpleasant, I did finally achieve that Nirvana moment that comes from having conquered the problem.

I didn’t go into extreme technical details because there were many fixes along the path and my focus was on getting things working – though I should have been documenting it all along the way. Despite the heat, steam and fire of the experience, I did rather enjoy being able to use some new MySQL skills to alter data and fix various problems in the site.

 

Common Tempdb Trace Flags – Back to Basics

Once in a while I come across something that sounds fun or interesting and decide to dive a little deeper into it. That happened to me recently and caused me to preempt my scheduled post and work on writing up something entirely different. Why? Because this topic seemed both fun and useful.

So what is it I am yammering on about that was fun?

I think we can probably concede that there are some best practices flying around in regards to the configuration of tempdb. One of those best practices is in regards to two trace flags within SQL Server. These trace flags are 1117 and 1118. Here is a little bit of background on the trace flags and what they do.

A caveat I have now for the use of trace flags is that I err on the same side as Kendra (author of the article just mentioned). I don’t generally like to enable trace flags unless it is very warranted for a very specific condition. As Kendra mentions, TF 1117 will impact more than just the tempdb data files. So use that one with caution.

Ancient Artifacts

With the release of SQL Server 2016, these trace flags were rumored to be a thing of the past and hence completely unnecessary. That is partially true. The trace flags are unneeded and SQL 2016 does have some different behaviors, but does that mean you have to do nothing to get the benefits of these trace flags as implemented in 2016?

As it turns out, these trace flags no longer do what they did in previous editions. SQL Server now pretty much has it baked into the product. Buuuuut, do you have to do anything slightly different to make it work? This was something I came across while reading this post and wanted to double check everything. After all, I was also under the belief that it was automatically enabled. So let’s create a script that checks these things for me.
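Here is a simplified sketch of that script. The full version generates more of the remediation statements; this captures the core version check and trace flag check (object names are standard, the branching logic is illustrative):

    -- Sketch: are TF 1117/1118 relevant (pre-2016) or redundant (2016+)?
    DECLARE @MajorVersion int =
        CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS varchar(20)), 4) AS int);

    DECLARE @TraceStatus TABLE (TraceFlag int, Status int, Global int, Session int);
    INSERT @TraceStatus
        EXEC ('DBCC TRACESTATUS (1117, 1118) WITH NO_INFOMSGS');

    IF @MajorVersion >= 13  -- SQL Server 2016 and later
    BEGIN
        -- The flags no longer do anything; generate cleanup for any still enabled
        SELECT 'DBCC TRACEOFF (' + CAST(TraceFlag AS varchar(4)) + ', -1);' AS ActionScript
          FROM @TraceStatus
         WHERE Global = 1;
    END
    ELSE
    BEGIN
        -- Pre-2016: generate the enable statements for any flag not yet set globally
        SELECT 'DBCC TRACEON (' + CAST(TraceFlag AS varchar(4)) + ', -1);' AS ActionScript
          FROM @TraceStatus
         WHERE Global = 0;
    END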

Holy cannoli batman – that is more than a simple script, right? Well, it may be a bit of overkill. I wanted it to work for versions before, after, and including SQL Server 2016 (when these sweeping changes went into effect). You see, I am checking for versions where the TF was required to make the change and also for versions after the change where the TF has no effect. In 2016 and later, these settings are database scoped and the TF is unnecessary.

The database scoped settings can actually be queried in 2016 more specifically with the following query.
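Something like the following does the trick (run from the context of the database you want to check – tempdb here):

    USE tempdb;
    GO
    SELECT db.name AS database_name,
           db.is_mixed_page_allocation_on,
           fg.name AS filegroup_name,
           fg.is_autogrow_all_files
      FROM sys.filegroups fg
     CROSS JOIN sys.databases db
     WHERE db.database_id = DB_ID();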

In this query, I am able to determine whether mixed_page_allocations and autogrow_all_files are enabled. These settings can be retrieved from sys.databases and sys.filegroups respectively. If I run this query on a server where the defaults were accepted during the install, I would see something like the following.

You can see here, the default settings on my install show something different than the reported behavior. While autogrow all files is enabled, mixed_page_allocations is disabled. This matches what we expect to see by enabling the Trace Flags 1117 and 1118 – for the tempdb database at least. If I look at a user database, I will find that mixed pages is disabled by default still but that autogrow_all_files is also disabled.

In this case, you may or may not want a user database to have all data files grow at the same time. That is a great change to have implemented in SQL Server with SQL 2016. Should you choose to enable it, you can do so on a database by database basis.

As for the trace flags? My query checks to see if maybe you enabled them on your instance or if you don’t have them enabled for the older versions of SQL Server. Then the script generates the appropriate action scripts and allows you to determine if you want to run the generated script or not. And since we are changing trace flags (potentially) I recommend that you also look at this article of mine that discusses how to audit the changing of trace flags. And since that is an XEvent based article, I recommend freshening up on XEvents with this series too!

The Wrap

In this article I have introduced you to some basics in regards to default behaviors and settings in tempdb along with some best practices. It is advisable to investigate some of these recommendations from time to time and confirm what we are really being told so we can avoid confusion and misinterpretation.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Changing Default Logs Directory – Back to Basics

Every now and then I find a topic that seems to fit perfectly into the mold of the theme of “Back to Basics”. A couple of years ago, there was a challenge to write a series of posts about basic concepts. Some of my articles in that series can be found here.

Today, my topic to discuss is in regards to altering the default logs directory location. Some may say this is no big deal and you can just use the default location used during install. Fair enough, there may not be massive need to change that location.

Maybe, just maybe, there is an overarching need to change this default. Maybe you have multiple versions of SQL Server in the enterprise and just want a consistent folder to access across all servers so you don’t have to think too much. Or possibly, you want to copy the logs from multiple servers to a common location on a central server and don’t want to have to code for a different directory on each server.

The list of reasons can go on and I am certain I would not be able to list all of the really good reasons to change this particular default. Suffice it to say, there are some really good requirements out there (and probably some really bad ones too) that mandate the changing of the default logs directory to a new standardized location.

Changes

The logs that I am referring to are not the transaction logs for the databases – oh no no no! Rather, I am referring to the error logs, the mini dumps, and the many other logs that may fall into the traditional “logs” folder during the SQL Server install. Let’s take a peek at a default log directory after the install is complete.

I picked a demo server that has a crap load of stuff available (and yeah not so fresh after install) but where the installation placed the logs by default. You can see I have traces, default XE files, some SQL logs, and some dump files. There is plenty going on with this server. A very fresh install would have similar files but not quite as many.

If I want to change the Log directory, it is a pretty easy change but it does require a service restart.

In SQL Server Configuration Manager, navigate to services and then to “SQL Server Service”. Right click that service and select properties. From properties, you will need to select the “Startup Parameters” tab. Select the parameter with the “-e” and errorlog in the path. Then modify the path to something more appropriate for your needs and click the update button. After doing that, click the ok button and bounce the SQL Service.

After you successfully bounce the service, you can confirm that the error logs have been migrated to the correct folder with a simple check. Note that this change impacts the errorlogs, the default Extended Events logging directory, the default trace directory, the dumps directory and many other things.
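A quick version of that simple check (ErrorLogFileName is a documented SERVERPROPERTY value):

    SELECT SERVERPROPERTY('ErrorLogFileName') AS current_errorlog_path;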

See how easy that was? Did that move everything over for us? As it turns out, it did not. The old directory will continue to have the SQL Agent logs. We can see this with a check from the Agent log properties like the following.
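If you prefer a script to the GUI, the undocumented msdb procedure behind that properties screen shows the same thing – inspect the errorlog_file column in the output:

    EXEC msdb.dbo.sp_get_sqlagent_properties;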

To change this, I can execute a simple stored procedure in the msdb database and then bounce the sql agent service.
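That procedure is the undocumented companion to the one above (the path shown here is illustrative – point it at your new directory):

    -- Repoint the SQL Agent log, then bounce the SQL Agent service
    EXEC msdb.dbo.sp_set_sqlagent_properties
        @errorlog_file = N'C:\Database\Logs\SQLAGENT.OUT';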

After the agent service restart, the agent logs now write to the new directory, as verified here.

At this point, all that will be left in the previous folder will be the files that were written prior to the folder changes and the service restarts.

The Wrap

In this article I have introduced you to an easy method to move the logs for SQL Server and the SQL Server Agent to a custom directory that better suits your enterprise needs. This concept is a basic building block for some upcoming articles – stay tuned!

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Sharepoint Diagnostics and XE

Comments: 2 Comments
Published on: January 3, 2018

One of the all-time greatest and most beloved applications among DBAs happens to be Sharepoint. Most of us would be lying if we said that we loved Sharepoint and the kind of performance issues it can cause on a SQL Server.

When you have an application with a notoriety for causing performance pain, you will want to find some tools that can help you locate the pain points. Lucky for the savvy DBA, there are tools available to capture exactly this kind of information. Hint: some of those tools can be found in this series.

Tools

One of the best tools for capturing performance impacting data is Extended Events. Extended Events is simply a tool that everybody needs to learn. And in this case, it provides an opportunity to support and troubleshoot Sharepoint as well as SQL Server.

Truth be told, the default XE session for Sharepoint does not come installed when you install Sharepoint. But, there is a session that gets installed when you configure SPDiag for your Sharepoint needs. I am not going to dive deeply into that tool or the Sharepoint tooling. That would be well outside the use and scope of this post. Here is the default session created by that tool.
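I don’t have the verbatim SPDiag output handy, so this is a representative sketch of the shape of that session (the session name and the exact app filter will differ per install – script out your own session to compare):

    CREATE EVENT SESSION [SPSqlQueryLogging] ON SERVER  -- name approximated
    ADD EVENT sqlserver.rpc_completed (
        ACTION (sqlserver.client_app_name, sqlserver.database_id, sqlserver.sql_text)
        WHERE (sqlserver.like_i_sql_unicode_string(sqlserver.client_app_name, N'SharePoint%'))
    ),
    ADD EVENT sqlserver.sql_statement_completed (
        ACTION (sqlserver.client_app_name, sqlserver.database_id, sqlserver.sql_text)
        WHERE (sqlserver.like_i_sql_unicode_string(sqlserver.client_app_name, N'SharePoint%'))
    )
    ADD TARGET package0.ring_buffer  -- volatile target; more on that below
    WITH (STARTUP_STATE = OFF);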

As you can see, there is not much special about the session. This is a basic session to capture the statements and queries executed against the server. In fact, you will probably note it is similar to both of the default XE Profiler sessions I discussed here. The first caveat here is the application filter for Sharepoint apps. That said, if you are running multiple Sharepoint sites, configure SPDiag on both, and both share the same backend database server, guess what happens? Yup, you guessed it – you get two Sharepoint sessions that capture the exact same data.

The second caveat here is a fun thing about the service that gets created to support the Sharepoint XE sessions. SPDiag will alter each of your Sharepoint-related XE sessions many times every few seconds. Every alter of the sessions will be to either stop or restart the session. This seems rather unnecessary and a huge miss. There are not many misses with XE, but XE can be misused in much the same way profiler was abused, and this is one of those abuses in my opinion. You can easily discover the constant changes to these sessions via an XE session that I will be sharing in a future post. I would recommend you stay tuned for that article and also watch the 60 Day Series for an update that includes that post.

The next caveat here is that this particular session is only configured to go to the ring_buffer. What is the problem there? Well, the ring_buffer means the trace data is volatile and you can easily lose it. In addition, this particular session has been known to cause some memory issues on many installations. You can check for a memory issue by watching for a growing MEMORYCLERK_XE memory count. You can watch it with a query like the following.
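This is a sketch against sys.dm_os_memory_clerks (on pre-2012 builds, substitute single_pages_kb + multi_pages_kb for pages_kb):

    SELECT mc.type,
           SUM(mc.pages_kb) / 1024 AS memory_mb
      FROM sys.dm_os_memory_clerks mc
     WHERE mc.type = N'MEMORYCLERK_XE'
     GROUP BY mc.type;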

In my opinion, I see no good value in running the session as currently built. I would just rather build a better XE session and leave it running instead of constantly stopping and starting the XE session.

The Wrap

In this article I have introduced you to a quick session setup that comes from using the SPDiag tool and that could plausibly be useful in troubleshooting various problems (most probably performance related) with Sharepoint – at least in the absence of a better alternative, such as a custom session that you write yourself.

If we browse the data made available to us, we can see that there is adequate info to help us convert all of those SQL Traces to the high-performing XE versions that we should be using these days. I encourage you to browse the capabilities of XE as you work towards converting your profiler based traces to the much friendlier and more powerful counterparts.

If you are interested in learning more about Extended Events, I recommend you read my “60 day” series of articles on Extended Events. The series continues to grow and covers a pretty decent depth and breadth on the topic.

Dynamics AX Event Session

Many moons ago, I wrote about how to enable the context info for Dynamics AX. Having the context info enabled is a pretty important thing when dealing with Dynamics AX. The amount of time you can save yourself and the DAX Admins is astounding – if you have the context info enabled. If you are curious, read more about that here.

It is not an uncommon thing to need to know who is causing performance problems within AX. In addition, you will probably want to know what they are doing. But since that information is obfuscated due to the default setup of AX, you can find yourself beating your head against the desk in frustration.

Once you have the context info enabled, however, how are you going to trap that information? How are you going to figure out what queries are related to the context info passed from AX to the database? In addition, how are you going to parse that context info into something useful? Hint: some of those answers can be found in the aforementioned article – here.

Context

While the previous article goes into a good amount of detail in answering some of those questions, it does not divulge what to do to capture the context info. Capturing the context info is rather easy to do – if you choose to use Extended Events (as I prefer to do). And since this is the giving season, why not just give the session details and then let you play with the shiny new trace for a few days?
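Here is the gist of the session, lightly trimmed into a sketch – adjust the session name and the file target path for your environment:

    CREATE EVENT SESSION [AXContextInfo] ON SERVER  -- name illustrative
    ADD EVENT sqlserver.sp_statement_completed (
        ACTION (sqlserver.sql_text, sqlserver.database_id,
                sqlserver.server_principal_name, sqlserver.context_info)
    ),
    ADD EVENT sqlserver.sql_statement_completed (
        ACTION (sqlserver.sql_text, sqlserver.database_id,
                sqlserver.server_principal_name, sqlserver.context_info)
    )
    ADD TARGET package0.event_file (SET filename = N'C:\Database\XE\AXContextInfo.xel')
    WITH (STARTUP_STATE = OFF, MAX_DISPATCH_LATENCY = 30 SECONDS);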

All in all, this is not a tremendously difficult session. The session is looking for statement completed events. When a qualifying event fires, I attach the additional payload data from several actions (e.g. sql_text, database_id, server_principal_name and context_info) to the event payload so I can query it later.

Will I really need all of those actions? Not necessarily but the data can be useful here or there – especially if you are trying to familiarize yourself with a new DAX environment.

The Wrap

There are many many uses for Extended Events. In this article I show a quick session setup that can be useful in the troubleshooting of various different problems (most probably performance related) with Dynamics AX.

If we browse the data made available to us, we can see that there is adequate info to help us convert all of those SQL Traces to the high-performing XE versions that we should be using these days. I encourage you to browse the capabilities of XE as you work towards converting your profiler based traces to the much friendlier and more powerful counterparts.

If you are interested in learning more about Extended Events, I recommend you read my “60 day” series of articles on Extended Events. The series continues to grow and covers a pretty decent depth and breadth on the topic.

Correlate SQL Trace and Actions

Comments: No Comments
Published on: January 1, 2018

I have recently written about the pains of correlating SQL Trace events to Extended Events (XE) events. That article can be found here. At that time, I did not dive into another facet of trace that truly needs converting as well. That component is known as “Actions” in the world of XE.

In the world of profiler and SQL Trace, these actions were an actual part of the payload for the Trace event. In XE, these actions are more like add-ons. I go into further depth about what an action in XE is – here. I recommend reading that article. Due to that, I won’t be diving into great detail here about what an Action is.

Actions

That said, there is one thing in regards to actions in XE that I think would be helpful in translating Traces to XE. In XE, we sometimes see an action referenced as a global field. This is in stark contrast to what the Trace equivalent would be – a local field. Rough translation is that the actions in Trace were localized to every event that needed that particular data. In XE, the field has become more like a function and can be applied across many different events without being tied to every event specifically. Thus – it is renamed to “global field” in many different places.

Knowing that we migrate from localized fields to global fields is helpful in trying to map the old world of SQL Trace to the new and improved world of XE. In addition to that, we have some objects to introduce that will help us translate. Those objects are:

  • sys.trace_event_bindings – a catalog view introduced in 2005 that contains a list of all possible usage combinations of events and columns.
  • sys.trace_columns – a catalog view introduced in 2008 which contains a list of all trace event columns.
  • sys.trace_xe_action_map – a table that contains a row for each XE action that is mapped to a SQL Trace Column.

I will use the data from these objects to map to the XE actions – with a few caveats. In order to map everything properly, each of these objects is required because they have just a tiny bit of the needed information in order to eventually get to the big picture. When I bring it all together, I will have a script something like this.
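This is the shape of that script (a sketch of the join logic; sys.trace_events supplies the friendly event names):

    SELECT te.name AS trace_event,
           tc.name AS trace_column,
           am.package_name,
           am.xe_action_name
      FROM sys.trace_event_bindings teb
     INNER JOIN sys.trace_events te
        ON te.trace_event_id = teb.trace_event_id
     INNER JOIN sys.trace_columns tc
        ON tc.trace_column_id = teb.trace_column_id
     INNER JOIN sys.trace_xe_action_map am
        ON am.trace_column_id = tc.trace_column_id  -- inner join discards unmapped columns
     WHERE te.name NOT LIKE 'Audit%'                -- Audit belongs to SQL Audit, not XE
     ORDER BY te.name, tc.name;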

You may have noticed here that I am excluding some results. I am not interested in anything that does not map to an action in the world of XE. So I am chucking those empty action results to the side. In addition, I am tossing anything that is Audit related. If it is Audit related, I really need to be using the SQL Audit feature (though it is built fundamentally on XE) instead of XE. So anything that is SQL Audit related just needs to be discarded from these results.

This will yield a whopping 2145 actions that are mapped between the local SQL Trace fields and the global XE fields called actions.

If I contrast that to a count of available actions in XE, you will likely see that there is a significant difference and that a lot of inefficiency has been removed.
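That count comes straight out of the XE metadata (filtering out the private internal objects):

    SELECT COUNT(*) AS xe_action_count
      FROM sys.dm_xe_objects xo
     WHERE xo.object_type = N'action'
       AND (xo.capabilities IS NULL OR xo.capabilities & 1 = 0);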

Depending on version of SQL Server you will get a slightly different result. Here is a quick cheat sheet I worked up for different versions.

As you can see from the image, none of the versions of SQL Server with XE exceed 55 actions (global fields). Compare that to over 2000 results in the old trace world with only 180 events, and I think you would agree it was highly inefficient.

The Wrap

If we browse the data made available to us, we can see that there is adequate info to help us convert all of those SQL Traces to the high-performing XE versions that we should be using these days. I encourage you to browse the capabilities of XE as you work towards converting your profiler based traces to the much friendlier and more powerful counterparts.

If you are interested in learning more about Extended Events, I recommend you read my “60 day” series of articles on Extended Events. The series continues to grow and covers a pretty decent depth and breadth on the topic.

Failed to Create the Audit File

Categories: News, Professional, SSC
Comments: No Comments
Published on: December 31, 2017

One day while checking things for clients, I happened across a fun little error message – “SQL Server Audit failed to create the audit file“. It just so happens that the audit had been working and then suddenly stopped and started flooding the error logs with this message.

Why would it suddenly stop working? Well, it says in the error that the disk might be full or that there may be a permissions issue. So, at least there are some possibilities provided by the message. Granted – neither of these options is very settling for a DBA.

FAILED

While reading the first occurrence of this error message, you notice that the log is bombarded with 20 more messages of the same nature. Before you can even scroll to catch up with the error messages, another 30 have appeared. It looks like the server is starting to get busier with the business users starting to run through their daily routines. You need a fix and you need it quick. You copy and paste the error to another screen for reference and then close the log to remove that distraction. Here is a copy of that error message.

SQL Server Audit failed to create the audit file ‘C:\Database\XE\DBA_Server_Audit_906B13C3-8F3F-4CFC-A391-20C5F7CAD698.sqlaudit’. Make sure that the disk is not full and that the SQL Server service account has the required permissions to create and write to the file.

Let’s try the suggestions from the error message starting with security. The SQL Server Service account needs to have permissions to the directory where the audit is being stored. Looking in that directory, you can see that the audit was obviously successful at one point because there are audit files in the directory. In addition, the audit just barely stopped working and you are certain nothing has changed.

Regardless of that, you proceed to investigate the permissions settings on the directory. Looking at the directory permissions, you are able to fully confirm that the service account does indeed have adequate permissions to the folder.

So we can rule out the permissions having changed as a viable contender for causing this problem. As you proceed to investigate the next option, you start to worry that users are being prevented from doing their jobs because of the flood of errors. For sanity’s sake, you run a few quick checks to verify things look normal on the server from an activity standpoint. You also check your ticket queues and find there is nothing alarming in there.

Phwew!

Still Trying

After checking the ticket queues and server activity, you bounce right back to your next check point – disk space. This is an easier check than the permissions. You have no mount points and you can verify the disk space with a quick glance in windows explorer. You look in windows explorer and can see that your C drive where the audits are being stored has 50% free space (or roughly 200GB).

Well, that is obviously not the problem either. You know the audit was working as recently as 30 minutes before you started troubleshooting, and the errors did not start until almost immediately before you began checking the problem. What could it be? After all, you have 35 audit files in the audit folder. Then, suddenly, it hits you. There are 35 files. The audit was configured for 35 files with no rollover.

In this case, the easy fix is to move a bunch of files to an archive folder. As soon as that is done, a new message will appear in the error logs:

Audit: Server Audit: 65536, State changed from: TARGET_CREATION_FAILED to: STARTED

Now to go change the audit process to make it a little more robust.

When dealing with SQL Audit, max_files is an important setting. Here is what msdn has to say about the setting.

MAX_FILES = integer

Specifies the maximum number of audit files that can be created. Does not rollover to the first file when the limit is reached. When the MAX_FILES limit is reached, any action that causes additional audit events to be generated will fail with an error.
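Since MAX_FILES fails hard at the limit, one way to make the audit more robust is to switch to rollover files instead. A sketch, with the audit name and path assumed from the error message above (the audit must be stopped before its target can be altered):

    ALTER SERVER AUDIT [DBA_Server_Audit] WITH (STATE = OFF);

    ALTER SERVER AUDIT [DBA_Server_Audit]
    TO FILE (
        FILEPATH = N'C:\Database\XE\',
        MAXSIZE = 100 MB,          -- per-file size is illustrative
        MAX_ROLLOVER_FILES = 35    -- rolls over instead of failing like MAX_FILES
    );

    ALTER SERVER AUDIT [DBA_Server_Audit] WITH (STATE = ON);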

This article is just one of several audit related articles on this blog. You can read more about some of the different ways to audit along with different perspectives from this link.

Wrap

Auditing is a necessity but it doesn’t need to be alarming or scary. Sometimes, we can become a little bit alarmed when an error occurs. We just need to keep our cool and trust our skills and abilities to troubleshoot in the event an audit fails. Hopefully this article shows some of that process and helps to provide a cool demeanor.

An Introduction to Templates

Comments: No Comments
Published on: December 30, 2017

Templates are a powerful tool in so many trades and crafts. From decals and stickers all the way up to the largest cruise ships in the world, templates can be found everywhere in just about everything that we do.

In SQL Server, templates are readily available for your use in so many different ways that I am sure we are unaware of most of them.

Templates

A good example of a template in SQL Server, that we clamor about frequently, is the unattended install. I am sure you have probably used that kind of template as you work to save yourself some hair and brain cells trying to remember the exact setup needed for your environment. With an unattended install, you build it once and then use the script to build all future installs.

Even though we may not call the unattended install script a template, it is. Think about the definition(s) of a template for a minute:

  1. something that serves as a model for others to copy.
  2. a preset format for a document or file, used so that the format does not have to be recreated each time it is used.

Looking at this definition, we can easily see that the unattended install routine fits perfectly as a “template”. This is just an example of the availability of templates within SQL Server. There are many different kinds of templates – many we probably don’t ever think about as being “templates” and many that we just flat out don’t bother to think about at all.

I have one specific kind of template in mind that I want to introduce and discuss. This template type comes with Extended Events. I actually introduced the concept very briefly in a recent post about the XE Profiler feature. That article is just one of many articles about Extended Events that I have written. You are welcome to catch up on the series of articles here.

XE

Extended Events has had templates available for quite some time. If you recall from the article about XE Profiler, templates were not introduced as a part of the XE Profiler feature – the templates were introduced quite some time prior to that.

There are really three distinct areas of focus for templates that I would like to discuss. I have them somewhat illustrated in this next screenshot.

In this image, you will see that I have three color coded regions. In green, I have the option to create a template from file. In blue, I have the XE specific templates that have no correlation to profiler. And then in red I have the templates that were created to simulate the behavior of the Profiler based templates while providing some sort of comfort zone for the profiler fanatics out there.

Today, I will focus on the templates that are readily available (red and blue zones). In a future article, I will cover the creation of a custom template (green zone). That said, this is not going to be a traditional article with the technical geekery showing the ins and outs of all of those templates. Oh no, this article will be an entirely different flavor of geekery. Today, I will be focusing on something that is more about fun. I will be exploring the available templates strictly via script (no XE GUI).

Exploratory Surgery

If you are like me, you sometimes just don’t care to pick up the mouse to browse the file system or even browse the SSMS hierarchy to find the exact screen that has some specific information that you want. For me, I ran into this when thinking about the templates in XE. What templates are there in XE again? Crap, now I have to open the GUI to figure that out. I wanted a quick way to be able to look at all of the templates – and not just the templates available to me in the version of SSMS that I am using, I want to see everything out there on the system.

The question is how to do this. The templates are not listed in any system table or view that I could find. The list of templates is specific to the version of SSMS that you are using and the templates are all xml files stored in a directory on the OS. A thought occurred to me while contemplating this dilemma – can I maybe use the registry?

The registry does give me some options so maybe, I can use xp_regread. I won’t be able to use xp_instance_regread since this is not a registry key that appears at the instance level, rather it is an SSMS related reg string. One problem that comes to mind is that I would need to find the SQLPath for each of the versions of SSMS installed (red arrows indicate each of the registry strings I might have to query). I start to think to myself after realizing this that I do not like the looks of this option – but it could be done.

I then thought of another option before going too far down the registry route. When I install an application, that application may modify my “Path” environment variable for the OS. This sounds a little promising. I still have some of the drawback with multiple versions of SSMS installed – but I think I can manage that more easily from the “Path” environment variable.

Let’s take a crack at a script.
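Here it is, trimmed down to a sketch. Two assumptions are baked in: xp_cmdshell is enabled, and the templates live in an Extensions\XEvent\Templates folder beneath the ManagementStudio directory (verify the exact subfolder for your SSMS version). The union trick for the AzureDB templates is folded into a two-level xp_dirtree here.

    SET NOCOUNT ON;

    CREATE TABLE #raw  (id int IDENTITY(1,1), line nvarchar(255));
    CREATE TABLE #dirs (id int IDENTITY(1,1), ssms_path nvarchar(512));
    CREATE TABLE #tmpl (template_file nvarchar(512), depth int, isfile int,
                        ssms_path nvarchar(512) NULL);

    -- xp_cmdshell returns the PATH in 255-character chunks; stitch it back together
    INSERT #raw (line) EXEC master..xp_cmdshell 'echo %PATH%';
    DECLARE @path nvarchar(max) = N'';
    SELECT @path += ISNULL(line, N'') FROM #raw ORDER BY id;

    -- Loop 1: split the PATH on semicolons, keeping only the SSMS entries
    DECLARE @pos int = CHARINDEX(N';', @path), @entry nvarchar(512);
    WHILE LEN(@path) > 0
    BEGIN
        SET @entry = CASE WHEN @pos > 0 THEN LEFT(@path, @pos - 1) ELSE @path END;
        IF @entry LIKE N'%Tools\Binn\ManagementStudio%'
            INSERT #dirs (ssms_path) VALUES (@entry);
        SET @path = CASE WHEN @pos > 0 THEN SUBSTRING(@path, @pos + 1, LEN(@path)) ELSE N'' END;
        SET @pos = CHARINDEX(N';', @path);
    END;

    -- Loop 2: point each SSMS path at its template folder and list the files
    DECLARE @dir nvarchar(512), @id int = 1;
    WHILE @id <= (SELECT MAX(id) FROM #dirs)
    BEGIN
        SELECT @dir = REPLACE(ssms_path, N'Binn\ManagementStudio',
                              N'Binn\ManagementStudio\Extensions\XEvent\Templates')  -- assumed subfolder
          FROM #dirs WHERE id = @id;
        INSERT #tmpl (template_file, depth, isfile)
            EXEC master.sys.xp_dirtree @dir, 2, 1;  -- two levels deep catches the AzureDB subfolder
        UPDATE #tmpl SET ssms_path = @dir WHERE ssms_path IS NULL;
        SET @id += 1;
    END;

    SELECT ssms_path, template_file
      FROM #tmpl
     WHERE isfile = 1 AND template_file LIKE N'%.xml'
     ORDER BY ssms_path, template_file;

    DROP TABLE #raw, #dirs, #tmpl;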

Ouch, my eyes hurt already! What have I done here? I have written multiple loops. Never fear! Loops are not all evil. Sometimes, it makes sense to use a looping mechanism. This is one of those times where it makes some sense. This script is a limited use script for the eyes of the DBA only.

I can hear the moans about the use of xp_cmdshell and xp_dirtree and so forth. I am not listening to those grumbles. Again, this script is for fun and on my machine. The discussion on securing the server and xp_cmdshell is a discussion for another time.

In this script, I use the Path variable to determine all of the flavors of SSMS that have been installed. Since the path for SSMS goes all the way to the Binn folder, I need to do a little replace to get the correct paths for the XE templates. I also added a bit of a union in there because I had to duplicate the results – sorta. You see, there are templates for AzureDB that, depending on your Azure subscription (or lack thereof), you may never see. I wanted to include those in my result set!

Looking at these results on a machine with SSMS 14 and SSMS 17, I would see something like the following.

I highlighted a few different sections of the results to help quickly show some of the templates available between different SSMS versions as well as those for Azure DB versus a local instance of SQL Server. Things are a tad different if we have SSMS 16 installed. Here is a sample result from a machine with SSMS 16 installed.

The main point of interest here (at least for me) is the removal of a template from SSMS 16 to SSMS 17. This is probably a template you don’t necessarily want to run on a production system and that is likely why it is gone from the available templates that get installed.

The Wrap

I am sure you can probably find an alternate routine to query these templates via TSQL that may be more efficient. I would be interested to hear about it.

What’s next now that I can get these templates? Well, a thought occurred to me to use this type of routine to create a means to use TSQL to read one of those template files to create the XE session from pure TSQL statements and eliminate the need to use the GUI. I think that would be a lot of TSQL/XE Geekery based fun.

Audit Domain Group and User Permissions

Comments: 3 Comments
Published on: December 29, 2017

No matter how simple the task or how versed we are with doing a security audit, it seems like we can always stand to learn just a little bit more. No matter how many times we hand an audit report over to the auditor, there is always “just one” more report we have to provide.

I know it seems like I am almost always running some sort of audit report. Whether it is to audit which user may have changed something or what access a user may have overall to the system, there is always a need for more auditing.

One of the aspects of an audit that I have not yet written about is with regards to Domain Groups. It is a very common practice to grant access to SQL Server via Domain Group membership. We can easily see which groups have which access, but how often do we try to figure out who is in the group?

When you do try to figure out who is in a Domain Group, do you do like so many DBAs that I know and yell across a cubicle wall to the AD guys and ask them to look it up? Maybe you fire off an email to the AD Admins and ask them to do the grunt work for you. Nobody will shame you for doing either of those two methods. I mean, most people probably toss it over the fence in one way or another.

Fence

 

It is not a difficult stretch to imagine asking somebody who is an expert at a technology to see if they could get a quick answer to your question for you. That said, it is also really nice when you can service those types of questions by yourself and thus save everybody a little time and effort. Luckily, there is at least one easy way for the SQL Server professional to try and answer questions about group membership.

Microsoft has included a method for the DBA to try and capture information about AD related accounts. This method comes in the form of an extended procedure called xp_logininfo. This procedure can tell me the members of domain groups, the access level the account has, and also validate if the account is still active in AD.

Suppose you needed to determine all of the accounts that may have access to SQL Server, via direct Login or via group membership. In addition, you must find out which of the accounts are no longer valid in AD. I have a script that will do just that by first perusing the groups and then cycling through each of the User accounts produced from the group perusal.

Let’s take a look at that script.
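Here is a sketch of that script, trimmed for readability (the version I run has a bit more error handling and reporting; adjust the group filter for your environment):

    SET NOCOUNT ON;

    CREATE TABLE #GroupMembers (
        account_name    sysname,
        account_type    varchar(32),
        privilege       varchar(32),
        mapped_login    sysname NULL,
        permission_path sysname NULL);
    CREATE TABLE #Validate (
        account_name    sysname,
        account_type    varchar(32),
        privilege       varchar(32),
        mapped_login    sysname NULL,
        permission_path sysname NULL);

    DECLARE @group sysname, @acct sysname;

    -- First cursor: every Windows group granted access to the instance
    DECLARE grp_cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT name FROM sys.server_principals
         WHERE type = 'G' AND name NOT LIKE 'NT %';  -- skip NT SERVICE/AUTHORITY groups
    OPEN grp_cur;
    FETCH NEXT FROM grp_cur INTO @group;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            INSERT #GroupMembers
                EXEC master..xp_logininfo @acctname = @group, @option = 'members';
        END TRY
        BEGIN CATCH
            PRINT 'Group no longer valid in AD: ' + @group;
        END CATCH;
        FETCH NEXT FROM grp_cur INTO @group;
    END;
    CLOSE grp_cur; DEALLOCATE grp_cur;

    -- Second cursor: validate each member account against AD
    DECLARE acct_cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT DISTINCT account_name FROM #GroupMembers WHERE account_type = 'user';
    OPEN acct_cur;
    FETCH NEXT FROM acct_cur INTO @acct;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            INSERT #Validate
                EXEC master..xp_logininfo @acctname = @acct;
        END TRY
        BEGIN CATCH
            PRINT 'Account no longer valid in AD: ' + @acct;
        END CATCH;
        FETCH NEXT FROM acct_cur INTO @acct;
    END;
    CLOSE acct_cur; DEALLOCATE acct_cur;

    SELECT * FROM #GroupMembers ORDER BY permission_path, account_name;
    DROP TABLE #GroupMembers, #Validate;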

You will see here that I have a couple of cursors built into this script. This is helping me cycle through first the groups and then the users. As I cycle through each user, I am able to determine the validity of the account. Not only will I know if an AD account is dead and needs to be removed from SQL Server, but I will also know if a Domain Group no longer exists.

This little script has proven to be a major time saver with some clients. Not only has it saved me some time, but it has also helped to figure out how a developer was able to perform sysadmin functions on a prod box even though his account did not have sysadmin access granted explicitly.

Correlate Trace and XE Events

Comments: No Comments
Published on: December 28, 2017

Several years back, when Extended Events was brand new, it was downright painful to try and convert the classic Profiler or Server Side trace to something meaningful and useful within Extended Events.

That was years ago and really is ancient history now. Microsoft has provided a couple of maps to help us translate the petroglyphs of Profiler into the high tech, smooth running, efficient engine and language of XEvents. This article is going to demonstrate a quick correlation between events in these two tools.

Maps

If you have browsed XEvents to any extent you should probably be familiar with at least one map object that Microsoft has given us. That particular object is sys.dm_xe_map_values and I wrote a bit about it here. That object presents some good information about various data values within XEvents that are delivered with the Event payload as well as some hidden gems that can be extra useful for exploring SQL Server internals. The point is, maps is not necessarily a new concept.

While the concept of a map is not new within XEvents, the implementation in this case is a tad different. You see, the dm_xe_map_values object is more like an EAV object while the map I will introduce today is more of an ordinary lookup table. Let’s take a look.

The map I want to concentrate on for now is the sys.trace_xe_event_map table. You read that correctly – it is a table. We don’t have very many “tables” when dealing with XEvents, but this one happens to be one. As Microsoft Docs describes it, the table “contains one row for each Extended Events event that is mapped to a SQL Trace event class.”

And to show that it is indeed a table, you can run a check like the following.
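For instance, a peek at the object metadata (the type_desc should come back as a system table rather than a view):

    SELECT ao.name, ao.type_desc
      FROM sys.all_objects ao
     WHERE ao.name = N'trace_xe_event_map';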

So this table contains a mapping between Extended Event events and SQL Trace events. If we are curious to see that mapping, we can do a simple query and see the trace_event_id and the XE event name along with the XE package that is related to that specific event. That said, most of us do not have the trace_event_id memorized to human friendly terms so we need to do a little bit more. That is fine because it is possible for us to also see the relationship between those trace ids and the trace event names. Let’s get a little more creative with the query.

Before doing that though, I do want to lay out some baseline info. There are 180 trace events and of those there are 139 mappings in the map table between SQL Trace and XE. This tells us immediately that there is not a one-to-one mapping. What this does not divulge just yet is that there may be a many-to-one mapping (ok, there is a single many-to-one mapping). We will look a bit at the differences in mappings.
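Here is the more creative version – a sketch; my full query also pulled the dll file name and dll version via the loaded-modules DMV, trimmed here for brevity:

    SELECT SERVERPROPERTY('ProductVersion') AS product_version,
           te.trace_event_id,
           te.name AS trace_event,
           em.package_name,
           em.xe_event_name
      FROM sys.trace_events te
      LEFT OUTER JOIN sys.trace_xe_event_map em
        ON em.trace_event_id = te.trace_event_id
     ORDER BY em.xe_event_name DESC, te.name;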

And the results will look something like the following.

In the previous image, I highlighted three areas in different colors. First, note the red box surrounding the row count output in my results. Recall that I said there were only 180 events and only 139 rows in the map table. Clearly, we have a map that has a many-to-one relationship.

Next up is the blue box. The only thing we can see here is that there is a trace event with no mapping to an Extended Event. We expected this result given there were only 139 maps. Of interest here is that most of the unmapped events are Audit related. While SQL Audit relies heavily on the XE Engine, the events from Trace do not map directly to events in XEvents.

The last call-out is the green box. This is there to show the clear mapping between trace events and XEvents events. For the most part, this has not changed clear through SQL Server 2017. If you will notice, I have included the product version, dll file name and the dll version info. That version info helps to underscore what has changed with SQL Trace – which is practically nothing.

This brings us to the question on everybody’s mind: What is the many-to-one map? That is an easy answer but the explanation is not quite as easy.
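Finding the many-to-one map is just a matter of grouping the map table:

    SELECT em.trace_event_id,
           COUNT(*) AS xe_mappings
      FROM sys.trace_xe_event_map em
     GROUP BY em.trace_event_id
    HAVING COUNT(*) > 1;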

Running that query, I will find that event_id 165 has more than one mapping in the map table. Let’s see what that translates to in human terms.
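Joining back to sys.trace_events gives us the friendly name:

    SELECT te.name AS trace_event,
           em.package_name,
           em.xe_event_name
      FROM sys.trace_events te
     INNER JOIN sys.trace_xe_event_map em
        ON em.trace_event_id = te.trace_event_id
     WHERE te.trace_event_id = 165;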

And this is what it looks like:

So, “Performance Statistics” happens to be the event that maps to multiple different XEvent events. That said, you may wonder why “query_pre_execution_showplan” doesn’t map to something like “Showplan XML” or maybe “Showplan All” which produce the execution plan in similar fashion. I will leave that mystery to you as your deep dive homework assignment.

The Wrap

If we browse the data made available to us, we can see that there is adequate info to help us convert all of those SQL Traces to the high-performing XE versions that we should be using these days. I encourage you to browse the capabilities of XE as you work towards converting your profiler based traces to the much friendlier and more powerful counterparts.

If you are interested in learning more about Extended Events, I recommend you read my “60 day” series of articles on Extended Events. The series continues to grow and covers a pretty decent depth and breadth on the topic.
