Easy Audit Index Changes

Comments: No Comments
Published on: December 27, 2019

In the previous article, I showed a somewhat elaborate method that could be used should there be a need to audit changes to your indexes.

As I learned many years ago in school, the elaborate or difficult method may not always be the best method. That said, we often need to learn the more elaborate methods in order to better appreciate the simpler, more efficient methods available. This article will help demonstrate a much better method.

Auditing

When looking for an easy method, one of the first technologies to try really should be Extended Events (xevents). Through the use of xevents, we have a large tool set at our disposal. Not only is the tool set rather large, it is also extremely flexible and multi-faceted, giving us greater potential to solve problems with simple tools.

Let’s look at how we can solve this problem through the use of xevents.
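A minimal sketch of such a session follows. The session name, the event_file target name, and the commented-out database_name predicate are illustrative assumptions, not the exact script from the original post.

CREATE EVENT SESSION [AuditIndexChanges] ON SERVER
ADD EVENT sqlserver.object_created (
    ACTION ( sqlserver.database_name, sqlserver.server_principal_name, sqlserver.sql_text )
    WHERE ( object_type = 'INDEX'
            AND ddl_phase = 1  -- fire on commit only, not on begin
            -- remove the comment tags to focus on a single database
            --AND sqlserver.database_name = N'AdventureWorks2017'
          ) ),
ADD EVENT sqlserver.object_altered (
    ACTION ( sqlserver.database_name, sqlserver.server_principal_name, sqlserver.sql_text )
    WHERE ( object_type = 'INDEX'
            AND ddl_phase = 1
            --AND sqlserver.database_name = N'AdventureWorks2017'
          ) ),
ADD EVENT sqlserver.object_deleted (
    ACTION ( sqlserver.database_name, sqlserver.server_principal_name, sqlserver.sql_text )
    WHERE ( object_type = 'INDEX'
            AND ddl_phase = 1
            --AND sqlserver.database_name = N'AdventureWorks2017'
          ) )
ADD TARGET package0.event_file ( SET filename = N'AuditIndexChanges' )
WITH ( STARTUP_STATE = ON );
GO

ALTER EVENT SESSION [AuditIndexChanges] ON SERVER STATE = START;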

In the preceding xevent session, I am taking advantage of three events: object_created, object_altered, and object_deleted. With the object_type predicate in place, these events will only fire when an index is created, altered, or dropped. They will not fire if the index is defragged (for instance) or if the stats for the index happen to be updated (which is an issue for the stats_date method shown in the Find Index Create Date article below). After deploying this session and then creating an index (shown in the following script), I can expect this session to capture a wealth of information.
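Any index change is enough to light up the session. For a quick test, something like the following will do (the table and index names here are purely illustrative):

CREATE NONCLUSTERED INDEX [IX_SalesOrderDetail_ModifiedDate]
    ON [Sales].[SalesOrderDetail] ( [ModifiedDate] );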

Plenty of pertinent information is captured with this session, including the person who made the change. Should I wish to focus this session on a specific database, that is definitely possible by removing the comment tags from the database_name predicate in the session script I have shared.
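To pull the captured data back for review, the file target can be queried directly. Here is a minimal example; the file name matches the illustrative session above, and the XML shredding pulls only a few of the available columns:

SELECT  x.ed.value('(event/@name)[1]', 'nvarchar(128)') AS EventName
      , x.ed.value('(event/data[@name="object_name"]/value)[1]', 'nvarchar(128)') AS IndexName
      , x.ed.value('(event/action[@name="server_principal_name"]/value)[1]', 'nvarchar(128)') AS ChangedBy
      , x.ed.value('(event/@timestamp)[1]', 'datetime2') AS EventTime
FROM (  SELECT CONVERT(XML, fxe.event_data) AS ed
        FROM sys.fn_xe_file_target_read_file('AuditIndexChanges*.xel', NULL, NULL, NULL) AS fxe ) AS x;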

Put a bow on it

This article showed an easy method to audit for index changes. The weak link in a solution such as this really boils down to the requirement that the solution needs to be in place before the index change occurs. Otherwise, it would all be for naught.

Interested in more Extended Events articles? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the third article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Audit Index Changes

Categories: News, Professional, SSC
Comments: 3 Comments
Published on: December 26, 2019

In the previous article, I showed a quick and dirty method to try and capture the indexes that may have been just created. We all know that in the hectic life of the DBA, it is easy enough to forget the indexes created more than just a few hours in the past.

In that article, I mentioned that the solution provided was far from a complete solution to the entire problem. A bigger part of the picture involves audits to know when and by whom indexes were created, altered or dropped.

Auditing

On many an occasion I have had clients request help with tracking the changes that have occurred to their indexes. The reasons vary slightly but generally boil down to two large categories: 1) to better understand who is making changes and 2) to help ensure proper indexes are not removed due to code promotions (various apps like to wipe out custom indexes, which causes performance issues).

With that in mind, the following provides a solution that will help track indexes that have changed (index definition) or that have been added or removed during the audit interval. The interval can be set per your specific requirements. For this specific solution, I have historically just set this to run once a day. The solution is shown below.
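What follows is a condensed sketch of that solution rather than the complete production script. The table shapes, the change-detection keys, and the use of the undocumented sp_MSforeachdb procedure are all assumptions made for illustration.

-- Staging table: most recent snapshot of index metadata across databases.
CREATE TABLE dbo.IndexStaging (
      CaptureDate  DATETIME      NOT NULL DEFAULT GETDATE()
    , DatabaseName SYSNAME       NOT NULL
    , SchemaName   SYSNAME       NOT NULL
    , TableName    SYSNAME       NOT NULL
    , IndexName    SYSNAME       NULL
    , IndexDef     NVARCHAR(MAX) NULL
);

-- Log table: one row per detected add/remove/alter.
CREATE TABLE dbo.IndexDefChange (
      ChangeDate   DATETIME    NOT NULL DEFAULT GETDATE()
    , DatabaseName SYSNAME     NOT NULL
    , SchemaName   SYSNAME     NOT NULL
    , TableName    SYSNAME     NOT NULL
    , IndexName    SYSNAME     NULL
    , ChangeType   VARCHAR(10) NOT NULL  -- ADDED / REMOVED / ALTERED
);
GO

CREATE PROCEDURE dbo.IndexChangeCapture
AS
BEGIN
    SET NOCOUNT ON;

    CREATE TABLE #CurrentIndexes (
          DatabaseName SYSNAME NOT NULL
        , SchemaName   SYSNAME NOT NULL
        , TableName    SYSNAME NOT NULL
        , IndexName    SYSNAME NULL
        , IndexDef     NVARCHAR(MAX) NULL );

    -- Snapshot index metadata from every database.
    INSERT #CurrentIndexes
    EXEC sp_MSforeachdb N'USE [?];
        SELECT DB_NAME(), s.name, t.name, i.name
             , CONCAT(i.type_desc, N''|'', i.is_unique, N''|'', i.filter_definition)
        FROM sys.indexes i
        INNER JOIN sys.tables t ON t.object_id = i.object_id
        INNER JOIN sys.schemas s ON s.schema_id = t.schema_id;';

    -- New indexes: present now, absent from the previous snapshot.
    INSERT dbo.IndexDefChange (DatabaseName, SchemaName, TableName, IndexName, ChangeType)
    SELECT c.DatabaseName, c.SchemaName, c.TableName, c.IndexName, 'ADDED'
    FROM #CurrentIndexes c
    WHERE NOT EXISTS ( SELECT 1 FROM dbo.IndexStaging st
                       WHERE st.DatabaseName = c.DatabaseName
                         AND st.SchemaName = c.SchemaName
                         AND st.TableName = c.TableName
                         AND ISNULL(st.IndexName, N'') = ISNULL(c.IndexName, N'') );

    -- Removed indexes: in the previous snapshot, absent now.
    INSERT dbo.IndexDefChange (DatabaseName, SchemaName, TableName, IndexName, ChangeType)
    SELECT st.DatabaseName, st.SchemaName, st.TableName, st.IndexName, 'REMOVED'
    FROM dbo.IndexStaging st
    WHERE NOT EXISTS ( SELECT 1 FROM #CurrentIndexes c
                       WHERE c.DatabaseName = st.DatabaseName
                         AND c.SchemaName = st.SchemaName
                         AND c.TableName = st.TableName
                         AND ISNULL(c.IndexName, N'') = ISNULL(st.IndexName, N'') );

    -- Altered indexes: same name, different definition string.
    INSERT dbo.IndexDefChange (DatabaseName, SchemaName, TableName, IndexName, ChangeType)
    SELECT c.DatabaseName, c.SchemaName, c.TableName, c.IndexName, 'ALTERED'
    FROM #CurrentIndexes c
    INNER JOIN dbo.IndexStaging st
        ON  st.DatabaseName = c.DatabaseName
        AND st.SchemaName = c.SchemaName
        AND st.TableName = c.TableName
        AND st.IndexName = c.IndexName
    WHERE ISNULL(c.IndexDef, N'') <> ISNULL(st.IndexDef, N'');

    -- Replace the staged snapshot with the current one.
    TRUNCATE TABLE dbo.IndexStaging;
    INSERT dbo.IndexStaging (DatabaseName, SchemaName, TableName, IndexName, IndexDef)
    SELECT DatabaseName, SchemaName, TableName, IndexName, IndexDef
    FROM #CurrentIndexes;
END;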

To make this solution work, I use several tables and a stored procedure. The stored procedure is run via an agent job, scheduled to your requirements. The tables will stage all of the indexes for all of the databases, then log which have changed or have been added or removed. From there, a simple query to the IndexDefChange table will help me learn which indexes may deserve some attention.
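That review query can be as simple as the following (again using the illustrative names from the sketch above):

SELECT ChangeDate, DatabaseName, SchemaName, TableName, IndexName, ChangeType
FROM dbo.IndexDefChange
WHERE ChangeDate >= DATEADD(DAY, -7, GETDATE())
ORDER BY ChangeDate DESC;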

Put a bow on it

This article showed a method to audit for index changes. The weak link in a solution such as this really boils down to the requirement that the solution needs to be in place before the index change occurs. Otherwise, it would all be for naught.

Interested in more back to basics articles? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the second article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Find Index Create Date

Categories: News, Professional, SSC
Comments: 1 Comment
Published on: December 25, 2019

Have you ever been zipping right along, working on tuning a query or maybe a handful of queries? Then BAM! All of a sudden, you lose the script you were working on. The script just so happened to have a few indexes that you were getting ready to create, and you would rather not start all over again.

Sure, there are tools out there in this day and age that are supposed to help you recover those scripts. Just suppose that maybe the tool(s) don’t have a record of your script either. Are you up a creek without a paddle at this point? Are you truly forced to start all over again?

Let’s make this predicament even more unfortunate for the DBA. The DBA doesn’t recall the object upon which the indexes were to be created. Or perhaps the indexes were already created, but the DBA needs to get those indexes to three or four more servers.

A Bit Far-Fetched?

Does this all seem a bit too implausible or do you know somebody that has experienced something like this? I had a DBA pose this eerily similar scenario to me because it had just happened to him. Let’s be real, crap really does happen at the most inopportune time. We just need to be prepared to deal with it and polish it in the most elegant way we know.

Today, I will show you how to potentially retrieve the indexes that were recently created and maybe, just maybe, save yourself a little time and prevent the dreaded rework. Understand, however, that this is not a bulletproof solution. It is merely a plausible method to potentially recapture recently created indexes. I will explain the issues with this method in more detail shortly. First, the script.
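What follows is a reconstruction of that kind of script; the LastModDate alias matches the discussion below, and the filter against Microsoft-shipped objects is an assumption:

-- Approximate index creation dates via the last stats update date.
SELECT  DB_NAME() AS DBName
      , OBJECT_SCHEMA_NAME(si.object_id) AS SchemaName
      , OBJECT_NAME(si.object_id) AS TableName
      , si.name AS IndexName
      , STATS_DATE(si.object_id, si.index_id) AS LastModDate
FROM    sys.indexes si
WHERE   OBJECTPROPERTY(si.object_id, 'IsMSShipped') = 0
        AND si.index_id > 0  -- skip heaps
ORDER BY LastModDate DESC;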

If I run that query on a database, I get back a row per index with the LastModDate for each.

Interestingly enough, all of these indexes were certainly not created on the dates shown in the LastModDate column. As it turns out, the trick I am using (STATS_DATE) gives us relative dates for when indexes were created, but it also gives us the date of the last time the index (or rather its stats) was updated. This means you may end up with a ton of indexes to sift through to find the index you just created (and forgot the pertinent information for). So, while this method does have a use case and can help you prevent rework, it doesn’t do much good for an audit requirement.

Stay tuned, because the next couple of articles will discuss how to better audit your indexes and get the precise information you need. Not only will you be able to find that recent index addition, but you will also be able to assuredly know when an index was created, altered, or dropped – and by whom.

Put a bow on it

This article showed a quick script to help determine indexes that were created recently. This script will help you out of rough spot and help reduce the chance of rework. That said, you will definitely want to add a bit of auditing on indexes to your environment because while it is helpful, it will not help you in cases where maybe the index was created a few days in the past or to determine when an index may have been dropped.

Interested in more back to basics articles? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the first article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Linked Servers and Stats

Linked Servers

A linked server is a fabulous feature in SQL Server to help accomplish various data tasks between local and remote servers. There is a time and a place for the use of linked servers. Unfortunately, some are of the belief that a linked server is the solution to all problems including the way to end world hunger.

You may recall that I have written about linked servers on a few occasions, usually due to the relationship between a poorly performing process and a linked server. Here is a prime example, and here is another. On the other hand, here and here are some examples of using a linked server in a much better light.

Using a linked server is entirely up to you. If you choose to use a linked server, you will need to understand there are risks involved – beyond just the documentation and performance nightmares it can cause. Today, I will share one of those fabulous gotchas – mostly because it has cropped up more than once with my clients.

Remote Stats

On more than one occasion I have had an emergency request because everything was broken. The “everything” in almost every incident is an SSIS package that is failing with error messages. The error message will typically contain text similar to the following:

Could not locate statistics ‘_WA_Sys_00000015_346C780E’ in the system catalogs.

Due to the error, the package fails processing and grinds to a halt. When diving into the package it is discovered that the missing stats happen to be coming from a linked server query. This raises a big bright blaring alarm for me. Why is the SSIS package accessing the data via a linked server? This is rather counter-productive and definitely contrary to what is desired from a performance perspective.

You can certainly see why this is a sudden emergency, right? Absolutely nothing is working anymore, right? Well, at least that is the way it seems to the person reporting the issue. Understanding that their realm of work has come to a halt is important in order to reach a resolution more quickly. Knowing that the point of failure is not on as grand a scale as claimed is critical because it can help you keep your cool while assessing the real problem.

Since the problem presents itself when accessing a specific object or even set of objects across a linked server, we have our scope of the issue scaled down quite a bit already. Now, what do we do from here? Obviously the statistic with that name is missing, right? Don’t be so hasty to assume it is actually missing. Let’s verify that the stat is indeed missing first. Even better – let’s just eliminate as many variables as possible. We can do this by querying the affected objects directly from SSMS.

Easy Troubleshooting for the DBA

For me, it is essential to eliminate variables when we run into a failure. In this case, I have just a few major variables that can potentially be causing this problem. Those variables are:

  1. SSIS
  2. Linked Server
  3. The query itself
  4. Security
  5. Bad Plan
  6. The stat is legitimately missing

I can easily assess the legitimacy of each variable through various quick tests. To eliminate or validate the “query” variable, I can issue a very simple query to retrieve data from the affected object. So let’s eliminate variables 1 and 3 in one fell swoop.
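Something along these lines is plenty simple; the server, database, and table names are entirely hypothetical stand-ins:

-- Traditional four-part linked server query.
SELECT TOP (10) *
FROM [RemoteServer].[RemoteDB].[dbo].[SomeTable];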

This query, in this scenario, fails with the very same ‘Could not locate statistics’ error shown earlier.

This, while disappointing, is actually quite productive. It has eliminated two variables for me. I now know that SSIS is not related to the problem. I also know that query complexity is not related to the problem. There are still a few variables left to either validate or eliminate. Since I know the problem occurs when querying via the linked server, let’s try querying the remote server directly (not via the linked server).
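Connected directly to the remote instance in SSMS (same hypothetical names), the equivalent query runs without error:

-- Run while connected directly to the remote server.
SELECT TOP (10) *
FROM [RemoteDB].[dbo].[SomeTable];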

Well, while that does not entirely eliminate or validate any variables, it does tell me that the problem is still specifically related to the linked server. What if I try to use OPENQUERY instead of the traditional linked server query?
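Here is the OPENQUERY version, again with the hypothetical names:

-- Pass-through query executed on the linked server itself.
SELECT *
FROM OPENQUERY([RemoteServer], 'SELECT TOP (10) * FROM [RemoteDB].[dbo].[SomeTable];');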

Wow, look at that! OK, so this is a bit of trickery because I have told SQL Server to execute that query on the linked server as if it were a “local” query on that server. This works without error and pushes the findings more conclusively toward a linked server problem.

While the OPENQUERY version works, I still need to eliminate some variables. One of those variables is security. Since the error mentions sp_table_statistics2_rowset, I googled for that proc and found some mentions that column-level DENY permissions related to the stats in question can give the linked server some major fits. It could also be insufficient permission to execute DBCC SHOW_STATISTICS. I highly doubt either is the issue here, since the OPENQUERY version works while using the same pass-through authentication of the linked server that the traditional linked server query would use.

In order to eliminate security as a potential cause, the test is simple (while it could be more complex, I went for the jugular to eliminate the theory as quickly as possible): I added my account as the pass-through account (which is a sysadmin on the remote server) and then queried the linked server all over again. Suffice it to say, there was no change in the result – the error persisted.

This does not just yet eliminate the security variable because there could be a cached plan somewhere. So, just to be sure, I chose to flush the cache on both the linked server and the local server. Running the simple “Select *” query all over again yielded no difference in the query results. I can now safely say that the problem is not related to a cached plan nor is it related to the security. At this point, I set the linked server security back to the way it was. I have effectively eliminated all variables but 2 and 6 (linked server and the stat is missing).
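For reference, flushing the plan cache can be as blunt as the following, run on each server involved (with the usual caution about doing this on a busy production system):

-- Clears the entire plan cache on the instance.
DBCC FREEPROCCACHE;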

Let’s eliminate the missing stat variable right now.
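A query along these lines, run against the remote server, lists the auto-created statistics on the suspect table and builds a drop statement for each; the object names remain hypothetical:

-- Find auto-created stats and generate DROP STATISTICS commands.
SELECT  OBJECT_NAME(s.object_id) AS TableName
      , s.name AS StatName
      , 'DROP STATISTICS ' + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id))
            + '.' + QUOTENAME(OBJECT_NAME(s.object_id))
            + '.' + QUOTENAME(s.name) AS DropStatement
FROM    sys.stats s
WHERE   s.object_id = OBJECT_ID('dbo.SomeTable')
        AND s.auto_created = 1;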

Undoubtedly you have noticed that I built a drop statement into the result set from this particular query. That aside, among the rows returned is the very stat that was producing the error. This stat is most certainly available on the source server (the linked server). That entirely eliminates the sixth variable: the stat is not missing.

This brings us to the final step – see what happens when we drop the stat and try the query again. I have the drop statement ready to roll, so let’s just run that and see what happens. My risk here is very small. This is an auto-generated stat and will be recreated if needed. After dropping the stat, I run the simple “Select *” query across the linked server again and guess what? It works perfectly fine.

In my opinion, we can skip a few of these steps when the problem involves a system-generated stat and go straight to dropping the stat and trying again.

That said, the better solution in this case would be to do one of two things.

  1. Modify the Package to not use the linked servers in this way and create a data source connection for each of the different server connections.
  2. Modify the Package to use OPENQUERY style queries.

My preference between those two options would be to create specific data sources. It seems very silly to pull data across the linked server to then be consumed via SSIS package.

I have just illustrated what I would classify as a nuisance problem related to linked servers. Simplifying the approach as I described with the two potential solutions would alleviate this nuisance while also giving an inevitable performance boost.

Parting thought: one observation about the “fix” that involves dropping the stats is that on busy systems it is a very short-lived kluge. I have seen the problem resurface within five minutes on busy systems. The stats are not really the problem; it is how the linked server connection interfaces with the stats.
