Auditing and Event SubClasses

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: May 28, 2014

A recent discussion got me thinking about auditing.  To be honest, it started with a complaint about some documentation that seemed overly light on the various fields related to auditing as it stands in SQL Server.

In talking to the person who raised the valid concern about the lack of good documentation, I was curious why he suddenly had so many questions about auditing and its functionality within SQL Server.  Reflecting on his answers, it all made sense and fell into place around the whole Audit Life Cycle.  I hadn’t previously considered the Life Cycle, but it makes sense to map it out.  Here is a simple rendition of what an audit Life Cycle might entail.

 

 

(Figure: the Audit Life Cycle)

 

 

In order to audit anything, it is necessary to know what you really want to audit, why you want to audit it, and how to achieve those goals within the tools given to you.  In that vein, it makes sense that one would need to study up on the topic to figure out what the different fields mean within the tool.

Of course, once you start collecting that data, you also need to figure out how to measure it and then determine whether adjustments to the auditing plan need to be made.  In the end, it boils down to three questions: what data is being collected, what are you doing with that data, and what does that data represent?

In our simple discussion, the data in question was the Event Subclass field in this view (sys.trace_subclass_values) and in this Microsoft document (one of several).  The beauty of this field is that it is not tied solely to Auditing; you will also find it in Profiler, server-side traces, and Extended Events.

With so little information available to explain what the field values represent, maybe it is better to turn to the data itself to understand what the values mean and how to interpret them.  To do this, we can query a few catalog views, as in the following query.

[codesyntax lang="tsql"]

[/codesyntax]
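In rough strokes, such a query looks something like this sketch (not necessarily the exact script from the post, but it uses the same trace catalog views):

[codesyntax lang="tsql"]
-- A sketch: map trace events to their Event Subclass values via the trace catalog views
SELECT tc.name AS CategoryName
     , te.name AS EventName
     , tsv.subclass_value AS SubClassValue
     , tsv.subclass_name AS SubClassName
FROM sys.trace_events AS te
    INNER JOIN sys.trace_categories AS tc
        ON te.category_id = tc.category_id
    INNER JOIN sys.trace_subclass_values AS tsv
        ON te.trace_event_id = tsv.trace_event_id
    INNER JOIN sys.trace_columns AS tcol
        ON tsv.trace_column_id = tcol.trace_column_id
WHERE tcol.name = N'EventSubClass'
  AND te.name LIKE '%Audit%'     -- filter to the Audit event types, as described below
ORDER BY tc.name, te.name, tsv.subclass_value;
[/codesyntax]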

With the above query, I can filter down to just the Event Types that have Audit in the name.  Or I could add a different filter to better understand a particular set of subclasses in a more focused effort.

I hope this helps in your efforts to provide a better auditing or “profiling” type of experience in your environment.

 

Can you partition a temporary table?

Reading that title, you might sit and wonder why you would ever want to partition a temporary table.  I too would wonder the same thing.  That notwithstanding, it is an interesting question that I wanted to investigate.

The investigation started with a fairly innocuous venture into showing some features that do apply to temp tables but are commonly mistaken as limitations (i.e., assumed not to work with temp tables).  To show this, I set off to create a script with reproducible results to demonstrate these features.  I have included all of those in the same script I will provide that demonstrates the answer to the partitioning question.

In fact, let’s just jump to that script now.

[codesyntax lang="tsql"]

[/codesyntax]
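In rough strokes, the script looks something like the following sketch.  Apart from #hubbabubba, the object names are hypothetical stand-ins, and the sketch assumes the partition function and scheme are created in tempdb (where the temp table physically lives):

[codesyntax lang="tsql"]
USE tempdb;  -- assumption: the temp table is in tempdb, so the partition objects go there too
GO
-- drop objects if they exist
IF OBJECT_ID('tempdb..#hubbabubba') IS NOT NULL
    DROP TABLE #hubbabubba;
IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'psMonthly2014')
    DROP PARTITION SCHEME psMonthly2014;
IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'pfMonthly2014')
    DROP PARTITION FUNCTION pfMonthly2014;
GO
-- one boundary per month of 2014 = 13 partitions with RANGE RIGHT
CREATE PARTITION FUNCTION pfMonthly2014 (datetime)
AS RANGE RIGHT FOR VALUES ('20140101','20140201','20140301','20140401','20140501','20140601',
                           '20140701','20140801','20140901','20141001','20141101','20141201');
GO
CREATE PARTITION SCHEME psMonthly2014
AS PARTITION pfMonthly2014 ALL TO ([PRIMARY]);
GO
-- the "mythical limitations": a default constraint and a primary key on a temp table
CREATE TABLE #hubbabubba
(
    HubbaID  int IDENTITY(1,1) NOT NULL
  , SomeDate datetime NOT NULL CONSTRAINT df_hubba_somedate DEFAULT (GETDATE())
  , CONSTRAINT PK_hubbabubba PRIMARY KEY NONCLUSTERED (HubbaID, SomeDate)
);
GO
-- a clustered index created after the table, placed on the partition scheme
CREATE CLUSTERED INDEX CIX_hubbabubba_SomeDate
    ON #hubbabubba (SomeDate)
    ON psMonthly2014 (SomeDate);
GO
-- and a plain nonclustered index for good measure
CREATE NONCLUSTERED INDEX IX_hubbabubba_HubbaID
    ON #hubbabubba (HubbaID);
GO
-- random-ish dates spread across 2014 so every monthly partition gets rows
INSERT INTO #hubbabubba (SomeDate)
SELECT TOP (10000) DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 365, '20140101')
FROM sys.all_columns AS ac1
    CROSS JOIN sys.all_columns AS ac2;
-- (the data-distribution report, per Kendra Little's script mentioned below, would follow here)
[/codesyntax]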

In the beginning (after dropping objects if they exist), I start by creating a temp table that busts a couple of mythical limitations.  These mythical creatures are the claims that temp tables can’t have indexes and that they can’t have constraints.

In this script, I show that a temp table (#hubbabubba) can indeed have indexes created on it (clustered and nonclustered).  I also demonstrate the creation of two different kinds of constraints on the #hubbabubba table.  The two constraints are a primary key and a default constraint.  That stuff was easy!!

To figure out whether or not one could partition a temporary table, I needed to do more than simply create a “test” temp table.  I had to create a partition function and a partition scheme and then tie that partition scheme to a clustered index created after the table itself.  Really, these are all the same steps as creating partitioning on a standard (non-temporary) table.

With the partition function, partition scheme, and table created, it was time to populate the table with enough random data to produce a fair distribution.  You see, I created a partition boundary for each month of the year 2014.  To see partitioning in action, I wanted to see data in each of those partitions.

That brings us to the final piece of the whole script.  Kendra Little produced a script for viewing distribution of data across the partitions so I used her script to demonstrate our data distribution.  If you run the entire script including the data distribution segment at the end, you will see that there are 13 partitions with each of the monthly partitions containing data.

The distribution of data into the different partitions demonstrates soundly that partitioning can not only be created on a temporary table, but that it can be used.  As for the secondary question today “Why would you do that?”, I still do not know.  The only reason that pops into my mind is that you would do it purely for demonstration purposes.  I can’t think of a production scenario where partitioning temporary data would be a benefit.  If you know of a use case, please let me know.

Supported Compatibility Levels in SQL Server

Categories: News, Professional, Scripts, SSC
Comments: 1 Comment
Published on: May 21, 2014

It has been well documented and is well known that SQL Server supports certain older versions of SQL Server in a compatibility mode.  This setting can be configured at the database level, in the database properties.  You can quickly change to an older compatibility level or revert back to a newer one.

Changing the compatibility level is sometimes necessary.  Knowing what compatibility modes are available for each database is also somewhat necessary.  The common rule of thumb has been the current version and two prior versions.  But even with that, sometimes it is warm and fuzzy to be able to see the supported versions in some sort of format other than through the GUI for database properties.

Sure, one could go and check Books Online.  You can find that information there.  Or you could fire up this script and run with the output (as a guideline).

[codesyntax lang="tsql"]

[/codesyntax]
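As a rough guideline-style sketch (not the original script), something along these lines produces a similar list by working from the engine’s major version and the “current plus two prior versions” rule of thumb:

[codesyntax lang="tsql"]
-- A sketch: derive supported compatibility levels from the engine's major version (guideline only)
DECLARE @MajorVersion int =
    CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS varchar(20)), 4) AS int);

;WITH CompatLevels (CompatLevel) AS
(
    SELECT 80 UNION ALL SELECT 90 UNION ALL SELECT 100
    UNION ALL SELECT 110 UNION ALL SELECT 120
)
SELECT cl.CompatLevel AS SupportedCompatibilityLevel
FROM CompatLevels AS cl
WHERE cl.CompatLevel BETWEEN (@MajorVersion * 10) - 20 AND (@MajorVersion * 10)
ORDER BY cl.CompatLevel;
[/codesyntax]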

This script will return results such as the following.

(Screenshot: supported compatibility levels returned by the script)

And if we wanted to see the results for a SQL Server 2014 installation, we would see the following.

(Screenshot: the same results on a SQL Server 2014 installation)

The output is displayed in the same format you might see it if you were to use the Database Properties GUI.  That said, if you are using the GUI in SQL Server 2014, you might run into the following.

(Screenshot: the Database Properties GUI in SQL Server 2014, showing an extra compatibility level)

Notice the additional compatibility level for SQL 2005?  If you check the documentation, you will find that compatibility level 90 is not supported in SQL 2014.  In fact, it says that if a database is at compatibility level 90, it will be upgraded to 100 (SQL 2008) automatically.  You can find all of that and more here.

If you tried to select compatibility level 90, you might end up with an error.  If you are on 2014 CTP2, however, you will probably be able to change the compatibility level without error.

Anyway, this is the message you might see when trying to change to compatibility 90.

(Screenshot: the error message returned when attempting to change to compatibility level 90)

They sometimes say that “seeing is believing.”  Well in this case, you may be seeing a compatibility level in the 2014 GUI that just isn’t valid.  Keep that in mind when using the GUI or trying to change compatibility modes.

Table Space updated again

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: March 28, 2014

Today we have another update for an age-old script on this site.  You can find the last update to the script here.

This time we have a bit of a bug fix.  The details of that bug are in the notes for the script.

[codesyntax lang="tsql"]

[/codesyntax]
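The updated script is not reproduced here, but stripped to its core, a table space query of this kind typically looks something like this simplified sketch:

[codesyntax lang="tsql"]
-- A simplified sketch: reserved and used space per table from sys.dm_db_partition_stats
SELECT s.name AS SchemaName
     , t.name AS TableName
     , SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS RowCounts
     , SUM(ps.reserved_page_count) * 8 / 1024.0 AS ReservedMB
     , SUM(ps.used_page_count) * 8 / 1024.0 AS UsedMB
FROM sys.dm_db_partition_stats AS ps
    INNER JOIN sys.tables AS t
        ON ps.object_id = t.object_id
    INNER JOIN sys.schemas AS s
        ON t.schema_id = s.schema_id
GROUP BY s.name, t.name
ORDER BY ReservedMB DESC;
[/codesyntax]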

Table Hierarchy updated

Recently a need resurfaced to explore the foreign key tree (hierarchy/genealogy) as it related to a specific table within a database.  As I pulled out the script from the repository, I realized there was some unfinished work to be done.  But there was also some polish that needed to be added.  This is an update to the most recent posting of that script.  You can see several revisions in the series at this link or the group here.

Some of the changes involve formatting and labeling.  I added a new column called “Direction” to help show the relationship of the key to the table in question.  I also changed the FKGenealogy column (formerly called SortCol) to reflect the source table more accurately in the case where the key comes from an ancestor rather than a descendant.  The Level of the FK was also modified to help show how far away the ancestor is from the origin table.

A final adjustment also comes in the Genealogy attribute.  Ancestors were all starting at the wrong point in the lineage.  I adjusted that so the lineage is shown from the point in the tree where the ancestor is related, rather than as a root coming directly from the origin table.

All in all, this script should make more sense to the user than the previous versions.

[codesyntax lang="tsql"]

[/codesyntax]
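To give a feel for the approach, here is a heavily simplified sketch of the descendant half of the idea: a recursive CTE over sys.foreign_keys carrying FKLevel, FKGenealogy, and Direction columns.  The variable values and exact column logic are assumptions, not the full script:

[codesyntax lang="tsql"]
DECLARE @SchemaName sysname = N'dbo'        -- assumption: origin table to explore
      , @TableName  sysname = N'MyTable';

;WITH FKTree AS
(
    /* anchor: keys that reference the origin table (its direct descendants) */
    SELECT fk.parent_object_id AS ChildObjectID
         , fk.name AS ForeignKeyName
         , OBJECT_NAME(fk.referenced_object_id) AS ParentTable
         , OBJECT_NAME(fk.parent_object_id) AS ChildTable
         , 1 AS FKLevel
         , CAST(OBJECT_NAME(fk.referenced_object_id) AS nvarchar(max)) AS FKGenealogy
    FROM sys.foreign_keys AS fk
    WHERE fk.referenced_object_id = OBJECT_ID(QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName))

    UNION ALL

    /* recurse: keys that reference each child found so far */
    SELECT fk.parent_object_id
         , fk.name
         , OBJECT_NAME(fk.referenced_object_id)
         , OBJECT_NAME(fk.parent_object_id)
         , ft.FKLevel + 1
         , CAST(ft.FKGenealogy + N' --> ' + OBJECT_NAME(fk.referenced_object_id) AS nvarchar(max))
    FROM sys.foreign_keys AS fk
        INNER JOIN FKTree AS ft
            ON fk.referenced_object_id = ft.ChildObjectID
    WHERE fk.parent_object_id <> fk.referenced_object_id   -- skip self-referencing keys (cycles need more guarding)
)
SELECT ForeignKeyName
     , ParentTable
     , ChildTable
     , FKLevel
     , FKGenealogy
     , N'Descendant' AS Direction
FROM FKTree
ORDER BY FKGenealogy;
[/codesyntax]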

T-SQL Tuesday #45 Follow the Yellow Brick Road


We have the distinct pleasure once again to have a party full of SQL enthusiasts.  The best part is that few if any of us are even in the same room or building for this party.

The party is known as TSQL Tuesday and this month is hosted by Mickey Stuewe (Blog | Twitter).  This month Mickey has come up with the notion that for some reason or another you may want to have something audited, tracked, traced or otherwise followed.  To quote Mickey from the invitation (which you can read here): “So, it is time to follow Dorothy and Toto down the yellow brick road and to share your experience with auditing data.”

Mickey did provide some viable options for auditing.  And I didn’t see any restrictions on what the auditing topic could be.  So I have a slightly different spin on the topic than the direction Mickey appeared to be headed with her examples and description in the invite.

That said, I will be discussing a topic along the vein of “finding out who did something stupid last week.”  And the TSQL Tuesday topic once again works out well for me this month because I have a backlog of blog topics on this subject that I have been meaning to write.  So let’s get at least one of them knocked out of the way with this entry into the blog party of the month.

Behold the Great and Powerful Audit

Have you ever come into work and been blasted first thing with “everything is slow!”

Then you start to dig through your email reports and begin to wonder why 50% of your clustered indexes are jacked up to 99% fragmentation.  You know that sort of thing never happens suddenly in your environment because you have your finger on the pulse of all things DB related – or so you thought.

You check your index defrag log and see that it ran perfectly fine last night and you even see the report on fragmentation from after the defrag process was completed.  You begin to really scratch your head.  You know that you have no process that does any bulk loads or ETL batches after the index defrag process.  Nothing is really making any sense.  Then a lightbulb goes off and you check your report on database sizes.  You see that the database is now several hundred GB smaller and there is no free space left in the data files.  Free space that you had left in the data file intentionally.

Now you are steamed and about to blow your top like a geyser.  Who has done something so stupid as to mess around with your data files?  We senior DBAs are somewhat territorial with the databases we support, and we don’t like people making changes they haven’t discussed with us.  So it is perfectly understandable if you are a bit steamed by an event such as this.

The question now is: What are you going to do about it?  How are you going to find who jacked up your data files?

The answer is in the default trace.

[codesyntax lang="tsql"]

[/codesyntax]
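A query along these lines does the trick.  It reads the default trace (including rollover files) and filters to the DBCC audit event, looking for shrink commands.  Treat it as a sketch of the approach rather than the exact script:

[codesyntax lang="tsql"]
-- A sketch: dig the shrink commands out of the default trace
DECLARE @TracePath nvarchar(260);

/* rebuild the base file name (log.trc) so rollover files are read as well */
SELECT @TracePath = REVERSE(SUBSTRING(REVERSE(path), CHARINDEX(CHAR(92), REVERSE(path)), 260)) + N'log.trc'
FROM sys.traces
WHERE is_default = 1;

SELECT tt.DatabaseName
     , tt.TextData
     , tt.LoginName
     , tt.HostName
     , tt.ApplicationName
     , tt.StartTime
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS tt
    INNER JOIN sys.trace_events AS te
        ON tt.EventClass = te.trace_event_id
WHERE te.name = N'Audit DBCC Event'
  AND tt.TextData LIKE N'%SHRINK%'
ORDER BY tt.StartTime DESC;
[/codesyntax]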

The optimal solution may not be to rely on the default trace to capture an event that occurred in the past, but rather to proactively monitor for it through other means (an extended event, for example).  But in the event you were caught off-guard and need the information after the fact, you may be able to trap it from the default trace.

When I run the above query, I get the following results (yes I intentionally shrunk my msdb database explicitly for this example).

(Screenshot: default trace results showing the shrink of the msdb data file)

Now when you have a sysadmin, a junior DBA, or that manager with too many permissions doing things they shouldn’t be doing, you have some evidence to show them.  You can also go to the person who “did something really stupid” and offer them a chance to learn why it might have been a bad thing – but do it nicely.  You don’t want to be the one who looks like a jerk.

Lost that Single-User Connection?

You have changed your database to single_user mode to do a task.  As you go about your business, you lose track of which connection was the single_user connection.  You start closing connections and mistakenly close the session that was your single_user connection.  Now you are unable to start a new single_user session.  It would seem that somebody or something has taken your connection.

Today, I am going to discuss some things you may do to get around this problem.

The first thing that may come to mind when you encounter this is “Oh crap!”  Well, no need to get too terribly worried (not unless you really hosed something up and you are trying to fix it real quick before the boss notices).

The next thing you may think of trying is how to circumvent the single_user mode.  During that thought process you may convince yourself that single_user does not really mean single_user, so you might try something like starting a DAC session.  Well, let’s go through that and see what would happen in a DAC session if your single_user session has been stolen.

I am going to skip the part of setting a database into single_user mode because we are presuming that the condition already exists.  To start a DAC session, I am going to point you to a previous article I did on the topic – here.

To ensure I am using a DAC session, I am going to issue the following query.  This will ensure I am in the right session and that DAC is in use.

[codesyntax lang="tsql"]

[/codesyntax]
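A minimal version of that check looks like this (match the current session to its endpoint):

[codesyntax lang="tsql"]
-- Confirm the current session is connected over the Dedicated Admin Connection endpoint
SELECT es.session_id
     , ep.name AS EndPointName
FROM sys.dm_exec_sessions AS es
    INNER JOIN sys.endpoints AS ep
        ON es.endpoint_id = ep.endpoint_id
WHERE es.session_id = @@SPID;
[/codesyntax]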

In my case, this results in an endpoint with the name “Dedicated Admin Connection” and a spid of 84.  Good, I am in the correct session for the rest of this test.  Next, I will issue a Use database statement.  I have created a test database called ClinicDB.  So I will issue the following.

[codesyntax lang="tsql"]

[/codesyntax]
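For completeness, that statement is simply:

[codesyntax lang="tsql"]
USE ClinicDB;
GO
[/codesyntax]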

I get the following result.

[codesyntax lang="tsql"]

[/codesyntax]

So, that blows that idea right out of the water.  It shouldn’t really have been a consideration in the first place because single_user really means just that – single_user.

Now What?

Well, what do you think we could do now to circumvent this little problem and get that single_user session back?

That requires a little investigative work.  It is time to find out who has taken the single_user session and politely ask them to give it up.  To make that task a little easier, we can modify the previous query to find out who holds that single_user session (thus limiting how many people we have to ask).  I have modified the query to use sys.sysprocesses so I can limit the results to the ClinicDB.  This is a limitation of SQL 2008 R2 and older versions: getting the database reliably means using sysprocesses.  Despite the database_id being available in other related DMVs, it’s just not that easy.  One would think you could use sys.dm_exec_requests, but if a request is not active, an entry won’t exist for that session.  This problem is fixed in SQL 2012, where the sys.dm_exec_sessions DMV now has a database_id field.  Enough of that birdwalk and on to the query.

[codesyntax lang="tsql"]

[/codesyntax]
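A sketch of that tweak (using ClinicDB, as above) might look like this:

[codesyntax lang="tsql"]
-- Who is holding the connection to the single_user database?
SELECT DISTINCT
       es.session_id
     , es.login_name
     , es.host_name
     , es.program_name
     , ep.name AS EndPointName
FROM sys.dm_exec_sessions AS es
    INNER JOIN sys.sysprocesses AS sp
        ON es.session_id = sp.spid
    LEFT OUTER JOIN sys.endpoints AS ep
        ON es.endpoint_id = ep.endpoint_id
WHERE sp.dbid = DB_ID(N'ClinicDB');
[/codesyntax]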

I chose not to do an entirely new query to simply demonstrate that it was possible with a very small tweak to what has been already used.

Now that you know (in my case I can see that I have a session open with ID = 80 that is connected to that single_user database), I can walk over to the person (knowing his/her login id and computer name) and politely ask them to disconnect.

In the end, this is really an easy thing to resolve.  Sure it may take some people skills – but that doesn’t make the task too terribly difficult.  Next time this happens to you, just remember you can run a quick query to find who has sniped that single_user session.

Audit Configuration Changes

Do you know the last time a Server Property was changed on your instances of SQL Server?

Are you wondering when the setting for max degree of parallelism was changed?

Do you know who changed the setting?

In some environments there are a lot of hands trying to help mix the pot.  Sometimes more hands can make light work.  This is not always the case though.  More hands in the mix can be a troublesome thing.  Especially when things start changing and the finger pointing starts but nobody really knows for sure who made the change or when the change was made.

I know – that is why there is this concept called change management.  If you make a change to a setting, it should be documented, approved, and communicated.  Unfortunately, the process does not always dictate the workflow and it may be circumvented.  This is why auditing is a tool that is in place and should be in place – like it or not.

Auditing can be a very good tool.  But like many things, too much of a good thing is not a good thing.  Too much auditing can be more of a hindrance than a help.  You don’t want to cause interference by auditing too many things.  You also don’t want so much data that the facts get blurred.  I hope that this script strikes a balance, capturing just the right amount of data to be of use.

The basic principle of auditing server configs is to find which values changed, when they were changed, and by whom.  There are ancillary details that can help track the source of the change, such as the hostname of the computer from which the change was made.  These are all things that we should capture.  But if a setting hasn’t changed, then we need not necessarily report that it was unchanged (that should go without saying).

So for this, I created a couple of tables and then a script that I can add to a job to run on a regular basis.  I can put the script in a stored procedure should I desire.  I’ll leave that to you as an exercise to perform.

[codesyntax lang="tsql"]

[/codesyntax]
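The tables and script are not embedded here, but a heavily simplified sketch of the approach follows.  The table and column names are hypothetical, and the trace lookup assumes the configuration-change messages land in the default trace by way of the ErrorLog event:

[codesyntax lang="tsql"]
/* 1. audit table to hold the daily snapshots of sys.configurations */
IF OBJECT_ID(N'dbo.ConfigAudit') IS NULL
BEGIN
    CREATE TABLE dbo.ConfigAudit
    (
        ConfigAuditID   int IDENTITY(1,1) NOT NULL PRIMARY KEY
      , ConfigurationID int NOT NULL
      , ConfigName      nvarchar(70) NOT NULL
      , ConfigValue     sql_variant NULL
      , CaptureDate     datetime NOT NULL DEFAULT (GETDATE())
    );
END;

/* 2. capture today's settings */
INSERT INTO dbo.ConfigAudit (ConfigurationID, ConfigName, ConfigValue)
SELECT configuration_id, name, value_in_use
FROM sys.configurations;

/* 3. compare the two most recent captures and keep only the settings that changed */
IF OBJECT_ID(N'tempdb..#ChangedConfigs') IS NOT NULL
    DROP TABLE #ChangedConfigs;

WITH Captures AS
(
    SELECT ConfigurationID, ConfigName, ConfigValue, CaptureDate
         , DENSE_RANK() OVER (ORDER BY CAST(CaptureDate AS date) DESC) AS CaptureRank
    FROM dbo.ConfigAudit
)
SELECT curr.ConfigName
     , prev.ConfigValue AS PreviousValue
     , curr.ConfigValue AS CurrentValue
INTO #ChangedConfigs
FROM Captures AS curr
    INNER JOIN Captures AS prev
        ON curr.ConfigurationID = prev.ConfigurationID
WHERE curr.CaptureRank = 1
  AND prev.CaptureRank = 2
  AND ISNULL(CAST(curr.ConfigValue AS nvarchar(256)), N'') <> ISNULL(CAST(prev.ConfigValue AS nvarchar(256)), N'');

/* 4. check the default trace for who made those changes and from where; the SUBSTRING pulls the
      configuration name out of the logged message so it can be joined back on configuration name */
DECLARE @TracePath nvarchar(260);
SELECT @TracePath = REVERSE(SUBSTRING(REVERSE(path), CHARINDEX(CHAR(92), REVERSE(path)), 260)) + N'log.trc'
FROM sys.traces
WHERE is_default = 1;

SELECT cc.ConfigName
     , cc.PreviousValue
     , cc.CurrentValue
     , tt.LoginName
     , tt.HostName
     , tt.ApplicationName
     , tt.StartTime
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS tt
    CROSS APPLY (SELECT CAST(tt.TextData AS nvarchar(max)) AS Msg) AS ca
    CROSS APPLY (SELECT CASE WHEN ca.Msg LIKE N'Configuration option ''%''%'
                             THEN SUBSTRING(ca.Msg
                                          , CHARINDEX(N'''', ca.Msg) + 1
                                          , CHARINDEX(N'''', ca.Msg, CHARINDEX(N'''', ca.Msg) + 1) - CHARINDEX(N'''', ca.Msg) - 1)
                        END AS ParsedName) AS parsed
    INNER JOIN #ChangedConfigs AS cc
        ON cc.ConfigName = parsed.ParsedName
ORDER BY tt.StartTime DESC;
[/codesyntax]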

Here I am trapping the config settings on a daily basis (as the script is written for now). I then cross reference the current settings against the previous settings.  Then I check the default trace to see if any of those settings exist in the trace file.

The default trace captures the data related to configuration changes.  On busy systems, it is still possible for these settings to roll out of the trace files.  For those systems, we may need to make some alterations to the script (such as running more frequently and changing the comparisons to account for smaller time intervals than a day break).

To ensure proper comparison between the audit tables and the trace file, note the substring function employed.  I can capture the configuration name and then join to the audit tables on configuration name.

This has proven useful to me so far in tracking who did what to which setting and when they did it.

I hope you will find good use for it as well.

SSRS Subscriptions Report

As a part of my series leading up to Christmas 2012, I shared a script to report on SSRS Subscriptions.  It was soon found to have a bug with SQL Server 2008 R2 SP2.  In the comments on that post, I promised to post an updated script.  Here is that update – without the bug.

[codesyntax lang="tsql"]

[/codesyntax]
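Stripped to the essentials, the kind of query involved looks roughly like the following sketch against the ReportServer catalog.  The database name, the XML path, and the “TO” parameter (for a standard email subscription) are assumptions here, not the downloadable script:

[codesyntax lang="tsql"]
USE ReportServer;  -- assumption: default report server database name
GO
SELECT c.Name AS ReportName
     , c.[Path] AS ReportPath
     , s.Description AS SubscriptionDescription
     , s.LastStatus
     , s.LastRunTime
     , s.DeliveryExtension
     /* note the lower-case value() and text(); capitalizing them is what throws the error mentioned below */
     , CAST(s.ExtensionSettings AS xml).value(
           '(/ParameterValues/ParameterValue[Name="TO"]/Value/text())[1]', 'nvarchar(400)') AS EmailTo
     , sj.name AS AgentJobName
FROM dbo.Subscriptions AS s
    INNER JOIN dbo.[Catalog] AS c
        ON s.Report_OID = c.ItemID
    INNER JOIN dbo.ReportSchedule AS rs
        ON s.SubscriptionID = rs.SubscriptionID
    INNER JOIN msdb.dbo.sysjobs AS sj
        ON CAST(rs.ScheduleID AS nvarchar(128)) = sj.name   -- SSRS names the agent job after the schedule GUID
ORDER BY c.Name;
[/codesyntax]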

The inline code seen above likes to reformat and will throw an error due to the capitalization of the value() function and text().  Download the script here: SSRS_SubscriptionsV1_5

On the Ninth Day…

It’s the end of the world as we know it.  And as the song goes…I feel fine!  But hey, we didn’t start the fire.

Those are a couple of songs that pop into my head every time somebody starts talking doomsday and doomsday prophecies.

If you are reading this, I dare say that the world is still turning.  And that is a good thing because that gives us a chance to talk about the 12 days of pre-Christmas.

Today we will be talking about a tool that can be at the DBA’s disposal to help in tracking performance as well as problems.

First, there are a couple of items of housekeeping.  The first item is that I only realized with this post that the post on the first day was wrong: I had miscalculated my twelve days to end on Christmas day.  That is wrong!  Counting down from that first post on the 13th means the 12th day will actually land on December 24th.  Maybe I will throw in a bonus 13th-day post, or delay a post for a day, or just go with it.  It will be a mystery!

Second item is naturally the daily recap of the 12 days to date.

On the Ninth Day of pre-Christmas…

My DBA gave to me a Data Collection!

If only my DBA had told me that I would need to have Management Data Warehouse (MDW) preconfigured.  Well, that is not a problem.  We can handle that too.  For a tutorial on how to setup MDW, I am going to provide a link to one that has been very well written by Kalen Delaney.

The article written by Kalen covers the topic of MDW very well all the way from setting up the MDW, to setting up the Data Collectors, and all the way down to the default reports you can run for MDW.  The MDW and canned Data Collectors can provide some really useful information for your environment.

What I want to share though is a means to add custom data collections to your MDW.  To create a custom collection, we need to rely on two stored procedures provided to us by Microsoft.  Those stored procedures are: sp_syscollector_create_collection_item and sp_syscollector_create_collection_set.  Both of these stored procedures are found in the, you guessed it, msdb database.

Each of these stored procedures has a number of parameters to help in the creation of an appropriate data collection.  When creating a data collection, you will first create the collection set, and then you add collection items to that set.

There are a few notable parameters for each stored procedure that I will cover.  Otherwise, you can refer back to the links for each of the stored procedures to see more information about the parameters.

Starting with the sp_syscollector_create_collection_set stored procedure, I want to point out the @schedule_name, @name, and @collection_mode parameters.  The name is pretty obvious – make sure you have a distinguishable name that is descriptive (my opinion) or at least have good documentation.  The collection mode has two values.  As noted in the documentation, you will want to choose one value over the other depending on the intended use of this data collector.  If running continuously, just remember to run in cached mode.  And lastly is the schedule name.  This will help determine how frequently the job runs.

Unfortunately, the schedule names are not found in the documentation; rather, you are directed to query the sysschedules table.  To help you find those schedules, here is a quick query.

[codesyntax lang="tsql"]

[/codesyntax]
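Something like this works (the predefined data collector schedules follow a “CollectorSchedule_” naming pattern, which I am assuming as a filter here):

[codesyntax lang="tsql"]
-- shared schedules available to data collection sets
SELECT schedule_id
     , schedule_uid
     , name
     , enabled
FROM msdb.dbo.sysschedules
WHERE name LIKE N'CollectorSchedule%'   -- assumption: the predefined collector schedules use this prefix
ORDER BY name;
[/codesyntax]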

Now on to the sp_syscollector_create_collection_item stored procedure.  There are three parameters that I want to touch on lightly; for the rest, you can refer back to the documentation.  The parameters of interest here are @parameters, @frequency, and @collector_type_uid.  Starting with the frequency parameter: this tells the collector how often to upload the data to the MDW when running in cached mode.  Be careful here to select an appropriate interval.  Next is the parameters parameter, which is really the workhorse of the collection item.  In the case of the custom data collector that I will show in a bit, this is where the T-SQL query goes.

Last parameter to discuss is the collector type uid.  Like the schedule for the previous proc, the documentation for this one essentially refers you to a system view - syscollector_collector_types.  Here is a quick query to see the different collector types.

[codesyntax lang="tsql"]

[/codesyntax]
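For example:

[codesyntax lang="tsql"]
-- collector types installed with the data collector
SELECT collector_type_uid
     , name
FROM msdb.dbo.syscollector_collector_types
ORDER BY name;
[/codesyntax]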

The collector type that I will be using for this example is Generic T-SQL Query Collector Type.  A discussion on the four types of collectors can be reserved for another time.

Let’s move on to the example now.  This custom data collector is designed to help troubleshoot deadlock problems.  The means I want to accomplish this is by querying the system_health extended event session.

I can query for deadlock information directly from the system_health session using a query like the following.

[codesyntax lang="tsql"]

[/codesyntax]
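A sketch of that query (the shape used on SQL 2008 R2):

[codesyntax lang="tsql"]
-- pull xml_deadlock_report events out of the system_health ring_buffer target
SELECT CAST(evt.node.query('data[@name="xml_report"]/value/*') AS varchar(4000)) AS DeadlockGraph
FROM
(
    SELECT CAST(st.target_data AS xml) AS TargetData
    FROM sys.dm_xe_session_targets AS st
        INNER JOIN sys.dm_xe_sessions AS s
            ON s.address = st.event_session_address
    WHERE s.name = N'system_health'
      AND st.target_name = N'ring_buffer'
) AS tab
CROSS APPLY tab.TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS evt(node);
[/codesyntax]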

You may notice that I have converted to varchar(4000) from XML.  This is in large part to make sure the results will play nicely with the data collector.  Now to convert that to a query that can be used in the @parameters parameter, we get the following.

[codesyntax lang="tsql"]

[/codesyntax]
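The @parameters value is an XML document that must conform to the parameter schema of the chosen collector type.  For the Generic T-SQL Query Collector Type, its general shape is something like this sketch, with the deadlock query going inside the Value element and the desired output table name in OutputTable:

[codesyntax lang="tsql"]
DECLARE @params xml;

SET @params = N'
<ns:TSQLQueryCollector xmlns:ns="DataCollectorType">
  <Query>
    <Value>
      SELECT CAST(evt.node.query(''data[@name="xml_report"]/value/*'') AS varchar(4000)) AS DeadlockGraph
      FROM (SELECT CAST(st.target_data AS xml) AS TargetData
              FROM sys.dm_xe_session_targets st
              JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
             WHERE s.name = ''system_health'' AND st.target_name = ''ring_buffer'') AS tab
      CROSS APPLY tab.TargetData.nodes(''RingBufferTarget/event[@name="xml_deadlock_report"]'') AS evt(node)
    </Value>
    <OutputTable>systemhealthdeadlock</OutputTable>
  </Query>
  <Databases>
    <Database>master</Database>
  </Databases>
</ns:TSQLQueryCollector>';
[/codesyntax]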

With this query, we are loading the necessary schema nodes that correlate to the Data Collector Type that we chose.  Since this parameter is XML, the schema must match or you will get an error.  We are now ready to generate a script that can create a deadlock data collector.

[codesyntax lang="tsql"]

[/codesyntax]
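Putting it all together, the creation script looks something like the following sketch.  The collection set name, schedule, and retention are assumptions to adjust for your environment, and the collector type uid is looked up by name rather than hard-coded:

[codesyntax lang="tsql"]
USE msdb;
GO
DECLARE @collection_set_id  int
      , @collection_set_uid uniqueidentifier
      , @collection_item_id int
      , @collector_type_uid uniqueidentifier
      , @params             xml;

/* the @parameters XML shown above (deadlock query + output table name), compacted */
SET @params = N'
<ns:TSQLQueryCollector xmlns:ns="DataCollectorType">
  <Query>
    <Value>SELECT CAST(evt.node.query(''data[@name="xml_report"]/value/*'') AS varchar(4000)) AS DeadlockGraph FROM (SELECT CAST(st.target_data AS xml) AS TargetData FROM sys.dm_xe_session_targets st JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address WHERE s.name = ''system_health'' AND st.target_name = ''ring_buffer'') AS tab CROSS APPLY tab.TargetData.nodes(''RingBufferTarget/event[@name="xml_deadlock_report"]'') AS evt(node)</Value>
    <OutputTable>systemhealthdeadlock</OutputTable>
  </Query>
  <Databases>
    <Database>master</Database>
  </Databases>
</ns:TSQLQueryCollector>';

/* create the collection set: non-cached, with a shared schedule and a retention period */
EXEC dbo.sp_syscollector_create_collection_set
      @name                  = N'SystemHealthDeadlock'            -- hypothetical name
    , @collection_mode       = 1                                  -- 1 = non-cached
    , @description           = N'Collects deadlock graphs from the system_health session'
    , @days_until_expiration = 14                                 -- retention in the MDW
    , @schedule_name         = N'CollectorSchedule_Every_15min'   -- assumption: an existing shared schedule
    , @collection_set_id     = @collection_set_id OUTPUT
    , @collection_set_uid    = @collection_set_uid OUTPUT;

/* find the Generic T-SQL Query collector type */
SELECT @collector_type_uid = collector_type_uid
FROM dbo.syscollector_collector_types
WHERE name = N'Generic T-SQL Query Collector Type';

/* add the collection item that runs our query */
EXEC dbo.sp_syscollector_create_collection_item
      @name               = N'SystemHealthDeadlock_Item'
    , @parameters         = @params
    , @collection_item_id = @collection_item_id OUTPUT
    , @collection_set_id  = @collection_set_id
    , @collector_type_uid = @collector_type_uid
    , @frequency          = 900;   -- seconds; only meaningful for cached mode

/* start the set (this also creates the SQL Agent job tied to the schedule) */
EXEC dbo.sp_syscollector_start_collection_set
      @collection_set_id = @collection_set_id;
[/codesyntax]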

Upon creation, this will create a SQL Agent job with the defined schedule.  Since this is a non-cached data collector set, the Agent job will adhere to the schedule specified and upload data on that interval.

Now all we need to do is generate a deadlock to see if it is working.  It is also a good idea to introduce the table that will be created by this data collector.  Once we create this collector set, a new table is created in the MDW database.  In the case of this collector set, it is a table with the schema and name of custom_snapshots.systemhealthdeadlock.

This new table will have three columns.  One column represents the DeadlockGraph as we retrieved from the query we provided to the @parameters parameter.  The remaining columns are data collector columns for the collection date and the snapshot id.

Now that we have covered all of that, your favorite deadlock query has had enough time to finally fall victim to a deadlock.  We should also have some information recorded in the custom_snapshots.systemhealthdeadlock table relevant to the deadlock information (if not, it will be there once the agent job runs, or you can run a snapshot from SSMS of the data collector).  With a quick query, we can start looking into the deadlock problem.

[codesyntax lang="tsql"]

[/codesyntax]
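That quick query can be as simple as this (assuming the MDW database is named MDW; collection_time and snapshot_id are the columns added by the data collector):

[codesyntax lang="tsql"]
USE MDW;   -- assumption: your Management Data Warehouse database name
GO
SELECT shd.DeadlockGraph
     , shd.collection_time
     , shd.snapshot_id
FROM custom_snapshots.systemhealthdeadlock AS shd
ORDER BY shd.collection_time DESC;
[/codesyntax]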

This query will give me a few entries (since I went overkill and created a bunch of deadlocks).  If I click the DeadlockGraph cell in the result sets, I can then view the XML of the DeadlockGraph, as in the following.

[codesyntax lang="xml"]

[/codesyntax]

Code to generate deadlock courtesy of “SQL Server 2012 T-SQL Recipes: A Problem-Solution Approach” by Jason Brimhall, Wayne Sheffield et al (Chapter 12, pages 267-268).  If you examine the deadlock graph you will see the code that generated the deadlock.

Since this is being pulled from the ring_buffer target of the system_health session, it can prove useful to store that data in a table such as I have done.  The reason is that the ring buffer can be overwritten, and with good timing on the data collector we can preserve this information for later retrieval and troubleshooting.  Deadlocks don’t always happen at the most opportune time, and they are even less likely to occur when we are staring at the screen waiting for them to happen.

As you read more about the stored procedures used to create a data collector, you will see that there is a retention parameter.  This helps prevent the table from getting too large on us.  We can also ensure that an appropriate retention is stored for these custom collectors.

Conclusion

Creating a custom data collector can be very handy for a DBA.  Especially in times of troubleshooting.  These collectors are also quite useful for trending and analysis.  Consider this a tool in the chest for the poor man. ;)

Enjoy and stay tuned!

All scripts and references were for SQL 2008 R2.  The Deadlock script was pulled from the 2012 book, but the script runs the same for the 2008 version of the AdventureWorks database.
