Auditing and Event SubClasses

Categories: News, Professional, Scripts, SSC
Comments: 1 Comment
Published on: May 28, 2014

A recent discussion got me to thinking about Auditing.  To be honest, it started with a complaint about some documentation that seemed overly light on the various fields related to auditing as it stands in SQL Server.

In talking to the person who raised the valid concern about the lack of good documentation, I was curious why he suddenly had so many questions about auditing and its functionality within SQL Server.  Reflecting on his answers, it all made good sense and fell into place with the whole Audit Life Cycle.  I hadn't previously considered the Life Cycle, but it makes sense to map it out.  Here is a simple rendition of what an audit Life Cycle might entail.

 

 

AuditCycle_trans (image: a simple rendition of the audit life cycle)

 

 

In order to audit anything, it is necessary to know what you really want to audit, why you want to audit it, and how to achieve those goals within the tools given to you.  In that vein, it makes sense that one would need to study up on the topic to figure out what the different fields and values mean within the tool.

Of course, once you start collecting that data, you also need to figure out how to measure it and then determine whether adjustments to the auditing plan need to be made.  In the end, it boils down to three questions: what data is being collected, what are you doing with that data, and what does that data represent?

In our simple discussion, the field we were trying to understand was the Event Subclass field, exposed in this view (sys.trace_subclass_values) and in this Microsoft document (one of several).  The beauty of this field is that it is not tied just to Auditing; you will also find it in Profiler, server side traces, and Extended Events.

With so little information to help understand what the field values represent, maybe it is better to just turn to the data to help understand what the values might represent or how to interpret them.  To do this, we can query a few catalog views as in the following query.

[codesyntax lang=”tsql”]
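/* A minimal sketch of one such query (treat it as a guideline): join the
   trace catalog views so each Event Subclass value can be read alongside
   its event and category names. */
SELECT tc.name  AS CategoryName
     , te.name  AS EventName
     , tsv.subclass_value
     , tsv.subclass_name
  FROM sys.trace_events te
    INNER JOIN sys.trace_categories tc
      ON te.category_id = tc.category_id
    INNER JOIN sys.trace_subclass_values tsv
      ON te.trace_event_id = tsv.trace_event_id
 WHERE te.name LIKE '%Audit%'   -- filter to the Audit event types
 ORDER BY tc.name, te.name, tsv.subclass_value;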

[/codesyntax]

With the above query, I can filter down to just the Event Types that have Audit in the name.  Or I could add a different filter so I can try and better understand the different subclasses in a more focused effort.

I hope this helps in your efforts to provide a better auditing or “profiling” type of experience in your environment.

 

Can you partition a temporary table?

Reading that title, you might sit and wonder why you would ever want to partition a temporary table.  I too would wonder the same thing.  That notwithstanding, it is an interesting question that I wanted to investigate.

The investigation started with a fairly innocuous venture into showing some features that do apply to temp tables but are commonly mistaken as limitations (i.e. believed not to work with temp tables).  To show this, I set off to create a script with reproducible results to demonstrate these features.  I have included all of those demonstrations in the same script that answers the partitioning question.

In fact, let's just jump to that script now.

[codesyntax lang=”tsql”]
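/* A condensed sketch of the demo described below.  #hubbabubba comes from
   the post; the partition function, scheme, column, and index names are
   illustrative assumptions. */
USE tempdb;   -- temp tables live in tempdb, so the function/scheme go there too
GO

IF OBJECT_ID('tempdb..#hubbabubba') IS NOT NULL
    DROP TABLE #hubbabubba;
IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'psMonth2014')
    DROP PARTITION SCHEME psMonth2014;
IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'pfMonth2014')
    DROP PARTITION FUNCTION pfMonth2014;
GO

/* one boundary per month of 2014 => 13 partitions */
CREATE PARTITION FUNCTION pfMonth2014 (DATETIME)
    AS RANGE RIGHT FOR VALUES ('20140101','20140201','20140301','20140401'
                             , '20140501','20140601','20140701','20140801'
                             , '20140901','20141001','20141101','20141201');

CREATE PARTITION SCHEME psMonth2014
    AS PARTITION pfMonth2014 ALL TO ([PRIMARY]);
GO

/* myth-busting: a temp table with a primary key and a default constraint */
CREATE TABLE #hubbabubba
    ( SomeID   INT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_hubbabubba PRIMARY KEY NONCLUSTERED
    , SomeDate DATETIME NOT NULL
        CONSTRAINT DF_hubbabubba_SomeDate DEFAULT (GETDATE())
    );

/* the clustered index ties the temp table to the partition scheme */
CREATE CLUSTERED INDEX CIX_hubbabubba_SomeDate
    ON #hubbabubba (SomeDate)
    ON psMonth2014 (SomeDate);
GO

/* random dates throughout 2014 so every monthly partition receives rows */
INSERT INTO #hubbabubba (SomeDate)
SELECT TOP (10000)
       DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 365, '20140101')
  FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2;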

[/codesyntax]

In the beginning (after dropping objects if they exist), I start by creating a temp table that busts a couple of mythical limitations.  These mythical creatures are the claims that temp tables can't have indexes and that they can't have constraints.

In this script, I show that a temp table (#hubbabubba) can indeed have indexes created on it (clustered and nonclustered).  I also demonstrate the creation of two different kinds of constraints on the #hubbabubba table.  The two constraints are a primary key and a default constraint.  That stuff was easy!!

To figure out whether or not one could partition a temporary table, I needed to do more than simply create a "test" temp table.  I had to create a partition function and a partition scheme, and then tie that partition scheme to a clustered index created after the table itself.  Really, these are the same steps as creating partitioning on a standard (non-temporary) table.

With the partition function, partition scheme, and table created, it was time to populate the table with enough random data to produce a fair distribution.  You see, I created a partition function with a boundary for each month of the year 2014.  To see partitioning in action, I wanted to see data in each of those partitions.

That brings us to the final piece of the whole script.  Kendra Little produced a script for viewing distribution of data across the partitions so I used her script to demonstrate our data distribution.  If you run the entire script including the data distribution segment at the end, you will see that there are 13 partitions with each of the monthly partitions containing data.

The distribution of data into the different partitions demonstrates soundly that partitioning can not only be created on a temporary table, but that it can be used.  As for the secondary question today “Why would you do that?”, I still do not know.  The only reason that pops into my mind is that you would do it purely for demonstration purposes.  I can’t think of a production scenario where partitioning temporary data would be a benefit.  If you know of a use case, please let me know.

Supported Compatibility Levels in SQL Server

Categories: News, Professional, Scripts, SSC
Comments: 1 Comment
Published on: May 21, 2014

It has been well documented and is well known that SQL Server supports certain older versions of SQL Server in a compatibility mode.  This setting is something that can be configured on the database properties level.  You can quickly change to an older compatibility level or revert the change to a newer compatibility level.

Changing the compatibility level is sometimes necessary.  Knowing what compatibility modes are available for each database is also somewhat necessary.  The common rule of thumb has been the current version and two prior versions.  But even with that, sometimes it is warm and fuzzy to be able to see the supported versions in some sort of format other than through the GUI for database properties.

Sure, one could go and check Books Online.  You can find that information there.  Or you could fire up this script and run with the output (as a guideline).
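
Here is a minimal sketch of one way to do it (treat it as a guideline, just like the output): it derives the current compatibility level from SERVERPROPERTY('ProductVersion') and lists that level along with the two prior ones, per the rule of thumb above.

[codesyntax lang="tsql"]
DECLARE @MajorVersion INT
      , @CurrentLevel INT;

/* major version number, e.g. 12 for SQL Server 2014 */
SET @MajorVersion = CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR(20)), 4) AS INT);
SET @CurrentLevel = @MajorVersion * 10;

/* rule of thumb: the current version plus the two prior versions */
SELECT cl.CompatLevel
     , cl.VersionName
  FROM ( VALUES (80 , 'SQL Server 2000')
              , (90 , 'SQL Server 2005')
              , (100, 'SQL Server 2008 / 2008 R2')
              , (110, 'SQL Server 2012')
              , (120, 'SQL Server 2014')
       ) AS cl (CompatLevel, VersionName)   -- requires SQL 2008+ syntax
 WHERE cl.CompatLevel BETWEEN @CurrentLevel - 20 AND @CurrentLevel
 ORDER BY cl.CompatLevel DESC;
[/codesyntax]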

 

This script will return results such as the following.

Picture0002

And if we wanted to see the results for a SQL Server 2014 installation, we would see the following.

Picture0003

The output is displayed in the same format you would see if you were to use the Database Properties GUI.  That said, if you are using the GUI in SQL Server 2014, you might run into the following.

Picture0006

Notice the additional compatibility level for SQL 2005?  If you check the documentation, you will probably find that compatibility level 90 is not supported in SQL 2014.  In fact it says that if a database is in 90 compatibility, it will be upgraded to 100 automatically (SQL 2008).  You can find all of that and more here.

If you tried to select compatibility 90, you might end up with an error.  If you are on 2014 CTP2, you will probably be able to change the compat level without error.

Anyway, this is the message you might see when trying to change to compatibility 90.

Picture0005

They sometimes say that “seeing is believing.”  Well in this case, you may be seeing a compatibility level in the 2014 GUI that just isn’t valid.  Keep that in mind when using the GUI or trying to change compatibility modes.

Table Space updated again

Categories: News, Professional, Scripts, SSC
Comments: 2 Comments
Published on: March 28, 2014

Today we have another update for an age old script on this site.  You can find the last update to the script here.

This time we have a bit of a bug update.  The details of that bug are in the notes for the script.

Alternatively, you can download the script from here (in case a plugin update breaks the script again).

Table Hierarchy updated

Recently a need resurfaced to explore the foreign key tree (hierarchy/genealogy) as it related to a specific table within a database.  As I pulled out the script from the repository, I realized there was some unfinished work to be done.  But there was also some polish that needed to be added.  This is an update to the most recent posting of that script.  You can see several revisions in the series at this link or the group here.

Some of the changes involve formatting and labeling.  I added a new column called "Direction" to help convey the relationship of the key to the table in question.  I also changed the FKGenealogy column (formerly called SortCol) to reflect the source table more accurately in the case when the key comes from an ancestor rather than a descendant.  The Level of the FK was also modified to help show how far away the ancestor is from the origin table.

A final adjustment also comes in the Genealogy attribute.  Ancestors were all starting at the wrong point in the lineage.  I adjusted that so the lineage is shown from the point in the tree where the ancestor actually relates, rather than as a root leading directly from the origin table.
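
For a frame of reference, here is a heavily trimmed sketch of the descendant half of that idea (the full script also walks ancestors and builds the FKGenealogy path); the @StartTable value and the alias names are illustrative.

[codesyntax lang="tsql"]
DECLARE @StartTable SYSNAME = 'dbo.SomeTable';   -- illustrative table name

;WITH FKTree AS
(
    /* anchor: tables whose foreign keys reference the starting table */
    SELECT fk.parent_object_id     AS ChildObjectId
         , fk.referenced_object_id AS ParentObjectId
         , fk.name                 AS FKName
         , 1                       AS FKLevel
         , CAST('Descendant' AS VARCHAR(20)) AS Direction
      FROM sys.foreign_keys fk
     WHERE fk.referenced_object_id = OBJECT_ID(@StartTable)
    UNION ALL
    /* recurse: tables that reference the children found so far */
    SELECT fk.parent_object_id
         , fk.referenced_object_id
         , fk.name
         , ft.FKLevel + 1
         , ft.Direction
      FROM sys.foreign_keys fk
        INNER JOIN FKTree ft
          ON fk.referenced_object_id = ft.ChildObjectId
)
SELECT OBJECT_NAME(ft.ParentObjectId) AS ParentTable
     , OBJECT_NAME(ft.ChildObjectId)  AS ChildTable
     , ft.FKName
     , ft.FKLevel
     , ft.Direction
  FROM FKTree ft
 ORDER BY ft.FKLevel, ParentTable, ChildTable;
[/codesyntax]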

All in all, this script should make more sense to the user than the previous versions.

T-SQL Tuesday #050: Automating Database Restores

Comments: 1 Comment
Published on: January 14, 2014

Here it is, time for the party of the month for the SQL Server acolytes, and I was running a bit behind.  Why?  Well, that was due in part to me rushing around trying to do some of what this month's topic is about.  Some would call that pretty good timing.

Hemanth.D (blog|twitter) is hosting the T-SQL Tuesday blogging party this month, and he wants all of us to post about Automation. As Hemanth.D put it on his blog in his own words:

You could write about, what options you would consider when automating something? Where do you draw the line? What are our preferred tools for automation? T-SQL, PowerShell, VBScript or Batch files(?) or maybe just share something that you automated in the last couple of years.

You can read the invite he posted here.

History

As Hemanth.D mentioned in his invitation, this is not the first time this topic has come up for TSQLTuesday.  As it would happen, I also participated in the first go around with my contribution about sizing databases on limited information.  You can read that here.

This time around, I have a little bit of a different topic to approach.  I hadn’t considered this until after having read that Wayne Sheffield wrote about his efforts to verify backup files via script (automation).  You can read what Wayne wrote at this link.

Having read that, it seemed painfully obvious to me that I should go ahead and write about my efforts to automate backup restores.  After all, if you are verifying the backup files' existence, you might also want to test the backups to ensure they are good.  Besides, we all need to test our backups anyway, right?

Present

I have a few different methods I have used over the years to automate restores.  In one iteration, the restore operations were hard coded in a procedure that was scheduled to run on a nightly or weekly basis.  It probably was also just hard coded to be specific to a database.  That kind of setup is not super useful except for that one database.

With that in mind, I worked on several iterations to help create a script for myself that would automate the restores of any database, with any number of backup files, to a different server, and not have file paths/names/anything hard-coded.  Well – there ended up being one thing hard-coded but that can be changed easily enough.

I decided on a script that would read the backup path for the most recent backup file from the msdb database of the server where the database was backed up.  I do this via a linked server that can be created/destroyed in the proc or that can reside permanently (I prefer to leave it in place).  The script takes the file path of that backup file and restores it to the destination server.  All of this via T-SQL.

Now a caveat with this solution is that the backup path works best if it is a UNC path.  Even if you are backing up the database to the local server, backing up to UNC means that the restore operation can just grab that path and not encounter errors due to drive mappings (e.g. backed up to D but the D on the restore server is the cd-rom drive).

What if you don’t want to restore the source database with the same name to the new server?  Well, that has also been considered and a parameter can be passed to the stored procedure to allow for a new database name.  What if the default file paths are different?  That consideration has been made too!  All of that said, more testing is always welcome.
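
To illustrate the gist of it, here is a simplified sketch (not the full procedure linked below); the linked server name SOURCESRV, the database names, and the MOVE targets are all placeholders, and the real script derives the file names and paths rather than hard-coding them.

[codesyntax lang="tsql"]
DECLARE @BackupFile NVARCHAR(260)
      , @SourceDB   SYSNAME = N'SourceDB'            -- assumed name
      , @RestoreDB  SYSNAME = N'SourceDB_Restored';  -- optional new name

/* 1. read the most recent full backup path from the source server's msdb
      via a linked server (here called SOURCESRV) */
SELECT TOP (1) @BackupFile = bmf.physical_device_name
  FROM [SOURCESRV].msdb.dbo.backupset bs
    INNER JOIN [SOURCESRV].msdb.dbo.backupmediafamily bmf
      ON bs.media_set_id = bmf.media_set_id
 WHERE bs.database_name = @SourceDB
   AND bs.type = 'D'                 -- full backups only
 ORDER BY bs.backup_finish_date DESC;

/* 2. build and run the restore on the destination server; the logical file
      names below are placeholders (the full script derives them, e.g. from
      RESTORE FILELISTONLY, along with the destination default paths) */
DECLARE @sql NVARCHAR(MAX) =
      N'RESTORE DATABASE ' + QUOTENAME(@RestoreDB)
    + N' FROM DISK = N''' + @BackupFile + N''''
    + N' WITH MOVE N''SourceDB_Data'' TO N''D:\Data\' + @RestoreDB + N'.mdf'', '
    + N' MOVE N''SourceDB_Log'' TO N''L:\Logs\' + @RestoreDB + N'.ldf'', '
    + N' RECOVERY, STATS = 5;';

EXEC sys.sp_executesql @sql;
[/codesyntax]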

The script can be evaluated from here.

Next steps:

With the script, the next things to do would be to create SQL Agent jobs to run the script on a routine basis.  Test the script and verify it.

User of this script assumes all risk.

Audit Configuration Changes

Do you know the last time a Server Property was changed on your instances of SQL Server?

Are you wondering when the setting for max degree of parallelism was changed?

Do you know who changed the setting?

In some environments there are a lot of hands trying to help mix the pot.  Sometimes more hands can make light work.  This is not always the case though.  More hands in the mix can be a troublesome thing.  Especially when things start changing and the finger pointing starts but nobody really knows for sure who made the change or when the change was made.

I know, that is why there is this concept called change management.  If you make a change to a setting, it should be documented, approved and communicated.  Unfortunately the process does not always dictate the work flow and it may be circumvented.  This is why auditing is a tool that is in place and should be in place – like it or not.

Auditing can be a very good tool.  Like many things, though, too much of a good thing is not a good thing.  Too much auditing can be more of a hindrance than a help.  You don't want to cause interference by auditing too many things.  You also don't want so much data that the facts get blurred.  I hope that this script strikes a balance, with just the right amount of data being captured to be of use.

The basic principle of auditing server configs is to find what values changed, when they were changed, and by whom.  There are ancillary details that can be helpful in tracking the source of the change, such as the hostname of the computer from which the change was made.  These are all things that we should capture.  But if a setting hasn't changed, then we need not report that the setting was unchanged (it should go without saying).

So for this, I created a couple of tables and then a script that I can add to a job to run on a regular basis.  I can put the script in a stored procedure should I desire.  I’ll leave that to you as an exercise to perform.

[codesyntax lang=”tsql”]
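/* A condensed sketch of the approach described in this post; the table name
   (dbo.ServerConfigAudit) and column names are assumptions, and the full
   script includes more plumbing (history handling, first-run setup, etc.). */
IF OBJECT_ID('dbo.ServerConfigAudit', 'U') IS NULL
    CREATE TABLE dbo.ServerConfigAudit
        ( ConfigName  NVARCHAR(70) NOT NULL
        , ConfigValue SQL_VARIANT  NULL
        , CaptureDate DATETIME     NOT NULL DEFAULT (GETDATE())
        );

/* capture today's settings */
INSERT INTO dbo.ServerConfigAudit (ConfigName, ConfigValue)
SELECT c.name, c.value_in_use
  FROM sys.configurations c;

/* compare the current capture to the previous capture and keep only changes */
;WITH Ranked AS
(
    SELECT sca.ConfigName, sca.ConfigValue, sca.CaptureDate
         , ROW_NUMBER() OVER (PARTITION BY sca.ConfigName
                              ORDER BY sca.CaptureDate DESC) AS RowNum
      FROM dbo.ServerConfigAudit sca
)
SELECT cur.ConfigName
     , prv.ConfigValue AS PriorValue
     , cur.ConfigValue AS CurrentValue
     , cur.CaptureDate
  FROM Ranked cur
    INNER JOIN Ranked prv
      ON cur.ConfigName = prv.ConfigName
     AND prv.RowNum = 2
 WHERE cur.RowNum = 1
   AND CAST(cur.ConfigValue AS NVARCHAR(128)) <> CAST(prv.ConfigValue AS NVARCHAR(128));

/* cross reference the default trace to see who made the change and from where */
DECLARE @TraceFile NVARCHAR(260);
SELECT @TraceFile = t.path FROM sys.traces t WHERE t.is_default = 1;

SELECT SUBSTRING(td.ConfigText
               , CHARINDEX('''', td.ConfigText) + 1
               , CHARINDEX('''', td.ConfigText, CHARINDEX('''', td.ConfigText) + 1)
                 - CHARINDEX('''', td.ConfigText) - 1) AS ConfigName
     , gt.LoginName
     , gt.HostName
     , gt.StartTime
  FROM sys.fn_trace_gettable(@TraceFile, DEFAULT) gt
    CROSS APPLY (SELECT CAST(gt.TextData AS NVARCHAR(MAX)) AS ConfigText) td
 WHERE td.ConfigText LIKE 'Configuration option%';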

[/codesyntax]

Here I am trapping the config settings on a daily basis (as the script is written for now). I then cross reference the current settings against the previous settings.  Then I check the default trace to see if any of those settings exist in the trace file.

The default trace captures the data related to configuration changes.  On busy systems, it is still possible for these settings to roll out of the trace files.  For those systems, we may need to make some alterations to the script (such as running more frequently and changing the comparisons to account for smaller time intervals than a day break).

To ensure proper comparison between the audit tables and the trace file, note the substring function employed.  I can capture the configuration name and then join to the audit tables on configuration name.

This has proven useful to me so far in tracking who did what to which setting and when they did it.

I hope you will find good use for it as well.

Backup History

Categories: Corner, News, Professional, Scripts, SSC
Comments: 5 Comments
Published on: November 30, 2012

This is a quick rewrite of the script I posted just a few days ago.  The script had a few flaws when run on a SQL 2005 box.  I have updated it to run against SQL 2005 and up.

This version uses dynamic SQL to achieve the functionality that I wanted while allowing for the differences between SQL Server versions.

You can see the old script here, shown as a part of investigating why differential backups were failing to restore.
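
For reference, the core of the query (minus the dynamic SQL and version handling) looks something like this sketch.

[codesyntax lang="tsql"]
SELECT bs.database_name
     , CASE bs.type WHEN 'D' THEN 'Full'
                    WHEN 'I' THEN 'Differential'
                    WHEN 'L' THEN 'Log'
                    ELSE bs.type END      AS BackupType
     , bs.backup_start_date
     , bs.backup_finish_date
     , bmf.physical_device_name
     , bs.backup_size / 1048576.0         AS BackupSizeMB
  FROM msdb.dbo.backupset bs
    INNER JOIN msdb.dbo.backupmediafamily bmf
      ON bs.media_set_id = bmf.media_set_id
 ORDER BY bs.database_name, bs.backup_finish_date DESC;
[/codesyntax]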

 

 

Running the above code, I get the following for one of my SQL instances.

Last Known Good CheckDB

Comments: 4 Comments
Published on: November 20, 2012

Diligent DBAs like to check their databases for a little thing called corruption.  Some of those DBAs are masterful at keeping records and logging that the database was checked and came up with a clean bill of health.

There are many different ways of logging this kind of activity.  Today I will share one such way to track when the last successful run of Checkdb happened.

First a little back story

A question came across twitter on the SQLhelp hashtag. The question was “Does restoring a database clear the dbi_dbccLastKnownGood value on the boot page?”

This question prompted me to do a quick test to see.  The test is simple.  Create a small database, backup the database, run checkdb, check to see what the dbi_dbccLastKnownGood value is, restore the previous backup and check the dbi_dbccLastKnownGood value again.

So here is a quick script

[codesyntax lang=”tsql”]
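/* A sketch of the test described above; the database name and backup path
   are illustrative.  DBCC DBINFO is undocumented but exposes the boot page
   values, including dbi_dbccLastKnownGood. */
CREATE DATABASE CheckDBTest;
GO
BACKUP DATABASE CheckDBTest TO DISK = N'C:\Backup\CheckDBTest.bak' WITH INIT;
GO
DBCC CHECKDB ('CheckDBTest') WITH NO_INFOMSGS;
GO
DBCC TRACEON (3604);              -- send DBCC output to the client
DBCC DBINFO ('CheckDBTest');      -- note the dbi_dbccLastKnownGood value
GO
RESTORE DATABASE CheckDBTest FROM DISK = N'C:\Backup\CheckDBTest.bak' WITH REPLACE;
GO
DBCC DBINFO ('CheckDBTest');      -- value now reflects the restored boot page
DBCC TRACEOFF (3604);
GO
DROP DATABASE CheckDBTest;        -- clean up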

[/codesyntax]

If you run this little test, you should observe that the date value for dbi_dbccLastKnownGood changes with each test.  Now let's discuss the answer to the original question about whether or not this value gets cleared.

The value does not get cleared.  The value does not get reset.  The change in the value that you observed is due simply to the boot page having been restored.  If CheckDB has never been run on the database, you will get the SQL default date of '1900-01-01 00:00:00.000'.

And then…

Now that the back story is told, that brings us to how to track this – at least one such method.  I had to cover the back story since it is what prompted the writing of a method to gather this information in a quick script so I could use it to monitor.  Yes, it is just another tool to throw into the toolbox.  And to reiterate, it is by no means the only way to track or gather this type of information.  Some are more elaborate than others.  It is up to you to choose.

Keeping in mind that we can get the last time that Checkdb completed without a report of corruption, I delve into this cursor based method to retrieve the dbi_dbccLastKnownGood value for all databases within your SQL Server Instance.

[codesyntax lang=”tsql”]
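/* A condensed sketch of the cursor-based approach; the temp table and
   variable names are illustrative. */
IF OBJECT_ID('tempdb..#BootPage')      IS NOT NULL DROP TABLE #BootPage;
IF OBJECT_ID('tempdb..#LastKnownGood') IS NOT NULL DROP TABLE #LastKnownGood;

CREATE TABLE #BootPage
    ( ParentObject VARCHAR(255), Object VARCHAR(255)
    , Field VARCHAR(255), Value VARCHAR(255) );
CREATE TABLE #LastKnownGood
    ( DBName SYSNAME, LastKnownGood VARCHAR(255) );

DECLARE @DBName SYSNAME
      , @SQL    NVARCHAR(500);

DECLARE db_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT d.name FROM sys.databases d WHERE d.state_desc = 'ONLINE';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @DBName;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @SQL = N'DBCC DBINFO (' + QUOTENAME(@DBName, '''') + N') WITH TABLERESULTS, NO_INFOMSGS;';

    TRUNCATE TABLE #BootPage;
    INSERT INTO #BootPage
        EXEC sys.sp_executesql @SQL;

    INSERT INTO #LastKnownGood (DBName, LastKnownGood)
    SELECT DISTINCT @DBName, bp.Value
      FROM #BootPage bp
     WHERE bp.Field = 'dbi_dbccLastKnownGood';

    FETCH NEXT FROM db_cursor INTO @DBName;
END

CLOSE db_cursor;
DEALLOCATE db_cursor;

SELECT lkg.DBName, lkg.LastKnownGood
  FROM #LastKnownGood lkg
 ORDER BY lkg.DBName;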

[/codesyntax]

You can use this or find something else if you desire.  The sole purpose of this script is to provide a means to quickly gather the last date on which CheckDB reported a clean bill of health.  From there, I will leave it to you to add it to another process in order to track this over the long term.

You can read the twitter conversation here.  I also have it on good authority that Bob Pusateri (twitter) will be blogging about the same topic.

Maintenance Plan Owner

Comments: 5 Comments
Published on: June 25, 2012

We all inherit things from time to time through our profession.  Sometimes we inherit some good things, sometimes we inherit some things that are not so good.  Other times we inherit some things that are just plain annoying.  Yet other times, we inherit things that may be annoying and we probably just haven't discovered them yet.

Dizzying, I know.

Inheritance

Have you ever taken over a server that had several maintenance plans on it?  Have you ever really checked who the owner of those plans is?  Or, maybe you had a failing job relating to one of these maintenance plans and you changed the job owner, but did you really fix the root cause?  That could be one of those things that you inherited that could be annoying but you just don’t know it yet.

Step by Step

No this is not New Kids on the Block (I think I just threw up in my mouth thinking that).

Let’s create a generic maintenance plan and see what happens.

The first thing we do is navigate to Maintenance Plans under the Management menu in Management Studio.  Right Click the Maintenance Plan folder and select New Maintenance Plan… from the context menu.  This will prompt us with the following dialog box.

In this box, we can type a name for this Maintenance Plan that is to be created.  I chose MaintPlanOwner, since that is the topic of this article.

 

 

After clicking OK on this dialog box, you will be presented with a blank canvas with which to design your maintenance plan.  I have chosen a simple task for the purposes of this article.

I will create a subplan named Statistics and add the Update Statistics task to the canvas.

You can see this illustrated to the left.  I chose to update the statistics on all databases and left all other options at their defaults, for simplicity of this article.

At this point, the only thing left to do is to save this Maintenance Plan.  Once the plan is saved, then we can move on to the next step – some fun with TSQL.

 

 

 

 

Fun with TSQL

This is the stage of the article where we get to play with TSQL and investigate at a high level the Maintenance Plan we just created.

Within the msdb database, we have some system tables that store information about SSIS packages, DTS packages, and Maintenance Plans.  We will be investigating from a SQL 2008 and SQL 2005 standpoint (it changed in 2005 and then again in 2008).

In SQL 2005, we can query the sysdtspackages90 and sysdtspackagefolders90 to gain insight into who owns these Maintenance Plans.  In SQL 2008 and up, we can query sysssispackages and sysssispackagefolders to gain the same insight.  These system tables are within the msdb database.

In SQL 2005, we can use the following to find that I am now the owner of that maintenance plan we just created.

[codesyntax lang=”tsql”]
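/* A sketch of the 2005 version of the query; the alias and column list are
   illustrative rather than exhaustive. */
SELECT p.name       AS MaintPlanName
     , f.foldername AS FolderName
     , sp.name      AS PlanOwner
  FROM msdb.dbo.sysdtspackages90 p
    INNER JOIN msdb.dbo.sysdtspackagefolders90 f
      ON p.folderid = f.folderid
    INNER JOIN sys.server_principals sp
      ON p.ownersid = sp.sid
 WHERE f.foldername = 'Maintenance Plans';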

[/codesyntax]

For 2008, a slight modification yields the following query that returns the same results.

[codesyntax lang=”tsql”]
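/* The same idea against the 2008-and-up tables (again, a sketch) */
SELECT p.name       AS MaintPlanName
     , f.foldername AS FolderName
     , sp.name      AS PlanOwner
  FROM msdb.dbo.sysssispackages p
    INNER JOIN msdb.dbo.sysssispackagefolders f
      ON p.folderid = f.folderid
    INNER JOIN sys.server_principals sp
      ON p.ownersid = sp.sid
 WHERE f.foldername = 'Maintenance Plans';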

[/codesyntax]

 

Notice that in both queries, I reach out to the sys.server_principals catalog view.  I did this to retrieve the name of the owner of the package found in the sysdtspackages90 and sysssispackages tables, respective to the version of SQL Server.  These queries would yield the following result set for that new Maintenance Plan that was just created.

Caveat

Let’s assume that this package is scheduled via a SQL Agent job on a production server.  I then get moved to a different department and no longer have permissions on this particular production server.  The job will start failing due to the principal not having access.  One fix would be to change the owner of the job.

That will work.  However, there is a problem with that fix.  As soon as somebody opens and saves the Maintenance Plan, the owner of the job will revert back to the owner of the Maintenance Plan.  When that happens, then the job will fail again.

A permanent fix is needed.  The permanent fix is to change the owner of the Maintenance Plan.  The following will change the owner to ‘sa’ for both SQL 2005 and SQL 2008 (and up).

SQL 2005

[codesyntax lang=”tsql”]
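/* Sketch: point the plan's owner SID at sa (plan name from this article) */
UPDATE p
   SET p.ownersid = SUSER_SID('sa')
  FROM msdb.dbo.sysdtspackages90 p
 WHERE p.name = 'MaintPlanOwner';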

[/codesyntax]

SQL 2008

[codesyntax lang=”tsql”]
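/* Sketch: the same change against the 2008-and-up table */
UPDATE p
   SET p.ownersid = SUSER_SID('sa')
  FROM msdb.dbo.sysssispackages p
 WHERE p.name = 'MaintPlanOwner';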

[/codesyntax]

Now if you run the code used earlier to investigate, you will find that the owner has indeed changed.  The results of that query should be similar to the following.

There you have it.  No more hair tugging over something as benign as the owner of a Maintenance Plan.  This is one of those things that should be looked at as soon as you inherit a new server.
