Audit Database File Size Changes


Recently I shared an article on how to track the growths and shrinks that occur within database files.  I shared that article here; you should read it before proceeding with today's article.

Near the end of that article I declared that it was a really good method for tracking that information within SQL Server 2008.  But what if you happen to be on SQL 2012 or SQL 2014 (the newest release as of this writing)?  Will it work there too?

Does it Work Past 2008?

It is a good question.  Does that extended event session I shared work in SQL Server 2012 or 2014?  Let’s take a quick look.  I am skipping the setup scripts for that XE session in this article, so you will need to get them from the previous article in order to follow along in this segment.

Before we can investigate if this extended event session will work, let’s take a quick look to confirm that the session is running on the server.  We can do that with a query similar to the following.
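A query along these lines will do the trick.  This is only a sketch: the session name DBFileSizeChange is a stand-in for whatever you named the session from the previous article.

[codesyntax lang="tsql"]
-- Confirm the session exists and whether it is currently running.
-- 'DBFileSizeChange' is a hypothetical name; substitute your session name.
SELECT es.name AS SessionName
     , CASE WHEN xs.name IS NOT NULL THEN 'Running' ELSE 'Stopped' END AS RunState
FROM sys.server_event_sessions es
LEFT OUTER JOIN sys.dm_xe_sessions xs
    ON es.name = xs.name
WHERE es.name = 'DBFileSizeChange';
[/codesyntax]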

Running that query will produce results very similar to the following.

[Image: SessionCheck — the session exists and is shown as running]

This is good news.  If you noticed, I am querying a couple of views to get this information.  From the server_event_sessions catalog view I can determine whether the session exists.  From the dm_xe_sessions DMV, I can tell whether the session is running: when the session is enabled and running, the DMV returns a record for it; otherwise, the DMV holds no record for the session.

With a session running, we can now validate that it works properly by running any script that causes the files to grow or shrink.  Recall from the previous article that any growth or shrink operation will cause an event to fire with this session.  Here is a sample of the query I am running.
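Something like the following, sketched against the Sandbox2 database.  The logical file names (Sandbox2 and Sandbox2_log) are assumptions, so check sys.database_files for yours.

[codesyntax lang="tsql"]
USE Sandbox2;
GO
-- Segment 1: check the current file sizes (size is reported in 8KB pages)
SELECT name, size / 128.0 AS SizeMB
FROM sys.database_files;

-- Segment 2: shrink the files to force the event to fire
-- (logical file names are assumptions - verify with sys.database_files)
DBCC SHRINKFILE (Sandbox2, 2);
DBCC SHRINKFILE (Sandbox2_log, 2);

-- Segment 3: check the file sizes again
SELECT name, size / 128.0 AS SizeMB
FROM sys.database_files;
[/codesyntax]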

That query has three distinct segments.  The first is just to check my file sizes.  The second segment performs my file shrink operations.  And the final segment checks the file sizes again.  Here is what the first and third segments would look like on the Sandbox2 database that I used in the previous article.

[Image: filesizes_vold — file sizes before and after the shrink on Sandbox2]

With the evidence that we have the session running and that there was indeed a file size change, let’s now check the event session data and confirm whether or not the session is working on this SQL 2014 server.
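Something along these lines should do it (again assuming a ring_buffer target and the hypothetical session name DBFileSizeChange):

[codesyntax lang="tsql"]
-- Pull the raw event data from the session's ring_buffer target.
SELECT x.event_data.value('(@name)[1]', 'varchar(100)') AS EventName
     , x.event_data.value('(@timestamp)[1]', 'datetime2') AS EventTime
FROM
(
    SELECT CAST(t.target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions s
    INNER JOIN sys.dm_xe_session_targets t
        ON s.address = t.event_session_address
    WHERE s.name = 'DBFileSizeChange'
      AND t.target_name = 'ring_buffer'
) AS src
CROSS APPLY src.target_data.nodes('RingBufferTarget/event') AS x(event_data);
[/codesyntax]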

The preceding is the query I use to review the session data.  Running that query produces the following results.

[Image: nodata — the query returns an empty result set]

That's right!  There is no session data despite the event having occurred and despite the session currently running.  We also know that this session works (we demonstrated it on SQL 2008).  So there must be a bug; something is broken.  Right?

What Now?

Since the extended event obviously no longer works, we are left with few options.  We could always try falling back to the default trace.  After all, I demonstrated that the default trace is already trapping information about file shrinks.  That was discussed in the prior article and here as well.

So, what if we tried to go and capture all of the same information from the default trace?  We could certainly try that.  Assuming that the default trace is still running on the server, this query could get us pretty close.
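A sketch of such a query follows.  The event IDs are the documented default trace events for file size changes: 92 and 93 for data and log file auto grow, 94 and 95 for auto shrink, and 116 for the DBCC event.

[codesyntax lang="tsql"]
-- Read the file size change events out of the default trace.
DECLARE @path NVARCHAR(260);

-- Point at the first file in the default trace rollover set.
SELECT @path = REVERSE(SUBSTRING(REVERSE(path),
        CHARINDEX(N'\', REVERSE(path)), 260)) + N'log.trc'
FROM sys.traces
WHERE is_default = 1;

SELECT te.name AS EventName
     , t.DatabaseName
     , t.FileName
     , t.StartTime
     , (t.IntegerData * 8) / 1024.0 AS ChangeMB  -- IntegerData is in 8KB pages
     , t.LoginName
     , t.HostName
     , t.TextData  -- populated only for the DBCC event (116)
FROM fn_trace_gettable(@path, DEFAULT) t
INNER JOIN sys.trace_events te
    ON t.EventClass = te.trace_event_id
WHERE te.trace_event_id IN (92, 93, 94, 95, 116)
ORDER BY t.StartTime;
[/codesyntax]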

And this does a fairly decent job of getting the info we seek.  Sadly, though, it does not trap all of the necessary information.  Only the DBCC event (event 116) traps the SQL statement that triggered the event to be recorded in the default trace.  For the most part, it is a decent swing at getting the information.  But without the SQL statements tied to the events, I'd rather not use it; it really just shows me how many times the size changed, when each event occurred, and the size of the change.

There has got to be some other way of getting this to work in extended events.  A good question to ask is “Why does the extended event no longer work?”

A little digging, and one might eventually find a document that can shed some light on the problem.  Reading this document, we can see why the event no longer works.  It has been deprecated.  What?  After one release, they decided to take away a critical piece of information?  How can that be?

Time to back up those findings with something a little more authoritative, such as this.  Looking at this article, we see that the event was indeed deprecated.  But wait a minute: the event was not just deprecated, it was also replaced with a new event.  We are in business, so let's do some querying within event sessions.

Back in Business

We could probably have spared some time by checking the available events in SQL Server with this next query.  However, the events used in the previous event session still exist; if they did not, the session creation would have failed.  This can be a bit misleading, so it is good to have the confirmation from Microsoft that the events have been deprecated and merged into a single event.
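Here is a sketch of that check against the event metadata; the filter is intentionally loose so that both the old and new event names surface.

[codesyntax lang="tsql"]
-- List the available events related to file size changes.
SELECT p.name AS PackageName
     , o.name AS EventName
     , o.description
FROM sys.dm_xe_objects o
INNER JOIN sys.dm_xe_packages p
    ON o.package_guid = p.guid
WHERE o.object_type = 'event'
  AND o.name LIKE '%file_size%';
[/codesyntax]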

This produces the desired results with the new event name specified in that Microsoft article.

[Image: 2012filesizeevent — the metadata query returning the new database_file_size_change event]


Based on this information, a rewrite of the extended event session is both possible and necessary.  We can update the extended event session that audits when a database file changes in size.  It will look something like the following session.
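A sketch of the rewritten session follows.  The session name and the action list are assumptions (adjust them to match your session from the previous article); database_file_size_change is the replacement event.

[codesyntax lang="tsql"]
CREATE EVENT SESSION [DBFileSizeChange] ON SERVER
ADD EVENT sqlserver.database_file_size_change
(
    ACTION
    (
        sqlserver.client_hostname
      , sqlserver.database_name
      , sqlserver.sql_text
      , sqlserver.username
    )
)
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [DBFileSizeChange] ON SERVER
STATE = START;
GO
[/codesyntax]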

I happened to have that session running at the same time as the shrinkfiles that were run earlier in this article, so I can go ahead and check to see if anything was captured.  To check the session data, I will use the following query.
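This sketch shreds the ring_buffer XML.  The field names (database_name, file_name, size_change_kb, is_automatic) are what I would expect on the 2012 event, so verify them against sys.dm_xe_object_columns for your build.

[codesyntax lang="tsql"]
-- Shred the ring_buffer XML for the new event.
-- Field names are assumptions; verify them in sys.dm_xe_object_columns.
SELECT x.event_data.value('(@timestamp)[1]', 'datetime2') AS EventTime
     , x.event_data.value('(data[@name="database_name"]/value)[1]', 'nvarchar(128)') AS DatabaseName
     , x.event_data.value('(data[@name="file_name"]/value)[1]', 'nvarchar(128)') AS LogicalFileName
     , x.event_data.value('(data[@name="size_change_kb"]/value)[1]', 'bigint') AS SizeChangeKB
     , x.event_data.value('(data[@name="is_automatic"]/value)[1]', 'varchar(5)') AS IsAutomatic
     , x.event_data.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS SqlText
FROM
(
    SELECT CAST(t.target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions s
    INNER JOIN sys.dm_xe_session_targets t
        ON s.address = t.event_session_address
    WHERE s.name = 'DBFileSizeChange'
      AND t.target_name = 'ring_buffer'
) AS src
CROSS APPLY src.target_data.nodes('RingBufferTarget/event') AS x(event_data);
[/codesyntax]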

This new event for 2012 and beyond captures different data, which means I have access to better information about what is happening to my database files with regard to size changes (growths and shrinks).

Conclusion

If you happen to be running SQL Server 2012 or later, you will need to change any event sessions that were tracking file size changes.  Making the change is a bit of an exercise and can be frustrating, but it is well worth it.  The improved data that can be captured will help you better control and oversee the environment.

Auditing and Event SubClasses

Categories: News, Professional, Scripts, SSC
Comments: 1 Comment
Published on: May 28, 2014

A recent discussion got me to thinking about auditing.  To be honest, it started with a complaint about some documentation that seemed overly light on the various fields related to auditing as it stands in SQL Server.

In talking to the person who raised the valid concern about the lack of good documentation, I was curious why he suddenly had so many questions about auditing and its functionality within SQL Server.  Reflecting on the answers, it all made good sense and kind of fell into place with the whole audit life cycle.  I hadn't previously considered the life cycle, but it makes sense to map it out.  Here is a simple rendition of what an audit life cycle might entail.

[Image: AuditCycle_trans — a simple rendition of the audit life cycle]

In order to audit anything, it is necessary to know what you really want to audit, why you want to audit it and how to achieve those goals within the tools given to you.  In that vein, it makes sense that one would need to study up on the topic to figure out what different things meant within the tool.

Of course, once you start collecting that data, you also need to figure out how to measure it and then determine whether adjustments to the auditing plan need to be made.  In the end, it boils down to three questions: what data is to be collected, what are you doing with that data, and what does that data represent?

In our simple discussion, the data we were trying to understand was the Event Subclass field in this view (sys.trace_subclass_values) and in this Microsoft document (one of several).  The beauty of this field is that it is not tied solely to auditing; you will also find it in Profiler, server side traces, and Extended Events.

With so little information to help understand what the field values represent, maybe it is better to just turn to the data to help understand what the values might represent or how to interpret them.  To do this, we can query a few catalog views as in the following query.

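Something along these lines; the filter on trace_column_id 21 narrows the output to the EventSubClass column specifically, which is an assumption about what we were after.

[codesyntax lang="tsql"]
-- Map each trace event to its documented subclass values.
SELECT c.name AS CategoryName
     , e.name AS EventName
     , v.subclass_name
     , v.subclass_value
FROM sys.trace_events e
INNER JOIN sys.trace_categories c
    ON e.category_id = c.category_id
INNER JOIN sys.trace_subclass_values v
    ON e.trace_event_id = v.trace_event_id
WHERE v.trace_column_id = 21          -- 21 = the EventSubClass column
  AND e.name LIKE '%Audit%'           -- focus on the Audit event types
ORDER BY e.name, v.subclass_value;
[/codesyntax]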

With the above query, I can filter down to just the Event Types that have Audit in the name.  Or I could add a different filter so I can try and better understand the different subclasses in a more focused effort.

I hope this helps in your efforts to provide a better auditing or “profiling” type of experience in your environment.


Can you partition a temporary table?

Reading that title, you might sit and wonder why you would ever want to partition a temporary table.  I too would wonder the same thing.  That notwithstanding, it is an interesting question that I wanted to investigate.

The investigation started with a fairly innocuous venture into showing some features that do apply to temp tables but are commonly mistaken as limitations (i.e., features that supposedly don't work with temp tables).  To show this, I set off to create a script with reproducible results to demonstrate these features.  I have included all of those demonstrations in the same script that answers the partitioning question.

In fact, let's just jump to that script now.

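What follows is a minimal sketch of such a script.  The #hubbabubba table name comes straight from the article; the second table, the partition function and scheme names, and the row count are illustrative assumptions.

[codesyntax lang="tsql"]
USE tempdb;  -- for a temp table, the partition function/scheme must live in tempdb
GO

-- Drop leftovers from prior runs.
IF OBJECT_ID('tempdb..#hubbabubba') IS NOT NULL
    DROP TABLE #hubbabubba;
IF OBJECT_ID('tempdb..#PartitionedTemp') IS NOT NULL
    DROP TABLE #PartitionedTemp;
IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'psMonthly2014')
    DROP PARTITION SCHEME psMonthly2014;
IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'pfMonthly2014')
    DROP PARTITION FUNCTION pfMonthly2014;
GO

-- Myth-busting: a temp table CAN have constraints and indexes.
CREATE TABLE #hubbabubba
(
    ID       INT      IDENTITY(1, 1) NOT NULL
  , SomeDate DATETIME NOT NULL CONSTRAINT DF_SomeDate DEFAULT (GETDATE())
  , CONSTRAINT PK_hubbabubba PRIMARY KEY CLUSTERED (ID)
);
CREATE NONCLUSTERED INDEX IX_hubbabubba_SomeDate ON #hubbabubba (SomeDate);
GO

-- A monthly partition function and scheme for 2014 (12 boundaries = 13 partitions).
CREATE PARTITION FUNCTION pfMonthly2014 (DATETIME)
AS RANGE RIGHT FOR VALUES
('20140101', '20140201', '20140301', '20140401', '20140501', '20140601',
 '20140701', '20140801', '20140901', '20141001', '20141101', '20141201');
GO
CREATE PARTITION SCHEME psMonthly2014
AS PARTITION pfMonthly2014 ALL TO ([PRIMARY]);
GO

-- The partitioned temp table: clustered index created afterward, on the scheme.
CREATE TABLE #PartitionedTemp
(
    ID       INT      IDENTITY(1, 1) NOT NULL
  , SomeDate DATETIME NOT NULL
);
CREATE CLUSTERED INDEX CIX_PartitionedTemp
    ON #PartitionedTemp (SomeDate)
    ON psMonthly2014 (SomeDate);
GO

-- Populate with random dates spread across 2014.
INSERT INTO #PartitionedTemp (SomeDate)
SELECT TOP (10000)
       DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 365, '20140101')
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2;
GO

-- Check the distribution of rows across the partitions.
SELECT p.partition_number, p.rows
FROM tempdb.sys.partitions p
WHERE p.object_id = OBJECT_ID('tempdb..#PartitionedTemp')
  AND p.index_id = 1
ORDER BY p.partition_number;
[/codesyntax]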

In the beginning (after dropping objects if they exist), I start by creating a temp table that disproves a couple of mythical limitations.  These mythical creatures are the claims that temp tables can't have indexes and that they can't have constraints.

In this script, I show that a temp table (#hubbabubba) can indeed have indexes created on it (clustered and nonclustered).  I also demonstrate the creation of two different kinds of constraints on the #hubbabubba table.  The two constraints are a primary key and a default constraint.  That stuff was easy!!

To figure out whether one could partition a temporary table, I needed to do more than simply create a "test" temp table.  I had to create a partition function and a partition scheme, and then tie that partition scheme to a clustered index that I created after table creation.  Really, these are all the same steps as when creating partitioning on a standard (non-temporary) table.

With the partition function, scheme, and table created, it was time to populate the table with enough random data to make for a fair distribution.  You see, I created a partition boundary for each month of the year 2014, and to see partitioning in action, I wanted data in each of those partitions.

That brings us to the final piece of the whole script.  Kendra Little produced a script for viewing the distribution of data across partitions, so I used her script to demonstrate our data distribution.  If you run the entire script, including the data distribution segment at the end, you will see that there are 13 partitions, with each of the monthly partitions containing data.

The distribution of data into the different partitions demonstrates soundly that partitioning can not only be created on a temporary table, but that it can be used.  As for the secondary question today “Why would you do that?”, I still do not know.  The only reason that pops into my mind is that you would do it purely for demonstration purposes.  I can’t think of a production scenario where partitioning temporary data would be a benefit.  If you know of a use case, please let me know.

Supported Compatibility Levels in SQL Server

Categories: News, Professional, Scripts, SSC
Comments: 1 Comment
Published on: May 21, 2014

It is well documented and well known that SQL Server supports certain older versions of SQL Server in a compatibility mode.  This setting can be configured at the database level.  You can quickly change to an older compatibility level or revert to a newer one.

Changing the compatibility level is sometimes necessary.  Knowing which compatibility modes are available for each database is also somewhat necessary.  The common rule of thumb has been the current version and the two prior versions.  Even so, sometimes it is warm and fuzzy to be able to see the supported versions in some format other than the GUI for database properties.

Sure, one could go and check Books Online.  You can find that information there.  Or you could fire up this script and run with the output (as a guideline).
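Here is a sketch of such a script.  It bakes in the rule of thumb above (current version plus the two prior) as an assumption, which is exactly why the output should only be treated as a guideline.

[codesyntax lang="tsql"]
-- Derive the supported compatibility levels from the engine major version.
DECLARE @MajorVersion INT =
    CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS SYSNAME), 4) AS INT);

SELECT 'SQL Server ' + CASE lvl.CompatLevel
            WHEN 80  THEN '2000'
            WHEN 90  THEN '2005'
            WHEN 100 THEN '2008 / 2008 R2'
            WHEN 110 THEN '2012'
            WHEN 120 THEN '2014'
       END AS VersionName
     , lvl.CompatLevel
FROM (VALUES (80), (90), (100), (110), (120)) AS lvl(CompatLevel)
-- Rule of thumb (an assumption): the current version and the two versions prior.
WHERE lvl.CompatLevel BETWEEN (@MajorVersion - 2) * 10 AND @MajorVersion * 10;
[/codesyntax]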


This script will return results such as the following.

[Image: Picture0002 — script output listing the supported compatibility levels]

And if we wanted to see the results for a SQL Server 2014 installation, we would see the following.

[Image: Picture0003 — script output on a SQL Server 2014 installation]

The output is displayed in the same format you might see it if you were to use the Database Properties GUI.  That said, if you are using the GUI in SQL Server 2014, you might run into the following.

[Image: Picture0006 — the SQL Server 2014 Database Properties GUI offering a SQL Server 2005 (90) compatibility level]

Notice the additional compatibility level for SQL 2005?  If you check the documentation, you will find that compatibility level 90 is not supported in SQL 2014.  In fact, it says that a database at compatibility level 90 will be upgraded to level 100 (SQL 2008) automatically.  You can find all of that and more here.

If you tried to select compatibility 90, you might end up with an error.  If you are on 2014 CTP2, you will probably be able to change the compat level without error.

Anyway, this is the message you might see when trying to change to compatibility 90.

[Image: Picture0005 — the error raised when changing to compatibility level 90]

They sometimes say that "seeing is believing."  Well, in this case, you may be seeing a compatibility level in the 2014 GUI that just isn't valid.  Keep that in mind when using the GUI or trying to change compatibility modes.

Table Space updated again

Categories: News, Professional, Scripts, SSC
Comments: 2 Comments
Published on: March 28, 2014

Today we have another update for an age-old script on this site.  You can find the last update to the script here.

This time we have a bit of a bug update.  The details of that bug are in the notes for the script.

Alternatively, you can download the script from here (in case a plugin update breaks the script again).

Table Hierarchy updated

Recently a need resurfaced to explore the foreign key tree (hierarchy/genealogy) as it related to a specific table within a database.  As I pulled out the script from the repository, I realized there was some unfinished work to be done.  But there was also some polish that needed to be added.  This is an update to the most recent posting of that script.  You can see several revisions in the series at this link or the group here.

Some of the changes involve formatting and labeling.  I added a new column called "Direction" to help show the relationship of the key to the table in question.  I also changed the FKGenealogy column (formerly called SortCol) to reflect the source table more accurately when the key comes from an ancestor rather than a descendant.  The Level of the FK was also modified to help convey how far away the ancestor is from the origin table.

A final adjustment also comes in the Genealogy attribute.  Ancestors were all starting at the wrong point in the lineage.  I adjusted that so the lineage is shown from the point in the tree where the ancestor is related, rather than as a root coming directly off the origin table.

All in all, this script should make more sense to the user than the previous versions.

T-SQL Tuesday #45 Follow the Yellow Brick Road

[Image: TSQL2sDay150x150 — T-SQL Tuesday logo]

We have the distinct pleasure once again to have a party full of SQL enthusiasts.  The best part is that few if any of us are even in the same room or building for this party.

The party is known as TSQL Tuesday and this month is hosted by Mickey Stuewe (Blog | Twitter).  This month Mickey has come up with the notion that for some reason or another you may want to have something audited, tracked, traced or otherwise followed.  To quote Mickey from the invitation (which you can read here): “So, it is time to follow Dorothy and Toto down the yellow brick road and to share your experience with auditing data.”

Mickey did provide some viable options for auditing.  And I didn't see any restrictions on what the auditing topic could be.  So I have a slightly different spin on the topic than the direction Mickey appeared to be headed with her examples and description in the invite.

That said, I will be discussing a topic along the vein of "finding out who did something stupid last week."  The T-SQL Tuesday topic once again works out well for me this month because I have a backlog of blog topics on this subject that I have been meaning to write.  So let's get at least one of them knocked out of the way with this entry into the blog party of the month.

[Image: Ozcurt]

Behold the Great and Powerful Audit

Have you ever come into work and been blasted first thing with “everything is slow!”

Then you start to dig into your email reports, and you begin to wonder why 50% of your clustered indexes are jacked up to 99% fragmentation.  You know that sort of thing never happens suddenly in your environment because you have your finger on the pulse of all things DB related – or so you thought.

You check your index defrag log and see that it ran perfectly fine last night and you even see the report on fragmentation from after the defrag process was completed.  You begin to really scratch your head.  You know that you have no process that does any bulk loads or ETL batches after the index defrag process.  Nothing is really making any sense.  Then a lightbulb goes off and you check your report on database sizes.  You see that the database is now several hundred GB smaller and there is no free space left in the data files.  Free space that you had left in the data file intentionally.

[Image: Steam_eruption]

Now you are steamed and about to blow your top like this geyser.  Who has done something so stupid as to mess around with your data files?  We senior DBAs are somewhat territorial about the databases we support, and we don't like people making changes they haven't discussed with us.  So it is perfectly understandable if you are a bit steamed by an event such as this.

The question now is: What are you going to do about it?  How are you going to find who jacked up your data files?

The answer is in the default trace.

The optimal solution may not be to rely on the default trace to capture an event that occurred in the past, but rather to proactively monitor it through other means (perhaps an extended event, for example).  But if you were caught off-guard and need the information after the fact, you may be able to trap it from the default trace.
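A sketch of such a query follows.  It leans on the Audit DBCC event (event 116) in the default trace, which records the statement text, so a simple filter on SHRINK finds the culprit.

[codesyntax lang="tsql"]
-- Find shrink operations recorded by the default trace.
DECLARE @path NVARCHAR(260);

SELECT @path = REVERSE(SUBSTRING(REVERSE(path),
        CHARINDEX(N'\', REVERSE(path)), 260)) + N'log.trc'
FROM sys.traces
WHERE is_default = 1;

SELECT t.StartTime
     , t.DatabaseName
     , t.TextData          -- the DBCC SHRINKFILE / SHRINKDATABASE statement
     , t.LoginName
     , t.HostName
     , t.ApplicationName
FROM fn_trace_gettable(@path, DEFAULT) t
WHERE t.EventClass = 116   -- Audit DBCC Event
  AND t.TextData LIKE N'%SHRINK%'
ORDER BY t.StartTime DESC;
[/codesyntax]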

When I run the above query, I get the following results (yes, I intentionally shrank my msdb database for this example).

[Image: results — the shrink of msdb, along with who ran it, from where, and when]

Now when you have a sysadmin or a junior DBA (or that manager with too many permissions) doing things they shouldn’t be doing, you have some evidence to show them.  You also can go to the person that “did something really stupid” and offer them a chance to learn why it might have been a bad thing – but do it nicely.  You don’t want to be the one that looks like a jerk.

Lost that Single-User Connection?

You have changed your database to single_user mode to do a task.  As you go about your business, you lose track of which connection was the single_user connection.  You start closing connections and mistakenly close the session that was your single_user connection.  Now you are unable to start a new single_user session.  It would seem that somebody or something has taken your connection.

Today, I am going to discuss some things you may do to get around this problem.

The first thing that may come to mind when you encounter this is “Oh crap!”  Well, no need to get too terribly worried (not unless you really hosed something up and you are trying to fix it real quick before the boss notices).

The next thing you may think of trying is how to circumvent the single_user mode.  During that thought process you may be thinking that single_user does not really mean single_user, so you might try something like starting a DAC session.  Well, let's go through that and see what would happen in a DAC session if your single_user session is stolen.

I am going to skip the part of setting a database into single_user mode because we are presuming that the condition already exists.  To start a DAC session, I am going to point you to a previous article I did on the topic – here.

To confirm I am using a DAC session, I am going to issue the following query.  It verifies both that I am in the right session and that DAC is in use.

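A sketch of that check: the DAC listens on its own endpoint, so looking up the endpoint for the current session tells us whether we are on the dedicated connection.

[codesyntax lang="tsql"]
-- Which endpoint is the current session using?
SELECT s.session_id
     , e.name AS EndpointName
FROM sys.dm_exec_sessions s
INNER JOIN sys.endpoints e
    ON s.endpoint_id = e.endpoint_id
WHERE s.session_id = @@SPID;
[/codesyntax]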

In my case, this results in an endpoint with the name "Dedicated Admin Connection" and a spid of 84.  Good, I am in the correct session for the rest of this test.  Next, I will issue a USE statement.  I have created a test database called ClinicDB, so I will issue the following.

[codesyntax lang="tsql"]
USE ClinicDB;
GO
[/codesyntax]

I get the following result.

[codesyntax lang="tsql"]
Msg 924, Level 14, State 1, Line 1
Database 'ClinicDB' is already open and can only have one user at a time.
[/codesyntax]

So, that blows that idea right out of the water.  It shouldn’t really have been a consideration in the first place because single_user really means just that – single_user.

Now What?

Well, what do you think we could do now to circumvent this little problem and get that single_user session back?

That requires a little investigative work.  It is time to find out who has taken the single_user session and politely ask them to give it up.  To make that task a little easier, we could modify the previous query to find out who holds that single_user session (thus limiting how many people we have to ask).  I have modified the following query to use sys.sysprocesses so I could limit the results to the ClinicDB database.  This is a limitation of SQL 2008 R2 and older versions: getting the database reliably means using sysprocesses.  One would think you could use sys.dm_exec_requests, but if a request is not active, no entry exists for that session.  This problem is fixed in SQL 2012, where the sys.dm_exec_sessions DMV now has the database_id field.  Enough of that birdwalk and on to the query.

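A sketch of that tweak, filtering sysprocesses down to sessions parked in the ClinicDB database:

[codesyntax lang="tsql"]
-- Who is connected to the single_user database?
SELECT sp.spid
     , sp.loginame
     , sp.hostname
     , sp.program_name
     , DB_NAME(sp.dbid) AS DatabaseName
FROM sys.sysprocesses sp
WHERE sp.dbid = DB_ID('ClinicDB');
[/codesyntax]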

I chose not to do an entirely new query to simply demonstrate that it was possible with a very small tweak to what has been already used.

Now that you know (in my case I can see that I have a session open with ID = 80 that is connected to that single_user database), I can walk over to the person (knowing his/her login id and computer name) and politely ask them to disconnect.

In the end, this is really an easy thing to resolve.  Sure it may take some people skills – but that doesn’t make the task too terribly difficult.  Next time this happens to you, just remember you can run a quick query to find who has sniped that single_user session.

Audit Configuration Changes

Do you know the last time a Server Property was changed on your instances of SQL Server?

Are you wondering when the setting for max degree of parallelism was changed?

Do you know who changed the setting?

In some environments there are a lot of hands trying to help mix the pot.  Sometimes more hands can make light work.  This is not always the case though.  More hands in the mix can be a troublesome thing.  Especially when things start changing and the finger pointing starts but nobody really knows for sure who made the change or when the change was made.

I know, that is why there is this concept called change management.  If you make a change to a setting, it should be documented, approved, and communicated.  Unfortunately, the process does not always dictate the workflow, and it may be circumvented.  This is why auditing is a tool that is in place and should be in place – like it or not.

Auditing can be a very good tool.  But like many things, too much of a good thing is not a good thing.  Too much auditing can be more of a hindrance than a help.  You don't want to cause interference by auditing too many things.  You also don't want so much data that the facts get blurred.  I hope this script strikes a balance, with just the right amount of data being captured to be of use.

The basic principle in auditing server configs is to find which values changed, when they were changed, and by whom.  There are ancillary details that can help track the source of the change, such as the hostname of the computer making the change.  These are all things we should capture.  But if a setting hasn't changed, then we need not report that it was unchanged (it should go without saying).

So for this, I created a couple of tables and then a script that I can add to a job to run on a regular basis.  I can put the script in a stored procedure should I desire.  I’ll leave that to you as an exercise to perform.

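Here is a sketch of the idea under a few stated assumptions: the article mentions a couple of tables, but this version collapses the idea into a single snapshot table with a made-up name (dbo.ConfigAudit); the snapshot runs daily; and configuration changes are matched to the default trace through the ErrorLog event (event 22), whose message text reads "Configuration option 'xyz' changed from A to B."

[codesyntax lang="tsql"]
-- Audit table for daily snapshots of the server configuration settings.
-- dbo.ConfigAudit is a hypothetical name.
IF OBJECT_ID('dbo.ConfigAudit') IS NULL
BEGIN
    CREATE TABLE dbo.ConfigAudit
    (
        configuration_id INT          NOT NULL
      , ConfigName       NVARCHAR(70) NOT NULL
      , ConfigValue      SQL_VARIANT  NULL
      , CaptureDate      DATETIME     NOT NULL DEFAULT (GETDATE())
    );
END;

-- Capture today's settings.
INSERT INTO dbo.ConfigAudit (configuration_id, ConfigName, ConfigValue)
SELECT configuration_id, name, value_in_use
FROM sys.configurations;

-- Locate the default trace.
DECLARE @path NVARCHAR(260);
SELECT @path = REVERSE(SUBSTRING(REVERSE(path),
        CHARINDEX(N'\', REVERSE(path)), 260)) + N'log.trc'
FROM sys.traces
WHERE is_default = 1;

-- Compare the current snapshot to the previous one and report changes,
-- cross referencing the default trace for who made each change.
WITH Snaps AS
(
    SELECT configuration_id, ConfigName, ConfigValue, CaptureDate
         , ROW_NUMBER() OVER (PARTITION BY configuration_id
                              ORDER BY CaptureDate DESC) AS RowNum
    FROM dbo.ConfigAudit
)
SELECT cur.ConfigName
     , prev.ConfigValue AS PriorValue
     , cur.ConfigValue  AS CurrentValue
     , tr.LoginName
     , tr.HostName
     , tr.StartTime
FROM Snaps cur
INNER JOIN Snaps prev
    ON cur.configuration_id = prev.configuration_id
   AND prev.RowNum = cur.RowNum + 1
OUTER APPLY
(
    -- The ErrorLog event (22) carries the text:
    -- "Configuration option 'xyz' changed from A to B."
    -- The SUBSTRING pulls the configuration name out for the join.
    SELECT TOP (1) t.LoginName, t.HostName, t.StartTime
    FROM fn_trace_gettable(@path, DEFAULT) t
    WHERE t.EventClass = 22
      AND CAST(t.TextData AS NVARCHAR(MAX)) LIKE N'Configuration option%'
      AND SUBSTRING(CAST(t.TextData AS NVARCHAR(MAX)), 23,
            ISNULL(NULLIF(CHARINDEX(N'''',
                CAST(t.TextData AS NVARCHAR(MAX)), 23), 0) - 23, 0))
          = cur.ConfigName
    ORDER BY t.StartTime DESC
) tr
WHERE cur.RowNum = 1
  AND cur.ConfigValue <> prev.ConfigValue;
[/codesyntax]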

Here I am trapping the config settings on a daily basis (as the script is currently written).  I then cross-reference the current settings against the previous settings, and then check the default trace to see whether any of those changed settings appear in the trace file.

The default trace captures the data related to configuration changes.  On busy systems, it is still possible for these events to roll out of the trace files.  For those systems, we may need to make some alterations to the script (such as running it more frequently and changing the comparisons to account for time intervals smaller than a day).

To ensure proper comparison between the audit tables and the trace file, note the substring function employed.  I can capture the configuration name and then join to the audit tables on configuration name.

This has proven useful to me so far in tracking who did what to which setting and when they did it.

I hope you will find good use for it as well.

SSRS Subscriptions Report

As a part of my series leading up to Christmas 2012, I shared a script to report on SSRS subscriptions.  It was soon found to have a bug with SQL Server 2008 R2 SP2.  In the comments on that post, I promised to post an updated script.  Here is that update – without the bug.
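The full script lives in the download below.  As a flavor of what it does, here is a hedged sketch that reads subscription metadata from the standard ReportServer catalog tables (the full version also shreds the XML delivery settings, which vary by delivery extension):

[codesyntax lang="tsql"]
USE ReportServer;  -- or your named ReportServer database
GO
-- A taste of the subscription report: report, owner, status, and last run.
SELECT c.[Path] AS ReportPath
     , c.Name AS ReportName
     , u.UserName AS SubscriptionOwner
     , s.Description
     , s.LastStatus
     , s.LastRunTime
FROM dbo.Subscriptions s
INNER JOIN dbo.[Catalog] c
    ON s.Report_OID = c.ItemID
INNER JOIN dbo.Users u
    ON s.OwnerID = u.UserID
ORDER BY c.Name;
[/codesyntax]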


The full inline version of the script tends to get reformatted by the plugin and will throw an error due to capitalization of the function value and text().  Download the script here: SSRS_SubscriptionsV1_5
