Once again we have the distinct pleasure of attending a party full of SQL enthusiasts. The best part is that few if any of us are even in the same room or building for this party.
The party is known as T-SQL Tuesday, and this month it is hosted by Mickey Stuewe (Blog | Twitter). Mickey's premise is that, for one reason or another, you may want to have something audited, tracked, traced, or otherwise followed. To quote Mickey from the invitation (which you can read here): “So, it is time to follow Dorothy and Toto down the yellow brick road and to share your experience with auditing data.”
Mickey did provide some viable options for auditing, and I didn't see any restrictions on what the auditing topic could be. So I have a slightly different spin on the topic than the direction Mickey seemed to be headed with the examples and description in the invite.
That said, I will be discussing a topic in the vein of “finding out who did something stupid last week.” The T-SQL Tuesday topic works out well for me once again this month because I have a backlog of blog posts on this subject that I have been meaning to write. So let's knock at least one of them out of the way with this entry into the blog party of the month.
Have you ever come into work and been blasted first thing with “everything is slow!”?
Then you start to dig, check your email reports, and begin to wonder why 50% of your clustered indexes have jumped to 99% fragmentation. You know that sort of thing never happens suddenly in your environment because you have your finger on the pulse of all things DB related – or so you thought.
You check your index defrag log and see that it ran perfectly fine last night; you even see the post-defrag fragmentation report. You begin to really scratch your head. You know you have no process that does any bulk loads or ETL batches after the index defrag process. Nothing is making any sense. Then a lightbulb goes off and you check your report on database sizes. You see that the database is now several hundred GB smaller and there is no free space left in the data files – free space that you had intentionally left there.
Now you are steamed and about to blow your top like a geyser. Who has done something so stupid as to mess around with your data files? We senior DBAs are somewhat territorial about the databases we support, and we don't like people making changes they haven't discussed with us. So it is perfectly understandable if you are a bit steamed by an event such as this.
The question now is: What are you going to do about it? How are you going to find out who jacked up your data files?
The answer is in the default trace.
The following query reads the default trace and surfaces any SHRINKFILE commands that were issued, along with who issued them and from where:

SELECT  T.DatabaseName
        , T.TextData
        , T.HostName
        , T.ApplicationName
        , T.LoginName
        , T.StartTime
        , CASE T.IsSystem
            WHEN 1 THEN 'YES'
            ELSE 'NO'
          END AS IsSystem
        /* Check IsSystem to help ensure Autoshrink didn't cause the shrink
           MSDN indicates 1 = system and 0 = user
           My testing shows NULL = user and 0 did not occur */
FROM sys.fn_trace_gettable(CONVERT(VARCHAR(150),
        ( SELECT TOP 1 f.[value]
          FROM sys.fn_trace_getinfo(NULL) f
          WHERE f.property = 2  /* property 2 = trace file path */
        )), DEFAULT) T
WHERE T.TextData LIKE '%SHRINKFILE%'
    AND T.EventClass = 116;
/* Event Class 116 occurs when a DBCC command is issued */
The optimal solution may not be to rely on the default trace to capture an event after the fact, but rather to monitor proactively through other means (an Extended Events session, for example). But if you were caught off guard and need to reconstruct what happened, you may still be able to trap the information from the default trace.
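If you do want to go the proactive route, here is a minimal sketch of what such an Extended Events session could look like. The session name, file target name, and choice of actions are my own assumptions for illustration; the sql_batch_completed event and the like_i_sql_unicode_string predicate are standard Extended Events pieces in SQL Server 2012 and later.

```sql
-- Hypothetical session name and target file name; adjust for your environment.
CREATE EVENT SESSION [AuditShrinkFile] ON SERVER
ADD EVENT sqlserver.sql_batch_completed
(
    -- Capture who ran the command, from which machine, and against which database
    ACTION ( sqlserver.server_principal_name
           , sqlserver.client_hostname
           , sqlserver.client_app_name
           , sqlserver.database_name )
    -- Case-insensitive match on any batch containing SHRINKFILE
    WHERE sqlserver.like_i_sql_unicode_string(sqlserver.sql_text, N'%SHRINKFILE%')
)
ADD TARGET package0.event_file ( SET filename = N'AuditShrinkFile' );
GO

ALTER EVENT SESSION [AuditShrinkFile] ON SERVER STATE = START;
```

Unlike the default trace, which rolls over and eventually discards old events, the event_file target gives you a record you control, and you can read it back later with sys.fn_xe_file_target_read_file.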
When I run the above query, I get the following results (yes, I intentionally shrunk my msdb database for this example).
Now when you have a sysadmin, a junior DBA, or that manager with too many permissions doing things they shouldn't be doing, you have some evidence to show them. You can also go to the person who “did something really stupid” and offer them a chance to learn why it might have been a bad thing – but do it nicely. You don't want to be the one who looks like a jerk.