T-SQL Tuesday #104: Just Can’t Cut That Cord

We all have our favorite scripts, tools, or utilities. Those are the things that help make our jobs easier. Some of us may have an unhealthy relationship with some of those scripts (similar in nature to the relationship many have with their phone). Whether or not the need to cut that proverbial cord exists, today we are not discussing the health of that dependence. Suffice it to say, sometimes we simply need to upgrade our scripts. And how else can we get better scripts, or make our scripts better, than by sharing them?

This is precisely the goal Bert Wagner (b | t) seems to have envisioned for the 104th installment of TSQL Tuesday.

If you are interested in reading the original invite, you can find that here.

“For this month’s T-SQL Tuesday, I want you to write about code you’ve written that you would hate to live without.

Maybe you built a maintenance script to free up disk space, wrote a query to gather system stats for monitoring, or coded some PowerShell to clean up string data. Your work doesn’t need to be completely original either – maybe you’ve improved the code in some open source project to better solve the problem for your particular situation.”

There is a high probability that through the sharing of your script, somebody out there can benefit from it. In addition, it is very likely that somebody will make a suggestion to help make your script better. Worst case (emphasis on worst case here), you have the script stored somewhere with half-decent instructions on what it does, making it easily accessible for you to use again and again. Just in case you forget you have it out there – you can Google for it again and find it on your own blog ;).

Personally, I have been able to find and re-use some of my older scripts. Not only do I get to re-discover them, but I also get to re-imagine a new use or improvement for the script.

Brief Intermission

A shout out is absolutely necessary for Adam Machanic (twitter) for picking the right blog meme that has been able to survive so long in the SQLFamily. This party has helped many people figure out fresh topics as well as enabled them to continue to learn.

Easy Access

While pondering the topic for today, it occurred to me how frequently I already post scripts on my blog. An easy out for this topic would have been to re-share one of those old scripts. For instance, I could easily redo a recent article about server access that has a couple of scripts demonstrated in it. Or I could go back a few years to my articles about foreign keys (here or here) and space use (here or here). Even more intriguing could be to re-envision some of my articles on Extended Events. But where would the fun in that be?

Rather than take the easy road and rehash something, I have something different. This one goes hand in hand with the numerous articles and scripts I have previously provided on auditing – yet it is different.

Not every shop can afford third-party software or even Enterprise edition, so they have to come up with a different way to audit their database instances. One of the problems with a home-grown solution is ensuring the data is not stored locally on the monitored server (there are lots of good reasons for that). Here is an example of what I did for one client who happened to have a developer that had found a back door giving him SA access to the SQL Server instance, and who was changing things and trying to cover his tracks – even after being warned.

First, the Query

This query will be run from a job on a different server, to which access is restricted to just a select few people. I do rely on the default trace in this query. I am also reliant upon a little bit of sneaky behavior. Because it runs from a separate server, prying eyes are unlikely to find that it is running, which makes it easier to catch them red-handed. In addition, if they do discover (via some sort of trace, and with a lot of luck) that it is running, they have no access to the remote server to alter anything that was captured.

The query does go out to the default trace and pull back any changes to permissions or principals on the server in question. The captured data is then stored in a database that is also restricted to a select few people. Lastly, the captured data can be routinely queried, or automated reports can be created to send email notifications of changes encountered.

The second part of the trickery here is that I am using a linked server to perform the queries (with a slight change, I could also do this via PowerShell, which will be shown in a future article). The linked server query uses the OPENQUERY format and sends the default trace query to the remote server. Since I am running this from a job on an administrative server and pulling only a limited data set, I am not overly concerned with the linked server setup here.
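The pull can be sketched along these lines. This is a minimal reconstruction, not the original script: the linked server name [AuditTarget] and the exact event-class list are assumptions you would adjust to your environment.

```sql
/* Pull permission/principal changes from the remote default trace via a
   linked server. [AuditTarget] is an assumed linked server name. The
   event-class list (security audit GDR, add login, role membership, etc.)
   is illustrative - tune it to the events you care about. */
SELECT *
FROM OPENQUERY([AuditTarget], '
    DECLARE @path NVARCHAR(260);

    /* Resolve the base default trace file so rollover files are included */
    SELECT @path = SUBSTRING(path, 1,
                       LEN(path) - CHARINDEX(N''\'', REVERSE(path)))
                   + N''\log.trc''
    FROM sys.traces
    WHERE is_default = 1;

    SELECT gt.HostName,
           gt.ApplicationName,
           gt.LoginName,
           gt.StartTime,
           te.name AS EventName,
           gt.DatabaseName,
           gt.TargetLoginName,
           gt.RoleName,
           gt.TextData
    FROM fn_trace_gettable(@path, DEFAULT) gt
        INNER JOIN sys.trace_events te
            ON gt.EventClass = te.trace_event_id
    WHERE gt.EventClass IN (102, 103, 104, 105, 106, 108, 109, 110, 111)
        AND gt.StartTime > DATEADD(DAY, -1, GETDATE());
');
```

Because OPENQUERY executes the batch on the remote side, the default trace never needs to be touched interactively on the monitored server.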

Storing It

Once I query the data, I need to put it somewhere on my administrative server. The table setup for that is very straightforward.
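A sketch of that storage table follows; the table and column names here are illustrative stand-ins (mirroring the trace columns being captured), not necessarily the originals.

```sql
/* Storage table on the administrative server for the captured trace rows.
   Names are assumed for illustration. */
CREATE TABLE dbo.AuditPermissionChange
(
    AuditID         INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    ServerName      NVARCHAR(128)  NOT NULL,  -- which monitored instance
    HostName        NVARCHAR(128)  NULL,
    ApplicationName NVARCHAR(256)  NULL,
    LoginName       NVARCHAR(256)  NULL,      -- who made the change
    StartTime       DATETIME       NULL,      -- when the event fired
    EventName       NVARCHAR(128)  NULL,      -- e.g. Audit Add Login Event
    DatabaseName    NVARCHAR(128)  NULL,
    TargetLoginName NVARCHAR(256)  NULL,      -- principal that was changed
    RoleName        NVARCHAR(128)  NULL,
    TextData        NVARCHAR(MAX)  NULL,      -- statement text, if captured
    CaptureDate     DATETIME       NOT NULL DEFAULT (GETDATE())
);
```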

After creating this table, I am ready to store the data. All I need to do is throw the audit query into an agent job and schedule it to run on a regular schedule. For my purposes, I usually only run it once a day.

The Wrap

This has been my diatribe about service and giving back to the community. When done properly, there is a natural effect of enhancing one’s personal life in proportion to the effort given to the community.

Oh, and if you are interested in some of my community contributions (which, according to Jens Vestergaard, are an awesome contribution), read this series I have published.

A Haunting TSQL Tuesday Tale

Published on: October 11, 2010

Time for the ghouls and goblins to come out of the woodwork once again for another tale of deception and tyranny. OK, maybe not tyranny, but a haunting tale is definitely to be told this month while trick-or-treaters plan their routes and mend their costumes. We have SQueeLing experts suiting up for the adult party known as TSQL Tuesday.

This month the vampires, ghosts, and zombies are all breaking it down at Sankar Reddy’s place (Blog | Twitter).  The ghoulish stories to be shared are to be myths from the underSQLworld.

Truncating a Table is Unrecoverable

Have you heard this one in the past?  If you truncate a table, you cannot undo the action like you can with a delete.  What is the difference between the two methods to remove data from a table?

When you truncate a table, the pages are deallocated with minimal locking of the resources.  When you use the delete statement, the data is deleted row-by-row with more locks.  Everybody knows that a delete can be rolled back.  Many people believe that a Truncate cannot be rolled back.

Well, at this point we really need a table to test.  So let’s create a table and perform a few simple tests.
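The original test-table script isn’t reproduced here, so the following is a minimal stand-in; the table and column names are assumed for illustration.

```sql
/* Minimal test table plus a few rows to delete/truncate against. */
USE tempdb;
GO

CREATE TABLE dbo.TruncateTest
(
    ID  INT IDENTITY(1, 1) NOT NULL,
    Val VARCHAR(20) NOT NULL
);
GO

INSERT INTO dbo.TruncateTest (Val)
VALUES ('row one'), ('row two'), ('row three');
```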

With the test table ready to go, we will begin with the baseline demonstration on the Delete command.  Here is the script to demonstrate that a Delete can be rolled back (but we already knew that).
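A hedged reconstruction of that baseline script, assuming a small test table named dbo.TruncateTest:

```sql
/* Baseline: DELETE inside an explicit transaction, then roll it back. */
SELECT COUNT(*) AS RowsBefore FROM dbo.TruncateTest;

BEGIN TRANSACTION;
    DELETE FROM dbo.TruncateTest;
    SELECT COUNT(*) AS RowsInsideTran FROM dbo.TruncateTest;  -- 0
ROLLBACK TRANSACTION;

/* The rollback restores the rows: matches RowsBefore */
SELECT COUNT(*) AS RowsAfterRollback FROM dbo.TruncateTest;
```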

Pretty simple and straight forward – baseline is proven with that script.  The premise that a Delete can be rolled back has been established.  Now, how about that truncate statement?  For the truncate statement, I will use the same simple script, substituting the appropriate commands in this script.
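The truncate version of the same sketch, again assuming the dbo.TruncateTest test table:

```sql
/* Same pattern, substituting TRUNCATE TABLE for DELETE. */
SELECT COUNT(*) AS RowsBefore FROM dbo.TruncateTest;

BEGIN TRANSACTION;
    TRUNCATE TABLE dbo.TruncateTest;
    SELECT COUNT(*) AS RowsInsideTran FROM dbo.TruncateTest;  -- 0
ROLLBACK TRANSACTION;

/* The truncate rolls back too: matches RowsBefore */
SELECT COUNT(*) AS RowsAfterRollback FROM dbo.TruncateTest;
```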

When this script is executed, we see that indeed the data is recoverable.  Here are the results I get from the truncate and associated rollback script.

Conclusion

I think this is one of those myths that is frequently floated throughout the SQL world due to another myth that is related to the Truncate statement.  It is often explained that you can’t recover a truncate because the truncate is not logged.  That is also obviously wrong (if it weren’t logged, how could you recover the table without a restore?).  The myth about Truncate being non-logged has already been debunked (as has this one – in a different series) by Paul Randal in his Myth-A-Day series.  The key to this is that the Truncate is logged and that you wrap the command in a transaction – and only commit when you are ready.

This happens to be a good post for the back to basics series. Other topics in the series include (but are not limited to): backups, backup history, and user logins.
