Audit Schema Change Report

Published on: March 11, 2015

In a recent article on SSG, I discussed how to use Extended Events to function in a Profiler-like fashion. You can read about that here. I recommend reading that article first because it lays some of the groundwork for this one.

Within Management Studio, there is an option on a right-click context menu to run a report called "Schema Changes History". The name of this report suggests that there is some means to find when a change occurred in the schema, and potentially what the change was as well as who made it. It does say "Schema Changes History" after all.

If you run the report, you would be able to see a report similar to the following.

[Image: Schema Changes History report output]

This looks like it could be a pretty useful report.  If I drill into the collapsed items, I can get better information.  But, how exactly is this report producing this kind of data?  I don’t have a database audit running to trap this information.  I also don’t have SSDT-BI installed, so I can’t export the report and check the report definition.

I could run a trace and refresh the report data to see what pops up. I would really hate to use Profiler, though, and I really don't want to go through the hassle of creating a server-side trace just to figure out the source data definitions. Then the lightbulb moment occurs. I already have an XE session that can trace SQL queries just like Profiler. Why not use a better tool to do the job and satisfy my curiosity at the same time?

So, go read that article I wrote for SSG to get the XE session definition; it will help you follow along a little better. With that XEvent session running, I can refresh the report data and start to evaluate what is happening in order to produce the audit report.

By this point, you have probably thought to yourself that the source of the data must be the default trace. If you thought that, you are correct. It is a good presumption, based on knowledge of what kind of data is tracked in the default trace. But I really wanted to know for certain what the source of the data was. As I perused the XEvent session data, I noticed several queries similar to the following in the results.

[Image: XE session capture showing the report's queries]

Yes, I am using the GUI (*shrug*) in this image to explore the data from the session; that is just for display friendliness. As you look through those queries, you start to see a few patterns and some obvious signs that the source of the data is the default trace. Here is a cleaner look at the code being used to get the dataset.
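
For reference, here is a representative reconstruction of that pattern. I am paraphrasing from the session capture, so treat it as an approximation of what the report runs rather than the verbatim source (the temp table is my own substitution, as I confess below):

DECLARE @curr_tracefilename VARCHAR(500),
        @base_tracefilename VARCHAR(500),
        @indx INT;

-- Locate the active default trace file
SELECT @curr_tracefilename = path
FROM sys.traces
WHERE is_default = 1;

-- Trim the rollover number so all files are read from the base file (log.trc)
SET @curr_tracefilename = REVERSE(@curr_tracefilename);
SET @indx = PATINDEX('%\%', @curr_tracefilename);
SET @curr_tracefilename = REVERSE(@curr_tracefilename);
SET @base_tracefilename = LEFT(@curr_tracefilename,
                               LEN(@curr_tracefilename) - @indx) + '\log.trc';

-- My substitution: a temp table instead of the report's table variable
SELECT ObjectName, DatabaseName, StartTime, EventClass,
       EventSubClass, ObjectType, ServerName, LoginName, ApplicationName
INTO #SchemaChanges
FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
WHERE EventClass IN (46, 47, 164) -- Object:Created, Object:Deleted, Object:Altered
  AND EventSubClass = 0
  AND DatabaseID <> 2;            -- ignore tempdb churn

SELECT *
FROM #SchemaChanges
ORDER BY StartTime DESC;

DROP TABLE #SchemaChanges;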

Now, I must confess that I altered it a bit to make sure it was doing what I thought. It was just a little change: the script references a table variable whose definition was missing, so I swapped that for a quick insert into a temp table. But the reality is, this is pretty much the source code of the report. It is not the prettiest code, but it works.

Due to the lack of prettiness in the code, and the seemingly overly complex means of getting to the end result, I decided I wanted to "tune" it up a little bit.
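
Here is roughly where I ended up. This is a sketch of the tuned-up approach, with the main change being a join to the trace metadata views so the magic numbers resolve to readable names:

DECLARE @TraceFile VARCHAR(500);

-- Derive the base file of the default trace in one shot
SELECT @TraceFile = REVERSE(SUBSTRING(REVERSE(t.path),
                      CHARINDEX('\', REVERSE(t.path)), 500)) + 'log.trc'
FROM sys.traces t
WHERE t.is_default = 1;

SELECT tt.DatabaseName,
       tt.ObjectName,
       te.name           AS EventName,
       tsv.subclass_name AS ObjectType,
       tt.LoginName,
       tt.ApplicationName,
       tt.StartTime
FROM ::fn_trace_gettable(@TraceFile, DEFAULT) tt
     INNER JOIN sys.trace_events te
        ON tt.EventClass = te.trace_event_id
     INNER JOIN sys.trace_subclass_values tsv
        ON tsv.trace_event_id = tt.EventClass
       AND tsv.trace_column_id = 28          -- 28 = ObjectType trace column
       AND tsv.subclass_value = tt.ObjectType
WHERE te.name IN ('Object:Created', 'Object:Altered', 'Object:Deleted')
  AND tt.EventSubClass = 0
  AND tt.DatabaseID <> 2
ORDER BY tt.StartTime DESC;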

Now I have something that is easier to read and maintain (in my opinion) and works well. It can also be used easily enough in an RDL, should you wish to create a report different from the canned one in SSMS. Or just continue to use it from within Management Studio and check the results without the overhead of the report presentation.

Last Execution of a Proc

SQL Server is full of good stuff. There are plenty of features to be used, plenty of applications to support it, and even plenty of metadata within SQL Server to help you better understand your data and the queries being run.

It just so happens that a couple of clients requested some of this additional metadata, each wanting something just a little different from the other. After a little thought, it became pretty clear that what they wanted was readily available within SQL Server. The first client simply wanted to know the last time a procedure had been executed. That is easy enough, provided the procedure has been executed and the execution statistics have not been evicted.

The second client also wanted information about the execution of a stored procedure, but this time they wanted the execution plan. There are a few ways to trap an execution plan: you could run a trace, run an XE session, or execute the query and grab the plan. But what if you don't already have an XE session or trace running, and the client does not want you to execute the query just to trap the plan?

Well, that is not a problem, because SQL Server stores this stuff. As long as the query and plan have not been evicted from the plan cache, you can get the metadata you desire.

Metadata to the Rescue

The metadata that we seek happens to be accessible through several dynamic management views. These are sometimes called dynamic management objects and are great for getting to know your data and environment. This time around, I am mostly interested in the following dynamic management objects: sys.dm_exec_query_stats, sys.dm_exec_cached_plans, sys.dm_exec_sql_text, and sys.dm_exec_query_plan. I am using these objects because I want to trap execution stats along with the query SQL statement, as well as the cached plan for that query.

So this is what I came up with to aid each of the clients in getting the data they sought.
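
Something along these lines does the trick; MyStoredProc is a placeholder name, so swap in whichever procedure you are chasing:

SELECT DB_NAME(st.dbid)                  AS DatabaseName,
       OBJECT_NAME(st.objectid, st.dbid) AS ProcName,
       qs.last_execution_time,
       qs.execution_count,
       qp.query_plan                     -- click through for the graphical plan
FROM sys.dm_exec_query_stats qs
     INNER JOIN sys.dm_exec_cached_plans cp
        ON qs.plan_handle = cp.plan_handle
     CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
     CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE cp.objtype = 'Proc'
  AND OBJECT_NAME(st.objectid, st.dbid) = 'MyStoredProc'  -- placeholder name
ORDER BY qs.last_execution_time DESC;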

Pretty simple and straightforward. Now I have this quick query to find the last time a proc was run, as well as a means to trap the execution plan for that query. If I run it, I see something like the following.

[Image: query results showing last execution time and the cached plan]

I hope this is useful to you and hope you enjoy.

Security as a Fleeting Thought

Published on: February 10, 2015

Today we have another installment in what is known as TSQL Tuesday. This month we have an invitation and topic given to us by the infamous Kenneth Fisher (blog | twitter).

Today, the invitation is for us to share our stories on how we like to manage security. Or at least that is the request made by Kenneth. I am going to take a bit of a twist on that request: instead of sharing how I like to manage security, I am going to share some interesting stories on how I have seen security managed.

Let’s just call this a short series on various case studies in how to manage your security in a very peculiar way.  Or as the blog title suggests, how to manage your security as an afterthought.

Case Study #1

We have all dealt with the vendor that insists the user account used for their database and application be one of two things: either sa itself, or a member of the sysadmin fixed server role. The ensuing discussion with those vendors is always a gem. They insist the application will break, you as the diligent DBA prove otherwise, and then the senior manager sponsoring the application comes around with a mandate that you must provide the access the vendor is requesting.

Those are particularly fun times.  Sometimes, there is a mutual agreement in the middle on what security can be used and sometimes the DBA just loses.

But what about when it is not a vendor application that mandates such relaxed security for their application and database?  What if it happens to be the development group?  What if it happens to be a developer driven shop and you are the consultant coming in to help get things in order?

I have had the distinct pleasure of working in all of those scenarios. My favorite was a client that hosted ~700 clients, each with their own database. There were several thousand connections coming into the server, and every single connection was coming in as 'sa'. Yes, that is correct: there were no user logins other than the domain admins group on the server, which was also added to the sysadmin fixed server role. That is always a fun discussion to start and finish, watching the color drain from the client's eyes as they realize the severity of the problem.

Please do not attempt this stunt at home.

Case Study #2

In a similar vein, another one that I have seen far too often is the desire to grant users dbo access within a database. While this is less heinous than granting everybody sysadmin access, it is only a tad better. Think about it this way: does Joe from finance really need to be able to create and drop tables within the accounting database? Does Marie from human resources need to be able to create or drop stored procedures in the HR database? The answer to both should be 'NO'.

In another environment, I was given the opportunity to perform a security audit. Upon looking things over, it became very clear what the security posture was: somebody had felt it necessary to add [Domain Users] to the dbo role on every database. Yes, you read that correctly. In addition, the same [Domain Users] group was added to the sysadmin fixed server role. HOLY COW!

In this particular case, they were constantly trying to figure out why permissions and objects were changing for all sorts of things within the database environment.  The answer was easy.  The fix is also easy – but not terribly easy to accept.

Please do not attempt this stunt at home.

Case Study #3

I have encountered vendor after vendor that insists they MUST have local admin and sysadmin rights on the box and instance (respectively). For many, this is a grey area because of the contracts negotiated between the client and the vendor.

For me, I have to ask why they need that level of access. Does the vendor really need to be able to back up your databases and investigate system performance on your server? Is that vendor engaged, or even needed, to troubleshoot your system as a whole? Or do they just randomly sign in and apply application updates without your knowledge, or perform other "routine" tasks unknown to you?

I have seen vendors change permissions and add back-door accounts far too often. They are seldom, if ever, capable of providing the level of support necessary when you are stuck with deadlocks firing by the second or blocking chains that tie up the entire server. In addition, they are generally unavailable for immediate support when a production-halting issue arises in their application, or at least not for a few hours.

This is specifically in regard to application vendors. They are not your sysadmin and they are not your DBA. If they must have RDP access or access to the database, put it under tight control. Disable the account until they request access. Then a request can be made and a note documented about why the access is needed. The account can then be enabled, monitored, and disabled again after a specified amount of time.
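
For a SQL login, that toggle can be as simple as the following (VendorSupport is a hypothetical login name):

-- Vendor login stays disabled by default
ALTER LOGIN [VendorSupport] DISABLE;

-- When a documented request comes in, open the window...
ALTER LOGIN [VendorSupport] ENABLE;

-- ...and close it again once the agreed-upon time expires
ALTER LOGIN [VendorSupport] DISABLE;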

Please do not attempt this stunt at home.

This changes when the vendor happens to be providing you IT functionality and is not specifically tied to an application. Those relationships are a bit different and do require a little more trust in the person acting on your behalf as your IT staff.

Conclusion

I have shared three very dangerous stunts that are sometimes portrayed as being performed by professionals. Do not try them in your environment or at home. It is dangerous to treat security with so little concern. Security is not a stunt, and it should be treated with a little more care and attention.

If you find yourself in any of these situations, an audit is your friend. Create an audit process within SQL Server, or on the local server, to track changes and accesses. Find out what is going on, and be prepared to act while you build your case and a plan for implementing tighter security.
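
As one possible starting point, a server audit along these lines can capture the permission and principal changes described in the case studies above; the names and file path here are hypothetical, so adjust the action groups to match your concerns:

USE master;
GO

CREATE SERVER AUDIT [SecurityWatch]
TO FILE ( FILEPATH = N'C:\Database\Audit\' )
WITH ( ON_FAILURE = CONTINUE );
GO

CREATE SERVER AUDIT SPECIFICATION [SecurityWatchSpec]
FOR SERVER AUDIT [SecurityWatch]
    ADD ( SERVER_ROLE_MEMBER_CHANGE_GROUP ),   -- who got added to sysadmin?
    ADD ( DATABASE_ROLE_MEMBER_CHANGE_GROUP ), -- who got added to dbo?
    ADD ( SERVER_PRINCIPAL_CHANGE_GROUP )      -- logins created/altered/dropped
WITH ( STATE = ON );
GO

ALTER SERVER AUDIT [SecurityWatch] WITH ( STATE = ON );
GO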

Trapping Online Index Operations

Categories: News, Professional, SSC
Published on: January 29, 2015

Recently I wrote an article about Capturing Online Index Operations. In that article, I discussed a problem I had encountered. Well, there were multiple problems. One was an issue with a vendor app that had some hidden module performing online index defrags, which was corrupting a couple of indexes every night.

The second issue was not vendor related; it was the pain I was experiencing in getting the XEvent session I was creating to work properly. The problem, as I had concluded, was that it would not write to an asynchronous file target. As it turns out, I finally got it working.

Today, I just want to share that solution. Storing the results in a file target offers a few benefits (less memory consumed, and the data is persisted) that make it far more suitable for this problem than the ring buffer.
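
Here is the gist of the session. The event and target names are real; the session name is illustrative, and the C:\Database\XE path is the one this post depends on (more on that caution below):

-- SQL 2012+ syntax; on 2008 the file target was package0.asynchronous_file_target
CREATE EVENT SESSION [OnlineIndexOps] ON SERVER
ADD EVENT sqlserver.progress_report_online_index_operation
(
    ACTION ( sqlserver.database_name,
             sqlserver.session_id,
             sqlserver.sql_text )
)
ADD TARGET package0.event_file
(
    SET filename = N'C:\Database\XE\OnlineIndexOps.xel'
),
ADD TARGET package0.ring_buffer
WITH ( MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON );
GO

ALTER EVENT SESSION [OnlineIndexOps] ON SERVER STATE = START;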

For more background on the full issue and a demonstration on the output of the data from this session, please visit the original post on SQL Solutions Group.

And for the query to parse the data that is stored, you can use something like this next query.
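
Something like the following will shred the file target output into a readable result set. The path must match the session definition, and the payload columns shown are the ones I found most useful:

-- Pull the raw events from the file target
SELECT CAST(fx.event_data AS XML) AS event_data
INTO #IndexOpEvents
FROM sys.fn_xe_file_target_read_file(
         N'C:\Database\XE\OnlineIndexOps*.xel', NULL, NULL, NULL) fx;

-- Shred the XML into columns
SELECT x.event_data.value('(event/@timestamp)[1]', 'DATETIME2')                        AS event_time,
       x.event_data.value('(event/data[@name="database_id"]/value)[1]', 'INT')         AS database_id,
       x.event_data.value('(event/data[@name="object_id"]/value)[1]', 'INT')           AS object_id,
       x.event_data.value('(event/data[@name="index_id"]/value)[1]', 'INT')            AS index_id,
       x.event_data.value('(event/action[@name="sql_text"]/value)[1]', 'VARCHAR(MAX)') AS sql_text
FROM #IndexOpEvents x
ORDER BY event_time;

DROP TABLE #IndexOpEvents;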

I hope you find this useful. But before you proceed, I do have a caution to throw out there: this session depends on the file system directory C:\Database\XE existing. If it does not, you will get a nasty error message. Despite the error message, the session will still be created. And since the session is created, when you try to run the next query you will get an additional error, because the file does not exist (since the path does not exist).

Why does the session still get created despite an error, you ask? The answer is the ring_buffer target that is also specified. I left it in there despite not needing it; you can safely remove the ring_buffer target from this XEvent session. If you remove that target and do not have the directory previously noted, then the session will not be created once the error is reached.

Minor ailments and Healthy SQL

Published on: January 13, 2015

One of the things that DBAs love to do is keep their servers running and healthy. A healthy server, after all, is your ticket to a stress-free day and a full night's sleep. Granted, this is not a guarantee, but it sure helps make life easier.

We are always looking for the big ticket items to keep the servers tuned and purring.  But from time to time, like with humans, it’s not Ebola that takes us down but the little sniffles and minor aches and pains that keep us from doing our best.

This month as a part of the TSQL Tuesday party, Robert Pearl (blog | twitter) has asked us to write about Healthy SQL and the things that make SQL go yum.

Nagging Cough

Like that tickle in the back of the throat that makes you cough now and again, there is an occasional tickle that can bug SQL Server and cause some pain here and there. I have written about this nagging cough in the past.

The problem with this ailment is that, since it is not always a cold, or always present, it is oftentimes missed and frequently hard to diagnose. I talked about this previously as one of those things that should be checked from time to time. This is that nagging synonym* cough. You can read about it here.

That’s great, we can take a little cough syrup, fix some synonyms and feel a lot better about SQL Server in the morning.

Aching Joints

Ever have one of those trick knees that decides to give out on you out of the blue? You might be walking up the stairs (or down them) when suddenly the knee is gone and you end up flat on your face. The knee can be great most of the time; it's just that once in a while it decides to give a little twinge of pain, fold up, and drop you to the floor.

SQL Server has a similar problem with this next one. On a frequent basis, I come across a bad knee within SQL Server in the form of a linked server. Sure, I can hear you saying that linked servers are always bad. Fair enough! I have seen linked servers work wonderfully, but most of the time they cause pain.

The type of linked server pain I would like to share today is more of an edge case (like standing on the edge of the stairs with your knee about to give way).

As a good DBA, you may want to restore your databases to a test environment on a routine basis to ensure the backups worked and that you have a functioning recovery point for your databases in the event of a disaster. I was working on just that sort of thing for a client recently when I ran into this beautifully pain-filled knee.

This customer had not one, not two, not even three instances of this problem. They had a glorious 492 instances of it. The vendor for this client had decided the best thing to do would be to replicate the production database to a separate database to offload query load. That other database happens to be on the same server (so no offloading actually occurs). While that is not the most intelligent thing to do, it is not the end of this ailment.

In addition to the replication, there were 492 views that utilized linked servers to UNION ALL data between the two databases.  The data in each table between the two databases (in this case) is the same.  So we have a linked server to UNION ALL this data between two databases on the same server that is replicated.  Wowza!

Now, due to this proliferation of linked servers in views that reach the data the long way around, restoring this database to a test environment leaves a lot of cleanup work to be done. After all, the restore of the database is only one piece of the healthy backup puzzle: you would also want to test the data and the application against it.

Gratefully, this kind of cleanup can be made easy with a simple search and replace when querying sys.sql_modules to find any views or stored procedures that need to be updated to work in the test environment. Here is an example of such a script to help fix that problem.
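
A sketch of such a script follows; PRODSRV and TESTSRV are stand-ins for the actual linked server names in your environment:

-- Generate ALTER statements with the linked server name swapped out
SELECT OBJECT_SCHEMA_NAME(sm.object_id) AS SchemaName,
       OBJECT_NAME(sm.object_id)        AS ObjectName,
       REPLACE(
           REPLACE(
               REPLACE(sm.definition, 'CREATE VIEW', 'ALTER VIEW'),
               'CREATE PROCEDURE', 'ALTER PROCEDURE'),
           '[PRODSRV]', '[TESTSRV]')
         + CHAR(13) + CHAR(10) + 'GO'   AS ModCode
FROM sys.sql_modules sm
WHERE sm.definition LIKE '%PRODSRV%';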

Take the results from a query such as that, and I can either change the views en masse or copy all of the results from the ModCode column and paste them into a new window. From there, using a regex (to replace all GO statements with a GO \n, as shown in the pic) or something like SQL Prompt to prettify the code makes it easy to turn into something more useful.

[Image: regex replacement of GO statements in the query results]

Of course, this only addresses the issue with linked servers in the views. The same problem exists for stored procedures. It is up to the DBA to know which ones need to be changed and which should not be. Just understand that linked servers present yet another nagging symptom that keeps your server from being healthy, and worse, they help keep you from being healthy (remember the lack of sleep they cause).

If you are interested, I also have this article to help you find those pesky linked servers before you start diving down the restore rabbit hole.  The article will help evaluate the scope of linked server use and has a query to help identify linked servers.
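
If you just want a quick inventory before diving into that article, something as simple as this will get you started (a minimal version, not the article's full query):

-- Quick inventory of linked servers on the instance
SELECT s.name AS LinkedServer,
       s.product,
       s.provider,
       s.data_source,
       s.modify_date
FROM sys.servers s
WHERE s.is_linked = 1
ORDER BY s.name;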

*Funny afterthought is that both of these niggles that can help decrease health of your SQL Server have ties back to Linked Servers.  If you read the links, you will see what I mean.
