Easy Permissions Audit

Categories: News, Professional, SSC
Comments: 2 Comments
Published on: January 21, 2019

Something I have written about more than a handful of times is the need to audit. When people think about audits, the first thing that comes to mind is most likely the IRS and taxes. But far more than taxes can be audited. Despite that tendency to think taxes first when somebody says “audit”, I am not writing about taxes. I typically write about the different topics within SQL Server that can be audited. Has Johnny performed a logon to the server? When was the last time the permissions to the database changed? Did somebody change an object? When was the last time stats were updated? How about auditing the success and failure of your backups (or all agent jobs for that matter)? Those are the topics I typically write about. Today, I am going to share a simple method to help perform an easy permissions audit – on a manual basis.

Easy Permissions Audit

As the article title denotes, today I will be discussing a simple way to get quick permissions for various principals. If you are looking for a more comprehensive and human friendly report version, I recommend reading any of my many other articles on the topic, such as these – here or here. In the second of those links there is a clue as to the tool we will be using in this easy version of the audit. That tool is called sp_helprotect.

The stored procedure sp_helprotect is a system stored procedure from Microsoft that can help divulge permissions for various principals in a simple table result set for you. Bearing in mind that I am keeping this to a simple audit, the examples will be simplistic in nature. Without further ado, here is the easy audit for your permissions.

sp_helprotect

This stored procedure has been around for many versions of SQL Server (since well before SQL Server 2008) and comes with a few parameters to help narrow the results down to a specific principal and even to any object to which that principal may have been granted permissions. Here are those parameters for quick review:

@name = Filters the request down to a specific object, or to a statement that can be executed against that object (e.g. ALTER, CREATE, DROP).

@username = The name of the principal for which permissions are returned.

@grantorname = The name of the principal that granted permissions.

@permissionarea = The group of grant-able permissions to return. There are two types of groups: object and statement. The default is to return both groups.

The easiest way to use sp_helprotect is as follows:
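In its simplest form, you just execute the procedure with no parameters from the context of the database you want to inspect (the database name here is my choice; use whichever database you wish to audit):

USE AdventureWorks2014;  -- any database you wish to audit
GO
EXEC sp_helprotect;
GO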

Do you see how easy that is? This returns the following results for me.

Note from the results that I see both roles and users for various objects. This is due to how the procedure was executed – with no parameters. With no parameters, the default behavior is to return as much information as possible for all objects and principals within the database.

What if I only want the results for a principal named “Gargouille”? I can do that in the following way.
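Something along these lines will do it:

EXEC sp_helprotect @username = 'Gargouille';
GO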

Now, I will receive the following results:

Recap

There are many ways to produce an audit. Today, I have shown how one can, when in a hurry, produce a permissions audit that yields a simple result set for database permissions. I want to underscore that this was at the database level and not the server level. While this is an adequate means for a quick peek into some of the objects and granted permissions, I do recommend using one of the methods introduced in the other articles for ongoing, complex audits and for results that are somewhat prettier and more human friendly to read.

For more articles on audits and auditing check here and here.

Audit SQL Agent Jobs

One probably seldom thinks of the SQL Agent jobs scheduled on the SQL Server instance – unless they fail. What if the job failed because something was changed in the job? Maybe you knew about the change, maybe you didn’t.

Once upon a time, I was in the position of trying to figure out why a job failed. After a bunch of digging and troubleshooting, it was discovered that the job had changed, but nobody knew when or why. Because of that, I was asked to implement a low-cost audit solution that would at least answer the when and who of the change.

Tracking who made a change to an agent job should be a task in each database professional's checklist / toolbox. Being caught off guard by a change to a system under your purview isn't necessarily a fun conversation – nor is it pleasant to be the one to find that somebody changed your jobs without notice – two weeks after the fact! Usually, that means there is little to no information about the change, and you find yourself getting frustrated.

To the Rescue

When trying to come up with a low to no-cost solution to provide an audit, Extended Events (XE) is quite often very handy. XE is not the answer to everything, but it does come in handy very often. This is one of those cases where an out of the box solution from XE is pretty handy. Let’s take a look at how a session might be constructed to help track agent job changes.
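Here is a sketch of such a session (the session name, file path, and exact predicate values are my own choices, so adjust them for your environment):

CREATE EVENT SESSION [AuditAgentJobChanges] ON SERVER
ADD EVENT sqlserver.degree_of_parallelism (
    ACTION (sqlserver.client_app_name, sqlserver.server_principal_name, sqlserver.sql_text)
    -- exclude SELECT statements; map_key 1 = SELECT in the statement_type map
    WHERE sqlserver.database_name = N'msdb'
      AND statement_type <> 1
),
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.client_app_name, sqlserver.server_principal_name, sqlserver.sql_text)
    WHERE sqlserver.database_name = N'msdb'
),
ADD EVENT sqlserver.sp_statement_completed (
    ACTION (sqlserver.client_app_name, sqlserver.server_principal_name, sqlserver.sql_text)
    WHERE sqlserver.database_name = N'msdb'
)
ADD TARGET package0.event_file (SET filename = N'C:\Database\XE\AuditAgentJobChanges.xel')
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [AuditAgentJobChanges] ON SERVER STATE = START;
GO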

With this session, I am using degree_of_parallelism as a sort of catch-all in the event that queries that cause a change are not trapped by the other two events (sql_statement_completed and sp_statement_completed). With the degree_of_parallelism event, notice I have a filter to exclude all “Select” statement types. This will trim some of the noise and help track the changes faster.

Looking at data captured by this session, I can expect to see results like the following.

And the degree_of_parallelism event will catch data such as this.

In this example, the deletion of a job was captured by the degree_of_parallelism event. In addition to catching the various events that fire as jobs are being changed and accessed, one will also be able to get a closer look at how SQL Agent goes about its routine.

The Wrap

Extended Events can prove helpful for many additional tasks that may not be thought of on an every day basis. With a little more thought, we can often find a cool solution via Extended Events to help us be better data professionals. In this article, we see one example of that put to use by using XE to audit Agent Job changes.

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in seeing the power of XE over Profiler? Check this one out!

For another interesting article about SQL Agent, check this one out!

Quick Permissions Audit

Whether it is for a client, an audit, or just good housekeeping, DBAs will often need to figure out who has access to what.  In addition, they may need to know by what means people have access within their databases.

When that need arises, it is frequently adequate to just perform a quick audit and then dive deeper if the quick audit shows anything questionable. Today, I am sharing a means to perform this quick audit. I will start with the logins (server principals), peek at the users (database principals), move to the server roles, and wrap it up with the database roles. What I don’t do is dive into the Windows group memberships.  So if a Windows group exists within SQL Server as a login, I do not try to figure out who is a member of that group.  That can be obtained, but is a separate discussion.

So, in the vein of a quick permissions audit, I will quickly get to the script to help perform this permissions audit.
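What follows is a condensed sketch of that script (the database and login lists are placeholders, and it assumes Jeff Moden's DelimitedSplit8K function exists in the current database):

SET NOCOUNT ON;
DECLARE @DatabaseList VARCHAR(8000) = 'AdventureWorks2014,msdb';  -- databases to inspect
DECLARE @LoginList VARCHAR(8000) = 'Gargouille,SomeOtherLogin';   -- logins to inspect

IF OBJECT_ID('tempdb..#Databases') IS NOT NULL DROP TABLE #Databases;
IF OBJECT_ID('tempdb..#Logins') IS NOT NULL DROP TABLE #Logins;

SELECT Item AS DatabaseName INTO #Databases FROM dbo.DelimitedSplit8K(@DatabaseList, ',');
SELECT Item AS LoginName INTO #Logins FROM dbo.DelimitedSplit8K(@LoginList, ',');

/* server level: each login plus a delimited string of its server roles */
SELECT sp.name AS LoginName
    , STUFF((SELECT ', ' + r.name
             FROM sys.server_role_members srm
             JOIN sys.server_principals r ON srm.role_principal_id = r.principal_id
             WHERE srm.member_principal_id = sp.principal_id
             FOR XML PATH('')), 1, 2, '') AS ServerRoles
FROM sys.server_principals sp
JOIN #Logins l ON sp.name = l.LoginName;

/* database level: a cursor builds dynamic SQL per database to map users to database roles */
DECLARE @db sysname, @sql NVARCHAR(MAX);
DECLARE dbcur CURSOR FOR SELECT DatabaseName FROM #Databases;
OPEN dbcur;
FETCH NEXT FROM dbcur INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'USE ' + QUOTENAME(@db) + N';
SELECT DB_NAME() AS DatabaseName, dp.name AS UserName
    , STUFF((SELECT '', '' + r.name
             FROM sys.database_role_members drm
             JOIN sys.database_principals r ON drm.role_principal_id = r.principal_id
             WHERE drm.member_principal_id = dp.principal_id
             FOR XML PATH('''')), 1, 2, '''') AS DatabaseRoles
FROM sys.database_principals dp
WHERE dp.name IN (SELECT LoginName FROM #Logins);';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM dbcur INTO @db;
END
CLOSE dbcur;
DEALLOCATE dbcur;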

As you look through this code, you will notice that I have done a few interesting things.  Here is a quick recap:

  • Built a temp table based on a delimited list of databases input into a variable
  • Built a temp table based on a delimited list of logins input into a variable
  • Used a string splitter known as DelimitedSplit8K by Jeff Moden (google it) or LMGTFY
  • Used FOR XML to build a delimited string of Server Role permissions
  • Used FOR XML to build a delimited string of Database Role permissions
  • Used a CURSOR to build a dynamic SQL string to execute for each database

Holy cow! That is a lot of stuff for such a simple quick audit.  A cursor? Seriously? Well yeah! It is OK in this type of scenario to use a cursor.  This is a well-performing cursor for an administrative function. In addition, I chose to use sub-queries to build my delimited string of permissions so I could keep those things together in a single record per login/role.

If I run that query on my instance of SQL Server, I would end up with something that looks like the following.

Now, obviously, unless you had these same databases, logins, and even the same permissions assigned to these logins, your results will be different. So, make sure you go and change those inputs for logins and databases to be queried.

For more articles on audits and auditing check here and here.

Auditing when Database Files Change

As a Database Administrator, something that should be part of your database audit is monitoring the growth of files. That means tracking when the log file grows and tracking when the data file(s) grow(s). Many will argue that there is nothing worse than not knowing when a database-related file changed in size – except maybe a file that rapidly and constantly grows until the disk is filled.

Beyond just trapping when these events occur, the DBA usually also wants to know what caused the file to grow or shrink. This has been a problem that has plagued many a DBA over time. Sure, there are plenty of methods to assist in the capture of such things. I even wrote about that quite some time ago here. In that article, I gave the tools to help track when a transaction log grew. Today, I am looking to share a different method to trap the executing SQL that causes a file to grow. And unlike that previous method, this method will also help to track when a file decreases in size.

I do want to point out that there is an alternate method to capture the shrink events if you wish to read more. I wrote previously about using the default trace to capture when a shrink occurs within the instance. This can be captured thanks to the default trace. You can read all about that over here. Despite the availability of the information within the default trace, I think the method I share here will be lightweight enough that it won’t hurt to have it running—at least occasionally.

Enter the Database Audit

If you have become accustomed to reading my articles, you will probably surmise that there is some degree of setup that needs to be done before I get to the meat of the matter. That holds true today. It just wouldn’t be right to share a method to do something without a harness to show how to get there. So, let’s get the setup out of the way.

Since we will be wanting to create something that will track when a file changes in size, we will need to have a baseline of the size of the files for the newly created Sandbox2 database. Let’s capture that with the following query. A small note on this query: I am including the tempdb-related file sizes. You don’t need to include those, but they may be of interest for future testing.
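Here is a sketch of that setup and baseline (Sandbox2 is a throwaway database created just for this demo):

USE master;
GO
IF DB_ID('Sandbox2') IS NULL
    CREATE DATABASE Sandbox2;
GO
-- baseline of file sizes; size is stored in 8KB pages, hence the division by 128
SELECT DB_NAME(mf.database_id) AS DBName
    , mf.name AS LogicalName
    , mf.type_desc
    , mf.size / 128.0 AS SizeMB
FROM sys.master_files mf
WHERE mf.database_id IN (DB_ID('Sandbox2'), DB_ID('tempdb'));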

That is all the setup that is needed at this point. We are now ready to implement a tool that can help us monitor these changes. This tool is lightweight and pretty efficient at capturing the info that would be essential for the exceptional DBA to keep on top of the changes occurring in his/her environment. This tool comes to us in the form of an extended event. Let’s do a little checking into what is available before we dive into the actual XE session itself.

First, when you are curious whether there is an extended event that may be sufficient for your needs, you should check the event store to see what is there. We can do that via a query similar to the following.
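A query along these lines will do the trick (the search pattern is my guess at a sensible filter):

SELECT xp.name AS package_name
    , xo.name AS event_name
    , xo.description
FROM sys.dm_xe_objects xo
JOIN sys.dm_xe_packages xp ON xo.package_guid = xp.guid
WHERE xo.object_type = 'event'
  AND xo.name LIKE '%file_size%';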

This query may return something along the lines of what we see in the following image:

From that list, we can immediately see that we have some options to help us try and capture what is happening with our databases when our back is turned. From here, we can query to find what kind of data is captured with each of these events. To do that, we simply need to run a query such as the following:
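To peek at the payload of those events, something like this will suffice:

SELECT oc.object_name AS event_name
    , oc.name AS column_name
    , oc.type_name
    , oc.description
FROM sys.dm_xe_object_columns oc
WHERE oc.object_name LIKE '%file_size%'
  AND oc.column_type = 'data';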

Looking at the available data points in each of these sessions can prove promising. It is indeed promising enough that we can now proceed with the creation of an XE session.
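Here is a sketch of the session I ended up with (the session name and file path are mine; these are the 2008-era event names, and on SQL Server 2012 and later the data file event is named database_file_size_change, while on 2008 the file target goes by package0.asynchronous_file_target rather than package0.event_file):

CREATE EVENT SESSION [FileSizeChanges] ON SERVER
ADD EVENT sqlserver.databases_data_file_size_changed (
    ACTION (sqlserver.session_id, sqlserver.server_principal_name, sqlserver.sql_text)
),
ADD EVENT sqlserver.databases_log_file_size_changed (
    ACTION (sqlserver.session_id, sqlserver.server_principal_name, sqlserver.sql_text)
)
ADD TARGET package0.event_file (SET filename = N'C:\Database\XE\FileSizeChanges.xel'),
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [FileSizeChanges] ON SERVER STATE = START;
GO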

I decided to just focus on the two events that included file_size_changed in the name of the event. You can also see that I chose to send this to two targets. You would be fine to just send this to an asynchronous file target. Just bear in mind that you do need to specify a path for the file_target that exists, or an error will be thrown. I have also specified that this session will restart on server startup, and then I turned the session to the started state.

Does it work?

That is a really good question. Now that we have a test database and a session to trap events, all we need is to figure out how to test it. We would want to do a few different things to test—like grow and shrink a file in some way. And that is exactly what we are going to do at this point.

First test, since we have a really small empty database, is to try and force the database to grow by inserting some data. We will do that with this next query.
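A sketch of such a load (the table name and row count are arbitrary):

USE Sandbox2;
GO
CREATE TABLE dbo.FillerTable (
      FillerID INT IDENTITY(1, 1) NOT NULL
    , Filler CHAR(8000) NOT NULL DEFAULT 'X'  -- one row consumes roughly one 8KB page
);
GO
INSERT INTO dbo.FillerTable DEFAULT VALUES;
GO 2500  -- SSMS batch repeat; ~20MB of inserts should force several growth events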

I didn’t post the numbers earlier for my files for the Sandbox2 database, but they were small. I had a 2MB data file and a 1MB log file. Now, when I look at the results from that last query that included the file size information, I will see a database that has grown a fair amount.

That should be enough of a change to have triggered something in our extended event session. Let’s take a look at the session. To do that, I am going to pull out a query to help parse the XML and see what has happened. Here is that query:
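A sketch of that XML-parsing query (the file path matches the session above, and the size_change_kb column name is my assumption about the event payload):

SELECT x.event_data.value('(event/@name)[1]', 'VARCHAR(60)') AS event_name
    , x.event_data.value('(event/@timestamp)[1]', 'DATETIME2') AS event_time
    , x.event_data.value('(event/data[@name="database_id"]/value)[1]', 'INT') AS database_id
    , x.event_data.value('(event/data[@name="size_change_kb"]/value)[1]', 'BIGINT') AS size_change_kb
    , x.event_data.value('(event/action[@name="sql_text"]/value)[1]', 'NVARCHAR(MAX)') AS sql_text
FROM (
    SELECT CAST(f.event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file('C:\Database\XE\FileSizeChanges*.xel', NULL, NULL, NULL) f
) x
ORDER BY event_time;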

Despite having two targets in my session, we will only cover the query that helps parse the data from the asynchronous file target. When I run that query against the files that exist for this session, I get a result set of 90 records on my system. That probably seems like a ton of results for such a small increase in the database. As it would happen, I left the growth settings at the default growth increments (don’t do that in a production system), and this means I get a ton of growth activities at very small numbers (1MB for the data file and 10% for the log file). Here is a sample of the output:

You can see how this might be helpful for when a process runs hog wild in the environment. What about going in the other direction though? What if we need to know about when the database is shrunk or when a file is shrunk? Well, let’s do that too. Let’s try the following query.
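Something along these lines (the logical file names assume the defaults for a database named Sandbox2):

USE Sandbox2;
GO
TRUNCATE TABLE dbo.FillerTable;
GO
DBCC SHRINKFILE (Sandbox2, 2);      -- shrink the data file back toward 2MB
DBCC SHRINKFILE (Sandbox2_log, 1);  -- shrink the log file back toward 1MB
GO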

Again, I check for a baseline on the files to see if the file sizes changed. In this case, you can run that final query and compare or just trust me on it. Having shrunk both files in the Sandbox2 database, let’s check the XE session data again:

Check that out! We have captured the shrink events too! There are multiple shrink events in this case only because I ran the shrink statements multiple times. This is excellent news for everybody that is trying to keep an eye on these database size changes. You can see in the XE session that I applied the sql_text() action. I have done this so I will be able to see what query caused the growth or shrink event to occur. Even better news is that this event session is perfect for those of you still on SQL 2008.

If you enjoyed this article, check out some of the follow-up articles: Data Growth Audits or Mysterious Growth.

Audit Database Offline Events

Categories: News, Professional, Scripts, SQLBP, SSC
Comments: No Comments
Published on: November 13, 2018

Knowing when an event occurred within the database environment is a very important thing. Being able to act quickly when certain events occur is equally important. Sometimes, we may not find out about an event for a few days or weeks, and then we are asked to figure out the who, when, why, and how of the event. Hopefully, there was enough foresight in place to plan for such a request, because answering it requires traveling back in time.

Recently while networking at a large database-related conference, I had the chance to discuss some of this with a semi-desperate DBA. In his environment, apparently a database can be taken offline at random times without the involvement of the DBA team. He was looking for a solution to prevent it from happening. For me, the simplest solution is to revoke permissions from the person doing it. If you can’t revoke the permissions, then at least capture the pertinent information to document who is taking the databases offline and when.

Auditing and Facts

I am a huge fan of auditing the database environment to gather facts. There is comfort to be gained in knowing that if a change happens to a database or server under your purview, you can report some facts back up the chain when a post-mortem is required. I recently shared another example of such an auditing scenario – here.

There is a technique in that article that may be of interest for you if you are restricted to implementing this kind of audit. I won’t go into detail about the poor man audit techniques in this article, so it is worth reading that previous article.

When looking to find data about a database being taken offline, one could capture the data from the default trace or from the SQL Server Error Log. While this article will show how to capture this data from both of those sources, it is important to understand that the data just might not be available in either location depending on settings and activity of the server combined with how old the offline event was.

Let’s first take a look at the default trace. That one should be easy enough.
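A sketch of that query (the Object:Altered event is the ALTER event referenced below):

DECLARE @path NVARCHAR(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT gt.StartTime
    , gt.SPID
    , gt.LoginName
    , gt.DatabaseName
    , te.name AS EventName
FROM sys.fn_trace_gettable(@path, DEFAULT) gt
JOIN sys.trace_events te ON gt.EventClass = te.trace_event_id
WHERE te.name = N'Object:Altered'
ORDER BY gt.StartTime DESC;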

And a snippet of some possible output looks like the following:

If you look closely at the output, you will probably notice that I don’t get a lot of information. I find out that there was an ALTER event to a database but I do not know what the event was nor do I see the text from the SQL statement that caused the change. So, at best, I get a partial view of the picture by looking at just the default trace.

With that in mind, let’s look at the error log and see if that provides better information.
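Here is a sketch of that scan (xp_enumerrorlogs lists the available log files, and xp_readerrorlog filters each one for the search term):

DECLARE @errlogs TABLE (ArchiveNo INT, LogDate DATETIME, LogSizeBytes INT);
DECLARE @logres TABLE (LogDate DATETIME, ProcessInfo VARCHAR(64), LogText VARCHAR(MAX));
INSERT INTO @errlogs EXEC sys.xp_enumerrorlogs;

DECLARE @lognum INT;
DECLARE logcur CURSOR FOR SELECT ArchiveNo FROM @errlogs;
OPEN logcur;
FETCH NEXT FROM logcur INTO @lognum;
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO @logres
        EXEC sys.xp_readerrorlog @lognum, 1, N'offline';
    FETCH NEXT FROM logcur INTO @lognum;
END
CLOSE logcur;
DEALLOCATE logcur;

SELECT * FROM @logres ORDER BY LogDate DESC;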

This is pretty cool here. I am checking all of the log files available on SQL Server and scanning for a search term of “offline.”

Ok, this shows a little bit of promise. I can see that the database was indeed set to offline (or “option OFFLINE” was set to ON) with a spid and a timestamp. Unfortunately, the data shown here is far from being super useful in a post-mortem. The error log does not show who took the database offline.

What if there were a way to combine these two queries into a single result set and better correlate the results into something highly useful? Something like that seems possible, given we have a spid and timestamp in each result set.

Let’s try it and see.
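Here is a sketch that marries the two sources (same assumptions as the two queries above; the timestamps are trimmed to the second, so a little tolerance may be needed if the two sources disagree slightly):

DECLARE @errlogs TABLE (ArchiveNo INT, LogDate DATETIME, LogSizeBytes INT);
DECLARE @logres TABLE (LogDate DATETIME, ProcessInfo VARCHAR(64), LogText VARCHAR(MAX));
INSERT INTO @errlogs EXEC sys.xp_enumerrorlogs;

DECLARE @lognum INT;
DECLARE logcur CURSOR FOR SELECT ArchiveNo FROM @errlogs;
OPEN logcur;
FETCH NEXT FROM logcur INTO @lognum;
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO @logres EXEC sys.xp_readerrorlog @lognum, 1, N'offline';
    FETCH NEXT FROM logcur INTO @lognum;
END
CLOSE logcur;
DEALLOCATE logcur;

DECLARE @path NVARCHAR(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT gt.DatabaseName
    , gt.LoginName
    , gt.StartTime
    , lr.LogText
FROM sys.fn_trace_gettable(@path, DEFAULT) gt
JOIN sys.trace_events te ON gt.EventClass = te.trace_event_id
JOIN @logres lr
    ON TRY_CAST(REPLACE(lr.ProcessInfo, 'spid', '') AS INT) = gt.SPID
   AND CONVERT(VARCHAR(19), lr.LogDate, 120) = CONVERT(VARCHAR(19), gt.StartTime, 120)
WHERE te.name = N'Object:Altered';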

If I use the timestamp from each result and the spid from each result set, I can surely join the two results together. As you can see, that is exactly what I did. I had to format the spid and timestamp a little bit – but that is ok. With that little bit of magic, I get a result set similar to the following now.

Now, I can easily see who put the database into offline mode. I can also tell you what the statement was that took it offline, and what the timestamp was when the database was put offline. This is usable data now when it comes time for a post-mortem or fact-finding request.

The Wrap

Is this the most elegant method to trap this kind of data? Absolutely not, but it is usable. When it comes to something more sleek and modern, I would recommend a better tool such as Extended Events. I will show how to do this same task with greater ease and reliability through the use of XE in my next article.

Oh, and if you are interested in any of my other articles about Auditing or Extended Events, I recommend you start here (for Auditing) or here (for XE).

Failed to Create the Audit File

Categories: News, Professional, SSC
Comments: No Comments
Published on: December 31, 2017

One day while checking things for clients, I happened across a fun little error message – “SQL Server Audit failed to create the audit file”. It just so happens that the audit had been working and then suddenly stopped and started flooding the error logs with this message.

Why would it suddenly stop working? Well, it says in the error that the disk might be full or that there may be a permissions issue. So, at least there are some possibilities provided by the message. Granted – neither of these options is very settling for a DBA.

FAILED

While reading the first occurrence of this error message, you notice that the log is bombarded with 20 more messages of the same nature. Before you can even scroll to catch up with the error messages, another 30 have appeared. It looks like the server is starting to get busier with the business users starting to run through their daily routines. You need a fix and you need it quick. You copy and paste the error to another screen for reference and then close the log to remove that distraction. Here is a copy of that error message.

SQL Server Audit failed to create the audit file ‘C:\Database\XE\DBA_Server_Audit_906B13C3-8F3F-4CFC-A391-20C5F7CAD698.sqlaudit’. Make sure that the disk is not full and that the SQL Server service account has the required permissions to create and write to the file.

Let’s try the suggestions from the error message, starting with security. The SQL Server Service account needs to have permissions to the directory where the audit is being stored. Looking in that directory, you can see that the audit was obviously successful at one point because there are audit files in the directory. In addition, the audit had only just stopped working and you are certain nothing has changed.

Regardless of that, you proceed to investigate the permissions settings on the directory. Looking at the directory permissions, you are able to fully confirm that the service account does indeed have adequate permissions to the folder.

So we can rule out the permissions having changed as a viable contender for causing this problem. As you start to investigate the next option, you begin to worry that users are being prevented from doing their jobs because of the flood of errors. For sanity's sake, you run a few quick checks to verify things look normal on the server from an activity standpoint. You also check your ticket queues and find there is nothing alarming in there.

Phew!

Still Trying

After checking the ticket queues and server activity, you bounce right back to your next checkpoint – disk space. This is an easier check than the permissions. You have no mount points, and you can verify the disk space with a quick glance in Windows Explorer. You look in Windows Explorer and can see that your C drive where the audits are being stored has 50% free space (or roughly 200GB).

Well, that is obviously not the problem either. You know the audit was working as recently as 30 minutes before you started troubleshooting, and the errors did not start until almost immediately before you started checking the problem. What could it be? After all, you have 35 audit files in the audit folder. Then, suddenly, it hits you. There are 35 files. The audit was configured for 35 files with no rollover.

In this case, the easy fix is to move a bunch of files to an archive folder. As soon as that is done, a new message will appear in the error logs:

Message
Audit: Server Audit: 65536, State changed from: TARGET_CREATION_FAILED to: STARTED

Now to go change the audit process to make it a little more robust.

When dealing with SQL Audit, max_files is an important setting. Here is what MSDN has to say about the setting.

MAX_FILES = integer

Specifies the maximum number of audit files that can be created. Does not rollover to the first file when the limit is reached. When the MAX_FILES limit is reached, any action that causes additional audit events to be generated will fail with an error.
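Given that behavior, a more robust configuration could look something like the following (the audit name here is my guess, based on the file name shown in the error message):

ALTER SERVER AUDIT [DBA_Server_Audit] WITH (STATE = OFF);
GO
ALTER SERVER AUDIT [DBA_Server_Audit]
TO FILE (FILEPATH = N'C:\Database\XE\', MAX_ROLLOVER_FILES = 35);
GO
ALTER SERVER AUDIT [DBA_Server_Audit] WITH (STATE = ON);
GO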

This article is just one of several audit related articles on this blog. You can read more about some of the different ways to audit along with different perspectives from this link.

Wrap

Auditing is a necessity but it doesn’t need to be alarming or scary. Sometimes, we can become a little bit alarmed when an error occurs. We just need to keep our cool and trust our skills and abilities to troubleshoot in the event an audit fails. This article will hopefully show some of that process and help to provide a cool demeanor.

Audit Domain Group and User Permissions

Comments: 3 Comments
Published on: December 29, 2017

No matter how simple the task or how well versed we are in performing a security audit, it seems like we can always stand to learn just a little bit more. No matter how many times we hand an audit report over to the auditor, there is always “just one” more report we have to provide.

I know it seems like I am almost always running some sort of audit report. Whether it is to audit which user may have changed something or what access a user may have overall to the system, there is always a need for more auditing.

One of the aspects of an audit that I have not yet written about is with regards to Domain Groups. It is a very common practice to grant access to SQL Server via Domain Group membership. We can easily see which groups have which access, but how often do we try to figure out who is in the group?

When you do try to figure out who is in a Domain Group, do you do like so many DBAs that I know and yell across a cubicle wall to the AD guys and ask them to look it up? Maybe you fire off an email to the AD Admins and ask them to do the grunt work for you. Nobody will shame you for doing either of those two methods. I mean, most people probably toss it over the fence in one way or another.

It is not a difficult stretch to imagine asking somebody who is an expert at a technology to see if they could get a quick answer to your question for you. That said, it is also really nice when you can service those types of questions by yourself and thus save everybody a little time and effort. Luckily, there is at least one easy way for the SQL Server professional to try and answer questions about group membership.

Microsoft has included a method for the DBA to try and capture information about AD related accounts. This method comes in the form of an extended procedure called xp_logininfo. This procedure can tell me the members of domain groups, the access level the account has, and also validate if the account is still active in AD.

Suppose you needed to determine all of the accounts that may have access to SQL Server, via direct Login or via group membership. In addition, you must find out which of the accounts are no longer valid in AD. I have a script that will do just that by first perusing the groups and then cycling through each of the User accounts produced from the group perusal.

Let’s take a look at that script.
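This sketch is simplified from the original (the error handling and output shape are my own):

SET NOCOUNT ON;
DECLARE @results TABLE (
      AcctName sysname NULL
    , AcctType VARCHAR(32) NULL
    , Privilege VARCHAR(32) NULL
    , MappedLogin sysname NULL
    , PermissionPath sysname NULL
);
DECLARE @grp sysname, @usr sysname;

/* first cursor: expand every Windows group known to SQL Server */
DECLARE grpcur CURSOR FOR
    SELECT name FROM sys.server_principals WHERE type = 'G';
OPEN grpcur;
FETCH NEXT FROM grpcur INTO @grp;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        INSERT INTO @results
            EXEC sys.xp_logininfo @acctname = @grp, @option = 'members';
    END TRY
    BEGIN CATCH
        SELECT @grp AS DeadGroup, ERROR_MESSAGE() AS Problem;  -- group no longer exists in AD
    END CATCH
    FETCH NEXT FROM grpcur INTO @grp;
END
CLOSE grpcur;
DEALLOCATE grpcur;

/* second cursor: validate each account surfaced by the group expansion */
DECLARE usrcur CURSOR FOR SELECT DISTINCT AcctName FROM @results;
OPEN usrcur;
FETCH NEXT FROM usrcur INTO @usr;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        EXEC sys.xp_logininfo @acctname = @usr;  -- errors if the account is gone from AD
    END TRY
    BEGIN CATCH
        SELECT @usr AS DeadAccount, ERROR_MESSAGE() AS Problem;
    END CATCH
    FETCH NEXT FROM usrcur INTO @usr;
END
CLOSE usrcur;
DEALLOCATE usrcur;

SELECT * FROM @results;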

You will see here that I have a couple of cursors built into this script. This is helping me cycle through first the groups and then the users. As I cycle through each user, I am able to determine the validity of the account. Not only will I know if an AD account is dead and needs to be removed from SQL Server, but I will also know if a Domain Group no longer exists.

This little script has proven to be a major time saver with some clients. Not only has it saved me some time, but it has also helped to figure out how a developer was able to perform sysadmin functions on a prod box even though his account did not have sysadmin access granted explicitly.

Audit All Select Statements

Legislation and regulation sometimes dictate that certain activities must be tracked within a database. On occasion, it will be required that queries be audited and tracked to reach compliance with the legislation or regulation. To achieve this compliance, this article will demonstrate how to use Extended Events to audit statements being executed within the database.

Over the course of a few recent articles, I have included little hints here and there alluding to this article. In fact, now would be a good time to review one of these articles in preparation for what I will be sharing today. Take a moment to refresh your memory on this article about finding the right event – here.

Backstory

I have to be a little honest here. Prior to somebody asking how they could possibly achieve a statement audit via extended events, I had not considered XE as a tool for the job. I would have relied on Audit (which is Extended Event related), or some home grown set of triggers. In this particular request, Audit was not fulfilling the requirement and custom triggers were not an option. Another option might have been the purchase of third party software, but there are times when the budget does not allow for nice, expensive, shiny software.

So, with a little prodding, I hopped into the metadata and poked around a bit to see what I could come up with to achieve this low-budget audit solution.

Low-Budget Audit

Using the handy scripts I have shown previously (and that I just linked to), I was able to explore the Extended Events metadata and find just what may work properly. The requirements in this case were that it needed to be done in XEvents and that it must capture SELECT statements. To find the events that seemed plausible for this task, I plugged the term “SELECT” into my queries as follows:
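The query looked something like this:

SELECT xp.name AS package_name
    , xo.name AS event_name
    , xo.description
FROM sys.dm_xe_objects xo
JOIN sys.dm_xe_packages xp ON xo.package_guid = xp.guid
WHERE xo.object_type = 'event'
  AND (xo.name LIKE '%select%' OR xo.description LIKE '%select%');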

From this query, there was really only one event that made any sense for my task – “degree_of_parallelism”. There was another event returned in the result set, but it made no sense to me as a possible candidate event for auditing select statements (it was ucs_transmitter_reclassify). From the results, I then took the keyword associated to degree_of_parallelism to see what else might be pertinent. I plugged that keyword “execution” into the following query:
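A sketch of that keyword lookup (the KEYWORD column of each event holds a map key that resolves through sys.dm_xe_map_values):

SELECT xo.name AS event_name
    , xo.description
FROM sys.dm_xe_objects xo
JOIN sys.dm_xe_object_columns oc
    ON xo.name = oc.object_name
   AND xo.package_guid = oc.object_package_guid
JOIN sys.dm_xe_map_values mv
    ON mv.name = oc.type_name
   AND mv.object_package_guid = oc.type_package_guid
   AND mv.map_key = CAST(oc.column_value AS INT)
WHERE xo.object_type = 'event'
  AND oc.name = 'KEYWORD'
  AND mv.map_value = 'execution';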

The results of the previous query will be quite a bit more substantial. Some of the events make sense to include in the audit and some do not. I opted not to include any of those additional events, keeping things as clean and simple as possible.

More Requirements

I now have the base events covered that I want to use for my audit. That said, my base requirements are just not extensive enough. When I audit something, I really want to know who did it, when it was done, where it originated, and what it was that they did. The degree_of_parallelism event will capture all of the select statements, but it does not meet these additional requirements.

In order to meet the additional requirements, I will attach a handful of actions to the degree_of_parallelism event. The addition of the actions will provide all of the data I want and need. Combining the event and actions together, I came up with the following session definition.
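Here is a sketch of that session (the session name, file path, and the IntelliSense app-name filter are my choices):

CREATE EVENT SESSION [AuditSelects] ON SERVER
ADD EVENT sqlserver.degree_of_parallelism (
    ACTION (
          sqlserver.client_app_name
        , sqlserver.client_hostname
        , sqlserver.database_name
        , sqlserver.server_principal_name
        , sqlserver.session_id
        , sqlserver.sql_text
    )
    WHERE sqlserver.database_name = N'AdventureWorks2014'
      AND statement_type = 1  -- 1 = SELECT (see the map query below)
      AND sqlserver.client_app_name <> N'Microsoft SQL Server Management Studio - Transact-SQL IntelliSense'
)
ADD TARGET package0.event_file (SET filename = N'C:\Database\XE\AuditSelects.xel')
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [AuditSelects] ON SERVER STATE = START;
GO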

This is a very simple session to pull together. I have added a few things in on my predicate to help limit the scope of the session to the AdventureWorks2014 database and to make sure I am not trapping events related to code completion tools. Notice here also that there is a potential chance to optimize this session. Can you find it? Hint: It may be in the predicate. Second hint: re-read this article about predicates.

Now the trick to what makes this work to audit only the select statements is right there in the predicate. I have instructed the session to ignore any statement_type that is not a value of 1. As it turns out, statement_type of 1 is a select statement. To see how these values map, here is a quick query and the correlated results.
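The lookup is a simple map query against the statement_type column of the event:

SELECT oc.name AS column_name
    , mv.map_key
    , mv.map_value
FROM sys.dm_xe_object_columns oc
JOIN sys.dm_xe_map_values mv
    ON mv.name = oc.type_name
   AND mv.object_package_guid = oc.type_package_guid
WHERE oc.object_name = 'degree_of_parallelism'
  AND oc.name = 'statement_type'
ORDER BY mv.map_key;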

Based on these values, should you want to audit a different query type, just change the predicate to use the map_key value that corresponds to the desired statement type.

Testing

Testing this session is rather simple. Here is a quick and dirty script to help test it.
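Something along these lines will do (the SELECT should land in the session, while the UPDATE should not):

USE AdventureWorks2014;
GO
SELECT TOP (10) p.BusinessEntityID
    , p.FirstName
    , p.LastName
FROM Person.Person p;
GO
UPDATE p
SET p.ModifiedDate = p.ModifiedDate
FROM Person.Person p
WHERE p.BusinessEntityID = 1;
GO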

Conclusion

Building low cost solutions is a common requirement for the data professional. When given the opportunity, try things out to see what you can build to provide the solution. In this case, I have demonstrated how Extended Events, however unlikely a candidate, can provide a working solution to help audit any select statements occurring within your database.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.

Auditing Needs Reporting

Comments: No Comments
Published on: October 13, 2015

Welcome to the second Tuesday of the month. And in the database world of SQL Server and the SQL Server community, that means it is time for TSQL2SDAY. This month the host is Sebastian Meine (blog / twitter), and the topic that he wants us to write about is: “Strategies for managing an enterprise”. Specifically, Sebastian has requested that everybody contribute articles about auditing. Auditing doesn’t have to be just “another boring topic”; rather, it can be interesting, and there is a lot to it.

For me, just like last month, this will be a real quick entry. I have been more focused on my 60 Days of Extended Events series and was looking for something that ties into the series really well without actually being covered in it. Since I have auditing scheduled for later in the series, I was hoping to find something that meets both the XE topic and the topic of Auditing.

No matter the mechanism used to capture the data to fulfill the “investigation” phase of the audit, if the data is not analyzed and reports generated, then the audit did not happen. With that in mind, I settled on a quick intro into how to get at the audit data in order to generate reports.

Reporting

An audit can cover just about any concept, phase, or action within a database. If you want to monitor and track performance and decide to store various performance metrics, that is an audit for all intents and purposes. If you are more interested in tracking the access patterns and sources of the sa login, the trapping and storing of that data would also be an audit. The data is different between the two, but the base concept boils down to the same thing: data concerning the operations or interactions within the system is being trapped and recorded somewhere.

That said, it would be an incomplete audit if all that is done is to trap the data. If the data is never reviewed, how can one be certain the requirements are being met for that particular data-trapping exercise? In other words, unless the data is analyzed and some sort of report is generated from the exercise, it is pretty fruitless and just a waste of resources.

There is a plenitude of means to capture data to create an audit. Some of those means were mentioned in Sebastian’s invite to the blog party. I want to focus on just two of them because of how closely they are related – SQL Server Audits and Extended Events. And as I previously stated, I really only want to get into the how behind getting to the audit data. Once the data can be retrieved, generating a report is only bound by the imagination of the intended consumer of the report.

SQL Server Audits

The Audit feature within SQL Server was introduced at the same time as Extended Events (with SQL Server 2008). In addition to being released at the same time, some of the metadata is recorded with the XEvents metadata. Even some of the terminology is the same. Looking deep down into it, one can even find all of the targets for Audits listed within the XEvents objects.

Speaking of Targets, looking at the documentation for audits, one will see this about the Targets:

The results of an audit are sent to a target, which can be a file, the Windows Security event log, or the Windows Application event log. Logs must be reviewed and archived periodically to make sure that the target has sufficient space to write additional records.

That doesn’t look terribly different from what we have seen with XEvents thus far. Well, except for the addition of the Security and Application Event Logs. But the Target concept is well within reason and what we have become accustomed to seeing.

If the audit data is being written out to one of the event logs, it would be reasonable to expect that one knows how to find and read them. The focus today will be on the file target, with some very basic examples.

I happen to have an Audit running on my SQL Server instance currently. I am not going to dive into how to create the audit. Suffice it to say the audit name in this case is “TSQLTuesday_Audit”. This audit is being written out to a file with rollover. In order for me to access the data in the audit file(s), I need to employ a function (which is strikingly similar to the function used to read XE file targets) called fn_get_audit_file. The name is very simple and task-oriented – making it pretty easy to remember.

Using the audit I mentioned and this function, I would get a query such as the following to read that data. Oh, and the audit in question is set to track the LOGIN_CHANGE_PASSWORD_GROUP event.
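A sketch of such a query (the file path is my own; adjust it to wherever the audit writes its files):

SELECT af.event_time
    , af.action_id
    , af.succeeded
    , af.session_server_principal_name
    , af.target_server_principal_name
    , af.statement
FROM sys.fn_get_audit_file('C:\Database\XE\TSQLTuesday_Audit*.sqlaudit', DEFAULT, DEFAULT) af
ORDER BY af.event_time DESC;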

There are some tweaks that can be made to this, but I will defer to the 60 day XE series where I cover some of the tweaks that could/should be made to the basic form of the query when reading event files / audit files.

XE Audits

Well, truth be told, this one is a bit of trickery. Just as I mentioned in the preceding paragraph, I am going to defer to the 60 day series. In that series I cover in detail how to read the data from the XE file target. Suffice it to say, the method for reading the XE file target is very similar to the one just shown for reading an Audit file. In the case of XEvents, the function name is sys.fn_xe_file_target_read_file.

Capturing data to track performance, access patterns, policy adherence, or other processes is insufficient for an audit by itself. No audit is complete unless data analysis and reporting is attached to the audit. In this article, I introduced how to get to this data which will lead you down the path to creating fantastic reports.

Audit who Dropped the Database

In the first article on this topic (which can be read here), I discussed the problem of having a database get dropped and the need to find out who dropped the database and when they dropped it.  In that article, I demonstrated how the information (at least some of it) could be found through querying the default trace.  It is a good enough solution if you have not disabled the default trace.

On the other hand, what if you need to do something more flexible?  What if you wanted to track this down through a more robust tool such as extended events?  XEvents has what you need to be able to properly track these types of events.  In this article, I share how to use Extended Events to capture this kind of audit data.
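Here is a sketch of such a session (the session name and file path are my own):

CREATE EVENT SESSION [AuditDBDrop] ON SERVER
ADD EVENT sqlserver.object_created (
    SET collect_database_name = (1)
    ACTION (sqlserver.client_hostname, sqlserver.server_principal_name, sqlserver.session_id, sqlserver.sql_text)
),
ADD EVENT sqlserver.object_deleted (
    SET collect_database_name = (1)
    ACTION (sqlserver.client_hostname, sqlserver.server_principal_name, sqlserver.session_id, sqlserver.sql_text)
)
ADD TARGET package0.event_file (SET filename = N'C:\Database\XE\AuditDBDrop.xel')
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [AuditDBDrop] ON SERVER STATE = START;
GO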

There are a few things to note with this event session.  First is that I am trapping two separate events: sqlserver.object_deleted and sqlserver.object_created.  The next important note is that I have to enable a flag on both events to trap the database name.  That is done like this: SET collect_database_name = (1).  Last note is something that should be noticed after enabling the session and performing a couple of trials.  The events I am using will fire twice for every DROP or CREATE operation.  This happens due to the ddl_phase for each.  There is one event fired for the start of the event and then another event when the event commits or hits a rollback.  Because of this, I am outputting the ddl_phase in my query to read from the session data.  Let’s run a quick test and see how this data can be queried.

Also of importance is to note the file path for the output file. If the path does not exist or you do not have permissions to the directory, you will get an error message and the session will not be created.
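A quick test could be as simple as creating and dropping a throwaway database while the session runs:

CREATE DATABASE AuditDropTest;
GO
DROP DATABASE AuditDropTest;
GO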

Nice and simple for a test. This should be enough to have trapped some data for our XEvent session.  Let’s parse the data and take a look.
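Here is a sketch of that parsing query (the file path matches the session above):

SELECT x.event_data.value('(event/@name)[1]', 'VARCHAR(50)') AS event_name
    , x.event_data.value('(event/@timestamp)[1]', 'DATETIME2') AS event_time
    , x.event_data.value('(event/data[@name="ddl_phase"]/text)[1]', 'VARCHAR(20)') AS ddl_phase
    , x.event_data.value('(event/data[@name="database_name"]/value)[1]', 'NVARCHAR(128)') AS database_name
    , x.event_data.value('(event/action[@name="server_principal_name"]/value)[1]', 'NVARCHAR(128)') AS who_did_it
    , x.event_data.value('(event/action[@name="sql_text"]/value)[1]', 'NVARCHAR(MAX)') AS sql_text
FROM (
    SELECT CAST(f.event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file('C:\Database\XE\AuditDBDrop*.xel', NULL, NULL, NULL) f
) x
ORDER BY event_time;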

With the XEvent session started, and a database that has been created and dropped, this last query will produce some data to help track who has been adding or removing databases from your SQL Instance.  If you have run the example code, you should see data very similar to what is shown in this next image.

In the attached image, I have separated the pair of statements for the DB create from the pair of statements for the DB drop (recall that I mentioned each will fire twice).  I hope this serves you well in your attempts to rein in control of your environment and discover who may be creating rogue databases or even dropping the databases that are essential to the business.  Enjoy!!
