IP and Default Trace…T-SQL Tuesday #005

Comments: 9 Comments
Published on: April 6, 2010

As chance would have it, I had been checking Adam's blog daily for the last few days to find the next T-SQL Tuesday.  Not having seen it, I started working on my next post yesterday evening.  Fortunately, the post I was working on fits well with this month's topic.  T-SQL Tuesday #005 is being hosted by Aaron Nelson, and now I have one more blog to bookmark since I didn't have his bookmarked previously.

This month the topic is Reporting.  Reporting is a pretty wide open topic, though not nearly as wide open as the previous two months.  Nonetheless, I think the topic should be handled pretty easily by most of the participants.  My post deals with reporting on user connections.  I have to admit that this one really just fell into my lap while I was helping somebody else.

The Story

Recently I came across a question on how to find the IP address of a connection.  From experience I had a couple of preconceptions about what could be done to find this information.  SQL Server gives us a few avenues to find that data, and there are also some good reasons to track it.  However, the context of the question was different from what I had envisioned.  The person already knew how to get the IP address and was well on their way with a well-formed query.  All that was needed was the final piece of the puzzle: integrating sys.dm_exec_connections with the existing query, which pulls information from trace files that exist on the server.

After exploring the query a bit, it also became evident that other pertinent information could prove quite useful in a single result set.  The trace can be used to pull back plenty of useful information depending on the needs.  Here is a query that you can use to explore that information and determine for yourself the pertinent information for your requirements, or if the information you seek is even attainable through this method.

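Something along these lines will do (a minimal sketch; it derives the default trace path from sys.traces, so point @Path at your own trace files if they live elsewhere):

[codesyntax lang="tsql"]
DECLARE @Path NVARCHAR(260);

/* derive the base path of the default trace so rollover files are read too */
SELECT @Path = REVERSE(SUBSTRING(REVERSE(t.path),
                   CHARINDEX(CHAR(92), REVERSE(t.path)), 260)) + N'log.trc'
FROM sys.traces t
WHERE t.is_default = 1;

SELECT gt.SPID, gt.LoginName, gt.HostName, gt.ApplicationName,
       gt.StartTime, gt.EventClass, te.name AS EventName
FROM sys.fn_trace_gettable(@Path, DEFAULT) gt
    INNER JOIN sys.trace_events te
        ON gt.EventClass = te.trace_event_id
ORDER BY gt.StartTime DESC;
[/codesyntax]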


Now, I must explain that there is a problem with joining the trace files to the DMVs.  sys.dm_exec_connections maintains current connection information, as does the sys.dm_exec_sessions DMV.  Thus, mapping the trace file to these DMVs can be problematic when looking for historical information.  So the conundrum is really how to make this work.  For the data returned in the report to be meaningful, some additional constraints have to be placed on the query.  But let's first evaluate the first go-around with this requirement.

Query Attempts

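The first attempt looked something like this (a sketch of the idea, using the default trace for illustration – join the trace rows to the current connections purely on SPID):

[codesyntax lang="tsql"]
DECLARE @Path NVARCHAR(260);
SELECT @Path = path FROM sys.traces WHERE is_default = 1;

SELECT gt.SPID, gt.LoginName, gt.HostName, gt.StartTime,
       ec.client_net_address AS IPAddress
FROM sys.fn_trace_gettable(@Path, DEFAULT) gt
    INNER JOIN sys.dm_exec_connections ec
        ON gt.SPID = ec.session_id
ORDER BY gt.StartTime DESC;
[/codesyntax]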


While evaluating this query, one may spot the obvious problem.  If not, a quick execution of the query will divulge it.  SPIDs are not unique; they are reused.  Thus, when querying historical information against currently running information, one is going to get inaccurate results.  Essentially, we cannot know with any certainty what the IP address was for the connections showing up in the trace files.  The IP addresses of the current connections will cross-populate the historical rows and render the results inaccurate.

My next step gets us a little closer.  I decided to include more limiting information in the join to the sys.dm_exec_connections view by also including the login name as a condition.  In order to get that, I need to include sys.dm_exec_sessions in my query.  To make it all come together, I pulled that information into a CTE.  Here is the new version.

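A sketch of that revision:

[codesyntax lang="tsql"]
DECLARE @Path NVARCHAR(260);
SELECT @Path = path FROM sys.traces WHERE is_default = 1;

WITH ConnInfo AS (
    SELECT es.session_id, es.login_name, es.host_name,
           ec.client_net_address
    FROM sys.dm_exec_sessions es
        INNER JOIN sys.dm_exec_connections ec
            ON es.session_id = ec.session_id
)
SELECT gt.SPID, gt.LoginName, gt.HostName, gt.StartTime,
       ci.client_net_address AS IPAddress
FROM sys.fn_trace_gettable(@Path, DEFAULT) gt
    INNER JOIN ConnInfo ci
        ON gt.SPID = ci.session_id
       AND gt.LoginName = ci.login_name
ORDER BY gt.StartTime DESC;
[/codesyntax]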


The information pulled back is much cleaner now.  But wait, this mostly reflects current connections, or connections from the same person who happens to have the same SPID as a previous time that person connected.  Yes, this is an inherent problem with combining historical information and the current connection information in the server.


My recommendation for capturing IP address information, along with the person who connected, their computer hostname, and the time that they connected, is to do so pre-emptively.  (I know this diverges from the report for a minute, but it is necessary setup for the report.)  A solution I have implemented in the past is to use a logon trigger that records the pertinent information to a table.


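A table along these lines works for the violations (a sketch; the exact column list is my assumption, sized to match what EVENTDATA() returns for a logon event):

[codesyntax lang="tsql" title="AuditLogonViolation Table"]
CREATE TABLE dbo.AuditLogonViolation (
    ViolationID INT IDENTITY(1, 1) NOT NULL PRIMARY KEY CLUSTERED,
    LogonTime DATETIME NOT NULL,
    LoginName NVARCHAR(128) NOT NULL,
    ClientHost NVARCHAR(128) NOT NULL,
    SPID INT NOT NULL
);
[/codesyntax]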


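And a matching table for the successful logons (same caveat on the columns):

[codesyntax lang="tsql" title="AuditLogonEvent Table"]
CREATE TABLE dbo.AuditLogonEvent (
    EventID INT IDENTITY(1, 1) NOT NULL PRIMARY KEY CLUSTERED,
    LogonTime DATETIME NOT NULL,
    LoginName NVARCHAR(128) NOT NULL,
    ClientHost NVARCHAR(128) NOT NULL,
    SPID INT NOT NULL
);
[/codesyntax]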



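The trigger itself could look something like this (a sketch: AuditDB, the allowed host list, and the watched login are hypothetical placeholders – adjust to your environment and test carefully, since a buggy logon trigger can lock everyone out):

[codesyntax lang="tsql" title="LogonTrigger"]
CREATE TRIGGER LogonAudit
ON ALL SERVER
WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
    DECLARE @Data XML = EVENTDATA();

    DECLARE @LogonTime DATETIME = @Data.value('(/EVENT_INSTANCE/PostTime)[1]', 'DATETIME'),
            @LoginName NVARCHAR(128) = @Data.value('(/EVENT_INSTANCE/LoginName)[1]', 'NVARCHAR(128)'),
            @ClientHost NVARCHAR(128) = @Data.value('(/EVENT_INSTANCE/ClientHost)[1]', 'NVARCHAR(128)'),
            @SPID INT = @Data.value('(/EVENT_INSTANCE/SPID)[1]', 'INT');

    /* hypothetical rule: a sensitive login may only connect from approved hosts */
    IF @LoginName = N'WatchedLogin'
        AND @ClientHost NOT IN (N'<local machine>', N'192.168.1.100')
    BEGIN
        ROLLBACK;  /* deny the logon first; the insert below still persists */

        INSERT INTO AuditDB.dbo.AuditLogonViolation (LogonTime, LoginName, ClientHost, SPID)
        VALUES (@LogonTime, @LoginName, @ClientHost, @SPID);
    END
    ELSE
    BEGIN
        INSERT INTO AuditDB.dbo.AuditLogonEvent (LogonTime, LoginName, ClientHost, SPID)
        VALUES (@LogonTime, @LoginName, @ClientHost, @SPID);
    END
END;
[/codesyntax]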


This solution gets the IP address from the ClientHost field of EVENTDATA(), which is where the logon originated.  That should be enough to trace back to the source of the connection.  Also note that the application was not included here; that is information that can be gained from the trace files.  What this does is create a structure for logging logon events as well as preventing logons from unauthorized sources using an account that has been compromised (while logging that action as well).  With this, I now have a solution that complements the trace file.  I can query both and create a more accurate report of the logon activity that has occurred historically (depending on retention policy, etc.).  This last method is somewhat limited by the EVENTDATA function and the XML schema that comes with it.  You can check it out here.


So here are a couple of parting solutions that give us the relevant data for tracking these historical logons.

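A sketch of the first (AuditDB is the same hypothetical database as above):

[codesyntax lang="tsql"]
SELECT 'Violation' AS EventType, LogonTime, LoginName, ClientHost
FROM AuditDB.dbo.AuditLogonViolation
UNION ALL
SELECT 'Logon' AS EventType, LogonTime, LoginName, ClientHost
FROM AuditDB.dbo.AuditLogonEvent
ORDER BY LogonTime DESC;
[/codesyntax]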


The above is a simple query to pull back the information from both the violations and successful logon audit tables.

The following is a solution for combining the previous requirement (as I have translated it to truly correlate to Host rather than IP).

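A sketch of that combination – correlate the audited logon to the trace rows by login and a small window around the logon time, so the audited client host travels alongside the trace's host and application detail:

[codesyntax lang="tsql" title="Combining Audit with Trace"]
DECLARE @Path NVARCHAR(260);
SELECT @Path = path FROM sys.traces WHERE is_default = 1;

SELECT ale.LogonTime, ale.LoginName, ale.ClientHost,
       gt.HostName, gt.ApplicationName, gt.StartTime
FROM AuditDB.dbo.AuditLogonEvent ale
    LEFT JOIN sys.fn_trace_gettable(@Path, DEFAULT) gt
        ON ale.LoginName = gt.LoginName
       AND gt.StartTime BETWEEN DATEADD(SECOND, -5, ale.LogonTime)
                            AND DATEADD(SECOND, 5, ale.LogonTime)
ORDER BY ale.LogonTime DESC;
[/codesyntax]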


The last report that I wish to share related to this topic is one to find current activity.  Since the original idea was to combine the trace with the DMV, I decided that I needed to at least look at the DMVs for a relevant query.  These DMVs provide current activity and need to be combined to provide a quick snapshot of the activity that is occurring on the server at a given moment.  One quick snapshot can tell me who is logged in, from what IP address, the application, read and write activity, and what SQL is being executed.

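A sketch of such a snapshot query (in the spirit of sp_who2, hence the title):

[codesyntax lang="tsql" title="SP_who2005"]
SELECT es.session_id, es.login_name,
       ec.client_net_address AS IPAddress,
       es.host_name, es.program_name, es.status,
       es.reads, es.writes,
       est.text AS CurrentSQL
FROM sys.dm_exec_sessions es
    LEFT JOIN sys.dm_exec_connections ec
        ON es.session_id = ec.session_id
    LEFT JOIN sys.dm_exec_requests er
        ON es.session_id = er.session_id
    OUTER APPLY sys.dm_exec_sql_text(er.sql_handle) est
WHERE es.is_user_process = 1
ORDER BY es.session_id;
[/codesyntax]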


There are several queries out there similar to this last one.  The idea is to capture a quick glimpse based on the DMVs which in essence are a report of a subset of server relevant information.


When it is necessary to provide reports on activity occurring on the server, it pays to do a little prep work.  Be prepared.  You cannot easily go back in time and report on data that was never captured.  The little prep work that one may need to do is well worth the effort in the end.

Edit:  Corrected 1 detail and fixed some formatting.

Common Tempdb Trace Flags – Back to Basics

Once in a while I come across something that sounds fun or interesting and decide to dive a little deeper into it. That happened to me recently and caused me to preempt my scheduled post and work on writing up something entirely different. Why? Because this seemed fun and useful.

So what is it I am yammering on about that was fun?

I think we can probably concede that there are some best practices flying around in regards to the configuration of tempdb. One of those best practices is in regards to two trace flags within SQL Server. These trace flags are 1117 and 1118. Here is a little bit of background on the trace flags and what they do.

A caveat I have now for the use of trace flags is that I err on the same side as Kendra (author of the article just mentioned). I don’t generally like to enable trace flags unless it is very warranted for a very specific condition. As Kendra mentions, TF 1117 will impact more than just the tempdb data files. So use that one with caution.

Ancient Artifacts

With the release of SQL Server 2016, these trace flags were rumored to be a thing of the past and hence completely unnecessary. That is partially true. The trace flags are unneeded, and SQL Server 2016 does have some different behaviors, but does that mean you have to do nothing to get the benefits of these trace flags as implemented in 2016?

As it turns out, these trace flags no longer do what they did in previous editions. SQL Server now pretty much has it baked into the product. Buuuuut, do you have to do anything slightly different to make it work? This was something I came across while reading this post and wanted to double check everything. After all, I was also under the belief that it was automatically enabled. So let’s create a script that checks these things for me.
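A condensed sketch of the approach (version detection via ProductVersion, flag status via DBCC TRACESTATUS, and a generated action script in the final column; it assumes TRACESTATUS returns a row, with status 0 or 1, for each explicitly listed flag):

[codesyntax lang="tsql"]
DECLARE @MajorVersion INT = CONVERT(INT,
    PARSENAME(CONVERT(NVARCHAR(128), SERVERPROPERTY('ProductVersion')), 4));

DECLARE @TraceStatus TABLE (
    TraceFlag INT, Status INT, GlobalFlag INT, SessionFlag INT
);

INSERT INTO @TraceStatus (TraceFlag, Status, GlobalFlag, SessionFlag)
EXECUTE ('DBCC TRACESTATUS (1117, 1118) WITH NO_INFOMSGS');

IF @MajorVersion >= 13 /* SQL Server 2016+: behavior is database scoped */
BEGIN
    SELECT ts.TraceFlag, ts.Status,
           CASE ts.Status
               WHEN 1 THEN 'DBCC TRACEOFF(' + CAST(ts.TraceFlag AS VARCHAR(4)) + ', -1);'
               ELSE '-- no action: TF unnecessary on this version'
           END AS ActionScript
    FROM @TraceStatus ts;
END
ELSE
BEGIN
    SELECT ts.TraceFlag, ts.Status,
           CASE ts.Status
               WHEN 0 THEN 'DBCC TRACEON(' + CAST(ts.TraceFlag AS VARCHAR(4))
                    + ', -1); -- add -T' + CAST(ts.TraceFlag AS VARCHAR(4))
                    + ' as a startup parameter for persistence'
               ELSE '-- no action: TF already enabled'
           END AS ActionScript
    FROM @TraceStatus ts;
END
[/codesyntax]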

Holy cannoli batman – that is more than a simple script, right? Well, it may be a bit of overkill. I wanted it to work for versions before, after, and including SQL Server 2016 (when these sweeping changes went into effect). You see, I am checking for versions where the TF was required to make the change, and also for versions after the change where the TF has no effect. In 2016 and later, these settings are database scoped and the TF is unnecessary.

The database scoped settings can actually be queried in 2016 more specifically with the following query.
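A sketch of that query (run in tempdb; both columns require SQL Server 2016 or later):

[codesyntax lang="tsql"]
USE tempdb;
GO
SELECT DB_NAME() AS DatabaseName,
       d.is_mixed_page_allocation_on,
       fg.name AS FileGroupName,
       fg.is_autogrow_all_files
FROM sys.filegroups fg
    CROSS JOIN sys.databases d
WHERE d.database_id = DB_ID();
[/codesyntax]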

In this query, I am able to determine whether mixed_page_allocations and is_autogrow_all_files are enabled. These settings can be retrieved from sys.databases and sys.filegroups respectively. If I run this query on a server where the defaults were accepted during the install, I would see something like the following.

You can see here, the default settings on my install show something different than the reported behavior. While autogrow all files is enabled, mixed_page_allocations is disabled. This matches what we expect to see by enabling the Trace Flags 1117 and 1118 – for the tempdb database at least. If I look at a user database, I will find that mixed pages is disabled by default still but that autogrow_all_files is also disabled.

In this case, you may or may not want a user database to have all data files grow at the same time. That is a great change to have implemented in SQL Server with SQL 2016. Should you choose to enable it, you can do so on a database by database basis.

As for the trace flags? My query checks to see if maybe you enabled them on your instance or if you don’t have them enabled for the older versions of SQL Server. Then the script generates the appropriate action scripts and allows you to determine if you want to run the generated script or not. And since we are changing trace flags (potentially) I recommend that you also look at this article of mine that discusses how to audit the changing of trace flags. And since that is an XEvent based article, I recommend freshening up on XEvents with this series too!

The Wrap

In this article I have introduced you to some basics in regards to default behaviors and settings in tempdb, along with some best practices. It is advisable to investigate these recommendations from time to time and confirm what we are really being told, so we can avoid confusion and misinterpretation.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Changing Default Logs Directory – Back to Basics

Every now and then I find a topic that seems to fit perfectly into the mold of the theme of “Back to Basics”. A couple of years ago, there was a challenge to write a series of posts about basic concepts. Some of my articles in that series can be found here.

Today, my topic is in regards to altering the default logs directory location. Some may say this is no big deal; you can just use the default location chosen during install. Fair enough, there may not be a massive need to change that location.

Maybe, just maybe, there is an overarching need to change this default. Maybe you have multiple versions of SQL Server in the enterprise and just want a consistent folder to access across all servers so you don’t have to think too much. Or possibly, you want to copy the logs from multiple servers to a common location on a central server and don’t want to have to code for a different directory on each server.

The list of reasons can go on and I am certain I would not be able to list all of the really good reasons to change this particular default. Suffice it to say, there are some really good requirements out there (and probably some really bad ones too) that mandate the changing of the default logs directory to a new standardized location.


The logs that I am referring to are not the transaction logs for the databases – oh no no no! Rather, I am referring to the error logs, the mini dumps, and the many other logs that may fall into the traditional “logs” folder during the SQL Server install. Let’s take a peek at a default log directory after the install is complete.

I picked a demo server that has a crap load of stuff available (and yeah not so fresh after install) but where the installation placed the logs by default. You can see I have traces, default XE files, some SQL logs, and some dump files. There is plenty going on with this server. A very fresh install would have similar files but not quite as many.

If I want to change the Log directory, it is a pretty easy change but it does require a service restart.

In SQL Server Configuration Manager, navigate to Services and then to the SQL Server service. Right click that service and select Properties. From Properties, select the "Startup Parameters" tab. Select the parameter with "-e" and errorlog in the path, modify the path to something more appropriate for your needs, and click the Update button. After doing that, click OK and bounce the SQL Server service.

After you successfully bounce the service, you can confirm that the error logs have been migrated to the correct folder with a simple check. Note that this change impacts the errorlogs, the default Extended Events logging directory, the default trace directory, the dumps directory and many other things.
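The simple check can be as easy as this (on recent versions SERVERPROPERTY exposes the error log path directly):

[codesyntax lang="tsql"]
SELECT SERVERPROPERTY('ErrorLogFileName') AS ErrorLogPath;
[/codesyntax]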

See how easy that was? Did that move everything over for us? As it turns out, it did not. The old directory will continue to have the SQL Agent logs. We can see this with a check from the Agent log properties like the following.

To change this, I can execute a simple stored procedure in the msdb database and then bounce the sql agent service.
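A sketch of that call (sp_set_sqlagent_properties is undocumented, so use it with the usual caution; the path shown is a hypothetical example):

[codesyntax lang="tsql"]
USE msdb;
GO
EXECUTE dbo.sp_set_sqlagent_properties
    @errorlog_file = N'C:\Database\Logs\SQLAGENT.OUT';
GO
/* verify the change (also undocumented) */
EXECUTE dbo.sp_get_sqlagent_properties;
[/codesyntax]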

The agent logs are now written to the new directory, which can be verified after the agent service restarts, as shown here.

At this point, all that will be left in the previous folder will be the files that were written prior to the folder changes and the service restarts.

The Wrap

In this article I have introduced you to an easy method to move the logs for SQL Server and the SQL Server Agent to a custom directory that better suits your enterprise needs. This concept is a basic building block for some upcoming articles – stay tuned!

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Default Sessions – Back to Basics

Comments: No Comments
Published on: September 1, 2016

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Default Sessions

SQL Server is full of some pretty cool stuff. Some of it is rather basic. Some of it is a bit complex. Whether basic or complex, there is a lot that seems to get overlooked because it just might be hidden from plain sight or maybe it is overlooked just because it is already done for you.

Today, I am going to talk about some of the stuff that is already done for you. It is pre-baked and ready to roll. Since it is pre-baked, it may be a good idea to get familiar with it and understand what kinds of data it may offer to you when troubleshooting.

It is still somewhat surprising to me to discover how many people have never heard of the default trace in SQL Server. Slightly less surprising is how many people have never heard of the default Extended Events system_health session. What if I told you there were a bunch more default sessions than the default trace or system_health? I wrote about some of those default sessions previously.

With the evolution of SQL Server, there is an evolution in Extended Events. New features in SQL Server also means there may be some additional default sessions. I want to touch on some of these other default sessions in brief.


Phone Home

First, let's start with an easy-to-find default session – telemetry_xevents. This session is visible and behaves like a user-defined session, with the caveat that it is installed with SQL Server 2016 as a system default session. I have another post queued up to discuss this session in more detail, so I will keep it brief here.

This session is there to capture specific data and phone it home to Microsoft. You can presume that based on the definition of the word telemetry, which

“is an automated communications process by which measurements and other data are collected at remote or inaccessible points and transmitted to receiving equipment for monitoring.”

Before you get your tin-foil hat in a crumple over sending data back to the mother-ship, understand that this data is not sensitive data. The type of data being gathered is usage metrics. Think of it in terms of gathering data about how many times a specific feature was used or how many times a group of errors occurred within the environment. This is in an effort to help improve the product.

While the session data is designed to be sent back to Microsoft, it could also be of use to the Enterprising DBA who is looking to get a better grasp of the environment or to troubleshoot various issues.

It’s a Stretch

With the release of SQL Server 2016 there is another default session that gets installed. This session is probably the most difficult to find. It is a private, hidden session when it is installed. In addition, it is not present by default with SQL 2016 RTM, but it was present by default with various CTP versions. The session is specific to the new Stretch feature and is called rdaxesession.

Breaking down this particular session reveals pretty easily what feature the session is designed to support. The name rdaxesession breaks down as follows: rda = remote data archive, and the rest is self-explanatory. If I investigate the metadata for the session, I can discover which events are traced in the session.
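A metadata query along these lines lists the events tied to the session (assuming the session is visible on your build):

[codesyntax lang="tsql"]
SELECT es.name AS SessionName,
       ese.name AS EventName
FROM sys.server_event_sessions es
    INNER JOIN sys.server_event_session_events ese
        ON es.event_session_id = ese.event_session_id
WHERE es.name = N'rdaxesession';
[/codesyntax]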


Well, that certainly clears it up! This session is all about the stretch feature. If you are on RTM and don’t use the stretch feature, then you won’t see this session even if you do query the metadata.

Don’t be a HADR

The last of the default sessions I want to cover can be combined into a category about HA and DR – loosely at least. One of the sessions deals with backups and the second deals with clusters.

When dealing with a cluster, it is important to know about the hidden trace file that records events related to the cluster. The session target data can be found in "%PROGRAMFILES%\Microsoft SQL Server\MSSQLxx.MSSQLSERVER\MSSQL\Log" by default and cannot be seen within SQL Server metadata at all (not that I have been able to find). For a few details about how this trace works, I recommend reading this article. This session can be called the SQLDiag session.

The session related to backups is actually broken down into two log files and is similar in nature to the SQLDiag session. This is true in that the session is hidden, is found in the same directory, is read in the same fashion, and is equally as hidden from within SQL Server as the SQLDiag session. The backup session deals specifically with managed backup.

What is managed backup? Well, it is a type of backup performed to Azure which relies upon the "smart_admin" procedures, functions, and process.

If you don’t perform managed backups or you don’t have a cluster configured, you will not need either of these sessions (and potentially not see the smart_admin files depending on SQL Server version). If you have either of these features, you will definitely want to ensure you become acquainted with them and how to retrieve the data.


There are plenty of mechanisms in place to help troubleshoot some of the more complex features of SQL Server. Getting to know the tools will prepare you for that moment when problems arise and you are under the gun.

I did not dive deep into any of these sessions, holding that for later articles. I wanted to keep this article at an introductory level. If you are interested in the more detailed articles, please stay tuned for those upcoming posts.

If you are interested in reading more about Extended Events, I recommend reading my series which can be found here.

Extended Events Permissions

Comments: 1 Comment
Published on: December 25, 2017

A common question that I am asked about Extended Events relates to what kind of permissions are required in order to use this awesome feature. In this article, I am going to compare and contrast the permissions required for Profiler as well as Extended Events.

Extended Events changed a lot about how you monitor your server. One of those changes is a slightly different set of permissions for performing various tasks. Before I dive into the security landscape of XEvents, let's take a peek at what we have to do in order to be able to use Profiler.


If you try to launch a profiler session as a standard login with what should be a typical user type account with restricted access, you will encounter something like the following.

Yes, according to the message you need to be a sysadmin (nobody reads past that first part so they always miss the alternative). So, let’s test things out. I will create a login called TraceUser and attempt to do tasks related to running a Profiler or Server Side type of trace. I will also use this same login when testing the permissions for Extended Events.
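Here is a sketch of that setup (the password is a throwaway example):

[codesyntax lang="tsql"]
USE master;
GO
CREATE LOGIN TraceUser
    WITH PASSWORD = N'SuperDuper$ecret1';  -- throwaway example password
GO
/* quick test: impersonate the login and try to look at traces */
EXECUTE AS LOGIN = N'TraceUser';
GO
SELECT * FROM sys.traces;
GO
REVERT;
[/codesyntax]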

I included a quick test script after creating the login to confirm that the user does indeed have restricted access. The test script will produce the following message.

Msg 8189, Level 14, State 6, Line 8
You do not have permission to run ‘SYS.TRACES’.

This is the same net result as if I were to attempt to run a Profiler session – it’s just faster to get to the result. As we can see, the TraceUser login does not have the requisite permission to do anything with traces (profiler or server side).

Now, I will take this just a step further and try to add permissions to validate the minimum permissions needed.
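A sketch of that grant and retest:

[codesyntax lang="tsql"]
USE master;
GO
GRANT ALTER TRACE TO TraceUser;
GO
EXECUTE AS LOGIN = N'TraceUser';
GO
SELECT * FROM sys.traces;  -- now returns the configured traces
GO
REVERT;
[/codesyntax]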

Running this test script now will yield a result of all traces that are currently set up on the server.  So, in my case that would be just a result showing the default trace and nothing else, since I have not put any other traces on the server.  I can also confirm that the exact permissions granted to TraceUser do not exceed what I just granted.
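Confirming the granted permissions is a matter of impersonating the login and asking (a sketch):

[codesyntax lang="tsql"]
EXECUTE AS LOGIN = N'TraceUser';
GO
SELECT entity_name, permission_name
FROM sys.fn_my_permissions(NULL, 'SERVER');
GO
REVERT;
[/codesyntax]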

Or do they?

After running that in the master database I will see that granting the “Alter Trace” permission has an extra side effect of granting “Showplan” to the user.


For Extended Events, we have something a little different and a little more (just a little) granular.

Wow! That is quite the permissions escalation, right? Well, it's not significantly different in some regards from what we see with Profiler. Taking it a step further, XEvents is in tune with far more SQL Server internals than Profiler. Little things like looking at DMVs or getting the current status of features like Hekaton or Availability Groups, which tie deeper into the engine, will require more permissions. That said, view server state or even these permission levels is not the end of the world. There are workarounds to the permissions issues. Before I give the workarounds, let's look at the permissions.

Using the TraceUser login I created earlier in this article, I am going to start testing XEvent permissions.
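A sketch of that test script:

[codesyntax lang="tsql"]
USE master;
GO
REVOKE ALTER TRACE FROM TraceUser;
GO
EXECUTE AS LOGIN = N'TraceUser';
GO
SELECT * FROM sys.traces;          -- expect Msg 8189
GO
SELECT * FROM sys.dm_xe_objects;   -- expect VIEW SERVER STATE denial
GO
SELECT entity_name, permission_name
FROM sys.fn_my_permissions(NULL, 'SERVER');
GO
REVERT;
[/codesyntax]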

The first step I took in this script was to revoke the “Alter Trace” permission. Then I queried both sys.traces and sys.dm_xe_objects along with running my permissions checks. The attempt to query the two views produces the following messages.

Msg 8189, Level 14, State 6, Line 34
You do not have permission to run ‘SYS.TRACES’.
Msg 300, Level 14, State 1, Line 36
VIEW SERVER STATE permission was denied on object ‘server’, database ‘master’.
Msg 297, Level 16, State 1, Line 36
The user does not have permission to perform this action.

From the permissions checks, I can also confirm that this login has barely the ability to connect to SQL Server.

I will now grant permissions little by little to see what happens.
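Starting with the session-related permission (a sketch):

[codesyntax lang="tsql"]
USE master;
GO
GRANT ALTER ANY EVENT SESSION TO TraceUser;
[/codesyntax]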

Here is the sum of the differences.

The "Alter Any Event Session" permission should be expected since that is what I explicitly added.  That said, the permission, while enough to create an event session, does not yield enough permissions to do anything with viewing the session data or looking into the DMVs or catalog views.  In fact, if you have a recent enough version of SSMS, the XE Profiler feature will cause SSMS to crash if you try to "launch" a session via the Profiler GUI with only the "Alter Any Event Session" permission.

That said, I can run the following script to create a new Event Session.
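A sketch of such a session creation under impersonation (session name and event are illustrative):

[codesyntax lang="tsql"]
EXECUTE AS LOGIN = N'TraceUser';
GO
CREATE EVENT SESSION TraceUserSession
ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE = OFF);
GO
REVERT;
[/codesyntax]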

As we can see, the permissions are adequate to create a session via TSQL, and doing so does not cause SSMS to crash. This would be another consideration if using the XE Profiler (and even if you are going to allow non-DBAs to create XE traces), in addition to what I noted recently in this article – here.  Truth be told, the traditional GUI for XE also will not allow users to access the Extended Events node if they have "Alter Any Event Session" or lower permissions – you must use TSQL to alter sessions. The caveat is that only the XE Profiler feature causes SSMS to crash (I tried it 6 times in a row with consistent results).

So, if "Alter Any Event Session" is only going to get us partially there, let's go ahead and grant the view server state permission.
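Another one-line grant (sketch):

[codesyntax lang="tsql"]
USE master;
GO
GRANT VIEW SERVER STATE TO TraceUser;
[/codesyntax]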

Due to the nature of sys.traces, I still do not have permission to view that data. So that is a good thing. On the other hand, I can now query all of my XE related views (you can read more about those in my 60 day series). And now I can see that I have the following permissions.

And now I can do all sorts of fun stuff from the XE GUI.

That basically lays out the required permissions for both Profiler and Extended Events. In either case, I view both of these as tools for the trusted DBA. Outside of the DBA team, there are select other individuals who may think they need the access and then there are others who really could use the access to the tools but should not really be granted full sysadmin access. At times, you may not even want those individuals to have anywhere near “view server state” access because there is a chance that individual could see something they are not permitted to see. What do we do with that?

Work Arounds

This is when we figure out a way to circumvent the restrictions. Don’t view this as a hack to the system or a way to violate trust. Rather these are methods to help think outside the box and work together as a cohesive team.

Method 1 is a solid solution, though it does have a high management overhead. Aaron Bertrand has offered a secure approach: he has found a way to work with trustworthy databases, views, and logins to provide a workable solution while still securing the event session data. You can read his article here.

Method 2 is also a secure solution and does require a bit more communication, but less management and upkeep than method 1. I like to save the event data to file. Working with the person who would like to review the data, I can create an appropriate XE session that is highly unlikely to capture anything they should not see. In addition, I do not need to grant that person any elevated permission on the production server. Once the trace is complete, I can copy the event files to a dev server where that person has elevated access. Then they can merge the XEL files (I show how to do that in the 60 day series) and review the data at their leisure.

Method 3 is similar to the previous method but requires more work, similar to method 1. With this last method, I grant the user specific access to read data from a specific table in a specific database. I then set up an automated process to dump the data into that table, or I can manually load it for the user. Again, the permissions here are very limited in scope, and the person has nothing more than the ability to select from a pre-defined table in a pre-defined database.

I don't see the permissions for Extended Events as a limitation; rather, they seem appropriate based on what XE does and the extensive nature of the tool. I recommend you work with XE for a bit and see what you can come up with to help improve your efficiency and the efficiency of your team.



Database Dropped

What do you do when a developer comes to you and asks, "Where did the database go?  The database was there one minute, and the next it was not."  Only one thing could be worse than the feeling of losing a database on your watch, and that would be losing a production database. It's like magic: it's there, and then it disappears. To compound the issue, when asking people if they know what might have happened, all will typically deny, deny, deny.

What do you do when you run into that missing database situation and the inevitable denial that will ensue?  This is when an audit can save the day.  Through an audit, you can discover who dropped the database and when it happened.  Then you have hard data to take back to the team to again ask what happened.  Taking the info from a previous article of mine, we can alter the script I published there and re-use it for our needs here, as sketched below.
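A sketch of that altered script (event classes 46 and 47 are Object:Created and Object:Deleted in the trace definition):

[codesyntax lang="tsql"]
DECLARE @Path NVARCHAR(260);
SELECT @Path = path FROM sys.traces WHERE is_default = 1;

SELECT gt.DatabaseName,
       gt.LoginName,
       gt.HostName,
       gt.ApplicationName,
       gt.StartTime,
       CASE gt.EventClass
           WHEN 46 THEN 'Object:Created'
           WHEN 47 THEN 'Object:Deleted'
       END AS EventName
FROM sys.fn_trace_gettable(@Path, DEFAULT) gt
WHERE gt.ObjectType = 16964        /* database */
    AND gt.EventClass IN (46, 47)  /* create / drop */
ORDER BY gt.StartTime DESC;
[/codesyntax]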

This script queries the default trace to determine when a database was dropped or created.  I am limiting this result set through the use of this filter: ObjectType = 16964.  In addition to that, I have also trimmed the result set down to look for just drop or create events.

This is the type of information that is already available within the default trace.  What if you wish to not be entirely dependent on the default trace for that information?  As luck would have it, you don't need to be solely dependent on the default trace.  Instead you can use the robust tool called Extended Events.  If you would like to be able to take advantage of Extended Events to track this information, I recommend you read my follow-up article here.

This has been a republication of my original content first posted here.

Index Cannot Be Reorganized…

Working diligently as any good DBA might, you have established maintenance routines for each of the SQL Servers under your purview.

As a part of this maintenance you have scripted solutions to intelligently manage and maintain the fragmentation levels for each of the indexes within each database on each instance.

To further show how diligent you are as a DBA, the outcomes of each maintenance run are logged and you review the logs each morning. This routine helps keep you on top of everything that is happening within the environment.

For months, maybe even years, things are running smoothly. Never a failure. Never an error. Just routine log review day after day. Then one day it happens – there is an error. The index maintenance script failed one night.

Index Cannot be Reorganized…

You receive the error message similar to the following:

Msg 2552, Level 16, State 1, Line 1
The index "blah" (partition 1) on table "blah_blah_blah" cannot be reorganized because page level locking is disabled

Immediately, you start double-checking yourself and verifying that it worked the previous night. You even go so far as to confirm that the same index was previously reorganized. How is it possible that it is failing now on this index? What has changed? Has something changed?

Time for a little digging and investigating, so the dirty work begins. On one side of the coin you are relieved to be able to do something different. On the other side of that coin, you are rather annoyed that something seems to have changed. These feelings are perfectly normal!

First things first – you investigate the indexes in question to confirm what the error is saying. This is easily done with a query such as the following:
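A query such as this confirms which indexes disallow page locks (run it in the suspect database):

[codesyntax lang="tsql"]
USE AdventureWorks2014;
GO
SELECT OBJECT_NAME(i.object_id) AS TableName,
       i.name AS IndexName,
       i.allow_page_locks,
       i.allow_row_locks
FROM sys.indexes i
WHERE i.allow_page_locks = 0
    AND i.index_id > 0  -- ignore heaps
ORDER BY TableName, IndexName;
[/codesyntax]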

Scrolling through the results, you notice (eventually) that the IX_SpecialOfferProduct_ProductID index in the AdventureWorks2014 database has page locks disabled. You are absolutely certain that this index was created allowing page locks. Pondering the problem for a moment, you recall having read about the default trace (there are several articles on the default trace – here), and the thought occurs to see if there is a breadcrumb there about the change. Using the query from that default trace article, the picture starts to unscramble. Here is that query reposted, and a snippet of the results:
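A sketch of that default trace query, filtered to Object:Altered events:

[codesyntax lang="tsql"]
DECLARE @Path NVARCHAR(260);
SELECT @Path = path FROM sys.traces WHERE is_default = 1;

SELECT gt.StartTime,
       gt.LoginName,
       gt.HostName,
       gt.ApplicationName,
       gt.DatabaseName,
       gt.ObjectName,
       te.name AS EventName
FROM sys.fn_trace_gettable(@Path, DEFAULT) gt
    INNER JOIN sys.trace_events te
        ON gt.EventClass = te.trace_event_id
WHERE te.name = 'Object:Altered'
ORDER BY gt.StartTime DESC;
[/codesyntax]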


This is a great start. Not seen in the results is the timestamp showing when it was done – omitted solely due to snipping. Also not shown is the text of the statement that was run for those three events. So close, yet so far away. This is not quite enough smoking-gun evidence to show Jason (me) that I did something wrong and unauthorized. What to do now?

All is not lost yet! Your stunning memory kicks in, and you recall several articles about using Extended Events to audit server and database changes. Better yet, you recall that you deployed an XE session to the server where this error occurred. And yes, you are slightly embarrassed that you failed to remove the XE session after fiddling with it. We won't tell anybody that you deployed a test XE session to a production server as if it were your sandbox. The session currently deployed traps all object changes, unlike the following session, which is filtered down to just objects that are indexes.
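A sketch of that narrower session (the file path is hypothetical; the object_type comparison against 'INDEX' mirrors the 'DATABASE' predicate used elsewhere in these posts, so verify the map value on your build):

[codesyntax lang="tsql"]
CREATE EVENT SESSION AuditIndexChanges
ON SERVER
ADD EVENT sqlserver.object_altered (
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.client_hostname, sqlserver.sql_text)
    WHERE (object_type = 'INDEX')
)
ADD TARGET package0.event_file (
    SET filename = N'C:\Database\XE\AuditIndexChanges.xel'
)
WITH (STARTUP_STATE = ON);
[/codesyntax]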

You query the trace file with a query like this:
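A sketch of that file-target query (adjust the path to wherever the session writes):

[codesyntax lang="tsql"]
SELECT ed.value('(event/@timestamp)[1]', 'DATETIME2') AS EventTime,
       ed.value('(event/action[@name="server_principal_name"]/value)[1]', 'NVARCHAR(128)') AS WhoDidIt,
       ed.value('(event/data[@name="object_name"]/value)[1]', 'NVARCHAR(128)') AS ObjectName,
       ed.value('(event/data[@name="ddl_phase"]/text)[1]', 'NVARCHAR(60)') AS DDLPhase,
       ed.value('(event/action[@name="sql_text"]/value)[1]', 'NVARCHAR(MAX)') AS TheStatement
FROM (
    SELECT CONVERT(XML, t.event_data) AS ed
    FROM sys.fn_xe_file_target_read_file(
             N'C:\Database\XE\AuditIndexChanges*.xel', NULL, NULL, NULL) AS t
) AS tab
ORDER BY EventTime;
[/codesyntax]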

Wow! Look at the results! There is a smoking gun finally.


Two results happen to pin the root cause of the change squarely on my shoulders. Yup, Jason changed that index to disallow page locks. I highlighted three different areas of interest in the results. The yellow and green indicate the DDL phase. One row for the start of the statement, and another row for the commit of that statement. The red highlight shows me the exact change that was made to this index. This is all very good info!

What Now?

It really is great to have the smoking gun. If something is broken and it worked previously, it is essential to find the root cause. With a root cause under the belt, what needs to be done to fix the failure? That is a little bit easier than finding the root cause. That is, unless there is a technical reason for the index to no longer allow page locks (maybe that smoking gun is less of a smoking gun and more like baby spittle after all).

Here is how you fix the problem:
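The fix is a single ALTER INDEX against the index identified above:

[codesyntax lang="tsql"]
USE AdventureWorks2014;
GO
ALTER INDEX IX_SpecialOfferProduct_ProductID
    ON Sales.SpecialOfferProduct
    SET (ALLOW_PAGE_LOCKS = ON);
[/codesyntax]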

But, But, But…

The Extended Events session would be very noisy and capture every alter index statement, right? It should capture statements like the following, right?
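Statements along these lines, for instance:

[codesyntax lang="tsql"]
USE AdventureWorks2014;
GO
ALTER INDEX IX_SpecialOfferProduct_ProductID
    ON Sales.SpecialOfferProduct
    REORGANIZE;
GO
ALTER INDEX IX_SpecialOfferProduct_ProductID
    ON Sales.SpecialOfferProduct
    REBUILD WITH (FILLFACTOR = 90);
GO
ALTER INDEX IX_SpecialOfferProduct_ProductID
    ON Sales.SpecialOfferProduct
    SET (STATISTICS_NORECOMPUTE = ON);
[/codesyntax]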

The answer to that question is: Yes, Yes, Yes. These statements are all captured due to the use of the ALTER statement. Here are the results of the XE session with all of these scripts having been executed:


If you want to audit when the indexes are changing and how they are changing, this will do just the trick. If there are a ton of changes, then be prepared for a deluge of information.

All about the Change

Comments: 1 Comment
Published on: January 12, 2016

The second Tuesday of January 2016 is now upon us and you know what that means. Well, I hope you know what that means.

It is time for TSQL Tuesday. It is now the 74th edition of this monthly blog party. This month the host is Robert Davis (blog | twitter) and he has asked us to "Be the change". Whether the inspiration for this topic is the new year and resolutions, or Gandhi (you must be the change), or Caddyshack (be the ball), we will be discussing "change."

Specifically, Robert requested that we discuss data changes and anything relating to data changes. Well, I am going to take that “anything” literally and stretch the definition of changing data just a bit. It will all make sense by the end (I hope).


Changes happen on a constant basis within a database. Data will more than likely be changing. Yes, there are some exceptions to that, but the expectation that data is changing is not an unreal expectation.

Where that expectation becomes unwanted is when we start talking about the data that helps drive the configuration of the server. Ok, technically that is a setting or configuration option or a button, knob, whirligig or thingamajig. Seldom do we really think about these settings as data. Think about it for a moment, though. We can certainly derive some data about these changes (if these settings themselves are not actually data).

So, while you may call it settings changes, I will still be capturing data about the changes. Good? Good! Another term for this is auditing. And auditing applies to all levels including ETL processes and data changes etc. By that fortune, I just covered the topic again – tangentially.

How does one audit configuration changes? Well, there are a few different methods. One could use a server-side trace, SQL Audit, Extended Events, or (if somebody wants to) a custom solution not involving any of those, using some variation of TSQL and error log monitoring. The point is, there are options. I have discussed a few options for the custom solution path as well as (in a recently published article) the default trace path. Today I will dive into what it looks like via SQL Audit.

When creating an audit to figure out what changes are occurring within the instance, one would need to utilize the SERVER_OPERATION_GROUP audit action group (a sketch of such an audit follows the list below). This action group provides auditing of the following types of events:

  • Administer Bulk Operations
  • Alter Settings
  • Alter Resources
  • Authenticate
  • External Access
  • Alter Server State
  • Unsafe Assembly
  • Alter Connection
  • Alter Resource Governor
  • Use Any Workload Group
  • View Server State
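Here is a sketch of standing up such an audit (the file path and names are hypothetical; the results come back via sys.fn_get_audit_file):

[codesyntax lang="tsql"]
USE master;
GO
CREATE SERVER AUDIT ServerOpAudit
TO FILE (FILEPATH = N'C:\Database\Audit\')  -- hypothetical path
WITH (ON_FAILURE = CONTINUE);
GO
CREATE SERVER AUDIT SPECIFICATION ServerOpAuditSpec
FOR SERVER AUDIT ServerOpAudit
    ADD (SERVER_OPERATION_GROUP)
WITH (STATE = ON);
GO
ALTER SERVER AUDIT ServerOpAudit WITH (STATE = ON);
GO
/* review what has been captured */
SELECT event_time, action_id, session_id, server_principal_name, statement
FROM sys.fn_get_audit_file(N'C:\Database\Audit\*.sqlaudit', DEFAULT, DEFAULT);
[/codesyntax]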

From this group of events, we can guess at the types of actions that might trigger one of these events to fire for the audit. Some of the possible actions would be:

  • Issue a bulk administration command: BULK INSERT TestDB.dbo.Test1 FROM 'c:\database\test1.txt';
  • Issue an alter connection command: KILL 66
  • Issue an alter resources command: CREATE RESOURCE POOL PrimaryServerPool
  • Issue an alter server state command: DBCC FREEPROCCACHE
  • Issue an alter server settings command: sp_configure followed by RECONFIGURE
  • Issue a view server state command: SELECT * FROM sys.dm_xe_session_targets
  • Issue an external access assembly command: CREATE ASSEMBLY SQLCLRTest FROM 'C:\MyDBApp\SQLCLRTest.dll'
  • Issue an unsafe assembly command: CREATE ASSEMBLY SQLCLRTest FROM 'C:\MyDBApp\SQLCLRTest.dll'
  • Issue an alter resource governor command: ALTER RESOURCE GOVERNOR DISABLE
  • Authenticate: see view server state; the vsst type occurs for auth events
  • Use any workload group: see Resource Governor

This is quite a bit of interesting information. All of these events can be audited from the same audit group. The interesting ones of this bunch are the ones that indicate some sort of change has occurred – all but the "Authenticate", "View Server State" and "Use Any Workload Group" events, though even those could be stretched to say something has changed as well.

With all of that in mind, I find the "alter server settings" event to be the most problematic. While it does truly capture that something changed, it does not completely reveal what was changed – just that a reconfigure occurred. If a server configuration has changed, I can capture the spid and that reconfigure statement – sure. Once that is captured, I then have to do something more to figure out which configuration was "reconfigured". This is highly frustrating.

Here’s an example from the audit I created:


This is only a small snippet of the results. I can see who made the configuration change, the time, the spid, the source machine, etc. I just miss that nugget that tells me the exact change that was made. At least that is the case with changes made via sp_configure. There are fixes for that – as previously mentioned.

Here is another bit of a downside. If you have the default trace still running, a lot of this information will be trapped in that trace. Furthermore, some of the events may be duplicated via the object_altered event session (e.g. the resource governor events). What does this really mean? Extra tracing and a bit of extra overhead. It is something to consider. As for the related Extended Events and how to do this sort of thing via XE, I will be exploring that further in a future post.

Suffice it to say that, while not a complete solution, the use of SQL Audit can be viable to track the changes that may be occurring within your SQL Server – from a settings point of view.

Alter Event Session

Ready for a change of pace? Good! Today is a really good time to start looking into one of the more basic concepts related to Extended Event Session management.

Consider the occasional need to change a trace. Maybe you added the wrong event to the trace, or maybe you neglected to include the event in the first place. There is also the possibility that all of the events are correct but the actions or predicates may not be completely ideal for your trace session. What if it is just as simple as a missing target or adding another target to the session? All of these are potential changes to an XEvent session that might be needed.

Today, I will cover how easy it is to modify these sessions without dropping the entire session as many examples on the internet show. Each of the configuration changes just mentioned can be handled through some pretty simple scripts (and yes through the GUI as well).

Altered States


There is no getting around it. To demonstrate how to change event sessions, an event session must first exist. Let’s use the following as the demo session.
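Here is a sketch of such a demo session (the names, the database in the predicate, and the specific event are illustrative choices, but they line up with the description that follows):

[codesyntax lang="tsql"]
CREATE EVENT SESSION DemoSession
ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.client_hostname, sqlserver.session_id, sqlserver.server_principal_name)
    WHERE (sqlserver.database_name = N'AdventureWorks2014')
)
ADD TARGET package0.ring_buffer (SET max_memory = (1024))
WITH (MAX_DISPATCH_LATENCY = 30 SECONDS, TRACK_CAUSALITY = ON, STARTUP_STATE = ON);
GO
ALTER EVENT SESSION DemoSession ON SERVER STATE = START;
[/codesyntax]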

This session starts with a ring_buffer target. There is a single event in the session. This event has a few actions applied to it along with a predicate on the database_name field. I also have several of the settings for this session defined (whether at the custom field level or at the session level).

After deploying this session, I quickly realize that I flubbed it up a bit. I forgot a few things. Since those items are missing, I need to get them added without negatively impacting the already gathered data. Let’s start with the missing target. I don’t know how I missed it, but I completely forgot to add a file target to this session. The file target is a must have, so let’s get it added.
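Adding the file target is a single ALTER (the path is a hypothetical example):

[codesyntax lang="tsql"]
ALTER EVENT SESSION DemoSession
ON SERVER
ADD TARGET package0.event_file (
    SET filename = N'C:\Database\XE\DemoSession.xel',
        max_file_size = (50)  -- MB
);
[/codesyntax]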

Phew, that problem has been easily rectified. Further review of the session shows the next problem. My manager requires that the dispatch latency be no more than 4 seconds and that the event memory not exceed 4090 kb. My manager is a little intense about some of these settings. To ensure I adhere to his requirements, I need to change my session settings now.
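The session-level options can be adjusted with another ALTER (a sketch; depending on version and option, the session may need to be stopped and restarted for a session-level option change to take effect):

[codesyntax lang="tsql"]
ALTER EVENT SESSION DemoSession
ON SERVER
WITH (MAX_DISPATCH_LATENCY = 4 SECONDS, MAX_MEMORY = 4090 KB);
[/codesyntax]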

Good! We are rolling right along here. Fixed a couple of problems real quick with that session and I can continue on with other tasks for the day. Shortly after lunch the manager stops in and asks who changed various database settings. Looking into this session that was just deployed to try and capture some of those events, I get that sinking feeling when I realize that I completely missed that requirement. D’oh! That’s ok for now because I hadn’t turned off the default trace, but I better get the proper event added to the session.
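Adding the missing event (object_altered is the event that catches those database-level changes):

[codesyntax lang="tsql"]
ALTER EVENT SESSION DemoSession
ON SERVER
ADD EVENT sqlserver.object_altered;
[/codesyntax]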

The next time something changes with the database, it will now be captured. Sadly, too many things are crossing the desk and I am trying to move too quickly. I forgot to enable the custom field to collect the database name, so I had better make that adjustment.
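That adjustment means dropping and re-adding the event with the customizable column set (verify the column name for your build via sys.dm_xe_object_columns where column_type = 'customizable'):

[codesyntax lang="tsql"]
ALTER EVENT SESSION DemoSession
ON SERVER
DROP EVENT sqlserver.object_altered;
GO
ALTER EVENT SESSION DemoSession
ON SERVER
ADD EVENT sqlserver.object_altered (
    SET collect_database_name = (1)
);
[/codesyntax]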

Nice, I have that adjustment made. Unfortunately, this is where it becomes a bit more complicated. Notice that I had to drop the event from the session before changing the custom field setting for that event? This should be minimal in overall impact since it does not remove the already captured events from the session target. But it is an extra step that must be remembered when making a change that affects an event within the session.

So far so good. I can capture the events that relate to a database change. But can I figure out who made the change and the statement used for the change? Nope. I missed that during the setup. I guess I better add it so I have better information for auditing the changes made to the database. I can start capturing that with the addition of the correct actions.
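Again a drop and re-add of the event, this time carrying the actions along (a sketch):

[codesyntax lang="tsql"]
ALTER EVENT SESSION DemoSession
ON SERVER
DROP EVENT sqlserver.object_altered;
GO
ALTER EVENT SESSION DemoSession
ON SERVER
ADD EVENT sqlserver.object_altered (
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.sql_text)
);
[/codesyntax]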

Everything is now settled in with that session. I go about my merry way for a couple of weeks. Then, one day, I find I need to review the logs to determine who has been changing database settings again. When looking at the log data I discover there are all sorts of object changes being logged to the event session log files. In review of the session definition I figure out the problem. I missed a predicate limiting the captured events to only those that are database type changes. I might as well get that added in.
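One more drop and re-add, now with the predicate in place:

[codesyntax lang="tsql"]
ALTER EVENT SESSION DemoSession
ON SERVER
DROP EVENT sqlserver.object_altered;
GO
ALTER EVENT SESSION DemoSession
ON SERVER
ADD EVENT sqlserver.object_altered (
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.sql_text)
    WHERE (object_type = 'DATABASE')
);
[/codesyntax]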

This will now change the event session so the object_altered event will only capture “database” type events. These are the types of events that include settings changes at the database level.

Making changes to an event session without needing to drop and recreate the entire session is a pretty powerful tool. If I script that session as it is now configured, I would get the following:
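Scripted out, the session would look something like this (matching the sketches above):

[codesyntax lang="tsql"]
CREATE EVENT SESSION DemoSession
ON SERVER
ADD EVENT sqlserver.object_altered (
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.sql_text)
    WHERE (object_type = 'DATABASE')
),
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.client_hostname, sqlserver.session_id, sqlserver.server_principal_name)
    WHERE (sqlserver.database_name = N'AdventureWorks2014')
)
ADD TARGET package0.event_file (
    SET filename = N'C:\Database\XE\DemoSession.xel',
        max_file_size = (50)
),
ADD TARGET package0.ring_buffer (SET max_memory = (1024))
WITH (MAX_DISPATCH_LATENCY = 4 SECONDS, MAX_MEMORY = 4090 KB,
      TRACK_CAUSALITY = ON, STARTUP_STATE = ON);
[/codesyntax]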

Compare this to the starting session, and it is apparent that all of the changes implemented are now intact.

Altering an Extended Event session is not necessarily a difficult task, but it is a very useful ability. Having this basic skill in your wheelhouse can be very positive and useful for you!

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.

Database Settings Changes – Red Handed

Comments: 4 Comments
Published on: July 8, 2015

One of my pet peeves (and consequently a frequent topic of discussion) is finding that database settings (or any setting) have changed without knowing about it. Worse yet is finding that the change has occurred and nobody claims any knowledge of it or of having done the deed.

This happened again recently where a database was set to single_user and suddenly performance in the database tanked. Change the database back to multi_user and the performance issues are magically resolved.

Fortunately there is a means to combat this. Well, sort of. The default trace in SQL Server does capture the event that occurs when the database is set to single_user or read_only. Unfortunately, all that is captured is that an ALTER DATABASE occurred. There is no direct means of mapping that event to the statement or setting that changed.

This inadequacy got me to thinking. The default trace is looking at a set of specific "events"; why wouldn't that set of events be available within Extended Events? It only seems logical! So I decided to query the event catalog and, lo and behold, I found just the event I was seeking – object_altered. Combine this with a recently used predicate (object_type = 'DATABASE') and we are well on our way to having just the trap to catch the source of these database changes red-handed.

It is easy enough to create this particular session. The event does not capture the "whodunnit" without a little extra prodding, so I added a couple of actions to get that information – sqlserver.nt_username, sqlserver.server_principal_name, sqlserver.client_hostname. Additionally, the event does not explicitly tell me what setting changed – just that some setting changed. For this, I decided to add the sql_text action so I could correlate the event to the actual setting being changed. Then, to cap it all off, I made sure the predicate specifies that we only care about database settings changes, as previously mentioned.
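A sketch of the session just described:

[codesyntax lang="tsql"]
CREATE EVENT SESSION DBSettingChanges
ON SERVER
ADD EVENT sqlserver.object_altered (
    ACTION (sqlserver.nt_username, sqlserver.server_principal_name,
            sqlserver.client_hostname, sqlserver.sql_text)
    WHERE (object_type = 'DATABASE')
)
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION DBSettingChanges ON SERVER STATE = START;
[/codesyntax]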

Running the session and then testing some settings changes should prove fruitful to capturing good info. Here are a few of the tests that I ran and the results of those tests (by querying the session data).
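Tests along these lines exercise the session:

[codesyntax lang="tsql"]
USE master;
GO
ALTER DATABASE AdventureWorks2014 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
ALTER DATABASE AdventureWorks2014 SET MULTI_USER;
GO
ALTER DATABASE AdventureWorks2014 SET READ_ONLY WITH NO_WAIT;
GO
ALTER DATABASE AdventureWorks2014 SET READ_WRITE WITH NO_WAIT;
[/codesyntax]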

Now to try and look at the results.
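Reading the ring buffer back out is a matter of shredding the target XML (a sketch):

[codesyntax lang="tsql"]
SELECT xed.event_data.value('(@timestamp)[1]', 'DATETIME2') AS EventTime,
       xed.event_data.value('(action[@name="server_principal_name"]/value)[1]', 'NVARCHAR(128)') AS WhoDidIt,
       xed.event_data.value('(action[@name="client_hostname"]/value)[1]', 'NVARCHAR(128)') AS FromWhere,
       xed.event_data.value('(action[@name="sql_text"]/value)[1]', 'NVARCHAR(MAX)') AS TheStatement
FROM (
    SELECT CONVERT(XML, st.target_data) AS target_data
    FROM sys.dm_xe_sessions s
        INNER JOIN sys.dm_xe_session_targets st
            ON s.address = st.event_session_address
    WHERE s.name = N'DBSettingChanges'
        AND st.target_name = N'ring_buffer'
) AS tab
    CROSS APPLY tab.target_data.nodes('RingBufferTarget/event') AS xed(event_data)
ORDER BY EventTime;
[/codesyntax]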

DB Change Data


There you have it! I have just been caught red-handed changing my AdventureWorks2014 database to single_user and multi_user.


For more ideas on settings and changes and so forth, Andy Yun (blog | twitter) has invited all to participate in TSQL2SDAY on this very topic. He has invited all to talk about their experiences with “default settings” and what you might change them to! You can read about it here. I have another article coming up that will fit just nicely with that. Let’s just call this a preview and maybe it can help you get those tsql2sday juices flowing.
