Database Drops in SQL 2012

In the previous article on this topic (which can be read here), I discussed the problem of having a database get dropped and the need to find out who dropped the database and when they dropped it.  In that article, I demonstrated how the information (at least some of it) could be found through the use of Extended Events.

What I forgot to mention is that the method I shared was for SQL Server 2014. While the events do exist in SQL Server 2012, a minor modification needs to be made in order to avoid the dreaded error message that reads something like:

Msg 25713, Level 16, State 23, Line 1
The value specified for event attribute or predicate source, “object_type”, event, “object_created”, is invalid.

I am sure many of us would rather not have to deal with such a terrible thing as an error when we want to do something that should just work. Well, here is the fix for that error if you tried to deploy that XE Session to a previous version (such as 2012).
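Here is a sketch of the 2012-friendly approach. The lookup query is the important piece; the session definition beneath it is only a template, with the session name and file path being illustrative and the numeric map_key coming from the lookup on your own instance:

-- Find the map_key that corresponds to the human-friendly 'DATABASE' map_value
SELECT mv.name, mv.map_key, mv.map_value
FROM sys.dm_xe_map_values AS mv
WHERE mv.name = N'object_type'
    AND mv.map_value = N'DATABASE';

-- Template only: replace <map_key> with the value returned above before running
/*
CREATE EVENT SESSION AuditDatabaseDDL ON SERVER
ADD EVENT sqlserver.object_created
(
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.client_hostname)
    WHERE ([object_type] = (<map_key>))
),
ADD EVENT sqlserver.object_deleted
(
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.client_hostname)
    WHERE ([object_type] = (<map_key>))
)
ADD TARGET package0.event_file
    (SET filename = N'C:\Database\XE\AuditDatabaseDDL.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON);
*/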

Do you see that change? Instead of using the map_value, in 2012 one must use the map_key. The change in 2014 was a good one, allowing us to use human-friendly terms instead of needing to look up the map_key for everything as was required in 2012.

In following the theme from the previous article, here is the rest of the setup for testing this XEvent session to see how it would trap that data and how to parse the data from the session.
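Assuming the session sketched above has been created (with the real map_key plugged in), starting it and giving it something to catch is as simple as this; the database name is purely illustrative:

ALTER EVENT SESSION AuditDatabaseDDL ON SERVER STATE = START;
GO

CREATE DATABASE DropMeForXE;    -- illustrative throwaway database
GO
DROP DATABASE DropMeForXE;
GO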

Enjoy!

Nice and simple for a test. This should be enough to have trapped some data for our XEvent session.  Let’s parse the data and take a look.
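One way to shred the file target output looks like this; the file path and session name are the illustrative ones from above, so adjust them to match whatever you deployed:

SELECT
    event_data.value('(event/@name)[1]', 'varchar(50)') AS event_name,
    event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
    event_data.value('(event/data[@name="database_name"]/value)[1]', 'varchar(128)') AS database_name,
    event_data.value('(event/action[@name="server_principal_name"]/value)[1]', 'varchar(128)') AS who_did_it,
    event_data.value('(event/action[@name="client_hostname"]/value)[1]', 'varchar(128)') AS from_host
FROM
(
    SELECT CAST(event_data AS xml) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\AuditDatabaseDDL*.xel', NULL, NULL, NULL)
) AS tab
ORDER BY event_time;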

With the XEvent session started, and a database that has been created and dropped, this last query will produce some data to help track who has been adding or removing databases from your SQL Instance.  If you have run the example code, you should see data very similar to what is shown in this next image.

[Image: xe_results]

In the attached image, I have separated the pair of statements for the DB create from the pair of statements for the DB drop (recall that I mentioned each will fire twice).  I hope this serves you well in your attempts to rein in control of your environment and discover who may be creating rogue databases or even dropping the databases that are essential to the business.

SQL Cruise Mediterranean+

Comments: 2 Comments
Published on: June 24, 2015

This is a post that is looooong overdue. I have intended to write some of this at least since the first SQL Cruise I attended back in 2013 (in the Caribbean). Now with three Cruises under the belt and plenty of awesome experiences, it is high time to follow through on those intentions.

[Image: Official_SQLCruise_2015]

For those that have not heard, SQL Cruise is a mix of training and vacation. You get a little (or a lot) of SQL and then you get a little (or a lot) of vacation mixed with the opportunity to see places you may not have otherwise visited. Last but certainly not least is the opportunity afforded to all attendees to meet other people they may have never otherwise gotten a chance to meet. This last benefit may be the most important facet of SQL Cruise in that the networking done during the cruise will last a long time and can certainly open a few doors if/when necessary.

One great example of this networking occurred on the first cruise I attended. In the meetings outside of the scheduled training, one of the other cruisers (he is now an MCM and MVP) asked a pretty important question about an issue within his work environment. A performance monitoring package they ran for all of their clients was causing some serious problems. The problems affected about 30% of all of the servers, which numbered over 1,500. The application on the affected servers would stop responding, and they would no longer receive metrics or alerts for conditions being raised.

This problem was significant enough that they engaged Microsoft and Microsoft engineers had been collecting metrics and logs for over six months (at the time). There was a series of try this and try that and all of it resulted in no change whatsoever. So this fellow cruiser brought the issue to the cruise with him. In chatting with the cruiser (oh, and he had to fly to the US from Europe in order to attend), I discovered that the symptoms he was enduring were quite similar to some things I had seen within SSIS packages for example. We discussed a quick fix which he took back to his employer.

Through a series of tests and deployments, this Cruiser and his employer rolled out the fix to all of the servers in the environment. This fix ended up saving them so much money in labor and other costs, that his employer sent him and his family (eight people in total) back to SQL Cruise in the Caribbean the following year. The ROI for this Cruiser and his employer was huge! Additionally, he and I have become friends thanks to the Cruise. This is the sort of stuff that defines SQL Cruise.

This year, we had the chance to repeat the Cruise by attending the Mediterranean version. This installment was a whirlwind of touring Europe. Some started in London before proceeding to Barcelona and then on to such places as Pompeii, Rome, Pisa, Cannes, Monaco, and Mallorca before returning to Barcelona to continue on to Berlin or London before returning home. For my part, we chose to start in Paris and then take the bullet train on to Barcelona.

[Image: Med Sunset]

The tourism was fast and furious. The training was faster and more furious. And in the end, the European / Mediterranean trip was gone before we knew it. All who attended surely walked away with great memories and having learned something (whether cultural, historic, or SQL in nature).

If you have the chance to attend a SQL Cruise, I would say do it. The training comes from the technical leads as well as the attendees in the rooms. Often, the tech lead will even defer to any of the other professionals in attendance. Especially in the case of Trace Flags as we learned this past week – in every session and in every office hours meeting. Just ask Grant Fritchey (blog | twitter) about it some time – he loves talking about Trace Flags.

Reading Extended Event File Session Data

Using Extended Events to trap/trace information allows the trapping of that information to various targets. One of the targets I will frequently tell people to use is the file target. The reasoning for this is the opportunity it affords to review the output at a later time, whether the event session is still running or not.

Along  with that recommendation, I also show some quick code to pull the data back from the session target (the file on disk). This code is written to dynamically pull the file information out of the XE Dynamic Management Views to make it a little easier on the user (and a bit less likely to have a typo). That code looks a bit like the following:
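The session name below is illustrative; it simply needs to match the session you want to read:

DECLARE @SessionName sysname = N'TrapDBDrops';    -- illustrative: use your session's name
DECLARE @FileName nvarchar(260);

-- Pull the file target's current file name/path from the XE DMVs
SELECT @FileName = CAST(t.target_data AS xml).value('(EventFileTarget/File/@name)[1]', 'nvarchar(260)')
FROM sys.dm_xe_session_targets AS t
INNER JOIN sys.dm_xe_sessions AS s
    ON s.address = t.event_session_address
WHERE s.name = @SessionName
    AND t.target_name = N'event_file';

-- Read the session data back from the file on disk
SELECT CAST(event_data AS xml) AS event_data
FROM sys.fn_xe_file_target_read_file(@FileName, NULL, NULL, NULL);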

This works really well when the session is running. It pulls the desired file path and name back from the DMVs and one is able to parse the session data. But what if that session is stopped?

Well, if the session is stopped, we have a problem. With a stopped session, the session above will not produce any results. The short of this is that SQL Server removes the entries from the sys.dm_xe_session_targets and sys.dm_xe_sessions DMVs. Due to that, there is a bit of a tweak to be able to query those files for the sessions that have been stopped.

To query those files, one could rewrite the previous query to something like this:
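The path here is illustrative; it just needs to point at wherever the .xel files were written:

-- The session is stopped, so the DMVs know nothing about it; point straight at the files on disk
SELECT CAST(event_data AS xml) AS event_data
FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\TrapDBDrops*.xel', NULL, NULL, NULL);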

Now, I bet you may be asking why you would ever stop an Extended Events session from running. After all, Extended Events is extremely lightweight and has little impact on the server, right? True as that may be, there is still a cost to tracing. If you only need a trace for a targeted time period, then why not limit the amount of tracing?

Looking at the code, you can see that I reverted to a less dynamic approach to read those event files. Instead of trying to get that info direct from the database, I am just telling SQL Server where to find the files on disk (note the filepath that is passed).

There you have it. A means to query these trace files whether the XE trace is running or stopped.

Execution Plans in Extended Events

Extended Events is a wonderful tool. Execution plans are a wonderful tool as well. Both are fantastic for troubleshooting. Combined, they can potentially be even better.

Or, they could just cause some hair loss and be a fairly complicated pain in your butt. Hopefully the combination of the two will be more useful than painful for you. And today, I will discuss one of the pains that could be an issue when combining these two tools. I will also discuss some options for getting around this pain point between XE and Execution Plans.

Let's start by taking a quick look at two Extended Events events that can expose execution plans for you. These two events are query_pre_execution_showplan and query_post_execution_showplan. Here are a couple of sample sessions set up to trap these events.
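Here is a sketch of the first session; the session name, file path, and the exact list of actions are illustrative:

CREATE EVENT SESSION TrapEstExecPlans ON SERVER
ADD EVENT sqlserver.query_pre_execution_showplan
(
    ACTION
    (
        sqlserver.client_app_name,
        sqlserver.database_name,
        sqlserver.plan_handle,
        sqlserver.session_id,
        sqlserver.sql_text
    )
)
-- both a file target and a ring_buffer target, to compare behavior between the two
ADD TARGET package0.event_file
    (SET filename = N'C:\Database\XE\TrapEstExecPlans.xel'),
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = OFF);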

And…
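A sketch of the post-execution counterpart follows; it differs only in the event it traps:

CREATE EVENT SESSION TrapActualExecPlans ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan
(
    ACTION
    (
        sqlserver.client_app_name,
        sqlserver.database_name,
        sqlserver.plan_handle,
        sqlserver.session_id,
        sqlserver.sql_text
    )
)
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = OFF);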

With these sample sessions ready, we need a quick description of what is happening. I have built the sessions (at least the TrapEstExecPlans session) to write to both a file target and the ring_buffer target. This was done to verify that the behavior is consistent between the different targets.

Also, I added a bunch of actions that are available to query, but the sample queries shown later will not use the data for those actions. This is only due to the examples I have chosen to share.

With those sessions created, go ahead and start them, then run a query like the following:
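Something like this will do. The session names match the sketches above, and the AdventureWorks database is purely illustrative, so swap in whatever you have handy:

ALTER EVENT SESSION TrapEstExecPlans ON SERVER STATE = START;
ALTER EVENT SESSION TrapActualExecPlans ON SERVER STATE = START;
GO

USE AdventureWorks2014;    -- illustrative database
GO
SELECT TOP (10)
    p.BusinessEntityID, p.FirstName, p.LastName
FROM Person.Person AS p
ORDER BY p.LastName;
GO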

After  having executed this query, I can start taking a look at the data captured. I will only query the TrapEstExecPlans session in this post because the behavior between the two sessions and events is consistent.

Keep in mind that I talked about having both a ring_buffer and a file target for this session. I am only going to share the query for the ring_buffer target because the behavior from the memory target and the file target is the same.
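A sketch of that ring_buffer query follows; the session name matches the sketch above, and the plan itself comes back in the showplan_xml field of the event:

SELECT
    xed.event_node.value('(@timestamp)[1]', 'datetime2') AS event_time,
    xed.event_node.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text,
    xed.event_node.query('(data[@name="showplan_xml"]/value/*)[1]') AS show_plan
FROM
(
    SELECT CAST(t.target_data AS xml) AS target_data
    FROM sys.dm_xe_session_targets AS t
    INNER JOIN sys.dm_xe_sessions AS s
        ON s.address = t.event_session_address
    WHERE s.name = N'TrapEstExecPlans'
        AND t.target_name = N'ring_buffer'
) AS src
CROSS APPLY src.target_data.nodes('RingBufferTarget/event') AS xed(event_node);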

If I take a look at the execution plans shared from the results of that query, I might see something like this:

[Image: Plan from XE]

If I look at the execution plan at runtime for that same query, I would see something like this:

[Image: Exec Plan]

Between the two plans, I have color coded the various attributes of the plan to pair the matching attributes and make it easier to spot the differences. In this case, the differences are in the first two nodes. The plan from Extended Events does not have the connection properties, and it is also missing a few other important things like the StatementType and StatementText. Looking at the plan from XE in the GUI, you would see something like this:

[Image: XE-plan]

Instead of this:

[Image: Execplan_graphic]

Those minor differences in the XML of the execution plans can be somewhat of a pain. Luckily, there is a fix for that. And the fix comes down to doing one of a few things already available to us in the XE session data. We can either look up the plan_handle that was trapped by the action, or we can look up the QueryPlanHash that is available in the XML of the execution plan data that has been trapped. With that information, one can likely retrieve the stored execution plan and recover the rest of the missing components of that execution plan.
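For example, with the plan_handle trapped by the action (or the QueryPlanHash pulled from the captured plan XML), a lookup along these lines can fetch the full plan from cache:

-- by the plan_handle captured via the XE action
DECLARE @plan_handle varbinary(64);    -- set this to the handle from the session data

SELECT qp.query_plan
FROM sys.dm_exec_query_plan(@plan_handle) AS qp;

-- or by the QueryPlanHash found in the captured plan XML
DECLARE @query_plan_hash binary(8);    -- set this to the QueryPlanHash value

SELECT qs.plan_handle, qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qs.query_plan_hash = @query_plan_hash;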

There is one other option and that is the action that traps the sql_text. The sql_text that is captured can lead us to understand what kind of plan (select, update, delete, etc) we are looking at from the XE session data.

So, while it is a bit of a nuisance that the two sources of execution plans do not produce the same plan, it is not the end of the world. There is still adequate information available in the XE session data to figure out the missing pieces of the execution plan.

Adventures with NOLOCK

Categories: News, Professional, SSC
Comments: 2 Comments
Published on: June 15, 2015

Some of the beauty of being a database professional is the opportunity to deal with our friend NOLOCK.  For one reason or another this query directive (yes I am calling it a directive* and not a hint) is loved and idolized by vendors, applications, developers, and upper management alike.  The reasons for this vary from one place to the next, as I have found, but it always seems to boil down to the perception that it runs faster.

And yes, queries do sometimes run faster with this directive.  That is, until they are found to be the head blocker, or until it turns out they don't run any faster because you can write good TSQL.  But we are not going to dive into those issues at this time.

A gem that I recently encountered with NOLOCK was rather welcome.  Not because of the inherent behavior or anomalies that can occur through the use of NOLOCK, but rather because of a discovery made while evaluating an execution plan.  Working with Microsoft SQL Server 2012 (SP1) – 11.0.3128.0 (X64), I came across something that I would rather see more consistently.  Let's take a look at this example execution plan:

[Image: NoLock]

First is a look at the plan to see if you can see what I saw.

[Image: Read Uncommitted]

And now, we can see it clear as day.  In this particular case, SQL Server decided to remind us that the use of this directive allows uncommitted reads to occur so it throws that directive into the query text of the execution plan as well.  This is awesome!  In short, it is a visual reminder that the use of the NOLOCK directive, while it may be fast at times, is a direct route to potentially bad results by telling the database engine to also read uncommitted data.

How cool is that?  Sadly, I could only reproduce it on this one version of SQL Server so far.  If you can reproduce that type of behavior, please share by posting to the comments which version you used and what you did.  For me, database settings and server settings had no effect on this behavior.  No trace flags were in use, so no effect there either.  One thing of note: in my testing, this did not happen when querying a table directly but only occurred when querying against a view (complexity and settings produced no difference in effect for me).
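If you want to poke at this yourself, a minimal repro sketch looks something like the following; the table and view names are illustrative and, given the version sensitivity noted above, your results may differ:

CREATE TABLE dbo.NoLockTest (ID int NOT NULL, SomeVal varchar(20) NULL);
GO
CREATE VIEW dbo.vNoLockTest
AS
SELECT ID, SomeVal
FROM dbo.NoLockTest;
GO

-- query the view with the directive, then inspect the query text shown in the execution plan
SELECT ID, SomeVal
FROM dbo.vNoLockTest WITH (NOLOCK);
GO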

* I would like to make it a goal for every database professional to call this a DIRECTIVE instead of a hint.  A hint implies that you may have a choice about the use of the option specified.  And while NOLOCK does not entirely eliminate locks in the queries, it is an ORDER to the optimizer to use it.  Therefore it is more of a directive than a mere suggestion.

Extended Events, Birkenstocks and SQL Server

TSQL Tuesday

I bring you yet another installment in the monthly meme called T-SQL Tuesday.  This is the 67th edition, and this time we have been given the opportunity to talk about something I really enjoy – Extended Events.

Props to Jes Borland (blog | twitter) for picking such an awesome topic. There is so much to cover with extended events, it’s like a green canvas ready for anything you can imagine.

Birkenstocks?

[Image: birkenstocks]

I will save the explanation here for later when hopefully it all ties together for you (well, at least buckles up).

[Image: treehugger]

While that is all fun and playful, let's get down to the serious side now. One of my favorite quick fixes as a consultant is to come in and find that the server is set to "environment friendly" / "green" / "treehugger" mode. You can read more about power-saving CPUs from my friend Wayne Sheffield here.

That poor old CPU thing has been beat up pretty good. But how can we tell if the server is running in that mode if the only thing we can do is look in SQL Server (can't install CPU-Z, or don't have adequate permissions on the server to see Windows settings – just play along)? Lucky for us there is this cool thing called Extended Events.

In SQL Server we have this cool event called perfobject_processor. This particular event captures some really cool metrics.  One such metric is the frequency. The frequency is an indicator of whether the server has the CPU set to Balanced, High Performance, or Power Saver. With that in mind, let's create a session to trap this data and experiment a little with the CPU settings.
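Here is a sketch of such a session. The session name is illustrative, and since the perfobject events are fairly new, it is worth confirming the event (and the package that owns it) on your build first:

-- confirm the event exists and which package owns it on this build
SELECT p.name AS package_name, o.name AS event_name, o.description
FROM sys.dm_xe_objects AS o
INNER JOIN sys.dm_xe_packages AS p
    ON p.guid = o.package_guid
WHERE o.name = N'perfobject_processor';

-- illustrative session; the cpu_id and numa_node_id actions are optional
CREATE EVENT SESSION ProcessorUsage ON SERVER
ADD EVENT sqlserver.perfobject_processor
(
    ACTION (sqlos.cpu_id, sqlos.numa_node_id)
)
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = OFF);
GO

ALTER EVENT SESSION ProcessorUsage ON SERVER STATE = START;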

Well, that looks amazingly easy and straightforward. I am telling the session to trap additional CPU information such as numa_node_id and cpu_id. You can eliminate those if you wish. They may be beneficial when trying to identify whether there is an issue on a specific processor, though.

To experiment, I will break out the age-old argument provoker – xp_cmdshell. I will use it to cycle through each of the power saving settings and look at the results. Here is the bulk of the script all together.
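This sketch assumes xp_cmdshell is enabled; the GUIDs are the default Windows power scheme GUIDs, which you can confirm with powercfg /list:

-- requires xp_cmdshell; enable it via sp_configure if needed
EXEC master..xp_cmdshell 'powercfg /setactive 381b4222-f694-41f0-9685-ff5bb260df2e';  -- Balanced
WAITFOR DELAY '00:02:00';    -- give the session time to collect samples

EXEC master..xp_cmdshell 'powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c';  -- High performance
WAITFOR DELAY '00:02:00';

EXEC master..xp_cmdshell 'powercfg /setactive a1841308-3541-4fab-bc81-f71556f20b4a';  -- Power saver
WAITFOR DELAY '00:02:00';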

And now for the XE Parser.
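Rather than guess at the exact field names the event exposes, this sketch simply unpivots every data element of every event out of the ring_buffer (the session name matches the sketch above):

SELECT
    ev.event_node.value('(@timestamp)[1]', 'datetime2') AS event_time,
    d.data_node.value('(@name)[1]', 'nvarchar(128)') AS field_name,
    d.data_node.value('(value)[1]', 'nvarchar(128)') AS field_value
FROM
(
    SELECT CAST(t.target_data AS xml) AS target_data
    FROM sys.dm_xe_session_targets AS t
    INNER JOIN sys.dm_xe_sessions AS s
        ON s.address = t.event_session_address
    WHERE s.name = N'ProcessorUsage'
        AND t.target_name = N'ring_buffer'
) AS src
CROSS APPLY src.target_data.nodes('RingBufferTarget/event') AS ev(event_node)
CROSS APPLY ev.event_node.nodes('data') AS d(data_node)
ORDER BY event_time;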

If I parse through the extended event after each change of the power scheme, I would be able to see the effect of each scheme change in the event session as well as in a tool such as Resource Monitor. Here is what I was able to see with each of the changes.

Balanced Saver

From Resource Monitor:

[Image: balanced_cpu]

And the XE data:

[Image: balanced_results]

This is my default power scheme. On my laptop, this is ok. For a production SQL server, this will cause problems.

High Performance

[Images: fullpower, fullpower_results]

Quickly, you should be able to spot that the blue line in the graph and the numeric values from the XE session correlate to the processor giving you everything it has. This is good for SQL Server.

Power Saver

[Image: powersaver_cpu]

See how that blue line falls off sharply?

 

[Image: powersaver_results]

 

Supporting that steep fall in the graph, we can see that the XE trapped the percent of max frequency as 36%. You might be lucky and attain 36%. Don’t be surprised if you see something even lower. Please don’t use this setting on a production box – unless you want to go bald.

We can see that we have great tools via Extended Events to help troubleshoot various problems. As I said, this is one of my favorites because it is a very common problem and a very easy fix.

SQL Server is not GREEN! Do not put Birkenstocks on the server and try to turn the server into a tree hugger. It just won't work out that well. Set your file servers or your print servers to be more power conscious, but this is something that will not work well on SQL Server.

Final thought: if you have not figured out the Birkenstocks, well, it is a common stereotype in some areas that environmentalists wear woolly socks and Birkenstocks.

No wool socks were harmed in the making of this blog post!

Database In Recovery Update

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: May 18, 2015

Many moons ago, I published a post that contained a script to aid in easing the anxiety that comes when a database is “In Recovery”. When I pulled that script out to try and use it on a SQL 2012 box, I got a nasty error.  Thanks to that nasty error, I have updated the script to now work on SQL 2012 and SQL 2014.

If you would like to first read the previous post, I invite you to click this link.

Here is the version of the script that will work for SQL 2012 and 2014.
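As a minimal sketch of the underlying idea (this is not the full script), the recovery progress messages that SQL Server writes to the error log can be checked like this:

-- parameters: log number (0 = current), log type (1 = SQL error log), then up to two search strings
EXEC master.sys.xp_readerrorlog 0, 1, N'Recovery of database', NULL;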

 

Monitoring SQL Server

[Image: TSQL2sDay150x150]

Welcome to the fabulous world of blog parties, SQL Server and what has been the longest running SQL Server related meme in the blogosphere – TSQLTuesday.

This month we are hosted by Cathrine Wilhelmsen (blog | twitter) from Norway. And interestingly, Cathrine has asked us to talk about monitoring SQL Server.  Wow! Talk about a HUGE topic to cover in such a short space. Well, let's give it a go.

I am going to try and take this in a bit of a different direction, and we shall see if I have any success with it or not.

Direction the First

Monitoring is a pretty important piece of the database puzzle. Why? Well, because you want to find out that something is happening before the end-users do. Or do you? It is a well-established practice at many shops to allow the end-users to be the monitoring solution. How does this work, you ask?

It works by waiting for an end-user to experience an error or some unexpected slowness. Then the user will either call you (the DBA), your manager, the company CEO, or (if you are lucky) the helpdesk. Then the user will impatiently wait while you try to figure out what the problem is.

The pros of this solution involve a much lower cost of implementation.  The cons, well, we won't talk about those because I am trying to sell you on this idea. No, in all seriousness, the cons of this approach could involve a lot of dissatisfaction, job loss, outages, delays in processing, delays in paychecks, dizziness, fainting, shortness of breath, brain tumors, and rectal bleeding.  Oh wait, those last few are more closely related to trial medications for <insert ailment here>.

If you are inclined to pursue this type of monitoring – may all the hope, prayers, faith and luck be on your side that problems do not occur.

New Direction

This methodology is also rather cheap to implement.  The risk is relatively high as well, and I have indeed seen this implemented. In this new approach, we will require that the DBA eyeball monitor the databases all day and all night.

At the DBA’s disposal is whatever is currently available in SQL Server to perform the monitoring.  It is preferred that only Activity Monitor and Profiler be used to perform these duties. However, the use of sp_who2 and the DMVs is acceptable for this type of duty.

The upside to this is that you do not incur any additional cost for monitoring over what has been allocated for the salary of the DBA. This is an easy and quick implementation and requires little knowledge transfer or ability.

The downside here is – well – look at the problems from the last section and then add the glassed-over stoner look of the 80s from staring at the monitor all day.

If you have not had the opportunity to use this type of monitoring – consider how lucky you are.  This has been mandated by several companies (yes I have witnessed that mandate).

Pick your Poison

Now we come to a multi-forked path.  Every path at this level leads to a different tool set.  All of these tools bear different costs and require different levels of knowledge.

The pro here is that these come with a lower risk of those suspicious symptoms from the previous two options. The con is that it will require a little bit more grey matter to configure and implement.

You can do anything you would like at this level so long as it involves automation.  You should configure alerts, you should establish baselines, you should establish some level of history for what has been monitored and discovered. My recommendation here is to know your data and your environment and then to create scripts to cover your bases.

One last thought, no matter what solution you decide to implement, you should also monitor the monitor. If the DBA collapses from long hours of eyeball monitoring, who will be there to pick him/her up to resume the monitoring?

If you opt to not implement any of these options, or if you opt to implement either of the first two options, I hope you have dusted off your resume!

Extended Events and Data Types

Comments: No Comments
Published on: April 14, 2015

[Image: TSQL2sDay150x150]

Today is another one of those installments in the long-running TSQL Party held monthly (a.k.a TSQL2SDAY).

This month we have an open invitation from Mike Donnelly (blog | twitter), asking us to talk about something new we have learned and then to teach about it. You can read the invitation in Mike’s own words on his blog.

Coincidentally, the topic is pretty straightforward and easy on the one hand, while somewhat difficult on the other.  Mike said: "The topic this month is straight forward, but very open ended. You must learn something new and then write a blog post explaining it." I find the topic to be difficult because I will usually blog about a topic when I have learned something new. On the other hand, sharing new stuff is pretty straightforward and enjoyable.  Enter the brain split!

So, what have I learned recently?

Quite a bit.  But what would I really like to share on this occasion?

For today, I would like to share more information about extended events.  XEvents are great.  There is a wealth of information to be garnered from XEvents.  Furthermore, XEvents provide a great opportunity to keep learning.

While researching some events to help troubleshoot a specific issue, it dawned on me that there was some info that I had only looked at when I was looking at specific events.  I started wondering how much of that info was out there.  So here I will share some of this information that is available to you via queries within SQL Server.  Much of this info is attainable through the re-purposing of some scripts I shared previously – here.

Custom Data Types

This wasn't too much of a surprise because I had seen them throughout and had taken advantage of the custom data types to get better information.  But I might consider these custom data types to be more of the EAV model coming through than true custom data types.  One can expose the custom data types through an evaluation of the data in the sys.dm_xe_map_values DMV.  Let's take a look at a script that would lay the groundwork to see these data types.
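A sketch of such a script follows; it simply ties each event column to the map (the custom data type) that backs it:

SELECT o.name AS event_name,
    oc.name AS column_name,
    oc.type_name AS custom_data_type
FROM sys.dm_xe_object_columns AS oc
INNER JOIN sys.dm_xe_objects AS o
    ON o.name = oc.object_name
    AND o.package_guid = oc.object_package_guid
WHERE o.object_type = N'event'
    AND EXISTS (SELECT 1 FROM sys.dm_xe_map_values AS mv WHERE mv.name = oc.type_name)
ORDER BY o.name, oc.name;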

Evaluating this data, one will see that in addition to the "standard" data types such as integer, there will be a "wait_types" data type.  This data type maps to all of the wait types available through Extended Events.  Additionally, the event that is associated with each of these custom data types is exposed through this query.  When getting ready to use an extended event, knowing the kinds of data that will be exposed through a data point in the session will make the session data more usable.  Knowing there is a custom data type (yes, it is really just a key-value pair) can be extremely helpful.

Collection Flags

Many of the available events have “customizable” collection flags exposed.  Understanding that these collection flags can be on or off is essential to saving some hair.  Not all data is automatically collected for all events.  Some prime examples of such events that do not automatically collect certain pieces of data are sp_statement_completed and object_created.  The nice thing about these flags is that they have a value of “customizable” in the column_type field.  Another good thing with these flags is that the description field gives a little documentation on what the behavior should be for the “on” and “off” states.
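A quick way to see which events expose these flags, what each defaults to, and the documentation in the description field is a query along these lines:

SELECT oc.object_name AS event_name,
    oc.name AS flag_name,
    oc.column_value AS default_value,
    oc.description
FROM sys.dm_xe_object_columns AS oc
WHERE oc.column_type = N'customizable'
ORDER BY oc.object_name, oc.name;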

There is a good reason that some of those may be off by default.  The addition of this information may cause an additional load or may be information overload.  It is up to the consumer to determine if the data is going to be of significant importance.  Once determined, enable or disable the flag as appropriate.

These queries provide a good amount of information about the extent of custom data types as well as the collection flags that may be available to use when creating event sessions in SQL Server.  Understanding that this data and these options are there is important to capturing better event info.

Audit who Dropped the Database

Categories: News, Professional, Scripts, SSC
Comments: 7 Comments
Published on: April 7, 2015

In the first article on this topic (which can be read here), I discussed the problem of having a database get dropped and the need to find out who dropped the database and when they dropped it.  In that article, I demonstrated how the information (at least some of it) could be found through querying the default trace.  It is a good enough solution if you have not disabled the default trace.

On the other hand, what if you need to do something more flexible?  What if you wanted to track this down through a more robust tool such as extended events?  XEvents has what you need to be able to properly track these types of events.  In this article, I share how to use Extended Events to capture this kind of audit data.

There are a few things to note with this event session.  First is that I am trapping two separate events: sqlserver.object_deleted and sqlserver.object_created.  The next important note is that I have to enable a flag on both events to trap the database name.  That is done like this: SET collect_database_name = (1).  Last note is something that should be noticed after enabling the session and performing a couple of trials.  The events I am using will fire twice for every DROP or CREATE operation.  This happens due to the ddl_phase for each.  There is one event fired for the start of the event and then another event when the event commits or hits a rollback.  Because of this, I am outputting the ddl_phase in my query to read from the session data.  Let’s run a quick test and see how this data can be queried.
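Here is a sketch of that session; the session name, the file path, and the couple of who/where actions are illustrative additions:

CREATE EVENT SESSION AuditDatabaseDDL ON SERVER
ADD EVENT sqlserver.object_created
(
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.client_hostname)
    WHERE ([object_type] = 'DATABASE')
),
ADD EVENT sqlserver.object_deleted
(
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.client_hostname)
    WHERE ([object_type] = 'DATABASE')
)
ADD TARGET package0.event_file
    (SET filename = N'C:\Database\XE\AuditDatabaseDDL.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON);
GO

ALTER EVENT SESSION AuditDatabaseDDL ON SERVER STATE = START;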

Also of importance is to note the file path for the output file. If the path does not exist or you do not have permissions to the directory, you will get an error message and the session will not be created.
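With the session running, a throwaway database is enough to make both events fire (the database name is illustrative):

CREATE DATABASE DropMeForXE;
GO
DROP DATABASE DropMeForXE;
GO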

Nice and simple for a test. This should be enough to have trapped some data for our XEvent session.  Let’s parse the data and take a look.
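One way to shred the file target output, including the ddl_phase so the begin and commit rows can be told apart, looks like this (the file path matches the illustrative session above):

SELECT
    event_data.value('(event/@name)[1]', 'varchar(50)') AS event_name,
    event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
    event_data.value('(event/data[@name="ddl_phase"]/text)[1]', 'varchar(20)') AS ddl_phase,
    event_data.value('(event/data[@name="database_name"]/value)[1]', 'varchar(128)') AS database_name,
    event_data.value('(event/action[@name="server_principal_name"]/value)[1]', 'varchar(128)') AS who_did_it,
    event_data.value('(event/action[@name="client_hostname"]/value)[1]', 'varchar(128)') AS from_host
FROM
(
    SELECT CAST(event_data AS xml) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\AuditDatabaseDDL*.xel', NULL, NULL, NULL)
) AS tab
ORDER BY event_time;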

With the XEvent session started, and a database that has been created and dropped, this last query will produce some data to help track who has been adding or removing databases from your SQL Instance.  If you have run the example code, you should see data very similar to what is shown in this next image.

[Image: xe_results]

In the attached image, I have separated the pair of statements for the DB create from the pair of statements for the DB drop (recall that I mentioned each will fire twice).  I hope this serves you well in your attempts to rein in control of your environment and discover who may be creating rogue databases or even dropping the databases that are essential to the business.  Enjoy!!
