Database In Recovery Update

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: May 18, 2015

Many moons ago, I published a post that contained a script to help ease the anxiety that comes when a database is “In Recovery”. When I pulled that script out to use it on a SQL 2012 box, I got a nasty error. Thanks to that nasty error, I have updated the script to work on SQL 2012 and SQL 2014.

If you would like to first read the previous post, I invite you to click this link.

Here is the version of the script that will work for SQL 2012 and 2014.
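The script itself did not survive in this copy of the post, but a minimal sketch of the approach follows: read the recovery progress messages out of the error log with xp_readerrorlog. Note the N'' Unicode literal for the search string; xp_readerrorlog on SQL 2012 and later rejects non-Unicode search parameters, a common class of error when reusing pre-2012 scripts. This is an illustration, not the author's original script.

-- Minimal sketch: surface "Recovery of database..." progress messages.
DECLARE @ErrLog TABLE
(
    LogDate     DATETIME,
    ProcessInfo VARCHAR(64),
    LogText     VARCHAR(MAX)
);

INSERT INTO @ErrLog (LogDate, ProcessInfo, LogText)
EXEC sys.xp_readerrorlog 0, 1, N'Recovery of database', NULL;

SELECT LogDate, LogText
FROM @ErrLog
ORDER BY LogDate DESC;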

 

Monitoring SQL Server

Welcome to the fabulous world of blog parties, SQL Server, and what has been the longest-running SQL Server related meme in the blogosphere – TSQLTuesday.

This month we are hosted by Cathrine Wilhelmsen (blog | twitter) from Norway. And interestingly, Cathrine has asked us to talk about monitoring SQL Server. Wow! Talk about a HUGE topic to cover in such a short space. Well, let’s give it a go.

I am going to try and take this in a bit of a different direction, and we shall see if I have any success with it or not.

Direction the First

Monitoring is a pretty important piece of the database puzzle. Why? Well, because you want to find out that something is happening before the end-users do. Or do you? It is a well established practice at many shops to allow the end-users to be the monitoring solution. How does this work, you ask?

It works by waiting for an end-user to experience an error or some unexpected slowness. Then the user will either call you (the DBA), your manager, the company CEO, or (if you are lucky) the helpdesk. Then, the user will impatiently wait for you to figure out what the problem is.

The pros of this solution involve a much lower implementation cost. The cons, well we won’t talk about those because I am trying to sell you on this idea. No, in all seriousness, the cons of this approach could involve a lot of dissatisfaction, job loss, outages, delays in processing, delays in paychecks, dizziness, fainting, shortness of breath, brain tumors, and rectal bleeding. Oh wait, those last few are more closely related to trial medications for <insert ailment here>.

If you are inclined to pursue this type of monitoring – may all the hope, prayers, faith and luck be on your side that problems do not occur.

New Direction

This methodology is also rather cheap to implement. The risk is relatively high as well, and I have indeed seen this implemented. In this new approach, we require the DBA to eyeball-monitor the databases all day and all night.

At the DBA’s disposal is whatever is currently available in SQL Server to perform the monitoring.  It is preferred that only Activity Monitor and Profiler be used to perform these duties. However, the use of sp_who2 and the DMVs is acceptable for this type of duty.
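For illustration only, here is a minimal sketch of the kind of ad hoc DMV check such a DBA might run on repeat (the column list is my own choice, not a prescribed one):

-- Eyeball monitoring, DMV edition: what is running right now?
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.wait_time,
       DB_NAME(r.database_id) AS database_name,
       t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;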

The upside to this is that you do not incur any additional cost for monitoring over what has been allocated for the salary of the DBA. This is an easy and quick implementation and requires little knowledge transfer or ability.

The downside here is – well – look at the problems from the last section and then add the glassed-over stoner look of the 80s from staring at the monitor all day.

If you have not had the opportunity to use this type of monitoring – consider how lucky you are. This has been mandated by several companies (yes, I have witnessed that mandate).

Pick your Poison

Now we come to a multi-forked path. Every path at this level leads to a different tool set. All of these tools bear different costs and require different levels of knowledge.

The pro here is that these come with a lower risk of those suspicious symptoms from the previous two options. The con is that it will require a little bit more grey matter to configure and implement.

You can do anything you would like at this level so long as it involves automation.  You should configure alerts, you should establish baselines, you should establish some level of history for what has been monitored and discovered. My recommendation here is to know your data and your environment and then to create scripts to cover your bases.
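As one small example of that automation, here is a sketch of a SQL Agent alert on severity 17 errors (the alert name and response delay are illustrative values); you would pair this with an operator notification:

-- Minimal sketch: one automated alert as a starting point.
EXEC msdb.dbo.sp_add_alert
     @name = N'Severity 017 - Insufficient Resources',
     @severity = 17,
     @enabled = 1,
     @delay_between_responses = 60,
     @include_event_description_in = 1;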

One last thought, no matter what solution you decide to implement, you should also monitor the monitor. If the DBA collapses from long hours of eyeball monitoring, who will be there to pick him/her up to resume the monitoring?

If you opt to not implement any of these options, or if you opt to implement either of the first two options, I hope you have dusted off your resume!

Extended Events and Data Types

Comments: No Comments
Published on: April 14, 2015


Today is another one of those installments in the long-running TSQL Party held monthly (a.k.a. TSQL2SDAY).

This month we have an open invitation from Mike Donnelly (blog | twitter), asking us to talk about something new we have learned and then to teach about it. You can read the invitation in Mike’s own words on his blog.

The topic is both pretty straightforward and easy on the one hand, while somewhat difficult on the other. Mike said: “The topic this month is straight forward, but very open ended. You must learn something new and then write a blog post explaining it.” I find the topic difficult because I will usually blog about a topic when I have learned something new. On the other hand, sharing new stuff is pretty straightforward and enjoyable. Enter the brain split!

So, what have I learned recently?

Quite a bit.  But what would I really like to share on this occasion?

For today, I would like to share more information about extended events.  XEvents are great.  There is a wealth of information to be garnered from XEvents.  Furthermore, XEvents provide a great opportunity to keep learning.

While researching some events to help troubleshoot a specific issue, it dawned on me that there was some info that I had only looked at when I was looking at specific events.  I started wondering how much of that info was out there.  So here I will share some of this information that is available to you via queries within SQL Server.  Much of this info is attainable through the re-purposing of some scripts I shared previously – here.

Custom Data Types

This wasn’t too much of a surprise because I had seen them throughout and taken advantage of the custom data types to get better information.  But I might consider these custom data types to be more of the EAV model coming through than custom data types.  One can expose the custom data types through an evaluation of data in the map_values DMV.  Let’s take a look at a script that would lay the groundwork to see these data types.
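The groundwork script is not reproduced here; a sketch that gets at the same information by joining the event column metadata to the map values (so the owning event is exposed alongside each custom type) might look like this:

-- Expose the custom (map) data types and the events that use them.
SELECT oc.object_name AS event_name,
       oc.name        AS column_name,
       oc.type_name   AS data_type,
       mv.map_key,
       mv.map_value
FROM sys.dm_xe_object_columns AS oc
JOIN sys.dm_xe_map_values AS mv
    ON  oc.type_name = mv.name
    AND oc.type_package_guid = mv.object_package_guid
WHERE oc.column_type = 'data'
ORDER BY oc.object_name, oc.name, mv.map_key;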

Evaluating this data, one will see that in addition to the “standard” datatypes such as integer, there will be a “wait_types” data type.  This data type will map to all of the wait types available through extended events.  Additionally, the event that is associated to each of these custom data types is exposed through this query.  When getting ready to use an extended event, knowing the kinds of data that will be exposed through a data point in the session will make the session data more usable.  Knowing there is a custom data type (yes, it is really just a key value pair), can be extremely helpful.

Collection Flags

Many of the available events have “customizable” collection flags exposed.  Understanding that these collection flags can be on or off is essential to saving some hair.  Not all data is automatically collected for all events.  Some prime examples of such events that do not automatically collect certain pieces of data are sp_statement_completed and object_created.  The nice thing about these flags is that they have a value of “customizable” in the column_type field.  Another good thing with these flags is that the description field gives a little documentation on what the behavior should be for the “on” and “off” states.
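A quick sketch of how to list those customizable flags, with their defaults and the documentation in the description field:

-- List the "customizable" collection flags and their documented behavior.
SELECT oc.object_name  AS event_name,
       oc.name         AS flag_name,
       oc.column_value AS default_value,
       oc.description
FROM sys.dm_xe_object_columns AS oc
WHERE oc.column_type = 'customizable'
ORDER BY oc.object_name, oc.name;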

There is a good reason that some of those may be off by default.  The addition of this information may cause an additional load or may be information overload.  It is up to the consumer to determine if the data is going to be of significant importance.  Once determined, enable or disable the flag as appropriate.

These queries provide a good amount of information about the extent of custom data types as well as the collection flags that may be available to use when creating event sessions in SQL Server.  Understanding that this data and these options are there is important to capturing better event info.

Audit who Dropped the Database

Categories: News, Professional, Scripts, SSC
Comments: 2 Comments
Published on: April 7, 2015

In the first article on this topic (which can be read here), I discussed the problem of having a database get dropped and the need to find out who dropped the database and when they dropped it.  In that article, I demonstrated how the information (at least some of it) could be found through querying the default trace.  It is a good enough solution if you have not disabled the default trace.

On the other hand, what if you need to do something more flexible?  What if you wanted to track this down through a more robust tool such as extended events?  XEvents has what you need to be able to properly track these types of events.  In this article, I share how to use Extended Events to capture this kind of audit data.

There are a few things to note with this event session. First, I am trapping two separate events: sqlserver.object_deleted and sqlserver.object_created. Next, I have to enable a flag on both events to trap the database name; that is done like this: SET collect_database_name = (1). The last note is something you will discover after enabling the session and performing a couple of trials: the events I am using fire twice for every DROP or CREATE operation. This happens due to the ddl_phase of each event; one event fires at the start of the operation and another when it commits or hits a rollback. Because of this, I am outputting the ddl_phase in my query that reads the session data. Let’s run a quick test and see how this data can be queried.

Also of importance is to note the file path for the output file. If the path does not exist or you do not have permissions to the directory, you will get an error message and the session will not create.
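The session definition did not survive in this copy of the post; a hedged sketch along the lines described (the session name, actions, and file path are my illustrative assumptions), followed by a quick create/drop trial, would be:

-- Sketch only: audit session for create/drop activity. Adjust the file
-- path to a directory that exists and that the service account can write to.
-- Add a predicate on object_type if you want to limit this to databases.
CREATE EVENT SESSION [AuditDBDropCreate] ON SERVER
ADD EVENT sqlserver.object_created (
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.client_hostname)
),
ADD EVENT sqlserver.object_deleted (
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name, sqlserver.client_hostname)
)
ADD TARGET package0.event_file (
    SET filename = N'C:\Database\XE\AuditDBDropCreate.xel'
)
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [AuditDBDropCreate] ON SERVER STATE = START;
GO

-- Quick trial: create and drop a throwaway database to fire the events.
CREATE DATABASE AuditDropTest;
GO
DROP DATABASE AuditDropTest;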

Nice and simple for a test. This should be enough to have trapped some data for our XEvent session.  Let’s parse the data and take a look.
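A sketch of such a parse query (the path must match the event_file target above):

-- Read and shred the event_file output.
SELECT x.event_data.value('(event/@name)[1]', 'varchar(50)') AS event_name,
       x.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
       x.event_data.value('(event/data[@name="database_name"]/value)[1]', 'nvarchar(128)') AS database_name,
       x.event_data.value('(event/data[@name="ddl_phase"]/text)[1]', 'varchar(20)') AS ddl_phase,
       x.event_data.value('(event/action[@name="server_principal_name"]/value)[1]', 'nvarchar(128)') AS principal_name
FROM (
    SELECT CONVERT(XML, t.event_data) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\AuditDBDropCreate*.xel',
                                         NULL, NULL, NULL) AS t
) AS x;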

With the XEvent session started, and a database that has been created and dropped, this last query will produce some data to help track who has been adding or removing databases from your SQL Instance.  If you have run the example code, you should see data very similar to what is shown in this next image.

[Image: xe_results]

In the attached image, I have separated the pair of statements for the DB create from the pair of statements for the DB drop (recall that I mentioned each will fire twice). I hope this serves you well in your attempts to rein in control of your environment and discover who may be creating rogue databases or even dropping the databases that are essential to the business. Enjoy!!

Last Execution of a Proc

SQL Server is full of good stuff.  There are plenty of features to be used.  Plenty of applications to help it.  And there is even plenty of metadata within SQL Server to help you better understand your data and the queries that are being run.

It just so happens that a couple of clients requested some information on this additional metadata. Both of the clients wanted something just a little different from the other. After a little bit of thought, it became pretty clear that what they wanted was definitely available within SQL Server. The first client simply wanted to know the last time a procedure had been executed. That is relatively easy enough – granted the procedure has been executed, and granted the execution statistics have not been evicted.

The second client also wanted to know some information about the execution of a stored procedure. But this time, they wanted to get the execution plan. There are a few ways to trap an execution plan. You could run a trace, run an XE session, or execute the query and grab the plan. But what if you didn’t already have an XE session or a trace running, and the client does not want you to execute the query to trap that plan?

Well, that is not a problem because SQL Server stores this stuff.  As long as the query and plan have not been evicted from the plan cache then you can get the metadata you desire.

Metadata to the Rescue

The metadata that we seek happens to be accessible through the use of several dynamic management views. These are sometimes called dynamic management objects and are great for getting to know your data and environment. This time around, I am mostly interested in the following dynamic management objects: sys.dm_exec_query_stats, sys.dm_exec_cached_plans, sys.dm_exec_sql_text, and sys.dm_exec_query_plan. I am using these objects because I want to trap execution stats along with the query SQL statement as well as the cached plan for that query.

So this is what I came up with to aid each of the clients in getting the data they sought.
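The exact script is not shown in this copy; a sketch over those same DMVs (the procedure name is a placeholder) returns the last execution time, the statement text, and the cached plan:

-- Last execution time, use counts, text, and cached plan for a procedure.
SELECT OBJECT_NAME(st.objectid, st.dbid) AS proc_name,
       qs.last_execution_time,
       qs.execution_count,
       cp.usecounts,
       st.text AS sql_text,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
JOIN sys.dm_exec_cached_plans AS cp
    ON cp.plan_handle = qs.plan_handle
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE OBJECT_NAME(st.objectid, st.dbid) = N'MyProc'   -- placeholder name
ORDER BY qs.last_execution_time DESC;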

Pretty simple and straightforward. Now, I have this quick query to find the last time a proc was run, as well as a means to trap the execution plan for that query. If I run that query, I would be able to see something like the following.

[Image: Capture]

I hope this is useful to you and hope you enjoy.

Trapping Online Index Operations

Categories: News, Professional, SSC
Comments: No Comments
Published on: January 29, 2015

Recently I wrote an article about Capturing Online Index Operations. In that article, I discussed a problem that I had encountered. Well, there were multiple problems. One was an issue with a vendor app that had some hidden module performing online index defrags, which was causing corruption in a couple of indexes every night.

The second of the issues was not vendor related but had more to do with the pain I was experiencing in getting the XEvent session I was creating to work properly. The problem, as I had concluded, was that it would not work with an asynchronous file target. As it turns out, I finally got it working.

Today, I just want to share that solution.  Being able to store the results to a file target offers a few benefits (less memory consumed and persisted data) that makes it far more ideal for this problem than to use the ring buffer.

For more background on the full issue and a demonstration on the output of the data from this session, please visit the original post on SQL Solutions Group.

And for the query to parse the data that is stored, you can use something like this next query.
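A sketch of such a parser (payload field names are assumptions; check sys.dm_xe_object_columns for the exact list on your build):

-- Shred the .xel output of the online index session.
SELECT x.event_data.value('(event/@name)[1]', 'varchar(60)') AS event_name,
       x.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
       DB_NAME(x.event_data.value('(event/data[@name="database_id"]/value)[1]', 'int')) AS database_name,
       x.event_data.value('(event/data[@name="object_id"]/value)[1]', 'int') AS object_id,
       x.event_data.value('(event/data[@name="index_id"]/value)[1]', 'int') AS index_id
FROM (
    SELECT CONVERT(XML, t.event_data) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\*.xel', NULL, NULL, NULL) AS t
) AS x;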

I hope you find this useful. But before you proceed, I do have a caution to throw out there. This session is dependent on the file system directory C:\Database\XE existing. If it does not, you will get a nasty error message. Despite the error message, the session will still be created. And since the session is created, when you try to run the next query you will get an additional error, because the file does not exist since the path does not exist.

Why does the session still create despite an error, you ask? Well, the answer is the existence of the ring_buffer target that is also specified. I left that in there despite not needing it. You can safely remove the ring_buffer target from this XEvent session. If you remove that ring_buffer target and do not have the directory previously noted, then the session will not be created once the error is reached.

Minor ailments and Healthy SQL

Comments: 1 Comment
Published on: January 13, 2015

One of the things that DBAs love to do is keep their servers running and healthy. A healthy server, after all, is your ticket to a stress-free day and a full night’s sleep. Granted, this is not a guarantee, but it sure helps make life easier.

We are always looking for the big ticket items to keep the servers tuned and purring.  But from time to time, like with humans, it’s not Ebola that takes us down but the little sniffles and minor aches and pains that keep us from doing our best.

This month as a part of the TSQL Tuesday party, Robert Pearl (blog | twitter) has asked us to write about Healthy SQL and the things that make SQL go yum.

Nagging Cough

Like that tickle in the back of the throat, that makes you cough now and again, there is an occasional tickle that can bug SQL Server and cause some pain here and there.  I have written about this nagging cough in the past.

The problem with this ailment is that since it is not always a cold, or always present, it is oftentimes missed and frequently hard to diagnose. I talked about this previously as one of those things that should be checked from time to time. This is that nagging synonym* cough. You can read about it here.

That’s great, we can take a little cough syrup, fix some synonyms and feel a lot better about SQL Server in the morning.

Aching Joints

Ever have one of those trick knees that decides to give out on you out of the blue? You might be walking up the stairs (or down the stairs) and suddenly the knee is gone and you end up flat on your face. And it could be great most of the time; it’s just that once in a while the knee decides to give a little twinge of pain, fold up, and drop you to the floor.

SQL Server has a similar problem with this next one. On a frequent basis I come across, within SQL Server, a bad knee in the form of a linked server. Sure, I can hear you saying that linked servers are always bad. Fair enough! I have seen linked servers work wonderfully, but most of the time they cause pain.

The type of pain with a linked server I would like to share today is around more of an edge case (like standing on the edge of the stairs with your knee about to give way).

As a good DBA would want to do, you may want to restore your databases to a test environment on a routine basis to ensure the backups worked and that you have a functioning recovery point for your databases in the event of a disaster. I was working on just that sort of thing for a client recently when I ran into this beautifully pain-filled knee.

This customer had not one, not two, not even three instances of this problem.  They had a glorious 492 instances of this problem.  The vendor for this client decided the best thing to do would be to replicate the production database to a separate database to help offload performance.  This other database happens to be on the same server (so no performance offload).  While that is not the most intelligent thing to do, it is not the end of this ailment.

In addition to the replication, there were 492 views that utilized linked servers to UNION ALL data between the two databases.  The data in each table between the two databases (in this case) is the same.  So we have a linked server to UNION ALL this data between two databases on the same server that is replicated.  Wowza!

Now, due to this proliferation of linked servers in views that reach the data the long way around, restoring this database to a test environment leaves a lot of cleanup work to be done. After all, the restore of the database is only a piece of the healthy backup puzzle. You would want to test the data and the application against it.

Gratefully, this kind of cleanup can be made easy by doing a simple search and replace when querying sql_modules to find any views or stored procedures that need to be updated to work in the test environment. Here is an example of such a script to help fix that problem.
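A sketch of that search and replace (the linked server names are illustrative placeholders; the ModCode column is what you would copy out):

-- Emit fix-up code for modules that reference a linked server.
SELECT OBJECT_SCHEMA_NAME(sm.object_id) AS schema_name,
       OBJECT_NAME(sm.object_id)        AS object_name,
       REPLACE(
           REPLACE(sm.definition, 'CREATE VIEW', 'ALTER VIEW'),
           '[ProdLinkedServer].', '[TestLinkedServer].'
       ) + CHAR(13) + CHAR(10) + 'GO' AS ModCode
FROM sys.sql_modules AS sm
WHERE sm.definition LIKE '%ProdLinkedServer%';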

Take the results from a query such as that, and I can either change the views en masse or copy all of the results from the ModCode column and paste them into a new window. Using a regex (to replace all GO statements with a GO \n as shown in the pic) or something like SQL Prompt to prettify the code would make it even more useful from there.

[Image: Capture]

Of course, this only helps address the issue with the linked servers in the views. The same problem would exist with stored procedures. It is up to the DBA to know which ones need to be changed and which should not be changed. Just understand that linked servers are there to present yet another nagging symptom that keeps your server from being healthy; worse, they help keep you from being healthy (remember the lack of sleep they cause).

If you are interested, I also have this article to help you find those pesky linked servers before you start diving down the restore rabbit hole.  The article will help evaluate the scope of linked server use and has a query to help identify linked servers.

*A funny afterthought: both of these niggles that can decrease the health of your SQL Server have ties back to linked servers. If you read the links, you will see what I mean.

Effects of Max Mem on Plan Cache

Reading a Microsoft Article (which can be found here) while making sure I understood what can cause a query plan to be recompiled or removed from the plan cache, I got to thinking a bit.

In the article, it lists the following as things that can cause a plan to be removed from cache or to be recompiled.

  • Execution plans remain in the procedure cache as long as there is enough memory to store them. When memory pressure exists, the Database Engine uses a cost-based approach to determine which execution plans to remove from the procedure cache. To make a cost-based decision, the Database Engine increases and decreases a current cost variable for each execution plan.
  • Changes made to a table or view referenced by the query (ALTER TABLE and ALTER VIEW).
  • Changes made to a single procedure, which would drop all plans for that procedure from the cache (ALTER PROCEDURE).
  • Changes to any indexes used by the execution plan.
  • Updates on statistics used by the execution plan, generated either explicitly from a statement, such as UPDATE STATISTICS, or generated automatically.
  • Dropping an index used by the execution plan.
  • An explicit call to sp_recompile.
  • Large numbers of changes to keys (generated by INSERT or DELETE statements from other users that modify a table referenced by the query).
  • For tables with triggers, if the number of rows in the inserted or deleted tables grows significantly.
  • Executing a stored procedure using the WITH RECOMPILE option.

The first item removes a plan from cache, while the rest mark the plan as invalid and force a recompile on the next execution of the SQL statement. But then the question comes: why make the distinction between the two? Doesn’t removing a plan from cache in essence equate to a recompile further down the road when the query is executed again? While semantically it is not a recompile, it sure sounds a lot like one. The query was compiled, invalidated and removed from cache, and then compiled again when executed the next time around.

When thinking about all of this, how can we see that memory pressure can “invalidate” query plans?  That is actually quite simple.  We can see the same net effects with the following experiment.

WARNING: DO NOT DO THIS ON ANY ENVIRONMENT THAT IS NOT A SANDBOX ENVIRONMENT.

In this setup, I have a Sandbox server with a simulated load that keeps plenty of plans in cache and SQL Server is happy with about eight gigabytes of memory allocated to it.  To show that memory can cause a bunch of plans to be invalidated (and removed from the plan cache), I am going to take memory away from SQL Server and in essence simulate a memory leak or memory pressure.
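A sketch of the experiment (the target value is deliberately drastic, which is exactly why this belongs on a sandbox only):

-- SANDBOX ONLY: count plans, starve SQL Server of memory, count again.
SELECT COUNT(*) AS PlansBefore FROM sys.dm_exec_cached_plans;

EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 128;   -- illustrative, drastic
RECONFIGURE;

SELECT COUNT(*) AS PlansAfter FROM sys.dm_exec_cached_plans;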

This shows that the settings have taken effect on SQL Server, and all of this without a service restart (I keep hearing that myth).

[Image: Max Memory in SQL Server]

Seeing that the memory settings take immediate effect, we can turn our attention to the query that was run. In the query for this experiment, I took a count of the number of plans in the plan cache first, then changed the max memory, and then took a count of the number of plans in cache again. This next image shows the results of those two plan count queries. It is obvious here that the number of plans in cache was severely impacted by this change in max memory when taking TOO much memory away from SQL Server (this is why you should only do this on a sandbox server).

[Image: Max Mem effect on Plan Cache]

 

If you happen to take too much memory away from SQL Server, you could end up in a severe memory pressure situation where connections are limited.  Even trying to increase max memory could be severely hampered due to insufficient resources in the default resource pool.  Here is a blog post that demonstrates the effects on connections due to this error (even as bad as SQL Server not starting up).

Since the problem is in the default resource pool, you could try a connection to the DAC and modify the max memory there. If the pressure is severe enough, you may be limited in your ability to connect to the DAC as well. In that case, you may need to start up in single user mode as referenced in that prior blog post. Sometimes, it is enough to reduce the number of connections or resource requirements to regain access to the server to change the max memory appropriately. Suffice it to say, do not run this on a production type of box unless you are ready to cause a bit of an outage. The intent here is solely to share that max memory can have an immediate impact on compiles and plans in cache.

Audit Database File Size Changes


Recently I shared an article on how to track the growths and shrinks that occur within database files. I shared that article here; you should read it before proceeding with today’s article.

Near the end of that article I declared that it was a really good method to track that information within SQL Server 2008.  What if you happen to be on SQL 2012 or SQL 2014 (as of this writing)?  Will it work there too?

Does it Work Past 2008?

It is a good question.  Does that extended event session I shared work in SQL Server 2012 or 2014?  Let’s take a quick look.  I am skipping the setup scripts for that XE session in this article, so you will need to get them from the previous article in order to follow along in this segment.

Before we can investigate if this extended event session will work, let’s take a quick look to confirm that the session is running on the server.  We can do that with a query similar to the following.
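A sketch of that check (the session name is an assumption carried over from the prior article):

-- Does the session exist, and is it currently running?
SELECT es.name AS session_name,
       CASE WHEN xs.name IS NULL THEN 'Stopped' ELSE 'Running' END AS run_state
FROM sys.server_event_sessions AS es
LEFT JOIN sys.dm_xe_sessions AS xs
    ON es.name = xs.name
WHERE es.name = N'DBFileSizeChange';   -- assumed session name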

Running that query will produce results very similar to the following.

[Image: SessionCheck]

This is good news. If you noticed, I am querying a couple of views to get this information. In the server_event_sessions catalog view I can determine whether the session exists. Checking the dm_xe_sessions DMV tells me whether the session is running: when the session is enabled and running, the DMV returns a record for it; otherwise, the DMV holds no record for the session.

With a session running, we can now validate if it is running properly by running any script that will cause the files to grow or shrink.  Recall that in the previous article, it was shown that any growth or shrink operation will cause an event to fire with this session.  Here is a sample of the query I am running.
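A sketch of that query (logical file names are assumptions based on the Sandbox2 demo database):

USE Sandbox2;
GO
-- Segment 1: current file sizes.
SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;
-- Segment 2: shrink operations to fire the event.
DBCC SHRINKFILE (N'Sandbox2', 1);
DBCC SHRINKFILE (N'Sandbox2_log', 1);   -- logical log file name assumed
-- Segment 3: file sizes after the shrink.
SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;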

That query has three distinct segments.  The first is just to check my file sizes.  The second segment performs my file shrink operations.  And the final segment checks the file sizes again.  Here is what the first and third segments would look like on the Sandbox2 database that I used in the previous article.

[Image: filesizes_vold]

With the evidence that we have the session running and that there was indeed a file size change, let’s now check the event session data and confirm whether or not the session is working on this SQL 2014 server.
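The check is a ring_buffer shred along these lines (session name and payload fields are assumptions carried over from the 2008-era session):

-- Shred the ring_buffer of the 2008-era file size session.
SELECT ev.event_node.value('@name', 'varchar(60)') AS event_name,
       ev.event_node.value('@timestamp', 'datetime2') AS event_time,
       DB_NAME(ev.event_node.value('(data[@name="database_id"]/value)[1]', 'int')) AS database_name
FROM (
    SELECT CONVERT(XML, st.target_data) AS target_xml
    FROM sys.dm_xe_sessions AS s
    JOIN sys.dm_xe_session_targets AS st
        ON s.address = st.event_session_address
    WHERE s.name = N'DBFileSizeChange'   -- assumed session name
) AS t
CROSS APPLY t.target_xml.nodes('RingBufferTarget/event') AS ev(event_node);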

The preceding is the query I am using to query the session data.  Running that query will produce the following results.

[Image: nodata]

That’s right!  There is no session data despite the event having occurred and despite the session running currently.  We also know that this session works (we demonstrated it on SQL 2008).  So there must be a bug, something is broken.  Right?

What Now?

Since the extended event obviously no longer works, we are stuck with few options. We could always try resorting to the default trace. After all, I demonstrated that the default trace is already trapping information about file shrinks. That was discussed in the prior article and here as well.

So, what if we tried to go and capture all of the same information from the default trace?  We could certainly try that.  Assuming that the default trace is still running on the server, this query could get us pretty close.
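A sketch of that default trace query (event classes 92-95 are the data/log auto grow and shrink events; 116 is the DBCC event; IntegerData is reported in 8 KB pages):

-- Pull size-change events from the default trace.
DECLARE @TracePath NVARCHAR(260);
SELECT @TracePath = path FROM sys.traces WHERE is_default = 1;

SELECT t.StartTime,
       te.name AS event_name,
       t.DatabaseName,
       t.FileName,
       t.TextData,                        -- populated for the DBCC event only
       t.IntegerData * 8 AS size_change_kb
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS t
JOIN sys.trace_events AS te
    ON t.EventClass = te.trace_event_id
WHERE t.EventClass IN (92, 93, 94, 95, 116)
ORDER BY t.StartTime DESC;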

And this does a fairly decent job of getting the info we seek. Sadly, though, it does not trap all of the necessary information. Only the DBCC event (event 116) traps the SQL statement that triggered the event to be recorded in the default trace. But for the most part it can be a decent swing at getting the information. Without the SQL statements tied to the event, I’d rather not use it, because it really just shows me how many times the size changed, what time the event occurred, and the size of the change.

There has got to be some other way of getting this to work in extended events.  A good question to ask is “Why does the extended event no longer work?”

A little digging, and one might eventually find a document that can shed some light on the problem.  Reading this document, we can see why the event no longer works.  It has been deprecated.  What?  After one release, they decide to take away a critical piece of information?  How can that be?

Time to back up those findings with something a little more authoritative such as this.  Looking at this article, we see that indeed the event was deprecated.  But wait a minute, the event was not just deprecated, it was also replaced with a new event.  We are in business so let’s do some querying within event sessions.

Back in Business

We could probably have spared some time by checking the available events in SQL Server using this next query. However, the events used in the previous event session still exist. If they did not exist, the session creation would have failed. This can be a bit misleading, so it is good to have the information from Microsoft that the events have been deprecated and merged into a single event.
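That check is a simple metadata query, something like:

-- List available events whose names mention file size.
SELECT p.name AS package_name,
       o.name AS event_name,
       o.description
FROM sys.dm_xe_objects AS o
JOIN sys.dm_xe_packages AS p
    ON o.package_guid = p.guid
WHERE o.object_type = 'event'
  AND o.name LIKE '%file_size%';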

This produces the desired results with the new event name specified in that Microsoft article.

[Image: 2012filesizeevent]

Based on this information, a rewrite of the extended event session is possible and necessary.  We can update the extended event session that audits when a database file changes in size.  This will look something like the following session.
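A sketch of the rewritten session (the session name, action list, and path are illustrative):

-- Sketch: file size auditing with the replacement event.
CREATE EVENT SESSION [DBFileSizeChange2012] ON SERVER
ADD EVENT sqlserver.database_file_size_change (
    ACTION (sqlserver.sql_text, sqlserver.session_id, sqlserver.client_hostname)
)
ADD TARGET package0.event_file (
    SET filename = N'C:\Database\XE\DBFileSizeChange2012.xel'
)
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [DBFileSizeChange2012] ON SERVER STATE = START;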

And since I happened to have that session also running at the same time as the shrinkfiles that were run previously in this article, I can go ahead and check to see if anything was captured.  To check the session data, I will use the following query.
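A sketch of that reader (field names follow the documented payload of database_file_size_change; verify against sys.dm_xe_object_columns on your build):

-- Shred the new session's file target.
SELECT x.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
       DB_NAME(x.event_data.value('(event/data[@name="database_id"]/value)[1]', 'int')) AS database_name,
       x.event_data.value('(event/data[@name="file_id"]/value)[1]', 'int') AS file_id,
       x.event_data.value('(event/data[@name="size_change_kb"]/value)[1]', 'bigint') AS size_change_kb,
       x.event_data.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
FROM (
    SELECT CONVERT(XML, t.event_data) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\DBFileSizeChange2012*.xel',
                                         NULL, NULL, NULL) AS t
) AS x;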

In this new event for 2012 and beyond, there is different data that is captured. This means that I have access to better information about what is happening to my database files with regard to size changes (growths and shrinks).

Conclusion

If you just so happen to be running on SQL Server 2012 or later, you will need to change your event sessions that were tracking file changes.  It is a bit of an exercise to make the change and can be frustrating, but it is well worth it.  The improved data that can be captured is going to help better control and oversee the environment.

Ghosts – an eXtrasensory Experience

This is the last article in a mini-series diving into the existence of ghosts and how to find them within your database.

So far this has been a fun and rewarding dive into Elysium to see and chat with these entities. In the previous articles we unearthed some means of seeing these things manifest themselves. You can take a look at the previous articles here.

For this article, I had planned to discuss another undocumented method to look into ghost records and their existence, based on what was said on an msdn blog. But after a lot of research, testing, and finally reaching out to Paul Randal, I determined that it won’t work. So that idea was flushed all the way to Tartarus.

Let it be made very clear that DBTABLE does not offer a means to see the ghosts.  Paul and I agree that the other article that mentioned DBTABLE really should have been referring to DBCC Page instead.

Despite flushing the idea to Tartarus, it was not a fruitless dive.  It just was meaningless for the purpose of showing ghosts via that DBCC command.  I still gained value from the dive!!

All of that said, the remainder of the plan still applies and it should be fun.

Really, at this point what is there that hasn’t been done about the ghosts?  Well, if you are well tuned to these apparitions, you may have received the urge to explore them with Extended Events – sometimes called XE for short.

As has been done in the past, before we board Charon’s boat to cross the River Styx to Hades to find these ghosts in Elysium, one really needs to run the setup outlined here.

With the framework in place, you are now ready to explore with XE.
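A sketch of that exploration, against the XEvent metadata:

-- List available events whose names mention ghosts.
SELECT p.name AS package_name,
       o.name AS event_name,
       o.description
FROM sys.dm_xe_objects AS o
JOIN sys.dm_xe_packages AS p
    ON o.package_guid = p.guid
WHERE o.object_type = 'event'
  AND o.name LIKE '%ghost%';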

Look at that! There are several possible events that could help us track these ghosts.  Or at the least we could get to know how these ghosts are handled deep down in the confines of Hades, err I mean the database engine.

[Image: Ghost_XE]

 

From these possible events, I opted to work with ghost_cleanup and ghost_cleanup_task_process_pages_for_db_packet.  The sessions I defined to trap our ghost tracks are as follows.
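The definitions did not survive in this copy of the post; a hedged sketch of the two sessions as described (the histogram source and other details are assumptions, including that ghost_cleanup exposes a database_id field):

-- GhostHunt: count ghost_cleanup firings in a histogram target.
CREATE EVENT SESSION [GhostHunt] ON SERVER
ADD EVENT sqlserver.ghost_cleanup
ADD TARGET package0.histogram (
    SET filtering_event_name = N'sqlserver.ghost_cleanup',
        source = N'database_id',
        source_type = (0)   -- 0 = bucket on an event field
);
GO
-- SoulSearch: send the packet-level event to the ring_buffer.
CREATE EVENT SESSION [SoulSearch] ON SERVER
ADD EVENT sqlserver.ghost_cleanup_task_process_pages_for_db_packet
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [GhostHunt] ON SERVER STATE = START;
ALTER EVENT SESSION [SoulSearch] ON SERVER STATE = START;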

You can see there are two sessions defined for this trip down the Styx. Each session is aptly named for our journey. The first (GhostHunt) is defined to trap ghost_cleanup and sends that information to a histogram target. The second (SoulSearch) is defined to use the other event and is configured to send to the ring_buffer. Since the second event has a “count” field defined as part of the event, it will work fine to just send it to the ring buffer for later evaluation.

Once I have the traps, I mean event sessions, defined, I can now resume the test harness from the delete step used in the previous articles. The following Delete is what I will use.
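The delete comes from the test harness in the setup scripts linked earlier; as a stand-in (table and predicate are purely illustrative):

-- Illustrative stand-in for the test-harness delete.
DELETE FROM dbo.GhostTable
WHERE id <= 666;   -- the article reports 666 ghosts from this delete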

Prior to running that delete though, I checked the Event Session data to confirm a starting baseline.  Prior to the delete, I had the following in my histogram target.
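Checking that histogram looks something like this:

-- Read the GhostHunt histogram target.
SELECT h.slot.value('(value)[1]', 'int') AS database_id,
       h.slot.value('@count', 'bigint')  AS ghost_rows_cleaned
FROM (
    SELECT CONVERT(XML, st.target_data) AS target_xml
    FROM sys.dm_xe_sessions AS s
    JOIN sys.dm_xe_session_targets AS st
        ON s.address = st.event_session_address
    WHERE s.name = N'GhostHunt'
      AND st.target_name = N'histogram'
) AS t
CROSS APPLY t.target_xml.nodes('HistogramTarget/Slot') AS h(slot);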

 

[Image: predelete_count]

 

After running the delete, and checking my histogram again, I see the following results.

[Image: post_count]

 

You can see from this that in addition to the 25 pre-existing ghosts, we had another 672 ghosts (666 of which were from the delete).

This is how I was able to investigate the GhostHunt Extended Event Histogram.

But what about looking at the other event session?

Let’s look at how we can go and investigate that session first and then look at some of the output data.
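Querying the SoulSearch ring_buffer can be sketched as follows (payload field names such as count and page_id are assumptions; inspect the raw XML first):

-- Shred the SoulSearch ring_buffer.
SELECT ev.event_node.value('@timestamp', 'datetime2') AS event_time,
       ev.event_node.value('(data[@name="database_id"]/value)[1]', 'int') AS database_id,
       ev.event_node.value('(data[@name="page_id"]/value)[1]', 'bigint') AS page_id,
       ev.event_node.value('(data[@name="count"]/value)[1]', 'int') AS ghost_count
FROM (
    SELECT CONVERT(XML, st.target_data) AS target_xml
    FROM sys.dm_xe_sessions AS s
    JOIN sys.dm_xe_session_targets AS st
        ON s.address = st.event_session_address
    WHERE s.name = N'SoulSearch'
) AS t
CROSS APPLY t.target_xml.nodes('RingBufferTarget/event') AS ev(event_node);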

[Image: ghostclean]

 

Cool! Querying the SoulSearch session has produced some information for various ghosts in the database. Unlike the histogram session, which shows how many ghosts have been cleaned, this session shows us some page ids that could contain ghosts – in the present. I can take page 1030111, for instance, and examine the page with DBCC PAGE as follows.
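The call itself (the database name and file id are illustrative; 1030111 is the page id from the session output):

DBCC TRACEON (3604);                      -- route DBCC PAGE output to the client
DBCC PAGE ('GhostDemo', 1, 1030111, 3);   -- (database, file_id, page_id, print option)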

 

 

[Image: pagealt]

Look at that page and result!! We have found yet another poltergeist.

RIP

Once again we have been able to journey to the depths of the database engine and explore the ghosts that might be there. This just happens to illustrate a possible means of investigating those ghosts. That said, I would not necessarily run these types of event sessions on a persistent basis. I would only run these sessions if there seems to be an issue with ghost cleanup or if you have a strong penchant to learn (on a sandbox server).

Some good information can be learned.  It can also give a little insight into how much data is being deleted on a routine basis from your database.  As a stretch, you could even possibly use something like this to get a handle on knowing the data you support.  Just be cautious with the configuration of the XE and understand that there could be a negative impact on a very busy server.  And certainly proceed at your own risk.
