Another Query to Help Find the Right Event

Finding the right event or combination of events to monitor may seem like quite a daunting task with so many events to explore and (frequently) too little familiarity with Extended Events. In this follow-up article to Extended Event Help Queries, I will share another means to quickly explore the Extended Events metadata in an effort to find the precise event to fit your needs.

Today I will be sharing a new query or two that I have used on more than one occasion to help track down the event(s) that I wanted to at least try while troubleshooting specific problems within a SQL Server instance. I found these queries useful once again in a recent opportunity to help out some friends from the SQL community. (Maybe, just maybe, these are really just a spin on other previously shared queries.) I will share that most recent experience in another follow-up post.

For the sake of posterity, I am also adding this to the 60 Days of Extended Events Table of Contents.

Is There an Event that Contains Pertinent Data?

In my previous article, I demonstrated how to find an event based solely on the name or description of the event. This is fantastic if the event name (or description) contains one of the magical words you used. But what if the event name or description has nothing to do with the terms you selected? Or, what if the data you seek is attached to the event, yet nothing in the event’s name or description would ever lead you to it?

Now comes the more difficult task, right? If the name or description of the event doesn’t relate to the search terms, then you just might overlook a few events and be stuck trying to troubleshoot a problem. An equally big problem is that this becomes yet another invisible barrier to using Extended Events. It would be easy to slide down the slippery slope and never transition to Extended Events just because an event applicable to the problem at hand could not be found.

This is where searching on the payload comes in handy. Remember from a previous article that the payload is the event data attached to each event. Imagine how much more accurate our searches can become by extending them from the name and description of the event object to the names and descriptions of the payload columns. That is exactly what I will now show.
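
A minimal sketch of such a search, joining sys.dm_xe_objects to sys.dm_xe_object_columns (the payload); the exact columns returned are a matter of taste:

SELECT xo.name AS event_name
    , xoc.name AS column_name
    , xoc.description AS column_description
FROM sys.dm_xe_objects xo
    INNER JOIN sys.dm_xe_object_columns xoc
        ON xoc.object_name = xo.name
        AND xoc.object_package_guid = xo.package_guid
WHERE xo.object_type = 'event'
    AND xoc.column_type = 'data'
    AND ( xoc.name LIKE '%select%'
        OR xoc.name LIKE '%statement%'
        OR xoc.description LIKE '%select%'
        OR xoc.description LIKE '%statement%' );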

Basically, I am searching against the object_columns metadata DMV (sys.dm_xe_object_columns) for Extended Events, looking for any columns that match my theoretical needs. In this case, I am trying to find anything that may help me find events related to the terms “select” or “statement”. When run, this will return quite a few results, which may itself be a bit of a problem (just as much as if nothing were returned). There is a fix for that (well, probably several fixes, as you will find once you begin to tweak the query).

One possible fix is to include the map_values in the search to refine it. Another fix might be to change the search terms. Still other fixes might include tweaking the predicate. For today, I am going to demonstrate the inclusion of map_values as a secondary tweak to help isolate the needed event.
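
A sketch of that refinement follows. It joins the payload columns to sys.dm_xe_map_values via the column type, and also pulls the event’s readonly KEYWORD column out as SearchKeyword for use in the next section. The specific map_value filter is an assumption chosen to illustrate the technique:

SELECT xo.name AS event_name
    , xoc.name AS column_name
    , mv.map_value
    , kw.column_value AS SearchKeyword
FROM sys.dm_xe_objects xo
    INNER JOIN sys.dm_xe_object_columns xoc
        ON xoc.object_name = xo.name
        AND xoc.object_package_guid = xo.package_guid
    INNER JOIN sys.dm_xe_map_values mv
        ON mv.name = xoc.type_name
        AND mv.object_package_guid = xoc.type_package_guid
    LEFT OUTER JOIN sys.dm_xe_object_columns kw
        ON kw.object_name = xo.name
        AND kw.object_package_guid = xo.package_guid
        AND kw.column_type = 'readonly'
        AND kw.name = 'KEYWORD'
WHERE xo.object_type = 'event'
    AND xoc.column_type = 'data'
    AND ( xoc.name LIKE '%statement%'
        OR xoc.description LIKE '%statement%' )
    AND mv.map_value = 'SELECT';  -- assumed map value; tweak to your needs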

In this iteration, my result set is trimmed significantly due to the predicate and the requirement for a match based on the map_value name. Ultimately, this trimmed the results down to precisely the events I needed in this case. With this slightly more advanced version, I have greater visibility into the Extended Events realm to help me find just the right event to match my needs.

Are There Other Events Related to the Newfound Event?

This is some pretty awesome stuff so far. But it doesn’t end there. Many events are tied together to help provide a more holistic view of the problem. These events are tied together via keyword. You can read more about keywords here. By searching for events based on a keyword, we can get a lot more intelligent about the sessions we create with Extended Events.

Taking the single result from the last query, and then using the SearchKeyword value for that event in this next query, I can potentially go from a Great DBA to a Super DBA.
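
A sketch of that keyword search; plug your SearchKeyword value into the filter ('execution' below is only a placeholder):

SELECT xo.name AS event_name
    , xo.description AS event_description
    , xoc.name AS column_name
    , xoc.description AS column_description
FROM sys.dm_xe_objects xo
    INNER JOIN sys.dm_xe_object_columns kw
        ON kw.object_name = xo.name
        AND kw.object_package_guid = xo.package_guid
        AND kw.column_type = 'readonly'
        AND kw.name = 'KEYWORD'
    INNER JOIN sys.dm_xe_object_columns xoc
        ON xoc.object_name = xo.name
        AND xoc.object_package_guid = xo.package_guid
        AND xoc.column_type = 'data'
WHERE xo.object_type = 'event'
    AND kw.column_value = 'execution'  -- placeholder: use your SearchKeyword value
ORDER BY xo.name, xoc.name;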

I intentionally included the columns from each of the matching events. Doing this, I can also see what potential payload may be captured when I attempt to trap any of the events from this result set.

Conclusion

There is great power in Extended Events. There is more power in being able to query the metadata for Extended Events in order to trap the appropriate information to properly troubleshoot a problem.

Have fun with these scripts, and I hope you enjoy them.

Extended Event Help Queries

Enjoy this re-publication of my original article from SQL Solutions Group.

When working with Extended Events, there are times when a little more information is, well, helpful. You know you want to use Extended Events to try to monitor for a specific thing to happen. But sometimes you don’t know if there is an event for that “thing”, or maybe you don’t know if there is a session already in place to do that specific task (or if it is even running), or maybe you just need better information about the event to see what kind of data it captures when it fires.

Sometimes, this information is easy enough to find. Sometimes, it is less than evident how to find it. Some of the information may be visible through the GUI (if you are using SQL Server 2012 or later), and sometimes it is just flat-out easier to run a quick script. Today, I would like to share a few quick scripts to help find some of this information and make the life of the DBA just a bit easier.

Does an Event Exist?

First, let’s tackle the problem of discovery. When we want to use Extended Events to troubleshoot a problem or to capture more information, it is really good to know whether such an event exists. There are many events that capture data for all sorts of things within SQL Server, and more events are added with each release. More and more data is being made available to help the DBA perform a better job and better understand what is really happening within the database environment.

In order to determine if there might be an event that can provide the data for that one “thing” happening within your environment, we could start by querying the SQL Server internals. This next query will do just that for us.
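
A sketch of that discovery query; the DMV names are standard, and the predicate is the part you will tweak:

SELECT xp.name AS package_name
    , xo.name AS event_name
    , xo.description
FROM sys.dm_xe_objects xo
    INNER JOIN sys.dm_xe_packages xp
        ON xp.guid = xo.package_guid
WHERE xo.object_type = 'event'
    AND ( xo.name LIKE '%growth%'
        OR xo.name LIKE '%file_size%' );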

This queries the DMVs related to Extended Events and provides us with the event names that contain the terms used in the filter/predicate. In this case, I am looking for any events that contain the terms “growth” or “file_size”, in an effort to find anything that might be related to database files changing size. You may have seen one of the previous articles on tracking file size changes here or here.

Using the right term to trap the right data for the problem at hand can greatly decrease the time required to find the correct event. There may be a bit of an art to finding the correct search term, but a little practice can help improve the ability to find the appropriate event more quickly.

Is the Data Any Good?

Knowing that an Extended Event exists for the desired problem is one thing. Knowing what data can be parsed from that event is another. Many times, we may not know what kind of data can be trapped with each of the events, or we may simply not be trapping enough data. And of course, knowing what kind of data an event can trap helps us filter that data down to what is usable for efficient troubleshooting.

Finding that data is just a matter of another simple query against the DMVs.  Let’s take a look at the query that can help us find the data we need:
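
Something along these lines does the trick (a sketch; the extra hop through sys.dm_xe_packages is what makes it feel heavier than it needs to be):

SELECT xo.name AS event_name
    , xoc.name AS column_name
    , xoc.type_name
    , xoc.description
FROM sys.dm_xe_packages xp
    INNER JOIN sys.dm_xe_objects xo
        ON xo.package_guid = xp.guid
    INNER JOIN sys.dm_xe_object_columns xoc
        ON xoc.object_name = xo.name
        AND xoc.object_package_guid = xo.package_guid
WHERE xo.object_type = 'event'
    AND xo.name = 'trace_flag_changed'
    AND xoc.column_type = 'data';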

In this case, I have taken the name of the event about which I want to learn more details. In addition, I only want to retrieve the data columns, the ones that return data which could change from one firing to the next. As you can see, I am investigating “trace_flag_changed”. You may recall that particular event from a previous article, and you can read more about it here.

You may also be looking at the query and thinking it seems a little overly complex for what we want.  Well, it is.  Here is an alternate version:
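
Since sys.dm_xe_object_columns already carries the event name, the middle man can be cut out entirely (again, a sketch):

SELECT xoc.name AS column_name
    , xoc.type_name
    , xoc.description
FROM sys.dm_xe_object_columns xoc
WHERE xoc.object_name = 'trace_flag_changed'
    AND xoc.column_type = 'data';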

The discovery of the columns available in each event gives us some options for filtering, as well as some options for what kind of data we parse from the resulting event.

Did I Already Create that Session?

This is arguably less critical than the others. After all, some will just drop and recreate the session. But why drop the session if you don’t need to? On the other hand, a quick glimpse in the GUI could also display that information. Sadly, that doesn’t work for all versions of SQL Server (e.g. SQL Server 2008). Maybe you have the session running, or maybe it is simply created and not in the running state.

Finding these sessions and whether they are running or not is rather simple.  Just run this next query and it will help you figure out if the session is there or not.  All you need to do is plug in the name of the session and away you go.
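
A sketch of that check: sys.server_event_sessions holds every session that has been created, while sys.dm_xe_sessions holds only the ones currently running, so a left join reveals both existence and state. The session name is the only thing to plug in ('system_health' is just an example):

DECLARE @SessionName sysname = 'system_health';  -- plug in your session name

SELECT ses.name AS session_name
    , CASE WHEN runs.name IS NULL THEN 'Stopped'
        ELSE 'Running' END AS run_state
FROM sys.server_event_sessions ses
    LEFT OUTER JOIN sys.dm_xe_sessions runs
        ON runs.name = ses.name
WHERE ses.name = @SessionName;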

This query has been confirmed to work on SQL Server 2008 up through SQL Server 2014. It should simplify some of the attempts to better understand what is happening with any sessions that might be out there.

These are a few quick and simple queries to help you along your way to becoming better acquainted with Extended Events. I hope these will be useful to you and that you will get to know and use Extended Events regularly.

SQL Server CTE – Back to Basics

Quick Flashback

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate. I hope I can meet the requirements and keep the posts to “basics”. Let’s just call this first post in the challenge my official acceptance.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

SQL Server CTE

Of all the fundamental concepts within SQL Server, this one drives me a bit batty from time to time. Think about that statement for just a moment. I am not going to go in depth about the syntax of a CTE.

I am not really interested in breaking down the semantics and uses of the CTE. Rather, I want to break down something commonly associated with the CTE that is far more basic. I would classify it as more of a myth and misconception than anything else.

The topic du jour happens to be punctuation. More specifically, do we use terminators or do we use beginninators? Yes, that is a made-up term, but you get the drift. Where exactly should the punctuation be placed? Better yet, is it mandatory to include that punctuation every single time?

Those are the items I wish to discuss and hopefully demystify with a few examples (if that many are even necessary).

Punctuation

Punctuation shouldn’t be too big of a deal, right? There is so little punctuation within T-SQL that it really should be very basic and easy to understand. That said, there is a perpetual myth that a semi-colon is required at the beginning of a CTE. A somewhat more accurate variant is that a semi-colon is required in order for a CTE to work at all. Let’s take a look at both of those scenarios.

Case the First

Let’s just take a look at the most basic of scripts. A simple select statement without a CTE.
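
Something like the following; the AdventureWorks2014 table and columns are just convenient examples:

USE AdventureWorks2014;
GO

SELECT BusinessEntityID
    , LoginID
    , JobTitle
FROM HumanResources.Employee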

Straight away, we can see that I am accessing the AdventureWorks2014 database, which happens to be on my 2014 instance. I have not terminated the select statement with a semi-colon. That really is a preference here, since the statement works the same with or without the semi-colon (and despite the previous notion that terminating every statement with a semi-colon was going to become mandatory, don’t believe it until it actually happens).

Here is the SQL Server 2016 version:
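
Same sketch, pointed at the 2016 instance (the database name assumes a 2016 copy of AdventureWorks):

USE AdventureWorks2016;
GO

SELECT BusinessEntityID
    , LoginID
    , JobTitle
FROM HumanResources.Employee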

Now let’s go with a basic CTE example written the same way that we commonly see CTEs written on the internet.
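
A sketch in that familiar internet style, leading semi-colon and all:

;WITH testcte AS (
    SELECT BusinessEntityID
        , LoginID
        , JobTitle
    FROM HumanResources.Employee
)
SELECT *
FROM testcte;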

Notice that I am using the semi-colon as a statement beginninator and terminator all in the same statement. The statement runs perfectly fine – that much is true. The usual justification for the leading semi-colon is that the CTE has to be preceded by one. Let me see how true that is with a minor tweak!
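
The tweak is simply to strip off the leading semi-colon:

WITH testcte AS (
    SELECT BusinessEntityID
        , LoginID
        , JobTitle
    FROM HumanResources.Employee
)
SELECT *
FROM testcte;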

What do you expect to happen when this statement executes? Well, as it stands, the statement will execute without error, just the same as in the previous example.

[Screenshot: the tweaked CTE statement executing successfully]

What? That worked? Yes. Yes it did. This is quite frankly due to the reality that a semi-colon is not required to precede a CTE. This statement was the only statement in the batch.

Case the Second

Based on those results, that must mean a semi-colon is absolutely required when there is a statement preceding the CTE. Therefore, what is really required is that the statement immediately preceding the CTE be terminated by a semi-colon. Not so fast there. Let’s test this one too!

Let’s use the following basic example.
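
A sketch of that example, with a preceding statement duly terminated by a semi-colon:

SELECT GETDATE() AS RunTime;

WITH testcte AS (
    SELECT BusinessEntityID
        , LoginID
        , JobTitle
    FROM HumanResources.Employee
)
SELECT *
FROM testcte;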

That query works perfectly fine again. That must be due to the preceding semi-colon, right? Hold on to that thought. How about a rewrite as follows:
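
Here even the batch separator gets the terminator treatment:

SELECT GETDATE() AS RunTime;
GO;

WITH testcte AS (
    SELECT BusinessEntityID
        , LoginID
        , JobTitle
    FROM HumanResources.Employee
)
SELECT *
FROM testcte;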

Here we tweaked the query to represent the notion that all previous statements should be terminated with a semi-colon. What do you think will happen? This results in an error: GO is a batch terminator, and it does not take a semi-colon. Surely this reinforces the theory that the semi-colon should precede the CTE though, right? Let’s rewrite that query again.
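
The same sketch with the semi-colon peeled off of GO:

SELECT GETDATE() AS RunTime;
GO

WITH testcte AS (
    SELECT BusinessEntityID
        , LoginID
        , JobTitle
    FROM HumanResources.Employee
)
SELECT *
FROM testcte;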

I removed the semi-colon, and now the code segment will execute without error. Wait, there is more!

I want to continue tweaking this query. Let’s try multiple selects along with this CTE.
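
A sketch of the multiple-select variant. The first select is left unterminated; the batch separator between it and the CTE is my assumption here, and it is what keeps the unterminated select legal:

SELECT TOP 5 BusinessEntityID
    , LoginID
FROM HumanResources.Employee
GO

WITH testcte AS (
    SELECT BusinessEntityID
        , LoginID
        , JobTitle
    FROM HumanResources.Employee
)
SELECT *
FROM testcte;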

Now I have a query with a preceding select that is not terminated with a semi-colon. I have a CTE created right after that select, and then I select from that CTE. What could possibly happen with this particular query?

The short of that story is that the query will execute without error and I will end up with two result sets.

Wait! Hold on two seconds there! Surely the semi-colon is an absolute requirement, because we see it everywhere presented as mandatory.

The reality is that the semi-colon requirement is not entirely accurate. If the CTE happens to be in the same batch, then the previous statement in the batch must be terminated by the semi-colon. Take the following change, for example:
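
A sketch of that change, wrapping the preceding statement in a BEGIN…END block whose END carries the terminating semi-colon:

BEGIN
    SELECT TOP 5 BusinessEntityID
        , LoginID
    FROM HumanResources.Employee
END;

WITH testcte AS (
    SELECT BusinessEntityID
        , LoginID
        , JobTitle
    FROM HumanResources.Employee
)
SELECT *
FROM testcte;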

If I remove the semi-colon from after the END statement, then my query terminates in an error. If I remove the BEGIN and END from around the first select statement, then the query also fails with an error. The error I receive is part of the reason for the huge misconception about when a semi-colon is required:

Msg 319, Level 15, State 1, Line 38
Incorrect syntax near the keyword ‘with’. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.

As I have already shown, though, this is not accurate. Start thinking on a bigger scale than just semi-colons. Personally, I would prefer to see CTE examples on the internet use a BEGIN…END encapsulation instead of a preceding semi-colon (beginninator).

Has this behavior changed with SQL Server 2016? That is easily shown by the following example:
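
A sketch of the same test against the 2016 instance (the database name is assumed); the behavior is unchanged:

USE AdventureWorks2016;
GO

SELECT GETDATE() AS RunTime
GO

WITH testcte AS (
    SELECT BusinessEntityID
        , LoginID
        , JobTitle
    FROM HumanResources.Employee
)
SELECT *
FROM testcte;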

Recap

These examples clearly demonstrate the inaccuracy of the myth that a semi-colon is always required. Despite the statement made in the error message, it is not the entire story. A semi-colon is necessary to terminate the statement preceding a CTE within the same batch, but it is not required every time: if a batch separator stands between the previous statement and the CTE, no semi-colon is needed. And a semi-colon is never required to precede a CTE; it is only ever required to terminate a previous statement within the same batch.

Please don’t be led astray by some notions on the web that dictate a CTE must be preceded by a semi-colon.

SQL Server Desired Enhancements

Happy Belated Birthday

The monthly Data Professionals blog party has come and gone. It happens the second Tuesday of every month – or at least it is supposed to happen on that day. This month, the formidable Chris Yates (blog | twitter) has invited everybody to a birthday party of sorts. As with many birthdays, there is always somebody who wishes you a happy belated birthday. For this party, it is my turn to offer that belated birthday wish. It just so happens there was some coordination between Chris and myself for this belated birthday.

Read all about the invite from Chris’ blog. If you missed the link, here it is again – right here.

Plastic Surgery – Desired Enhancements

SQL Server 2016 has come with a ton of cool features, bells, whistles and, well, cool stuff (yes, redundant). That aside, what are some of the really cool features I would love to see in SQL Server? Let’s run through them. (And yes, I will be a bit greedy. That is standard operating procedure when asking for gifts, right?)

Gift #1

I need some way of reproducing a production database cleanly and efficiently in a different environment. Sure, I can script everything and develop an elaborate process to ensure I get an exact duplicate of the stats (steps and histogram included), schema, procedures, indexes, and so on. Being able to do all of that cleanly and efficiently is the key. This is a pretty big want from clients and could be extremely useful.

Microsoft has heard the pleas. Introduced with SP2 for SQL Server 2014, there is a new DBCC statement to do exactly that: DBCC CLONEDATABASE. Check out all the details here.
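
The syntax is about as simple as it gets (database names here are hypothetical):

-- Produces a schema-and-statistics-only copy of the source database
DBCC CLONEDATABASE (N'ProdDB', N'ProdDB_Clone');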

Gift #1, let’s check that off the list.

Gift #2

Instant File Initialization is fantastic and a huge time saver. Unfortunately, it only applies to the data files. We need something like it implemented for the transaction log. Currently, the transaction log must be “zeroed” (0-stamped) when new space is allocated. This mechanism can delay transactions and impact performance whenever a file growth is required, when manually growing the file, or even when restoring the database.

Believe it or not, Microsoft has addressed this request as well. Microsoft has changed how the transaction log is stamped for a significant performance improvement as part of SQL Server 2016. Bob Dorr explains it very well in his blog post on the topic, which you can read here.

Wow, two for two. We can check gift #2 off the list.

Gift #3

Availability Groups seem to get bogged down under heavy load. The redo and log send queues can get backed up and have a significant impact on production operations. We need the log transport to be faster. No, check that: not just faster, it needs to be 2-3x faster.

SQL Server 2016 comes to the rescue again. Amped up on SQL steroids, Availability Groups have seen a significant improvement in log transport speeds to the secondaries; some report it as at least twice as fast. The bottleneck has been moved out of the SQL engine, and things have really been amped up from a performance perspective. Here is a supporting article by Jimmy May on the topic – though it doesn’t go deep into the specifics.

Mark another one off the gift registry. Think we can maintain this pace?

Gift #4

Statistics seem to become stale on smaller tables, which dramatically affects the performance of certain queries. These tables will not see 20% of their rows updated (the auto-update threshold) any time before the turn of the year, yet the data would likely change enough within the following six months to matter. We need to be able to force these stats to auto-update more regularly without extra intervention.

Fair enough: we already have a trace flag that can help with that (TF 2371). But maybe the environment or management is resistant to implementing trace flags for something such as this. You never know what the political red tape may dictate.
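
For reference, enabling the flag globally is a one-liner; the startup parameter route is the more durable option:

DBCC TRACEON (2371, -1);  -- -1 applies globally; -T2371 as a startup parameter persists across restarts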

SQL Server 2016 to the rescue again!!! SQL Server 2016 has this trace flag behavior enabled automatically, so you don’t need to do anything extra. What this means is that stats on those smaller tables may actually get updated without intervention, despite fewer than 20% of the rows having changed.

That is four for four. Should we take this birthday party to Vegas? Don’t assume I have stacked the deck either! ;0)

Gift #5

I am getting very frustrated with the constant clearing of usage stats every time I rebuild an index. Just because I rebuild the index does not mean I no longer need the usage stats from prior to the rebuild. I need to be able to see the usage stats for the time spanning before and after the rebuild without creating a custom process to capture that information. Sure, it may not be an insanely difficult task to perform, but it is an extra process I have to build out. It’s the principle of the matter.

SQL Server 2016 to the rescue yet again. This age-old bug of usage stats being cleared is finally fixed. It is frustrating, to say the least, to have to deal with this kind of bug. It is a huge relief to have it fixed and to be able to get a consistent, clear picture of the usage information since the server has been up.

For more information, you could read this article by Kendra Little – here.

Cha-ching. We are now five for five.

Gift #6

Digging a little deeper on this one. I would really love to see an enhancement to Resource Governor, and not just any enhancement will do. I need it enhanced so that it also governs the Reporting Services engine and the Integration Services engine in addition to the database engine. I want to be able to use RG to prevent certain reports from over-consuming resources within the SSRS engine, or, for that matter, to make sure certain SSIS packages do not consume too much memory. If I can implement resource constraints on these two engines, it would be a huge improvement.

We will have to wait a while on this one. It is currently not scheduled for delivery.

Gift #7

This one is going to be a little tougher, and it’s not in place, but it would be a fantastic gift in my opinion. I would like a tool, such as Extended Events, to be able to monitor the workload and determine the best trace flags to implement. There are many trace flags that are far from well known but could be extremely helpful to production environments based on the workload and internal workflow. Not all trace flags are built for all environments. An analysis through some automated tool recommending the best flags to implement (again, solely at your discretion) would be fantastic.

Gift #8

Get Profiler out of Management Studio, finally. Enough said there. There really is no good, solid reason in my opinion to keep it around. It is deprecated, it is hardly helpful with 2014 or 2016, and it is just dead weight. Extended Events really is the better way to go here.

Last Request

Can we please fix the spelling of JSON? It really needs to be spelled correctly. That spelling is: JASON.

Awesome SQL Server Feature

The second Tuesday of April 2016 is now upon us, and you know what that means. Well, I hope you know what that means.

It is time for TSQL Tuesday. This is the 77th edition of the monthly blog party. This month the host is Jens Vestergaard (blog | twitter), and he insists we do a little soul searching to figure out what it is about SQL Server that really makes our hearts go pitter-patter. OK, so he didn’t really put it that way, but you get the point, right? What is it about SQL Server that ROCKS, in your opinion?

Well, I think there are a lot of really cool features in SQL Server that ROCK! It really is hard to pick just one feature because there are a lot of really good features that can make life so much easier for a database professional. Then again, there is that topic that bubbles to the top in my articles – a lot. If you haven’t followed my blog, here is a quick clue: click here.

Why is this feature so AWESOME?

Truth be told, there are a ton of reasons why I really like it. Before diving into the why, I need to share an experience.

A client using Microsoft Dynamics AX to manage the Point of Sale (POS) systems for their retail chain has been running into a problem with the POS database at each store. Approximately a year ago, this client had upgraded most of the store databases from SQL Server Express Edition to Standard Edition due to the size restriction of the Express Edition. This SKU upgrade was necessary because the database had grown to exceed 10GB. Most of this growth was explicitly related to the INVENTDIM table, which consumed 3.5GB of space in the data file.

Right here, you may be asking what the big deal is. Just upgrade the SKU to Standard Edition and don’t worry about the size of the database. I mean, that is an easy fix, right? Sure, that may be perfectly acceptable in an environment with one or maybe even a handful of servers. Now imagine a retail chain with more than 120 stores and a database at each store, and extrapolate Standard Edition licensing costs for all of those stores. Suddenly we are talking about a pretty big expense just to upgrade, all because one table chews up 35% of the size limitation of a data file in SQL Server Express Edition.

What if there were an alternative within SQL Express to mitigate that cost and maintain the POS functionality? Enter the SYNONYM! You may recall from a previous post a thing or two that I have said about synonyms in SQL Server. There is good and bad to be had with this feature, and most of the bad comes from the implementation rather than the feature itself.

Using a synonym, I can extend this database beyond the 10GB limitation – or at least that is the proposed theory. To make this work properly, the plan was to create a new database, copy the INVENTDIM table from the POS database to the new database, rename the old INVENTDIM table in the POS database, create a synonym referencing the new table in the new database, and then select from the synonym to confirm functionality. Sounds easy, right? Here is the script that basically goes with that set of steps.
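
What follows is a sketch of that script. The POS database name and the trimming predicate are hypothetical, and the overflow database name is taken from the error message that appears later in this post:

USE master;
GO
-- Step 1: create the overflow database that will host the relocated table
CREATE DATABASE AxRetailDIM;
GO

USE AxRetailPOS;  -- hypothetical name for the store POS database
GO
-- Step 2: copy the INVENTDIM table into the new database
SELECT *
INTO AxRetailDIM.dbo.INVENTDIM
FROM dbo.INVENTDIM;
GO
-- Step 3: keep the original table around (renamed) as a failsafe
EXEC sp_rename 'dbo.INVENTDIM', 'INVENTDIM_prev';
GO
-- Step 4: point a synonym at the relocated table so the app sees no difference
CREATE SYNONYM dbo.INVENTDIM
    FOR AxRetailDIM.dbo.INVENTDIM;
GO
-- The additional step: trim dimension rows not tied to an item/barcode sold
-- at this store (hypothetical predicate, shown purely for illustration)
DELETE d
FROM AxRetailDIM.dbo.INVENTDIM d
WHERE NOT EXISTS ( SELECT 1
        FROM dbo.INVENTITEMBARCODE b
        WHERE b.INVENTDIMID = d.INVENTDIMID );
GO
-- Step 5: confirm functionality through the synonym
SELECT TOP 10 *
FROM dbo.INVENTDIM;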

This seems to make a fair amount of sense, and querying the INVENTDIM synonym produces results just as expected. Notice that there is one additional step in the script which I did not mention. That step removes unnecessary rows from the INVENTDIM table, keeping only rows tied to an actual inventory item or barcode for the particular dimension variant related to the item. This helps trim the table to the specific rows related to merchandise available for purchase at that retail store. In addition, it serves as a failsafe to get the data down to less than 10GB in case the synonym approach were to fail.

Testing from within SQL Server proved very optimistic. The synonym was working exactly as desired. Next up was to test the change by performing various transactions through the POS.

The solution not only failed, it failed consistently and dramatically. It didn’t even come close. How is this possible? What is Dynamics AX doing that could possibly subvert the synonym implementation? Time to start troubleshooting.

I checked through the logs: nothing to be found. I checked and validated permissions: no dice! I checked the ownership chaining: still no dice! What in the world was causing this failure?

What if I switched to a view instead of a synonym? I created a view with the cross-database ownership chain intact. Testing the application again still failed. What if I pointed the synonym to a table in the same database? Testing from the application suddenly succeeded. Now the head-scratching gets a little more intense.

It is time to get serious. What exactly is the Dynamics AX POS application doing that leads to a failure that does not happen when we query directly from within Management Studio? The means to get serious is to implement that awesome tool I alluded to previously: Extended Events (XE or XEvents).

With no clues available from any of the usual sources (including application error messages), XE or Profiler was about the only thing left to try to capture the root cause of this failure. Since this happens to be a SQL Server 2014 implementation (yeah, I omitted that fact), the only real option in my opinion was to use XE. Truth be told, even on SQL Server 2008 R2, my go-to tool is XE. In this case, here is what I configured to try to catch the problem:
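
A sketch of the kind of session I mean: trap error_reported with enough actions attached to identify the statement, database, and application involved. The session name, file target name, and severity filter are my choices, not gospel:

CREATE EVENT SESSION [AXPOSErrorCapture] ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION ( sqlserver.sql_text
        , sqlserver.database_name
        , sqlserver.client_app_name )
    WHERE severity >= 10
)
ADD TARGET package0.event_file ( SET filename = N'AXPOSErrorCapture' )
/* SQL Server 2008 / 2008 R2: replace the target above with
   ADD TARGET package0.asynchronous_file_target (
       SET filename = N'C:\XE\AXPOSErrorCapture.xel'
           , metadatafile = N'C:\XE\AXPOSErrorCapture.xem' )
   and note that the GUI viewer is not available there either. */
WITH ( MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = OFF );
GO

ALTER EVENT SESSION [AXPOSErrorCapture] ON SERVER STATE = START;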

With the session running, I had the POS tests begin again. Bang! It failed again, but I expected it and wanted it to fail again. This time around, finding the problem turned out to be really easy. As soon as the error hit, I was able to check the trapped events and see what had been missing and what was ultimately causing this string of failures.

[Screenshot: the trapped error_reported event from the XE session, shown in the GUI]

Using the GUI (a rare occasion for me with XE), I filtered the displayed events down to just those pertinent to the problem, purely to make it easier to see what these tests had turned up. Here is the highlighted text, a little larger and easier to see:

Snapshot isolation transaction failed accessing database ‘AxRetailDIM’ because snapshot isolation is not allowed in this database. Use ALTER DATABASE to allow snapshot isolation.

Wow! The light bulb shines bright and the clue finally clicks. The POS databases for this client are all set to allow snapshot isolation. Since this error comes at the exact time the failure occurs in the application, it stands to reason that this is the root cause. Time to test by changing the snapshot isolation setting.
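
The change itself is a single statement against the new dimension database (its name taken from the error above):

ALTER DATABASE AxRetailDIM SET ALLOW_SNAPSHOT_ISOLATION ON;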

That is a quick change and easy enough to test again. With the XE session still running and the change in effect, it was time to test via the POS application again. As expected this time, the application worked. This is good news! Time to test again and again and again, to make sure it wasn’t a fluke and that it wouldn’t work only the once.

Not a single failure after the change to allow snapshot isolation. One small change with such a big impact and so few clues to be found except in that super Awesome Super Hero feature of SQL Server called Extended Events!

Being able to quickly find the root cause of so much pain is why I enjoy working with the Extended Events feature. It is an efficient way to find a ton of information while causing little overhead to the server.

The bonus here is that XE allowed us to pinpoint a problem with the proposed solution to help save costs while extending a database beyond the 10GB limitation of SQL Express.

Note: I left some notes in the XE session script. These notes help to point out differences between implementing this particular session on SQL Server 2012 (or later) and SQL Server 2008 (or R2).
