T-SQL Tuesday #081: Sharpen Something

Comments: No Comments
Published on: July 27, 2016

Sharpen Something

It has now been 30 months since I last hosted a T-SQL Tuesday – that was T-SQL Tuesday #51. I recapped that event here, with the original invite here. I can't believe it has been that long since I last hosted. It only seems like yesterday.

Coming into the present day, we are now at T-SQL Tuesday #81. For this month, I would like to try and up the ante a bit. Usually we only get about a week's notice prior to the event to think about the article to write.

This time, I want to invite everybody just a little bit sooner and will follow-up with a reminder seven days prior to the event. The reason I want to do this is because I think this may be a touch more difficult this time.

 

This month I am asking you to not only write a post but to do a little homework – first. In other words, plan to do something, carry out that plan, and then write about the experience. There is a lot packed into that last sentence, so let me try to explain through a few examples of what I might like to see. Hopefully these examples will help you understand the intent and how this month's topic relates to "Sharpening Something".

EXAMPLES

  1. You have learned about a really cool feature called Azure DevTest Labs. Having heard about it, you wish to implement this feature to solve some need in your personal development or corporate environment. Develop a plan to implement the feature, and tell us the problem it solves and about your experiences in getting it to work from start to end. For example, I might use it to create a disposable, easy-to-set-up environment for precons, workshops, and various other types of training.
  2. There is a really awesome book about SQL Server you heard about and you decided to buy it. Plan to sit down and read the book. Take a nugget or two from the book and tell us how you can use that nugget of information within your personal or professional environment.
  3. You know you are extremely deficient at a certain SQL Skill. Tell me what that skill is and develop a plan to get better at that skill. Report on the implementation of this skill and how you are doing at improving. Maybe that skill is about Extended Events, PoSH or availability groups.
  4. Similar to the skill deficiency, you know you do not understand a certain concept within SQL Server as well as you feel you should. Maybe that concept is indexing or statistics (for example). Create a two week plan to become more proficient at that concept. Follow that plan and report on your progress.

To recap, this is an invitation to set a short-term goal covering the next two weeks. Tell everybody what that goal is (in your T-SQL Tuesday post, of course), how you went about creating a plan for that goal, and how you progressed during the two-week interval.

What is T-SQL Tuesday?

T-SQL Tuesday is a monthly blog party hosted by a different blogger each month. This blog party was started by Adam Machanic (blog|twitter). You can take part by posting your own participating post that fits the topic of the month and follows the requirements below. Additionally, if you are interested in hosting a future T-SQL Tuesday, contact Adam Machanic on his blog.

How to Participate

  • Your post must be published between 00:00 GMT Tuesday, August 9th, 2016, and 00:00 GMT Wednesday, August 10th, 2016.
  • Your post must contain the T-SQL Tuesday logo from above and the image should link back to this blog post.
  • Trackbacks should work. But, please do add a link to your post in the comments section below so everyone can see your work.
  • Tweet about your post using the hash tag #TSQL2sDay.

You Deserve to be an MVP

Categories: News, Professional, SSC
Comments: 2 Comments
Published on: July 25, 2016

I have been sitting on this article for a while now. I have been tossing around some thoughts and finally it is time to share some of those thoughts with the masses. I hope to provoke further thought on the topic of being an MVP.

I want to preface all of these thoughts by saying that I believe there are many great people out there who are not MVPs but who deserve to be. These are the types of people that do a lot for the community and strive to bring training and increased knowledge to more people across the various platforms under the Microsoft banner.

Now for some obligatory information. While it is true I am an MVP, I feel obligated to remind people that I have zero (yup that is a big fat zero) influence over the MVP program. I am extremely grateful for the opportunity to retain the position of MVP along with all of the rest of the MVP community (there are a few of us out there). Not only am I grateful to the program for allowing me in, I am also grateful to all of those that nominated me.

Work – and lots of it!

mvp_banner

One of the first things that strikes me is the nomination process for the MVP program. There are two parts to the process. The easy part comes from the person making the nomination. That said, if you are nominating somebody, or if you are asking somebody to nominate you, read this guide from Jen Stirrup. Jen has listed a bunch of work that has to be done on the part of the nominator. Or is it really work for the nominee?

When you really start thinking about it, the nominee is really the person that needs to do a fair amount of work. Yes, it is a good amount of work to do. Then again, maybe it is not very much work for you at all.

One of the things that really bugs me about the process is all of this work. Not specifically that I get the opportunity to do it. No, more specifically that there seems to be a growing trend of entitlement in the community. I feel that far too many people who do a lot within the community feel they are entitled to be accepted into the MVP program. And of course there are others that do much less and exhibit the same sentiment.

Entitled?

When you feel you deserve to be an MVP, are you prepared to do more work? I have heard from more than one source that they will not fill out all the extra information requested when they are nominated. The prevailing reason being that, because they do some bit of community work, they feel entitled to be automatically included. Another prevailing sentiment around this extra work is that Microsoft should already be tracking the individual and know everything there is to know about that individual's contributions.

These sentiments couldn't be further from the truth. If you are thinking along the lines of either of these sentiments, you are NOT an MVP. There are a ton of professionals in the world doing a lot of community activities who are just as deserving of becoming an MVP. It is hardly plausible for Microsoft to track every candidate in the world. Why not tell them a bit about yourself?

RESUME / CV

How do you go about applying for a job? For every job I have ever applied to, I have needed to fill out an application as well as send a resume to the employer. I hardly think any employer would hire me without knowing that I am interested in the job.

That sounds fantastic for a job, right? Surely there is no need to send a resume to become an MVP, is there? Well, technically, no. However, if you treat your community work like you would treat any other experience you have, you may start to see the need for the resume just a touch more. When nominated, you are requested to provide a lot of information to Microsoft that essentially builds your resume to be reviewed for the MVP program.

One of the prevailing sentiments I have heard from more than one place is that filling out all of this information is just bragging on yourself. That sentiment is not too far from reality. Just like any resume, you have to highlight your experiences, your accomplishments and your skills. Without this kind of information, how could Microsoft possibly know anything about you? Do you have the paparazzi following you and sending the information along to Microsoft for you? If you do, then why even bother with the MVP program? Your popularity is probably on a bigger scale than the MVP program if you have your own paparazzi.

Invest in your Professional Self

The more effort you put into your candidate details, the better chance you have of standing out within the review process. Think about it this way: would you turn in a piece of paper with just your name on it for a job? Or would you take hours to invest in your professional self and produce a good resume that will stand out in the sea of resumes that have been submitted?

If you ask me to submit you as an MVP and I do, I would hope that you complete your MVP resume (candidate profile) and submit it to Microsoft. If you don't take the time to do that, then I would find it hard to ever submit you again. The refusal to fill out that information speaks volumes and says either you are not interested or you think too highly of yourself for the MVP program.

Leadership

One of the attributes of an MVP is that of leadership. A simple measure of leadership actually falls into the previous two sections we just covered. If you are contributing to the community, that would be one small form of leadership. If you are willing to follow, that is also a form of leadership. If you are able to complete your information and submit it, then that is also an attribute of leadership.

Leaders demonstrate their leadership by being able to take direction, teaching others (community work), completing tasks when necessary, and reporting back up to their superiors on successes and failures (the last two can be attached to the completion of the nomination data).

Don’t believe me about leadership being an attribute of an MVP? Take a gander at this snippet from my last renewal letter. Highlighted in red is the pertinent sentence.

[Image: snippet from my renewal letter, with the pertinent sentence highlighted in red]

You can run the phrase through a translator or take my word for it that it pertains to exceptional leaders in the technical community.

It’s not a Job though

I am sure some of the pundits out there would be clamoring that if the MVP program were an actual job, then they would perform all of the extra work. I have two observations on this: 1) it speaks to the person's character, and 2) being an MVP really is more like a job than you may think.

The MVP program is not a paid job and probably falls more into the realm of volunteering than a paid job. Despite that, if you treat it more like a job with full-on responsibilities, you will have greater success in getting accepted and you will have a greater sense of fulfillment. Additionally, you will get further along with more opportunities within the MVP program – just like a traditional job.

Just like a traditional job, there are responsibilities, non-disclosures, internal communications, and annual reviews. Did any of those terms raise your eyebrow? The community contribution paperwork does not end with becoming an MVP – that is just the job application / resume. Every year, you have to provide an annual review. This review is a recap of the entire year with your personal accomplishments and is basically a self-review that would be provided to the manager. I am sure you are familiar with the process of providing a self-review to document reasons why you should remain employed or even get a raise.

Non-traditional Job

As with a regular job, you must continue to accomplish something in order to maintain the position. The accomplishments can come in any form of community contribution such as blogs, speaking, mentoring, or podcasts (as examples). What is not often realized is that this takes time. Sometimes, it takes a lot of time. When you consider the time as a part of your effort, I hope you start to realize that being an MVP really is a lot like a part time job (and a full time job in some cases).

When we start talking about being an MVP in quantity of hours contributed and tasks accomplished, it is not hard to see it as a job. So if it really is just like a job, how much time are you willing to invest in the documentation for this award? Is it at least comparable to the time you invest in documenting your professional career when applying for a paying job? If you don’t take that kind of pride or effort in documenting your worth to your personal career development, then I dare say you need to rethink your approach and re-evaluate whether you should be an MVP candidate.

Being an MVP is not just an award – it is a commitment to do more for the community!

Another Query to Help Find the Right Event

Finding the right event or combination of events to monitor may seem like quite a daunting task with so many events to explore and (frequently) too little familiarity with Extended Events. In this follow-up article to Extended Event Help Queries, I will share another means to quickly explore the Extended Events metadata in the effort to find the precise event to fit your needs.

Today I will be sharing a new query or two that I have used on more than one occasion to help track down the event(s) that I wanted to, at least, try while troubleshooting specific problems within the SQL Server Instance. I found these queries useful once again in a recent opportunity to help out some friends from the SQL Community. (Maybe, just maybe, these are really just a spin of other previously shared queries.) I will share the most recent experience in another follow-up post.

For the sake of posterity, I am also adding this to the 60 Days of Extended Events Table of Contents.

Is There an Event that Contains Pertinent Data?

In my previous article I demonstrated how to find an event based solely on the name or description of the event. This is fantastic if the event name (or description) contains one of the magical words you have used. What if the event name or description has nothing to do with the terms you selected? Or, what if the data you seek is attached to the event but would never stand out from the event's name or description?

Now comes the more difficult task right? If the name or description of the event doesn’t relate to the search terms then you just might overlook a few events and be stuck trying to troubleshoot a problem. An equally big problem this could cause is yet another invisible barrier to using Extended Events. It would be easy to slide down the slippery slope and not transition to Extended Events just because an event, applicable to the problem at hand, could not be found.

This is where searching on the payload comes in handy. Remember from a previous article that the payload is the event data that is attached to each event. Imagine how much more accurate we can become by extending our search from the name and description of the event object to the payload names and descriptions. That is exactly what I will now show.

Basically, I am searching the object_columns metadata DMV for Extended Events to find any columns that match my theoretical needs. In this case, I am trying to find anything that may help me find events related to the terms "select" or "statement". When run, there will be quite a few results for these terms. That may actually be a bit of a problem (just the same as if nothing were returned in the results). There is a fix for that (well, probably several fixes as you begin to tweak the query).
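A minimal sketch of that query follows (the aliases and output columns are my own choices; swap the search terms to suit your needs):

    SELECT o.name AS event_name, o.description AS event_description,
            oc.name AS column_name, oc.description AS column_description
        FROM sys.dm_xe_objects o
            INNER JOIN sys.dm_xe_object_columns oc
                ON oc.object_name = o.name
                AND oc.object_package_guid = o.package_guid
        WHERE o.object_type = 'event'
            AND oc.column_type = 'data'  -- payload columns only
            AND (oc.name LIKE '%select%' OR oc.description LIKE '%select%'
                OR oc.name LIKE '%statement%' OR oc.description LIKE '%statement%')
        ORDER BY o.name, oc.name;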

One possible fix is to also include map_values to refine the search. Another fix might be to change the search terms. And still other fixes might include tweaking the predicate. For today, I am going to demonstrate the inclusion of map_values as a secondary tweak to help isolate the needed event.
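A sketch of that version – the extra join to sys.dm_xe_map_values is the secondary tweak:

    SELECT o.name AS event_name, oc.name AS column_name,
            mv.map_value
        FROM sys.dm_xe_objects o
            INNER JOIN sys.dm_xe_object_columns oc
                ON oc.object_name = o.name
                AND oc.object_package_guid = o.package_guid
            INNER JOIN sys.dm_xe_map_values mv
                ON mv.name = oc.type_name
                AND mv.object_package_guid = oc.type_package_guid
        WHERE o.object_type = 'event'
            AND oc.column_type = 'data'
            AND (oc.name LIKE '%statement%' OR oc.description LIKE '%statement%')
            AND mv.map_value LIKE '%select%'  -- require a matching map value, too
        ORDER BY o.name, oc.name;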

In this iteration, my result-set is trimmed significantly due to the predicate and the requirement for a match based on the map_value name. Ultimately, this trimmed my results down to precisely the events that I needed in this case. With this slightly more advanced version, I have greater visibility into the Extended Event realm to help me find just the right event to match my needs.

Are there other Events related to the newfound Event?

 

This is some pretty awesome stuff so far. But it doesn’t end right there. Many events are tied together to help provide a more holistic view of the problem. These events are tied together via keyword. You can read more about keywords here. By searching for events based on a keyword, we can get a lot more intelligent about the sessions we create with Extended Events.

Taking the single result from the last query and then using the SearchKeyword value, for that event, in this next query – I can potentially go from a Great DBA to a Super DBA.
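A sketch of that query is below. The KEYWORD readonly column carries a key into the keyword_map map, which is how events get tied together; @SearchKeyword is a placeholder for whatever value the previous query surfaced:

    DECLARE @SearchKeyword nvarchar(60) = N'execution';  -- hypothetical value from the previous query

    SELECT o.name AS event_name, o.description AS event_description,
            oc.name AS column_name, oc.description AS column_description
        FROM sys.dm_xe_objects o
            INNER JOIN sys.dm_xe_object_columns kw
                ON kw.object_name = o.name
                AND kw.object_package_guid = o.package_guid
            INNER JOIN sys.dm_xe_map_values mv
                ON mv.name = kw.type_name
                AND mv.object_package_guid = kw.type_package_guid
                AND mv.map_key = CONVERT(int, kw.column_value)  -- assumes KEYWORD stores the map key
            LEFT OUTER JOIN sys.dm_xe_object_columns oc
                ON oc.object_name = o.name
                AND oc.object_package_guid = o.package_guid
                AND oc.column_type = 'data'
        WHERE o.object_type = 'event'
            AND kw.name = 'KEYWORD'
            AND mv.map_value = @SearchKeyword
        ORDER BY o.name, oc.column_id;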

I intentionally included the columns from each of the matching events. Doing this, I can also see what potential payload may be trapped when I attempt to trap any of the events from this result set.

Conclusion

There is great power in Extended Events. There is more power in being able to query the metadata for Extended Events in order to trap the appropriate information to properly troubleshoot a problem.

Have fun with these scripts and I hope you enjoy.

Extended Event Help Queries

Enjoy this article re-publication from my original work at SQL Solutions Group.

When working with Extended Events, there are times when a little more information is, well, helpful. You know you want to use extended events to try and monitor for a specific thing to happen. But sometimes you don't know if there is an event for that "thing", or maybe you don't know if there is a session already in place to do that specific task (or if it is even running), or maybe you just need better information about the event to see what kind of data it captures when it is fired.

Sometimes, this information is easy enough to find.  Sometimes, it is less than evident how to find the wanted information.  Some of the information may be visible through the GUI (if you are using SQL Server 2012 or later), and sometimes it is just flat out easier to run a quick script.  Today, I would like to share a few quick scripts to help find some of this information that can make the life of the DBA just a bit easier.

Does an Event Exist?

First let’s tackle the problem of discovery.  When we want to use extended events to try and troubleshoot a problem or to capture more information, it is really good to know if such an event exists.  There are many events that capture data for various different things within SQL Server.  More and more events are being added with each release.  More and more data is being made available to the DBA to help perform a better job and to help the DBA better understand what is really happening within the database environment.

In order to determine if there might be an event that can provide the data for that one "thing" happening within your environment, we could start by querying the SQL Server internals. This next query will do just that for us.
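A sketch of that query (the filter terms here are the ones discussed below; swap in your own):

    SELECT xo.name AS event_name, xo.description
        FROM sys.dm_xe_objects xo
        WHERE xo.object_type = 'event'
            AND (xo.name LIKE '%growth%' OR xo.description LIKE '%growth%'
                OR xo.name LIKE '%file_size%' OR xo.description LIKE '%file_size%')
        ORDER BY xo.name;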

This will query the DMVs related to Extended Events and provide us with the event names that contain the terms used in the filter / predicate.  In this case, I am looking for any events that contain the terms “growth” or “file_size”.  This is in an effort to try and find anything that might be related to database files changing sizes.  You may have seen one of the previous articles on tracking file size changes here or here.

The use of the right term to try and trap the right data for the problem that is happening can greatly decrease the time required to find the correct event.  There may be a bit of an art involved in finding the correct search term.  A little practice can help improve the ability to find the appropriate event more quickly.

Is the Data any Good?

Knowing if an Extended Event exists for the desired problem is one thing.  Knowing what data can be parsed from that event is another thing.  Many times, we may not know what kind of data can be trapped with each of the events.  Many times, we may just not be trapping enough data.  And of course, knowing what kind of data can be trapped by the event may help us in filtering that data down to what is usable for efficient troubleshooting.

Finding that data is just a matter of another simple query against the DMVs.  Let’s take a look at the query that can help us find the data we need:
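Something of this shape will do it (a sketch; the join through the packages DMV is what makes it feel heavier than necessary):

    SELECT p.name AS package_name, o.name AS event_name,
            oc.name AS column_name, oc.type_name, oc.description
        FROM sys.dm_xe_packages p
            INNER JOIN sys.dm_xe_objects o
                ON o.package_guid = p.guid
            INNER JOIN sys.dm_xe_object_columns oc
                ON oc.object_name = o.name
                AND oc.object_package_guid = o.package_guid
        WHERE o.object_type = 'event'
            AND o.name = 'trace_flag_changed'
            AND oc.column_type = 'data'  -- only the columns whose data can change
        ORDER BY oc.column_id;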

In this case, I have taken the name of the event about which I want to learn more details.  In addition, I only want to retrieve the data columns that retrieve data that could change.  As you can see, I am investigating “trace_flag_changed”.  You may recall that particular event from a previous article and you can read more about it here.

You may also be looking at the query and thinking it seems a little overly complex for what we want.  Well, it is.  Here is an alternate version:
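Since the event name is distinctive enough on its own, the same results come back from a single DMV (a leaner sketch):

    SELECT oc.object_name AS event_name, oc.name AS column_name,
            oc.type_name, oc.description
        FROM sys.dm_xe_object_columns oc
        WHERE oc.object_name = 'trace_flag_changed'
            AND oc.column_type = 'data'
        ORDER BY oc.column_id;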

The discovery of the columns available in each event will give us some options to filter as well as some options in what kind of data we parse from the resulting event.

Did I Already Create that Session?

This is arguably less critical than the others.  After all, some will just drop and recreate the session.  But why drop the session if you don’t need to?  On the other hand, a quick glimpse in the GUI could also display that information.  Sadly that doesn’t work for all versions of SQL Server (e.g. SQL Server 2008).  Maybe you have the session running, or maybe it is simply created and not in the running state.

Finding these sessions and whether they are running or not is rather simple.  Just run this next query and it will help you figure out if the session is there or not.  All you need to do is plug in the name of the session and away you go.
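A sketch of that query (the session name here is just an example):

    DECLARE @SessionName sysname = N'system_health';  -- plug in the session you are looking for

    SELECT es.name AS session_name,
            CASE WHEN xs.name IS NULL THEN 'Created (not running)' ELSE 'Running' END AS session_state
        FROM sys.server_event_sessions es
            LEFT OUTER JOIN sys.dm_xe_sessions xs
                ON xs.name = es.name
        WHERE es.name = @SessionName;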

This query has been confirmed to work in SQL 2008 and up through SQL 2014.  It should simplify some of the attempts to better understand what is happening with any sessions that might be out there.

These are a few quick and simple queries to help you along your way to becoming better acquainted with Extended Events.  I hope these will be useful to you and that you will get to know and use extended events regularly.

SQL Server CTE – Back to Basics

Categories: News, Professional, SSC
Comments: 2 Comments
Published on: July 14, 2016

Quick Flashback

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don't know. I do know that I think it is a good challenge and that I intend to participate in it. I hope I can meet the requirements and keep the posts to "basics". Let's just call my first post in the challenge my official acceptance.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

SQL Server CTE

Of all the fundamental concepts within SQL Server, this one drives me a bit batty from time to time. Think about that statement for just a moment. I am not going to go in depth about the syntax of a CTE.

I am not really interested in breaking down the semantics and uses of the CTE. Rather, I want to break down something commonly associated to the CTE that is far more basic. I would classify this as more of a myth and misconception than anything else.

The topic du jour happens to be around punctuation. More specifically, do we use terminators or do we use beginninators? Yes that is a made-up term, but you get the drift. Where exactly should the punctuation be placed? Better yet, is it mandatory to include that punctuation every single time?

Those are the items I wish to discuss and hopefully demystify with a few examples (if that many are even necessary).

Punctuation

Punctuation shouldn't be too big of a deal, right? There is so little punctuation within T-SQL that it really should be very basic and easy to understand. That said, there is this perpetual myth that a semi-colon is required at the beginning of a CTE. Furthermore, and somewhat more accurately, that a semi-colon is a requirement in order for a CTE to work at all. Let's take a look at both of those scenarios.

Case the First

Let’s just take a look at the most basic of scripts. A simple select statement without a CTE.
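Something like this (assuming the AdventureWorks2014 sample database; the query itself is just an illustrative sketch):

    USE AdventureWorks2014;
    GO

    SELECT p.FirstName, p.LastName
        FROM Person.Person p
        WHERE p.LastName = 'Smith'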

Straight away, we can see that I am accessing the AdventureWorks2014 database. This happens to be on my 2014 instance as well. I have not terminated the select statement with a semi-colon. It really is somewhat of a preference here since the statement will work with or without the semi-colon just the same (and despite the previous notion that it was going to become mandatory to terminate all statements with a semi-colon, don't believe it until it actually happens).

Here is the SQL Server 2016 version:
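Assuming a 2016 instance with a copy of the sample database (called AdventureWorks2016 here), it is the same story:

    USE AdventureWorks2016;
    GO

    SELECT p.FirstName, p.LastName
        FROM Person.Person p
        WHERE p.LastName = 'Smith'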

Now let’s go with a basic CTE example written the same way that we commonly see CTEs written on the internet.
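For illustration, a sketch in that familiar internet style – leading semi-colon and all:

    ;WITH testcte AS (
        SELECT p.FirstName, p.LastName
            FROM Person.Person p
            WHERE p.LastName = 'Smith'
    )
    SELECT tc.FirstName, tc.LastName
        FROM testcte tc;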

Notice that I am using the semi-colon as a statement beginninator and terminator all in the same statement. The statement runs perfectly fine – that much is true. The usual justification for the leading semi-colon is that the CTE has to be preceded by one. Let me see how true that is with a minor tweak!
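The tweak, of course, is simply dropping the leading semi-colon:

    WITH testcte AS (
        SELECT p.FirstName, p.LastName
            FROM Person.Person p
            WHERE p.LastName = 'Smith'
    )
    SELECT tc.FirstName, tc.LastName
        FROM testcte tc;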

What do you expect to happen from the execution of this statement? Well, as it stands the statement will execute without error just the same as it did in the previous example.

[Image: results of the tweaked CTE executing without error]

What? That worked? Yes. Yes it did. This is quite frankly due to the reality that a semi-colon is not required to precede a CTE. This statement was the only statement in the batch.

Case the Second

Based on those results, that must mean that a semi-colon is absolutely required when there is a statement preceding the CTE. Therefore, what is really required is the statement that immediately precedes the CTE must be terminated by a semi-colon. Not so fast there. Let’s test this one too!

Let’s use the following basic example.
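A sketch of that example – a preceding statement terminated with a semi-colon, then the CTE:

    USE AdventureWorks2014;
    GO

    WITH testcte AS (
        SELECT p.FirstName, p.LastName
            FROM Person.Person p
            WHERE p.LastName = 'Smith'
    )
    SELECT tc.FirstName, tc.LastName
        FROM testcte tc;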

That query works perfectly fine again. That must be due to the preceding semi-colon, right? Hold on to that thought. How about a rewrite as follows:
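Here, every statement gets a terminator – including the batch separator:

    USE AdventureWorks2014;
    GO;  -- terminating the batch separator too

    WITH testcte AS (
        SELECT p.FirstName, p.LastName
            FROM Person.Person p
            WHERE p.LastName = 'Smith'
    )
    SELECT tc.FirstName, tc.LastName
        FROM testcte tc;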

Here we tweaked the query to represent the notion that all previous statements should be terminated with a semi-colon. What do you think will happen? This will result in an error. GO is a batch separator and therefore does not take a semi-colon. Surely this reinforces the theory that the semi-colon must precede the CTE though, right? Let's rewrite that query again.
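This time, the semi-colon comes off the batch separator:

    USE AdventureWorks2014;
    GO

    WITH testcte AS (
        SELECT p.FirstName, p.LastName
            FROM Person.Person p
            WHERE p.LastName = 'Smith'
    )
    SELECT tc.FirstName, tc.LastName
        FROM testcte tc;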

I removed the semi-colon and now the code segment will execute without error. Wait there is more!

I want to continue tweaking this query. Let’s try multiple selects along with this CTE.
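Something like the following (note the batch separator between the two selects and no semi-colon on the first one):

    USE AdventureWorks2014;
    GO

    SELECT TOP 5 p.FirstName, p.LastName
        FROM Person.Person p
    GO

    WITH testcte AS (
        SELECT p.FirstName, p.LastName
            FROM Person.Person p
            WHERE p.LastName = 'Smith'
    )
    SELECT tc.FirstName, tc.LastName
        FROM testcte tc;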

Now I have a query that has a preceding select that is not terminated with a semi-colon. I have a CTE that is created right after that select and then I select from that CTE. What could ever possibly happen with this particular query?

The short of that story is that the query will execute without error and I will end up with two result sets.

Wait! Hold on two seconds there! Surely the semi-colon is an absolute requirement because we see it everywhere that it is a mandatory requirement.

The reality is that the semi-colon requirement is not really entirely accurate. If the CTE happens to be in the same batch, then the previous statement in the batch must be terminated by the semi-colon. Take the following change for example:
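The change pulls everything into a single batch, wrapping the first select in a BEGIN…END block terminated with a semi-colon:

    USE AdventureWorks2014;
    GO

    BEGIN
        SELECT TOP 5 p.FirstName, p.LastName
            FROM Person.Person p
    END;

    WITH testcte AS (
        SELECT p.FirstName, p.LastName
            FROM Person.Person p
            WHERE p.LastName = 'Smith'
    )
    SELECT tc.FirstName, tc.LastName
        FROM testcte tc;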

If I remove the semi-colon from after the “END” statement, then my query will terminate in an error. If I remove the BEGIN and END from around the first select statement, then the query will fail with an error. The error that I receive is part of the reason for the huge misconception about when a semi-colon is required:

Msg 319, Level 15, State 1, Line 38
Incorrect syntax near the keyword ‘with’. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.

As I have already shown though, this is not accurate. Start thinking on a bigger scale than just semi-colons. Personally, I would prefer to see CTEs that are used in examples on the internet to be encapsulated with a BEGIN…END instead of using a preceding semi-colon (beginninator).

Has this behavior changed with SQL Server 2016? That is easily shown by the following example:
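Running the single-batch test again on a 2016 instance (same assumptions as before) yields the same behavior:

    USE AdventureWorks2016;
    GO

    BEGIN
        SELECT TOP 5 p.FirstName, p.LastName
            FROM Person.Person p
    END;

    WITH testcte AS (
        SELECT p.FirstName, p.LastName
            FROM Person.Person p
            WHERE p.LastName = 'Smith'
    )
    SELECT tc.FirstName, tc.LastName
        FROM testcte tc;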

Recap

These examples clearly demonstrate the inaccuracy of the myth that a semi-colon is required before a CTE. Despite the statement made in the error message, it is not the full story. A semi-colon is necessary to terminate the preceding statement within the same batch, but it is not required every time. If a batch separation stands between the previous statement and the CTE, then no semi-colon is required. And a semi-colon is never required to precede a CTE itself; it is required only to terminate a previous statement within the same batch.

Please don’t be led astray by some notions on the web that dictate a CTE must be preceded by a semi-colon.

SQL Server Desired Enhancements

Happy Belated Birthday

The monthly Data Professionals blog party has come and gone. It happens the second Tuesday of every month – or at least is supposed to happen on that day. This month, the formidable Chris Yates (blog | twitter) has invited everybody to a birthday party – of sorts. As with many birthdays, there is always somebody that wishes you a happy belated birthday. For this party, it is my turn to offer up that belated birthday. It just so happens there was some coordination between Chris and myself for this belated birthday.

Read all about the invite from Chris’ blog. If you missed the link, here it is again – right here.

Plastic Surgery – Desired Enhancements

SQL Server 2016 has come with a ton of cool features, bells, whistles and well cool stuff (yes redundant). That aside, what are some of the really cool features that I would love to see in SQL Server? Let’s run through them (And yes, I will be a bit greedy. It is standard operating procedure when asking for gifts, right?).

Gift #1

I need some way of being able to reproduce a production database cleanly and efficiently in a different environment. Sure, I can script everything and develop an elaborate process to ensure I get an exact duplicate of the stats, stats steps, stats histograms, schema, procedures, indexes, etc. Being able to do all of that cleanly and efficiently is the key. This is a pretty big want from clients and could be extremely useful.

Microsoft has heard the pleas. Introduced with SP2 for SQL Server 2014 there is a new DBCC statement to do exactly that – DBCC CloneDatabase. Check out all the details here.
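The syntax is a one-liner (shown here against the sample database; the clone comes over with schema and statistics but no data):

    -- Requires SQL Server 2014 SP2 or later
    DBCC CLONEDATABASE (AdventureWorks2014, AdventureWorks2014_Clone);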

Gift #1, let’s check that off the list.

Gift #2

Instant File Initialization is fantastic and a huge time saver. Unfortunately, it only applies to the data files. We need something like this implemented for the transaction log. Currently the transaction log must be "zeroed" (0-stamped) when new space is allocated. This mechanism can delay transactions and impact performance when a file growth is required, when manually growing the file, or even when restoring the database.

Believe it or not, Microsoft has addressed this request as well. Microsoft has changed how the transaction log is stamped for a significant performance improvement. This is a part of SQL Server 2016. Bob Dorr explains it very well in his blog post on the topic. You can read his blog post here.

Wow, two for two. We can check gift #2 off the list.

Gift #3

Availability Groups seem to get bogged down under heavy load. The redo and log send seem to get backed up and can have a significant impact on production operations. We need the log transport to be faster. No, check that. Not just faster – it needs to be 2-3x faster.

SQL Server 2016 comes to the rescue again. Amped up on SQL Steroids, Availability Groups has seen a significant improvement in log transport speeds to the secondaries. Some report it as at least twice as fast. The bottleneck has been moved out of the SQL Engine and it has really amped things up from a performance perspective. Here is a supporting article by Jimmy May on the topic – though it doesn’t go deep into the specifics.

Mark another one off the gift registry. Think we can maintain this pace?

Gift #4

Statistics seem to become stale on smaller tables, which dramatically affects the performance of certain queries. These tables will not see 20% of their rows updated (the auto-update threshold) in the leading edge any time before the turn of the year, though they would likely see that much change within the six months following. We need to be able to force these stats to auto-update more regularly without extra intervention.

Fair enough, we already have a trace flag that can help with that (TF 2371). Maybe the environment or management is resistant to having trace flags implemented for something such as this. You never know what the political red tape may dictate.
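Where the red tape does allow it, enabling the flag is a one-liner (or a -T2371 startup parameter):

    DBCC TRACEON (2371, -1);  -- dynamic auto-update statistics threshold, server-wide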

SQL Server 2016 to the rescue again!!! SQL Server 2016 has this trace flag behavior enabled automatically. You don't need to do anything extra special. What this means is that those stats on the smaller tables may actually get updated without intervention despite the lack of change to the rows in the table.

That is four for four. Should we take this birthday party to Vegas? Don't assume I have stacked the deck either! ;0)

Gift #5

I am getting very frustrated with the constant clearing of usage stats every time I rebuild an index. Just because I rebuild the index, it does not mean that I no longer need the usage stats from prior to the rebuild. I need to be able to see the usage stats for the time spanning before and after the rebuild without creating a custom process to capture that information. Sure, it may not be an insanely difficult task to perform, but it is an extra process I have to build out. It's the principle of the matter.

SQL Server 2016 to the rescue yet again. This age-old bug of usage stats being cleared is finally fixed. It is frustrating, to say the least, to have to deal with this kind of bug. It is a huge relief to have it fixed and to be able to get a consistent, clear picture of the usage information since the server has been up.

For more information, you could read this article by Kendra Little – here.

Cha-ching. We are now five for five.

Gift #6

Digging a little deeper on this one. I would really love to see an enhancement to Resource Governor. Not just any enhancement will do. I need it to be enhanced so it will also affect the reporting services engine and the integration services engine in addition to the database engine. I want to be able to use RG to prevent certain reports from over consuming resources within the SSRS engine. Or for that matter, I want to make sure certain SSIS packages do not consume too much memory. If I can implement constraints on resources for these two engines it would be a huge improvement.

We will have to wait a while on this one. It is currently not scheduled for delivery.

Gift #7

This one is going to be a little tougher. It's not in place, but it would be a fantastic gift in my opinion. I would like some tool such as Extended Events to be able to monitor the workload and determine the best recommended trace flags to implement. There are many trace flags that are far from well known but could be extremely helpful to production environments based on the workload and internal workflow. Not all trace flags are built for all environments. An analysis through some automated tool for best recommended flags to implement (again, solely at your discretion) would be fantastic.

Gift #8

Get Profiler out of Management Studio finally. Enough said there. There really is no good solid reason in my opinion to keep it around. It is deprecated. It is hardly helpful with 2014 or 2016 and it is just dead weight. Extended Events really is the better way to go here.

Last Request

 

Can we please fix the spelling of JSON? It really needs to be spelled correctly. That spelling is: JASON.

Awesome SQL Server Feature

The second Tuesday of April 2016 is now upon us and you know what that means. Well, I hope you know what that means.

It is time for TSQL Tuesday. It is now the 77th edition of this monthly blog party. This month the host is Jens Vestergaard (blog | twitter) and he insists we do a little soul searching to figure out what about SQL Server really makes our hearts go pitter patter for SQL Server. Ok, so he didn’t really put it that way but you get the point, right? What is it about SQL Server that ROCKS in your opinion?

Well, I think there are a lot of really cool features in SQL Server that ROCK! It really is hard to pick just one feature because there are a lot of really good features that can make life so much easier as a database professional. Then again, there is that topic that bubbles to the top in my articles – a lot. If you haven’t followed my blog, here is a quick clue: click here.

Why is this feature so AWESOME?

Truth be told, there are a ton of reasons why I really like it. Before diving into the why, I need to share an experience.

A client using Microsoft Dynamics AX to manage the Point of Sale (POS) systems for their retail chain has been running into a problem with the POS database at each store. Approximately a year ago, this client had upgraded most of the store databases from Express Edition to Standard Edition due to the size restriction of the Express Edition. This SKU upgrade was necessary because the database had grown to exceed 10GB. Most of this growth was explicitly related to the INVENTDIM table consuming 3.5GB of space in the data file.

Right here, you may be asking what the big deal is. Just upgrade the SKU to Standard Edition and don't worry about the size of the database. I mean, that is an easy fix, right? Sure, that may be perfectly acceptable in an environment with one or maybe even a handful of servers. Now imagine a retail chain with more than 120 stores and a database at each store, and extrapolate Standard Edition licensing costs for all of those stores. Suddenly we are talking a pretty big expense just to upgrade. All of that because one table chews up 35% of the data file size limit of SQL Server Express Edition.

What if there was an alternative with SQL Express to mitigate that cost and maintain the POS functionality? Enter the SYNONYM! You may recall from a previous post a thing or two that I have said about synonyms in SQL Server. There is good and bad to be had with this feature and most of the bad comes from implementation and not the feature itself.

Using a synonym, I can extend this database beyond the 10GB limitation – or at least that is the proposed theory. To make this work properly, the plan was to create a new database, copy the INVENTDIM table from the POS database to this new database, rename the old INVENTDIM table in the POS database, create a synonym referencing the new table in the new database, and then select from the table to confirm functionality. Sounds easy right? Here is the script that basically goes with that set of steps.
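Here is a sketch of that script. The new database really was named AxRetailDIM (it shows up in an error message later); the POS database name is mine for illustration:

    -- 1. Create the overflow database
    CREATE DATABASE AxRetailDIM;
    GO

    -- 2. Copy the table to the new database (hypothetical POS database name: AxPOS)
    SELECT *
        INTO AxRetailDIM.dbo.INVENTDIM
        FROM AxPOS.dbo.INVENTDIM;
    GO

    USE AxPOS;
    GO

    -- 3. Rename the original table (kept as a failsafe)
    EXEC sp_rename 'dbo.INVENTDIM', 'INVENTDIM_OLD';
    GO

    -- 4. Create a synonym so the app still finds "INVENTDIM"
    CREATE SYNONYM dbo.INVENTDIM
        FOR AxRetailDIM.dbo.INVENTDIM;
    GO

    -- 5. Confirm functionality
    SELECT TOP 10 *
        FROM dbo.INVENTDIM;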

This seems to make a fair amount of sense. Querying the INVENTDIM synonym produces results just as would be expected. Notice that there is one additional step in the script which I did not mention. That step removes unnecessary rows from the INVENTDIM table based on an actual inventory item or barcode for the particular dimension variant related to the item. This helps to trim the table to specific rows related to the retail store available for purchase there. In addition, it serves as a failsafe to get the data down to less than 10GB in case of failure with the synonym.

Testing from within SQL Server proved very optimistic. The synonym was working exactly as desired. Next up was to test the change by performing various transactions through the POS.

The solution not only failed, it failed consistently and dramatically. It didn’t even come close. How is this possible? What is Dynamics AX doing that could possibly subvert the synonym implementation? Time to start troubleshooting.

I checked through the logs. Nothing to be found. I checked and validated permissions. No Dice! I checked the ownership chaining. Still no dice! What in the world is causing this failure?

What if I switch to use a view instead of a synonym? I created a view with cross-database ownership chains intact. Tested the application again – still failed. What if I use a synonym pointed to a table in the same database? Tested from the application and all of a sudden we had success. Now the head-scratching gets a little more intense.

It is time to get serious. What exactly is the Dynamics AX POS application doing that is leading to failure that does not happen when we query directly from within Management Studio? The means to get serious is to now implement that awesome tool I alluded to previously – Extended Events (XE or XEvents).

With no clues available from any of the usual sources (including application error messages), XE or Profiler is about the only thing left to try to capture the root cause of this failure. Since this happens to be a SQL Server 2014 implementation (yeah, I omitted that fact), the only real option in my opinion was to use XE. Truth be told, even on SQL Server 2008 R2, my go-to tool is XE. In this case, here is what I configured to try and catch the problem:
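A sketch of that session (the session name and file path are mine; on 2008/R2 the file target is package0.asynchronous_file_target instead of the event_file shown here):

    CREATE EVENT SESSION TrapPOSErrors
    ON SERVER
    ADD EVENT sqlserver.error_reported (
        ACTION (sqlserver.client_app_name
                , sqlserver.database_name
                , sqlserver.sql_text
                , sqlserver.username)
        WHERE severity >= 11  -- skip purely informational messages
    )
    ADD TARGET package0.event_file (SET filename = N'C:\Database\XE\TrapPOSErrors.xel')
    WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = OFF);
    GO

    ALTER EVENT SESSION TrapPOSErrors ON SERVER STATE = START;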

With the session running, I had the POS tests begin again. Bang! It failed again, but I expected it and wanted it to fail again. This time around, finding the problem turned out to be really easy. As soon as the error hit, I was able to check the trapped events and see what it was that had been missing and ultimately causing this string of failures.

[Image: the trapped error_reported events from the XE session]

Using the GUI (yeah rare occasion for me with XE), I filtered the events down for display purposes only to make it easier to see what was found by running these tests that was pertinent to the problem. Here is the highlighted text a little larger and easier to see:

Snapshot isolation transaction failed accessing database ‘AxRetailDIM’ because snapshot isolation is not allowed in this database. Use ALTER DATABASE to allow snapshot isolation.

Wow! Light bulb shines bright and the clue finally clicks. The POS databases for this client are all set to allow snapshot isolation. Since this error is coming at the time when the failure occurs in the application, it stands to reason that this is the root cause. Time to test by changing the snapshot isolation setting.
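That change is a single statement:

    ALTER DATABASE AxRetailDIM SET ALLOW_SNAPSHOT_ISOLATION ON;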

That is a quick change and easy enough to test again. With the XE session still running and the change in effect, it was time to test via the POS application again. As hoped, the application worked. This is good news! Time to test again and again and again to make sure it wasn't a fluke that it worked just the once.

Not a single failure after the change to allow snapshot isolation. One small change with such a big impact and so few clues to be found except in that super Awesome Super Hero feature of SQL Server called Extended Events!

Being able to quickly find the root cause of so much pain is why I enjoy working with the Extended Events feature. It is an efficient way to find a ton of information while causing little overhead to the server.

The bonus here is that XE allowed us to pinpoint a problem with the proposed solution to help save costs while extending a database beyond the 10GB limitation of SQL Express.

Note: I left some notes in the XE session script. These notes help to point out differences between implementing this particular session on SQL Server 2012 (or later) and SQL Server 2008 (or R2).

SQL Server Select – Back to Basics

Comments: No Comments
Published on: April 1, 2016

Prelude in SQL Minor

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don't know. I do know that I think it is a good challenge and that I intend to participate in it. I hope I can meet the requirements and keep the posts to "basics". Let's just call my first post in the challenge my official acceptance.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics.

SQL Server Select

Of all the fundamental concepts within SQL Server, nothing is potentially more basic and fundamental than the often overlooked Select statement. Think about that statement for just a moment.

How often is the select statement overlooked and under-analyzed? Yet, this tiny little word is a rather large part of the transactions that occur within each instance of SQL Server on a routine basis.

Today, I want to break down the SQL Select statement and show the components of the statement along with the logical processing order of those components.

Let’s put on our thinking caps for a moment and think of all the possible components of this super fundamental piece of the TSQL language – the Select statement.

Anatomy of the Select

The first important piece of information with the Select is to understand what it does. What is a Select statement? What is the purpose of a Select? Really, the purpose of this statement should make sense based on the word alone – to select. In other words, to get or retrieve something that is of interest to you at that moment. In our case, that means to retrieve some piece of data from the database to meet our needs. Very simple, right? You issue a SELECT statement to get something out of the database.

Thinking through the Select statement, certain common elements probably stick out pretty easily. These elements might include the following:

  • Select
  • From
  • Join
  • Where

Then with a slightly more complex query, one might see the following components associated with a Select statement.

  • Top
  • Distinct
  • Group By
  • Order By

Then stepping it up to another level again, one might see the following:

  • Having
  • For
  • Into
  • With (Cube or Rollup)

Soak that in for a moment. Suddenly this fundamental piece of TSQL is starting to look a little more like black voodoo magic. It can get rather complex with this fundamental statement. Now soak in what the purpose of the SELECT is, as was stated just a bit ago. That purpose is to retrieve data that you need. One cannot always retrieve the desired data without some options to go with the SELECTion criteria. Suddenly, this complexity renders itself more as raw DBA power.

Knowing the various aspects of the Select, and not diving too far into what each does, you may be wondering how these pieces fit together into a query. Let’s take a quick look at a few different examples illustrating how these puzzle pieces fit together into a SELECT statement.

Here is an example showing a SELECT that utilizes the FROM, WHERE, GROUP BY, and ORDER BY clauses. When writing a SELECT statement, we write the clauses in the order shown in the query. Using these clauses in a different location doesn't read very well to humans and will throw a syntax error.
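A sketch against the sample database:

    USE AdventureWorks2014;
    GO

    SELECT p.LastName, COUNT(*) AS NameCount
        FROM Person.Person p
        WHERE p.LastName LIKE 'S%'
        GROUP BY p.LastName
        ORDER BY NameCount DESC;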

In this next example, I have used a few different clauses:
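This sketch (which I will reuse shortly) joins the employees to the salespeople:

    SELECT DISTINCT TOP 10 e.BusinessEntityID, e.JobTitle
        FROM HumanResources.Employee e
            INNER JOIN Sales.SalesPerson sp
                ON sp.BusinessEntityID = e.BusinessEntityID
        ORDER BY e.BusinessEntityID;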

The real differences here being that I have used the DISTINCT, TOP and JOIN clauses. Again, trying to write this query with the TOP preceding the DISTINCT or maybe with the JOIN preceding the FROM doesn't really make much sense when reading it. Nor would it work if we tried to execute the query, due to syntax errors.

This next example uses another option to output the results into an XML document using the FOR clause:
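For example (dropping the join to keep the XML small):

    SELECT TOP 10 e.BusinessEntityID, e.JobTitle
        FROM HumanResources.Employee e
        ORDER BY e.BusinessEntityID
        FOR XML PATH('Employee'), ROOT('Employees');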

And in this last example, I take the previous example to output the results into a temp table.
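Swapping the FOR XML clause for an INTO clause (the two cannot be combined) lands the rows in a temp table instead:

    SELECT TOP 10 e.BusinessEntityID, e.JobTitle
        INTO #EmployeeSample
        FROM HumanResources.Employee e
        ORDER BY e.BusinessEntityID;

    SELECT es.BusinessEntityID, es.JobTitle
        FROM #EmployeeSample es;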

These are all examples of the possibilities that can be used with the SELECT statement to retrieve data from the database in order to meet our needs. Each one illustrates different variations and some level of complexities between each of the examples.

Now the question: did you know that SQL Server does not process these statements in the same syntactic, English-friendly sequence in which we write them? You may be surprised to learn that there is a different processing order for these pieces of the SELECT. This is called the logical processing order.

Let’s take one of the queries already shown and compare.

The query as we may write it will look like this:

[Image: the example query annotated in the written (syntactic) order of its clauses]

The query as SQL Server will process it will look like this:

[Image: the same query annotated in the logical processing order – FROM, then ON/JOIN, WHERE, GROUP BY, HAVING, SELECT, DISTINCT, ORDER BY, and finally TOP]

Or in other words, we may see it as shown in this video: [Video: the logical processing order of the SELECT]

That is significantly different than we might think, but it makes sense if you think about it long enough.

How does one know that SQL Server processes these steps in this order? Well, besides the documentation in BOL that supports this, one can dive into the internals and generate a tree that will show this in action. For instance, let’s use the same query as just shown and see what the processing tree would look like.
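One way to see such a tree (a sketch – these are undocumented trace flags, so keep this on a sandbox instance) is to pair trace flag 8605 with 3604:

    DBCC TRACEON (3604);  -- route trace output to the client
    GO

    SELECT DISTINCT TOP 10 e.BusinessEntityID, e.JobTitle
        FROM HumanResources.Employee e
            INNER JOIN Sales.SalesPerson sp
                ON sp.BusinessEntityID = e.BusinessEntityID
        ORDER BY e.BusinessEntityID
        OPTION (RECOMPILE, QUERYTRACEON 8605);  -- 8605 prints the converted tree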

[Image: the internal processing (converted) tree for the example query]

Besides looking very colorful and almost as if it were in Greek, this processing tree is somewhat revealing in how SQL Server breaks down that SELECT statement we have been using. One more pic, and then a quick explanation.

[Image: SHOWPLAN_TEXT output for the example query]

This last image was generated using the SHOWPLAN_TEXT setting. I want to show this because it helps to reinforce a concept from the processing tree. In this last image, note that the OUTER REFERENCE is to the SalesPerson table. This also means (without stating it) that the inner reference is to the Employee table. I have added the INNER and OUTER table references to the image of the processing tree. Why is this important? Well, when trying to interpret the tree it is important to understand that one starts from the innermost node and then reads from bottom to top, right to left (as you ascend the nodes).

In this case, we will see that the Employee table is first referenced and processed, then the SalesPerson table, then the ON clause is evaluated. In the tree, I highlighted the ON that pertains to the join in bright green. The vertical dotted lines on the left are to help visually keep the nodes aligned as you step in/out from one level to the next. As we climb the tree to the top left corner, we will see that the last piece of the select to be processed in this case is the TOP statement. We can also see that the top is dependent on the ORDER BY (highlighted in RED) in order to get the correct desired rows.

Recap

This article jumps through the SELECT statement very briskly. The SELECT statement is a powerful tool within SQL Server. Gaining an understanding of how the SELECT statement is processed can help you write a more appropriate SELECT and even pseudo-code it: the engine first determines the source of the data and then breaks down the additional requirements that have been sent with the SELECT. The more options that are thrown in with the SELECT statement, the larger and more complex the processing tree becomes for that statement.

Used well, a SELECT will become your ally and a handy tool in the toolbox. This has been an introduction to the SELECT and how it is processed behind the scenes. If you would like to see more information about the different clauses related to the select statement, check out the BOL entry here.

As an aside, the tree that was displayed is called the “converted tree.” Here is a listing of the other possible trees.

[Image: listing of the optimizer tree types]

SQL Agent Job Schedules

Many moons ago, I posted a script to help report on the human friendly scheduling information for SSRS subscriptions. You can read all about that here.

In a very similar fashion, today I want to share how easy it is to turn the internal schedules for SQL Agent jobs into something that is more human friendly. SQL Agent job schedules should be easy for humans to read, if for nothing else than the fact that you have to provide this information to the auditors from time to time. We all know they can't interpret the numeric job schedules and properly determine what they mean.

The job schedules work quite well with the current design – which is not human friendly. The use of numbers is a streamlined approach and far more efficient. It might be a bit more useful to us if there were some sort of stored translation table; that would make it terribly easy to figure out what is happening within the schedules for the SQL Agent jobs. We don't have that, so we often have to do a little trial and error. Or we rely on people that have done that trial and error – with a little verification on our part. After all, these scripts are posted on the internet.

SQL Agent Job Schedules

I am not going to break down the script. With the similarity to the aforementioned SSRS script, I would recommend looking there for a little bit of an explanation. Here is the script to retrieve SQL Agent job schedules from the msdb database.
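A condensed sketch of the idea (the full script decodes more of the freq_* columns, but this shows the translation approach):

    SELECT j.name AS job_name,
            s.name AS schedule_name,
            CASE s.freq_type
                WHEN 1 THEN 'Once'
                WHEN 4 THEN 'Daily'
                WHEN 8 THEN 'Weekly'
                WHEN 16 THEN 'Monthly'
                WHEN 32 THEN 'Monthly (relative)'
                WHEN 64 THEN 'When SQL Agent starts'
                WHEN 128 THEN 'When CPU idle'
            END AS frequency,
            CASE s.freq_subday_type
                WHEN 1 THEN 'At the scheduled time'
                WHEN 2 THEN 'Every ' + CAST(s.freq_subday_interval AS varchar(10)) + ' second(s)'
                WHEN 4 THEN 'Every ' + CAST(s.freq_subday_interval AS varchar(10)) + ' minute(s)'
                WHEN 8 THEN 'Every ' + CAST(s.freq_subday_interval AS varchar(10)) + ' hour(s)'
            END AS subday_frequency,
            STUFF(STUFF(RIGHT('000000' + CAST(s.active_start_time AS varchar(6)), 6), 5, 0, ':'), 3, 0, ':') AS start_time
        FROM msdb.dbo.sysjobs j
            INNER JOIN msdb.dbo.sysjobschedules js
                ON js.job_id = j.job_id
            INNER JOIN msdb.dbo.sysschedules s
                ON s.schedule_id = js.schedule_id
        WHERE j.name NOT LIKE '________-____-____-____-____________'  -- skip SSRS subscription jobs (GUID names)
        ORDER BY j.name;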

This script will only retrieve the SQL Agent Job schedules that are not SSRS related and that are not SSIS maintenance package related. The SSRS stuff has already been introduced, and a quick eval of those blog posts will show the differences between grabbing SSRS information vs. grabbing just SQL Agent Jobs.

Stay tuned. I will post a script for the SSIS related scheduling information.

Now, go forth and audit your agent jobs. Document the schedules in an easy to read format. Give that report / document to the auditors and your manager and look like a rock star for the day!

Index Cannot Be Reorganized…

Working diligently as any good DBA might, you have established maintenance routines for each of the SQL Servers under your purview.

As a part of this maintenance you have scripted solutions to intelligently manage and maintain the fragmentation levels for each of the indexes within each database on each instance.

To further show how diligent you are as a DBA, the outcomes of each maintenance run are logged and you review the logs each morning. This routine helps keep you on top of everything that is happening within the environment.

For months, maybe even years, things are running smoothly. Never a failure. Never an error. Just routine log review day after day. Then one day it happens – there is an error. The index maintenance script failed one night.

Index Cannot be Reorganized…

You receive the error message similar to the following:

Msg 2552, Level 16, State 1, Line 1 The index “blah” (partition 1) on table “blah_blah_blah” cannot be reorganized because page level locking is disabled

Immediately, you start double-checking yourself and verifying that it worked the previous night. You even go so far as to confirm that the same index was previously reorganized. How is it possible that it is failing now on this index? What has changed? Has something changed?

Time for a little digging and investigating, so the dirty work begins. On one side of the coin you are relieved to be able to do something different. On the other side of that coin, you are rather annoyed that something seems to have changed. These feelings are perfectly normal!

First things first – you investigate the indexes in question to confirm what the error is saying. This is easily done with a query such as the following:
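For instance (a sketch that lists any index with page locks disabled):

    SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name,
            i.allow_page_locks, i.allow_row_locks
        FROM sys.indexes i
        WHERE i.allow_page_locks = 0
        ORDER BY table_name, index_name;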

Scrolling through the results, you notice (eventually) that the IX_SpecialOfferProduct_ProductID in the AdventureWorks2014 database has page locks disabled. You are absolutely certain that this index was created allowing page locks. Pondering the problem for a moment, you recall having read about the default trace (there are several articles on the default trace – here) and the thought occurs to try and see if there is a breadcrumb there about the change. Using the query from that default trace article, a picture starts to unscramble. Here is that query reposted and a snippet of the results:
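(A sketch of that query, at least – trimmed to the pieces that matter here. Reading from the first rollover file picks up the whole set of default trace files.)

    DECLARE @path nvarchar(260);

    SELECT @path = REVERSE(SUBSTRING(REVERSE(t.path), CHARINDEX(N'\', REVERSE(t.path)), 260)) + N'log.trc'
        FROM sys.traces t
        WHERE t.is_default = 1;

    SELECT te.name AS event_name, gt.DatabaseName, gt.ObjectName,
            gt.LoginName, gt.ApplicationName, gt.StartTime
        FROM sys.fn_trace_gettable(@path, DEFAULT) gt
            INNER JOIN sys.trace_events te
                ON te.trace_event_id = gt.EventClass
        WHERE te.name = 'Object:Altered'
        ORDER BY gt.StartTime;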

[Image: default trace results showing the Object:Altered events for the index]

This is a great start. Not seen in the results is the timestamp showing when it was done – which was due solely to snipping. Also not shown is the text of the statement that was run for those three events. So close, yet so far away. This is not quite enough to have the smoking gun evidence to show Jason (me) that I did something wrong and unauthorized. What to do now?

All is not lost yet! Your stunning memory kicks in and you recall several articles about using Extended Events to audit server and database changes. Better yet, you recall that you deployed an XE session to the server where this error occurred. And yes, you are slightly embarrassed that you failed to remove the XE session after fiddling with it. We won’t tell anybody that you deployed a test XE session to a production server as if it were your sandbox. The session currently deployed is trapping all object changes unlike the following session that has it filtered down to just objects that are indexes.
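A sketch of such a session follows. I have left the index filter as a comment – object_type is a mapped field, so look up the INDEX map key on your build (or simply filter when reading the target):

    -- Find the map key to use in the predicate
    SELECT mv.map_key, mv.map_value
        FROM sys.dm_xe_map_values mv
        WHERE mv.name = 'object_type'
            AND mv.map_value = 'INDEX';

    CREATE EVENT SESSION ObjectChangeAudit
    ON SERVER
    ADD EVENT sqlserver.object_altered (
        ACTION (sqlserver.client_app_name
                , sqlserver.server_principal_name
                , sqlserver.session_id
                , sqlserver.sql_text)
        -- WHERE object_type = <map_key from above>
    )
    ADD TARGET package0.event_file (SET filename = N'C:\Database\XE\ObjectChangeAudit.xel')
    WITH (STARTUP_STATE = ON);
    GO

    ALTER EVENT SESSION ObjectChangeAudit ON SERVER STATE = START;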

You query the trace file with a query like this:
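Along these lines (the file path matches wherever the session writes; shred the XML for the pieces you care about – ddl_phase is what separates the start row from the commit row):

    SELECT x.event_data.value('(event/@name)[1]', 'varchar(60)') AS event_name,
            x.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
            x.event_data.value('(event/data[@name="object_name"]/value)[1]', 'varchar(256)') AS object_name,
            x.event_data.value('(event/data[@name="ddl_phase"]/text)[1]', 'varchar(20)') AS ddl_phase,
            x.event_data.value('(event/action[@name="sql_text"]/value)[1]', 'varchar(max)') AS sql_text
        FROM (
            SELECT CONVERT(xml, t.event_data) AS event_data
                FROM sys.fn_xe_file_target_read_file(
                    N'C:\Database\XE\ObjectChangeAudit*.xel', NULL, NULL, NULL) t
        ) x
        ORDER BY event_time;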

Wow! Look at the results! There is a smoking gun finally.

[Image: XE results showing the DDL begin/commit rows and the offending ALTER INDEX statement]

Two results happen to pin the root cause of the change squarely on my shoulders. Yup, Jason changed that index to disallow page locks. I highlighted three different areas of interest in the results. The yellow and green indicate the DDL phase. One row for the start of the statement, and another row for the commit of that statement. The red highlight shows me the exact change that was made to this index. This is all very good info!

What Now?

It really is great to have the smoking gun. If something is broken that previously worked, it is essential to find the root cause. With a root cause under the belt, what needs to be done to fix the failure? That is a little bit easier than finding the root cause. That is, unless there is a technical reason for the index to no longer allow page locks (maybe that smoking gun is less of a smoking gun and more like baby spittle after all).

Here is how you fix the problem:
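Re-enable page locks on the index:

    ALTER INDEX IX_SpecialOfferProduct_ProductID
        ON Sales.SpecialOfferProduct
        SET (ALLOW_PAGE_LOCKS = ON);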

But, But, But…

The Extended Events session would be very noisy and capture every alter index statement, right? It should capture statements like the following, right?
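Statements like these, for example:

    ALTER INDEX IX_SpecialOfferProduct_ProductID
        ON Sales.SpecialOfferProduct
        REORGANIZE;

    ALTER INDEX IX_SpecialOfferProduct_ProductID
        ON Sales.SpecialOfferProduct
        REBUILD;

    ALTER INDEX IX_SpecialOfferProduct_ProductID
        ON Sales.SpecialOfferProduct
        SET (ALLOW_PAGE_LOCKS = OFF);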

The answer to that question is: Yes, Yes, Yes. These statements are all captured due to the use of the ALTER statement. Here are the results of the XE session with all of these scripts having been executed:

[Image: XE results showing each of those ALTER INDEX statements captured]

If you want to audit when the indexes are changing and how they are changing, this will do just the trick. If there are a ton of changes, then be prepared for a deluge of information.
