SQL Server Configurations – Back to Basics

One thing that SQL Server does very well is come pre-configured in a lot of ways. These pre-configured settings are called defaults. Having default settings is not a bad thing, nor is it necessarily a good thing.

For me, the defaults lie somewhere in the middle ground and they are kind of just there. You see, having defaults can be good for a horde of people. On the other hand, the default settings can be far from optimal for your specific conditions.

The real key with default settings is to understand what they are and how to get to them. This article is going to go through some of the basics around one group of these defaults. That group of settings will be accessible via the sp_configure system stored procedure. You may already know some of these basics, and that is ok.

I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Some Assembly Required…

Three dreaded words we all love to despise but have learned to deal with over the past several years – some assembly required. More and more we find ourselves needing to assemble our own furniture, bookcases, barbecue grills, and bathroom sinks. We do occasionally want some form of set it and forget it.

The problem with set it and forget it types of settings (or defaults) is, as I mentioned, that they don’t always work for every environment. We do occasionally need to manually adjust settings to what is optimal for that database, server, and/or environment.

When we fail to reconfigure the defaults, we could end up with a constant firefight that we just don’t ever seem to be able to win.

So how do we find some of these settings that can help us customize our environment for the better (or worse)? Let’s start taking a crack at this cool procedure called sp_configure! Ok, so maybe I oversold that a bit – but there is some coolness to it.

Looking at MSDN for sp_configure, I can see that it is a procedure to display or change global configuration settings for the current server.

If I run sp_configure without any parameters, I will get a complete result set of the configurable options via this procedure. Let’s look at how easy that is:
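No parameters, no filters, nothing fancy – just call the procedure:

EXEC sys.sp_configure;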

Ok, so that was exceptionally easy. I can see that the procedure returns the configurable settings, the max value for the setting, configured value, and the running value for each setting. That is basic information, right? If I want a little more detailed information, guess what? I can query a catalog view to learn even more about the configurations – sys.configurations.
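A query along these lines pulls that extra detail (grab whichever columns you care about):

SELECT  name
      , value
      , value_in_use
      , minimum
      , maximum
      , is_dynamic
      , is_advanced
      , description
FROM sys.configurations
ORDER BY name;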

That query will also show me (in addition to what I already know from sp_configure) a description for each setting, whether the setting is dynamic, and whether the setting is an advanced configuration (and thus requires “show advanced options” to be enabled). Pro-tip: the procedure just queries the catalog view anyway, as you can see if you peek at the proc text.
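The exact text shifts a little from version to version, so the easiest way to peek is to pull the definition on your own instance and look for the select against sys.configurations:

EXEC sp_helptext 'sp_configure';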

Seeing that we have some configurations that are advanced and there is this option called “show advanced options”, let’s play a little bit with how to enable or disable that setting.
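Something like the following is all it takes (flip the 1 to a 0 to disable it again):

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;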

With the result (on my system) being:

We can see there that the configuration had no effect because I already had the setting enabled. Nonetheless, the attempt to change still succeeded. Let’s try it a different way.
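This next batch is what I mean by a different way – shaving the option name down a little more each time. Exactly which abbreviation finally collides with another option name depends on your version, so treat the last one here as an assumption:

EXEC sys.sp_configure 'show advanced options', 1;
EXEC sys.sp_configure 'show advanced', 1;
EXEC sys.sp_configure 'show adv', 1;
EXEC sys.sp_configure 'show', 1;
EXEC sys.sp_configure 'sh', 1;   -- assumed ambiguous (it can also match xp_cmdshell), so this one errors
RECONFIGURE;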

I ran a whole bunch of variations there for giggles. Notice how I continue to try different words or lengths of words until it finally errors? All of them (except the attempt that failed) have the same net effect: they change the configuration “show advanced options”. This is because all that is required (as portrayed in the failure message) is that the term provided is unique enough. The uniqueness requirement (shortcut) comes from the way sp_configure matches the name you pass in.
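Paraphrased (this is not the verbatim procedure text – run sp_helptext 'sp_configure' for that), the matching logic amounts to something like this:

DECLARE @configname nvarchar(35) = N'show adv';

SELECT name, value, minimum, maximum, value_in_use
FROM sys.configurations
WHERE name LIKE '%' + @configname + '%';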

See the use of the wildcards and the “like” term? This is allowing us to shortcut the configuration name – as long as we use a unique term. If I select a term that is not unique, then the proc will output every configuration option that matches the term I used. From the example I used, I would get this output as duplicates to the term I used.

Ah, I can see the option I need! I can now just copy and paste that option (for the sake of simplicity) into my query and just proceed along my merry way. This is a great shortcut if you can’t remember the exact full config name or if you happen to be really bad at spelling.

The Wrap

In this article I have introduced you to some basics in regards to default server settings and how to quickly see or change those settings. Not every environment is able to rely on set-it-and-forget-it types of defaults. Adopting the mentality that “some assembly is required” with your environments is a good approach. At the bare minimum, it will help keep you on top of your configurations. This article will help serve as a decent foundation for some near-future articles. Stay tuned!

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Common Tempdb Trace Flags – Back to Basics

Once in a while I come across something that sounds fun or interesting and decide to dive a little deeper into it. That happened to me recently and caused me to preempt my scheduled post and work on writing up something entirely different. Why? Because this seemed like fun and it seemed useful.

So what is it I am yammering on about that was fun?

I think we can probably concede that there are some best practices flying around in regards to the configuration of tempdb. One of those best practices is in regards to two trace flags within SQL Server. These trace flags are 1117 and 1118. Here is a little bit of background on the trace flags and what they do.

A caveat I have now for the use of trace flags is that I err on the same side as Kendra (author of the article just mentioned). I don’t generally like to enable trace flags unless it is very warranted for a very specific condition. As Kendra mentions, TF 1117 will impact more than just the tempdb data files. So use that one with caution.

Ancient Artifacts

With the release of SQL Server 2016, these trace flags were rumored to be a thing of the past and hence completely unnecessary. That is partially true. The trace flags are unneeded and SQL 2016 does have some different behaviors, but does that mean you have to do nothing to get the benefits of these trace flags as implemented in 2016?

As it turns out, these trace flags no longer do what they did in previous editions. SQL Server now pretty much has it baked into the product. Buuuuut, do you have to do anything slightly different to make it work? This was something I came across while reading this post and wanted to double check everything. After all, I was also under the belief that it was automatically enabled. So let’s create a script that checks these things for me.
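A trimmed-down sketch of that script looks something like the following. The full version also generates the action scripts mentioned a bit later; the version checks and the exact details here are assumptions you should verify before leaning on it:

DECLARE @MajorVersion int =
    CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS varchar(20)), 4) AS int);

IF @MajorVersion >= 13
BEGIN
    /* SQL Server 2016 and later: the trace flags no longer apply.
       Check the database scoped behaviors instead. Wrapped in dynamic SQL
       so the batch still compiles on older builds that lack these columns. */
    EXEC sys.sp_executesql N'
        SELECT d.name AS database_name, d.is_mixed_page_allocation_on
        FROM sys.databases AS d
        WHERE d.name = N''tempdb'';

        SELECT fg.name AS filegroup_name, fg.is_autogrow_all_files
        FROM tempdb.sys.filegroups AS fg;';
END
ELSE
BEGIN
    /* Pre-2016: see whether TF 1117 / 1118 are enabled globally. */
    DBCC TRACESTATUS (1117, 1118, -1);

    PRINT 'If they are not enabled and you want the classic tempdb recommendation,';
    PRINT 'the command would be: DBCC TRACEON (1117, 1118, -1);';
END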

Holy cannoli batman – that is more than a simple script, right? Well, it may be a bit of overkill. I wanted it to work for versions before, after, and including SQL Server 2016 (when these sweeping changes went into effect). You see, I am checking for versions where the TF was required to make the change and also for versions after the change where the TF has no effect. In 2016 and later, these settings are database scoped and the TF is unnecessary.

The database scoped settings can actually be queried in 2016 more specifically with the following query.
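Something along these lines works – run it from the context of the database you are interested in (tempdb in this case):

-- USE tempdb;
SELECT  d.name  AS database_name
      , d.is_mixed_page_allocation_on
      , fg.name AS filegroup_name
      , fg.is_autogrow_all_files
FROM sys.databases AS d
CROSS JOIN sys.filegroups AS fg
WHERE d.database_id = DB_ID();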

In this query, I am able to determine whether mixed page allocations and autogrow of all files are enabled. These settings can be retrieved from sys.databases (is_mixed_page_allocation_on) and sys.filegroups (is_autogrow_all_files) respectively. If I run this query on a server where the defaults were accepted during the install, I would see something like the following.

You can see here, the default settings on my install show something different than the reported behavior. While autogrow all files is enabled, mixed_page_allocations is disabled. This matches what we expect to see by enabling the Trace Flags 1117 and 1118 – for the tempdb database at least. If I look at a user database, I will find that mixed pages is disabled by default still but that autogrow_all_files is also disabled.

In this case, you may or may not want a user database to have all data files grow at the same time. That is a great change to have implemented in SQL Server with SQL 2016. Should you choose to enable it, you can do so on a database by database basis.

As for the trace flags? My query checks to see if maybe you enabled them on your instance or if you don’t have them enabled for the older versions of SQL Server. Then the script generates the appropriate action scripts and allows you to determine if you want to run the generated script or not. And since we are changing trace flags (potentially) I recommend that you also look at this article of mine that discusses how to audit the changing of trace flags. And since that is an XEvent based article, I recommend freshening up on XEvents with this series too!

The Wrap

In this article I have introduced you to some basics in regards to default behaviors and settings in tempdb along with some best practices. It is advisable to investigate some of these recommendations from time to time and confirm what we are really being told so we can avoid confusion and misinterpretation.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Changing Default Logs Directory – Back to Basics

Every now and then I find a topic that seems to fit perfectly into the mold of the theme of “Back to Basics”. A couple of years ago, there was a challenge to write a series of posts about basic concepts. Some of my articles in that series can be found here.

Today, my topic to discuss is in regards to altering the default logs directory location. Some may say this is no big deal and you can just use the default location set during install. Fair enough, there may not be a massive need to change that location.

Maybe, just maybe, there is an overarching need to change this default. Maybe you have multiple versions of SQL Server in the enterprise and just want a consistent folder to access across all servers so you don’t have to think too much. Or possibly, you want to copy the logs from multiple servers to a common location on a central server and don’t want to have to code for a different directory on each server.

The list of reasons can go on and I am certain I would not be able to list all of the really good reasons to change this particular default. Suffice it to say, there are some really good requirements out there (and probably some really bad ones too) that mandate the changing of the default logs directory to a new standardized location.

Changes

The logs that I am referring to are not the transaction logs for the databases – oh no no no! Rather, I am referring to the error logs, the mini dumps, and the many other logs that may fall into the traditional “logs” folder during the SQL Server install. Let’s take a peek at a default log directory after the install is complete.

I picked a demo server that has a crap load of stuff available (and yeah, not so fresh after install) but where the installation placed the logs by default. You can see I have traces, default XE files, some SQL logs, and some dump files. There is plenty going on with this server. A very fresh install would have similar files but not quite as many.

If I want to change the Log directory, it is a pretty easy change but it does require a service restart.

In SQL Server Configuration Manager, navigate to services then to “SQL Server Service”. Right click that service and select properties. From properties, you will need to select the “Startup Parameters” tab. Select the parameter with the “-e” and errorlog in the path. Then you can modify the path to something more appropriate for your needs and then simply click the update button. After doing that, click the ok button and bounce the SQL Service.

After you successfully bounce the service, you can confirm that the error logs have been migrated to the correct folder with a simple check. Note that this change impacts the errorlogs, the default Extended Events logging directory, the default trace directory, the dumps directory and many other things.
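The simple check can be as basic as asking the server where the current errorlog lives (available via SERVERPROPERTY on recent versions):

SELECT SERVERPROPERTY('ErrorLogFileName') AS current_errorlog_path;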

See how easy that was? Did that move everything over for us? As it turns out, it did not. The old directory will continue to have the SQL Agent logs. We can see this with a check from the Agent log properties like the following.
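If you prefer a query over the GUI for that check, the undocumented agent properties procedure in msdb shows the same information (being undocumented, its output can vary between versions – look for the errorlog path column):

EXEC msdb.dbo.sp_get_sqlagent_properties;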

To change this, I can execute a simple stored procedure in the msdb database and then bounce the sql agent service.
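That procedure is the undocumented sp_set_sqlagent_properties. The path below is just a placeholder for whatever standardized location you chose:

EXEC msdb.dbo.sp_set_sqlagent_properties
     @errorlog_file = N'C:\Database\Logs\SQLAGENT.OUT';   -- placeholder path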

After the agent service restart, the agent logs are now writing to the new directory, as verified here.

At this point, all that will be left in the previous folder will be the files that were written prior to the folder changes and the service restarts.

The Wrap

In this article I have introduced you to an easy method to move the logs for SQL Server and the SQL Server Agent to a custom directory that better suits your enterprise needs. This concept is a basic building block for some upcoming articles – stay tuned!

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

12 Days Of Christmas and SQL

Categories: News, Professional, SSC
Comments: No Comments
Published on: December 26, 2017

One of my all-time favorite times of the year happens to be the Christmas Season. I enjoy the season because it is supposed to remind us to try and be better people. And for me, it does help. In all honesty, it should be a better effort year round, but this is a good time of year to try and get back on track and to try and focus more on other more important things.

For me, one of the more important things is to try and help others. Focusing on other people and their needs helps them but also helps one’s self. It is because of the focus on others that I enjoy, not just Christmas Day, but also the 12 Days of Christmas.

The 12 Days of Christmas is about giving for 12 Days. Though, in this day and age, most view it as a span of 12 Days in which they are entitled to receive gifts. If we are giving for a mere 12 Days and not focusing on receiving, then wouldn’t we all be just a little bit happier? I know that when I focus more on the giving I am certainly happier.

Giving

In the spirit of the 12 Days of Christmas and Giving, I have a 12 Day series that I generally try to do each Holiday Season. The series will generally begin on Christmas day to align with the actual 12 Days of Christmas (rather than the adopted tradition of ending on Christmas). This also means that the series will generally end on the celebration of “Twelfth Night” which is January 5th.

Each annual series will include several articles about SQL Server and have a higher goal of trying to learn something more about SQL Server. Some articles may be deep technical dives, while others may prove to be more utilitarian with a script or some functionality that can be quickly put to use and frequently used. Other articles may just be for fun. In all, there will be several articles which I hope will bring some level of use for those that read while they strive to become better at this thing called SQL Server.

This page will serve as a landing page for each of the annual series and will be updated as new articles are added.

2017

  1. XE Permissions – 25 December 2017
  2. Best New(ish) SSMS Feature – 26 December 2017
  3. XE System Messages – 27 December 2017
  4. Correlate Trace and XE Events – 28 December 2017
  5. Audit Domain Group and User Permissions – 29 December 2017
  6. An Introduction to Templates – 30 December 2017
  7. Failed to Create the Audit File – 31 December 2017
  8. Correlate SQL Trace and Actions – 1 January 2018
  9. Dynamics AX Event Session – 2 January 2018
  10. Sharepoint Diagnostics and XE – 3 January 2018
  11. Change Default Logs Directory – 4 January 2018
  12. Common Tempdb Trace Flags – Back to Basics (Day of Feast) – 5 January 2018

2015

  1. Failed – 25 December 2015
  2. Failed – 26 December 2015
  3. Failed – 27 December 2015
  4. Failed – 28 December 2015
  5. Failed – 29 December 2015
  6. Log Files from Different Source – 30 December 2015
  7. Customize XEvent Log Display – 31 December 2015
  8. Filtering Logged Data – 1 January 2016
  9. Hidden GUI Gems – 2 January 2016
  10. Failed – 3 January 2016
  11. Failed – 4 January 2016
  12. A Day in the Stream – 5 January 2016

2013

  1. Las Vegas Invite – 25 December 2013
  2. SAN Outage – 26 December 2013
  3. Peer to Peer Replication – 27 December 2013
  4. Broken Broker – 28 December 2013
  5. Peer Identity – 29 December 2013
  6. Lost in Space – 30 December 2013
  7. Command N Conquer – 31 December 2013
  8. Ring in the New Year – 1 January 2014
  9. Queries Going Boom – 2 January 2014
  10. Retention of XE Session Data in a Table – 3 January 2014
  11. Purging syspolicy – 4 January 2014
  12. High CPU and Bloat in Distribution – 5 January 2014

2012 (pre-Christmas)

  1. Maint Plan Logs – 13 December 2012
  2. Service Broker Out of Control – 14 December 2012
  3. Backup, Job and Mail History Cleanup – 15 December 2012
  4. Exercise for msdb – 16 December 2012
  5. Table Compression – 17 December 2012
  6. Maintenance Plan Gravage – 18 December 2012
  7. Runaway Jobs – 19 December 2012
  8. SSRS Schedules – 20 December 2012
  9. Death and Destruction, err Deadlocks – 21 December 2012
  10. Virtual Storage – 22 December 2012
  11. Domain Setup – 23 December 2012
  12. SQL Cluster on Virtual Box – 24 December 2012

Seattle SQL Pro Workshop 2017 Schedule

Categories: News, Professional, SSC
Comments: No Comments
Published on: October 26, 2017

Seattle SQL Pro Workshop 2017

You may be aware of an event that some friends and I are putting together during the week of PASS Summit 2017. I have created an Eventbrite page with all the gory details here.

With everybody being in a mad scramble to get things done to pull this together, the one task we left for last was to publish a schedule. While this is coming up very late in the game, rest assured we are not foregoing some semblance of order for the day. 😉 That said, there will still be plenty of disorder / fun to be had during the day.

So the entire point of this post is to publish the schedule and have a landing page for it during the event. *

Session | Start | Duration (min) | Presenter | Topic
Registration | 8:30 AM | | All |
Intro/Welcome | 9:00 AM | 10 | Jason Brimhall |
1 | 9:10 AM | 60 | Jason Brimhall | Dolly, Footprints and a Dash of EXtra TimE
Break | 10:10 AM | 5 | |
2 | 10:15 AM | 60 | Jimmy May | Intro to Monitoring I/O: The Counters That Count
Break | 11:15 AM | 5 | |
3 | 11:20 AM | 60 | Gail Shaw | Parameter sniffing and other cases of the confused optimiser
Lunch | 12:20 PM | 60 | | Networking / RG
4 | 1:20 PM | 60 | Louis Davidson | Implementing a Hierarchy in SQL Server
Break | 2:20 PM | 5 | |
5 | 2:25 PM | 60 | Andy Leonard | Designing an SSIS Framework
Break | 3:25 PM | 5 | |
6 | 3:30 PM | 60 | Wayne Sheffield | What is this “SQL Inj/stuff/ection”, and how does it affect me?
Wrap | 4:30 PM | 30 | | Swag and Thank You
END | 5:00 PM | | | Cleanup

*This schedule is subject to change without notice.

Seattle SQL Pro Workshop 2017

Categories: News, Professional, SSC
Comments: No Comments
Published on: October 19, 2017

Seattle SQL Pro Workshop 2017

October is a great time of year for the SQL Server and Data professional. There are several conferences but the biggest happens to be in the Emerald City – Seattle.

Some friends and I have come together the past few years to put on an extra day of learning leading up to this massive conference. We call it the Seattle SQL Pro Workshop. I have created an Eventbrite page with all the gory details here.

That massive conference I have mentioned – you might have heard of it as well. It is called PASS Summit and you can find a wealth of info on the website. Granted, there are plenty of paid precon events sanctioned by PASS; we are by no means competing against them. We are trying to supplement the training and offer an extra avenue to any who could not attend the paid precons or who may be in town for only part of the day on Tuesday.

This year, we have a collision of sorts with this event. We are holding the event on Halloween – Oct 31, 2017. With it being Halloween, we welcome any who wish to attend the workshop in FULL costume.

So, what kinds of things will we cover at the event? I am glad you asked. Jimmy May will be there to talk about IO. Gail Shaw will be talking about the Query Optimizer (QO). Louis (Dr. SQL) will be taking us deep into Hierarchies. Andy Leonard will be exploring BIML and Wayne Sheffield will be showing us some SQL Injection attacks.

That is the 35,000 foot view of the sessions. You can read more about them from the EventBrite listing – HERE. What I do not yet have up on that page is what I will be discussing.

My topic for the workshop will hopefully be something as useful and informative as the cool stuff everybody else is putting together. I will be sharing some insights about a tool from our friends over at Red-Gate that can help to change the face of the landscape in your development environments. This tool, as illustrated so nicely by my Trojan Sheep, is called SQL Clone.

I will demonstrate the use of this tool to reduce the storage footprint required in Dev, Test, Stage, QA, UAT, etc etc etc. Based on a client case study involving a 2TB database, we will see how this tool can help shrink that footprint to just under 2% – give or take. I will share some discoveries I met along the way and I even hope to show some internals from the SQL Server perspective when using this technology (can somebody say Extended Events to the Rescue?).

Why Attend?

Beyond getting some first rate training from some really awesome community driven types of data professionals, this is a prime opportunity to network with the same top notch individuals. These people are more than MVPs. They are truly technical giants in the data community.

This event gives you an opportunity to learn great stuff while at the same time you will have the chance to network on a more personal level with many peers and professionals. You will also have the opportunity to possibly solve some of your toughest work or career related problems. Believe me, the day spent with this group will be well worth your time and money!

Did I mention that the event is Free (with an optional paid lunch)?

Seattle SQL Pro Workshop 2016

Categories: News, Professional, SSC
Comments: No Comments
Published on: October 23, 2016

Seattle SQL Pro Workshop 2016

You may be aware of an event that some friends and I are putting together during the week of PASS Summit 2016. I have listed the event details within the EventBrite page here.

As we near the actual event, I really need to get the schedule published (epic fail in getting it out sooner).

So the entire point of this post is to publish the schedule and have a landing page for it during the event.

Session | Start | Duration (min) | Presenter | Topic
Registration | 8:30 AM | | All |
Intro/Welcome | 9:00 AM | 10 | Jason Brimhall |
1 | 9:10 AM | 60 | Grant Fritchey | Azure with RG Data Platform Studio
Break | 10:10 AM | 5 | |
2 | 10:15 AM | 60 | Tjay Belt | PowerBI from a DBA
Break | 11:15 AM | 5 | |
3 | 11:20 AM | 60 | Wayne Sheffield | SQL 2016 and Temporal Data
Lunch | 12:20 PM | 60 | | Networking / RG
4 | 1:20 PM | 60 | Chad Crawford | Impact Analysis – DB Change Impact of that Change
Break | 2:20 PM | 5 | |
5 | 2:25 PM | 60 | Gail Shaw | Why are we Waiting?
Break | 3:25 PM | 5 | |
6 | 3:30 PM | 60 | Jason Brimhall | XEvent Lessons Learned from the Day
Wrap | 4:30 PM | 30 | | Swag and Thank You
END | 5:00 PM | | | Cleanup

You Deserve to be an MVP

Categories: News, Professional, SSC
Comments: 2 Comments
Published on: July 25, 2016

I have been sitting on this article for a while now. I have been tossing around some thoughts and finally it is time to share some of those thoughts with the masses. I hope to provoke further thought on the topic of being an MVP.

I want to preface all of these thoughts first by saying that I believe there are many great people out there who are not MVPs but who deserve to be. These are the types of people that do a lot for the community and strive to bring training and increased knowledge to more people in various platforms under the Microsoft banner.

Now for some obligatory information. While it is true I am an MVP, I feel obligated to remind people that I have zero (yup that is a big fat zero) influence over the MVP program. I am extremely grateful for the opportunity to retain the position of MVP along with all of the rest of the MVP community (there are a few of us out there). Not only am I grateful to the program for allowing me in, I am also grateful to all of those that nominated me.

Work – and lots of it!


One of the first things that strikes me is the nomination process for the MVP program. There are two parts to the process. The easy part comes from the person making the nomination. That said, if you are nominating somebody or if you are asking somebody to nominate you, read this guide from Jen Stirrup. Jen has listed a bunch of work that has to be done as part of the nomination. But is that work really on the person making the nomination?

When you really start thinking about it, the nominee is really the person that needs to do a fair amount of work. Yes, it is a good amount of work to do. Then again, maybe it is not very much work for you at all.

One of the things that really bugs me about the process is all of this work. Not specifically that I get the opportunity to do it. No, more specifically that there seems to be a growing trend in the community of entitlement. I feel that far too many people who do a lot within the community feel they are entitled to be accepted into the MVP program. And of course there are others that do much less and also exhibit the same sentiment.

Entitled?

When you feel you deserve to be an MVP, are you prepared to do more work? I have heard from more than one source that they will not fill out all the extra information requested when they are nominated. The prevailing reason here being that they are entitled, because they do some bit of community work, to be automatically included. Another prevailing sentiment, around this extra work, is that Microsoft should already be tracking the individual and know everything there is to know about the contributions of said individual.

These sentiments couldn’t be further from reality. If you are thinking along the lines of either of these sentiments, you are NOT an MVP. There are a ton of professionals in the world doing a lot of community activities who are just as deserving of becoming an MVP. It is hardly plausible for Microsoft to track every candidate in the world. Why not tell them a bit about yourself?

RESUME / CV

When applying for a job, how do you go about applying for that job? For every job I have ever applied for, I have needed to fill out an application as well as send a resume to the employer. I hardly think any employer would hire me without knowing that I am interested in the job.

That sounds fantastic for a job, right? Surely there is no need to send a resume to become an MVP, is there? Well, technically no. However, if you treat your community work like you would treat any other experience you have, you may start to see the need for the resume just a touch more. When nominated, you are requested to provide a lot of information to Microsoft that essentially builds your resume to be reviewed for the MVP program.

One of the prevailing sentiments I have heard from more than one place is that filling out all of this information is just bragging on yourself. That sentiment is not too far from reality. Just like any resume, you have to highlight your experiences, your accomplishments and your skills. Without this kind of information, how could Microsoft possibly know anything about you? Do you have the paparazzi following you and sending the information along to Microsoft for you? If you do, then why even bother with the MVP program? Your popularity is probably on a bigger scale than the MVP program if you have your own paparazzi.

Invest in your Professional Self

The more effort you put into your candidate details, the better chance you have at standing out within the review process. Think about it this way: would you turn in a piece of paper with just your name on it for a job? Or…would you take hours to invest in your personal self and produce a good resume that will stand out in the sea of resumes that have been submitted?

If you ask me to submit you as an MVP and I do, I would hope that you complete your MVP resume (candidate profile) and submit it to Microsoft. If you don’t take the time to do that, then I would find it hard to ever submit you again. The refusal to fill out that information speaks volumes to me and says either you are not interested or think too much of yourself for the MVP program.

Leadership

One of the attributes of an MVP is that of leadership. A simple measure of leadership actually falls into the previous two sections we just covered. If you are contributing to the community, that would be one small form of leadership. If you are willing to follow, that is also a form of leadership. If you are able to complete your information and submit it, then that is also an attribute of leadership.

Leaders demonstrate their leadership by being able to take direction, teaching others (community work), completing tasks when necessary, and reporting back up to their superiors on successes and failures (the last two can be attached to the completion of the nomination data).

Don’t believe me about leadership being an attribute of an MVP? Take a gander at this snippet from my last renewal letter. Highlighted in red is the pertinent sentence.

MVPrenew15_leader

You can run the phrase through a translator or take my word for it that it pertains to exceptional leaders in the technical community.

It’s not a Job though

I am sure some of the pundits out there would be clamoring that if the MVP program were an actual job, then they would perform all of the extra work. I have two observations for this: 1) it speaks to the person's character and 2) MVP really is more like a job than you may think.

The MVP program is not a paid job and probably falls more into the realm of volunteering than a paid job. Despite that, if you treat it more like a job with full on responsibilities you will have greater success in getting accepted and you will have a greater sense of fulfillment. Additionally, you will get further along with more opportunities within the MVP program just like a traditional job.

Just like a traditional job, there are responsibilities, non-disclosures, internal communications, and annual reviews. Did any of those terms raise your eyebrow? The community contribution paperwork does not end with becoming an MVP – that is just the job application / resume. Every year, you have to provide an annual review. This review is a recap of the entire year with your personal accomplishments and is basically a self-review that would be provided to the manager. I am sure you are familiar with the process of providing a self-review to document reasons why you should remain employed or even get a raise.

Non-traditional Job

As with a regular job, you must continue to accomplish something in order to maintain the position. The accomplishments can come in any form of community contribution such as blogs, speaking, mentoring, or podcasts (as examples). What is not often realized is that this takes time. Sometimes, it takes a lot of time. When you consider the time as a part of your effort, I hope you start to realize that being an MVP really is a lot like a part time job (and a full time job in some cases).

When we start talking about being an MVP in quantity of hours contributed and tasks accomplished, it is not hard to see it as a job. So if it really is just like a job, how much time are you willing to invest in the documentation for this award? Is it at least comparable to the time you invest in documenting your professional career when applying for a paying job? If you don’t take that kind of pride or effort in documenting your worth to your personal career development, then I dare say you need to rethink your approach and re-evaluate whether you should be an MVP candidate.

Being an MVP is not just an award – it is a commitment to do more for the community!

Adventures with NOLOCK

Categories: News, Professional, SSC
Comments: 4 Comments
Published on: June 15, 2015

Some of the beauty of being a database professional is the opportunity to deal with our friend NOLOCK.  For one reason or another this query directive (yes I am calling it a directive* and not a hint) is loved and idolized by vendors, applications, developers, and upper management alike.  The reasons for this vary from one place to the next, as I have found, but it always seems to boil down to the perception that it runs faster.

And yes, queries do sometimes run faster with this directive.  That is, until they are found to be the head blocker, or until it turns out they don’t run any faster because you can write good TSQL.  But we are not going to dive into those issues at this time.

A gem that I recently encountered with NOLOCK was rather welcome.  Not because of the inherent behavior or anomalies that can occur through the use of NOLOCK, but rather because of the discovery made while evaluating an execution plan.  Working with Microsoft SQL Server 2012 (SP1) – 11.0.3128.0 (X64), I came across something that I would rather see more consistently.  Let’s take a look at this example execution plan:

NoLock

 

First is a look at the plan to see if you can see what I saw.

Read Uncommitted

 

And now, we can see it clear as day.  In this particular case, SQL Server decided to remind us that the use of this directive allows uncommitted reads to occur so it throws that directive into the query text of the execution plan as well.  This is awesome!  In short, it is a visual reminder that the use of the NOLOCK directive, while it may be fast at times, is a direct route to potentially bad results by telling the database engine to also read uncommitted data.

How cool is that?  Sadly, I could only reproduce it on this one version of SQL Server so far.  If you can reproduce that type of behavior, please share by posting to the comments which version and what you did.  For me, database settings and server settings had no effect on this behavior.  No trace flags were in use, so no effect there either.  One thing of note: in my testing, this did not happen when querying a table directly but only occurred when querying against a view (complexity and settings produced no difference in effect for me).
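If you want to go looking for this yourself, the repro attempt can be as simple as something like the following – the view name is a stand-in for one of your own, and whether the rewrite shows up is exactly the part I could not get to behave consistently across versions:

-- Turn on the actual execution plan, then:
SELECT TOP (10) *
FROM dbo.SomeView WITH (NOLOCK);   -- stand-in view name
-- Check the statement text shown in the plan for READUNCOMMITTED in place of NOLOCK.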

* I would like to make it a goal for every database professional to call this a DIRECTIVE instead of a hint.  A hint implies that you may have a choice about the use of the option specified.  And while NOLOCK does not entirely eliminate locks in the queries, it is an ORDER to the optimizer to use it.  Therefore it is more of a directive than a mere suggestion.

Ghosts – an eXtrasensory Experience

This is the last article in a mini-series diving into the existence of ghosts and how to find them within your database.

So far this has been a fun and rewarding dive into Elysium to see and chat with these entities.  We have unearthed some means to be able to see these things manifesting themselves in the previous articles.  You can take a look at the previous articles here.

For this article, I had planned to discuss another undocumented method to look into the ghost records and their existence based on what was said on an msdn blog.  But after a lot of research, testing and finally reaching out to Paul Randal, I determined that won’t work.  So that idea was flushed all the way to Tartarus.

Let it be made very clear that DBTABLE does not offer a means to see the ghosts.  Paul and I agree that the other article that mentioned DBTABLE really should have been referring to DBCC Page instead.

Despite flushing the idea to Tartarus, it was not a fruitless dive.  It just was meaningless for the purpose of showing ghosts via that DBCC command.  I still gained value from the dive!!

All of that said, the remainder of the plan still applies and it should be fun.

Really, at this point what is there that hasn’t been done about the ghosts?  Well, if you are well tuned to these apparitions, you may have received the urge to explore them with Extended Events – sometimes called XE for short.

As has been done in the past, before we board Charon’s boat to cross the River Styx to Hades to find these ghosts in Elysium, one really needs to run the setup outlined here.

With the framework in place, you are now ready to explore with XE.

Look at that! There are several possible events that could help us track these ghosts.  Or at the least we could get to know how these ghosts are handled deep down in the confines of Hades, err I mean the database engine.

Ghost_XE

 

From these possible events, I opted to work with ghost_cleanup and ghost_cleanup_task_process_pages_for_db_packet.  The sessions I defined to trap our ghost tracks are as follows.
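A sketch of those two definitions looks about like this. The histogram grouping field and a couple of the options are assumptions on my part – check sys.dm_xe_object_columns for the exact field names on your build before running it:

CREATE EVENT SESSION [GhostHunt] ON SERVER
ADD EVENT sqlserver.ghost_cleanup
ADD TARGET package0.histogram
    (SET filtering_event_name = N'sqlserver.ghost_cleanup'
       , source               = N'page_id'   -- assumed grouping field
       , source_type          = 0)           -- 0 = group on an event column
WITH (STARTUP_STATE = OFF);
GO

CREATE EVENT SESSION [SoulSearch] ON SERVER
ADD EVENT sqlserver.ghost_cleanup_task_process_pages_for_db_packet
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE = OFF);
GO

ALTER EVENT SESSION [GhostHunt] ON SERVER STATE = START;
ALTER EVENT SESSION [SoulSearch] ON SERVER STATE = START;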

You can see there are two sessions defined for this trip down the Styx.  Each session aptly named for our journey.  The first (GhostHunt) is defined to trap ghost_cleanup and sends that information to a histogram target.  The second (SoulSearch) is defined to use the other event, and is configured to send to the ring_buffer.  Since the second event has a “count” field defined as a part of the event, it will work fine to just send it to the ring buffer for later evaluation.

Once I have the traps (I mean event sessions) defined, I can now resume the test harness from the delete step, as was done in the previous articles.  The following Delete is what I will use.

Prior to running that delete though, I checked the Event Session data to confirm a starting baseline.  Prior to the delete, I had the following in my histogram target.

 

predelete_count

 

After running the delete, and checking my histogram again, I see the following results.

post_count

 

You can see from this that in addition to the 25 pre-existing ghosts, we had another 672 ghosts (666 of which were from the delete).

This is how I was able to investigate the GhostHunt Extended Event Histogram.
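The investigation itself is just a shred of the histogram target data for that session – along these lines (the XML paths follow the standard histogram target layout):

SELECT  xb.slot.value('(value)[1]', 'bigint')  AS bucket_value
      , xb.slot.value('(@count)[1]', 'bigint') AS ghost_count
FROM
(
    SELECT CAST(st.target_data AS xml) AS target_data
    FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_targets AS st
        ON s.address = st.event_session_address
    WHERE s.name = N'GhostHunt'
      AND st.target_name = N'histogram'
) AS t
CROSS APPLY t.target_data.nodes('HistogramTarget/Slot') AS xb(slot);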

But what about looking at the other event session?

Let’s look at how we can go and investigate that session first and then look at some of the output data.
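Reading the ring_buffer is a similar XML shredding exercise. The count field is defined on the event itself; the page id field name is an assumption, so adjust it to whatever the event exposes on your build:

SELECT  ed.evt.value('(@name)[1]', 'varchar(128)')                  AS event_name
      , ed.evt.value('(@timestamp)[1]', 'datetime2')                AS event_time
      , ed.evt.value('(data[@name="count"]/value)[1]', 'bigint')    AS [count]
      , ed.evt.value('(data[@name="page_id"]/value)[1]', 'bigint')  AS page_id   -- assumed field name
FROM
(
    SELECT CAST(st.target_data AS xml) AS target_data
    FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_targets AS st
        ON s.address = st.event_session_address
    WHERE s.name = N'SoulSearch'
      AND st.target_name = N'ring_buffer'
) AS t
CROSS APPLY t.target_data.nodes('RingBufferTarget/event') AS ed(evt);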

ghostclean

 

Cool!  Querying the SoulSearch session has produced some information for various ghosts in the database.  Unlike the histogram session that shows how many ghosts have been cleaned, this session shows us some page ids that could contain some ghosts – in the present.  I can take page 1030111 for instance and examine the page with DBCC PAGE as follows.
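The examination is the familiar DBCC PAGE pattern – the database name and file id below are assumptions, so swap in your own:

DBCC TRACEON (3604);                             -- route DBCC PAGE output to the client
DBCC PAGE ('YourGhostDatabase', 1, 1030111, 3);  -- (database, file id, page id, print option)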

 

 

Look at that page and result!! We have found yet another poltergeist.

RIP

Once again we have been able to journey to the depths of the Database engine and explore the ghosts that might be there.  This just happens to illustrate a possible means to investigate those ghosts.  That said, I would not necessarily run these types of event sessions on a persistent basis.  I would only run these sessions if there seems to be an issue with the Ghost cleanup or if you have a strong penchant to learn (on a sandbox server).

Some good information can be learned.  It can also give a little insight into how much data is being deleted on a routine basis from your database.  As a stretch, you could even possibly use something like this to get a handle on knowing the data you support.  Just be cautious with the configuration of the XE and understand that there could be a negative impact on a very busy server.  And certainly proceed at your own risk.
