SQL Server User Already Exists – Back to Basics

One of my all-time favorite things in SQL Server is security. No matter what, it always seems that there is a new way to abuse permissions. When people abuse their access level or abuse the way permissions should be set in a SQL Server environment, we get the pleasure of both fixing it and then trying to educate them on why what they did was wrong and how to do it the right way.

In similar fashion, I previously wrote about some fundamental misconceptions about permissions here and here. I have to bring those specific articles up because this latest experience involves the basics discussed in those articles along with a different twist.

I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Gimme Gimme Gimme…

It is not uncommon to need to create a login and grant that login access to a database (or associate that login to a database user). In fact, that is probably a fairly routine process. It is so routine that I have a demo script for it right here.
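
The original demo script is not reproduced here, so here is a minimal sketch of what such a routine script might look like. The GimmeSA database name comes from later in this post; the login name, password, and the db_owner shortcut are purely illustrative assumptions.

```sql
USE [master];
GO
IF NOT EXISTS (SELECT 1 FROM sys.databases WHERE name = N'GimmeSA')
BEGIN
    CREATE DATABASE [GimmeSA];
END
GO
CREATE LOGIN [GimmeGroupAccess]
    WITH PASSWORD = N'SuperDuperPassword$1',
         DEFAULT_DATABASE = [GimmeSA];
GO
USE [GimmeSA];
GO
CREATE USER [GimmeGroupAccess] FOR LOGIN [GimmeGroupAccess];
GO
/* One of those routine mistakes: tossing the user straight into db_owner
   instead of granting only the permissions actually needed. */
ALTER ROLE [db_owner] ADD MEMBER [GimmeGroupAccess];
GO
```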

I even went as far as to include some of the very routine mistakes I see happening on a frequent basis (as referenced by a prior post here).

To this point, we only have a mild abuse of how to set permissions for a principal. Now it is time for that twist I mentioned. This user account needs to be created on a secondary server that is participating in either a mirror or an Availability Group. Most people will take that user account that was just created on the first server and then use the same script to add the account to the secondary server. Let’s see how that might look.

For this example, I will not go to the extent of creating the mirror or AG. Rather, I will pretend I am just moving the database to a new server. So I have taken a backup and then I will restore the database to the new server.
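
Something along these lines will do it (the paths and logical file names here are illustrative assumptions):

```sql
-- On the original server: take a full backup of GimmeSA
BACKUP DATABASE [GimmeSA]
    TO DISK = N'C:\Database\Backup\GimmeSA_Full.bak'
    WITH INIT, COMPRESSION, STATS = 10;
GO

-- On the "new" server: restore that backup
RESTORE DATABASE [GimmeSA]
    FROM DISK = N'C:\Database\Backup\GimmeSA_Full.bak'
    WITH MOVE N'GimmeSA' TO N'C:\Database\Data\GimmeSA.mdf',
         MOVE N'GimmeSA_log' TO N'C:\Database\Log\GimmeSA_log.ldf',
         RECOVERY, STATS = 10;
GO
```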

Next, let’s go ahead and recreate the login we created on the previous server.
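
A sketch of that script, again using the hypothetical GimmeGroupAccess login from earlier:

```sql
USE [master];
GO
-- Only create the login if it does not already exist on this server
IF NOT EXISTS (SELECT 1 FROM sys.server_principals
                WHERE name = N'GimmeGroupAccess' AND type = 'S')
BEGIN
    CREATE LOGIN [GimmeGroupAccess]
        WITH PASSWORD = N'SuperDuperPassword$1';
    PRINT 'Login GimmeGroupAccess created.';
END
ELSE
BEGIN
    PRINT 'Login GimmeGroupAccess already exists. Nothing to do here.';
END
GO
```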

You see here that I am only going to create the login if it does not exist already. Running the script produces the following for me.

Now, let’s deviate a bit and grant permissions for the login just like so many administrators will do.

It seems pretty apparent that my login that I just created does not have access to the GimmeSA database, right? Let’s go ahead and add permissions to the GimmeSA database and see what happens.
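
Here is roughly what that attempt looks like, along with the kind of error it produces on the restored copy:

```sql
USE [GimmeSA];
GO
-- Attempt to map the newly created login to a user in the restored database
CREATE USER [GimmeGroupAccess] FOR LOGIN [GimmeGroupAccess];
GO
/* On the restored copy this fails with an error along the lines of:
   User, group, or role 'GimmeGroupAccess' already exists in the current database.
   The user came along for the ride inside the restored database, but it is
   orphaned - its SID does not match the SID of the login just created here. */
```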

Well, that did not work according to plan, right? Enter twist the second.

What I am seeing more and more of, is people at this point will just grant that login (that was just created) sysadmin rights. You can pick up your jaw now. Indeed! People are just granting the user SA permissions and calling it good. This practice will certainly work – or appear to work. The fact is, the problem is not fixed. This practice has only camouflaged the problem and it will come back at some future date. That date may be when somebody like me comes along and starts working on stripping non-essential sysadmins from the system.

There are two legitimate fixes for this particular problem (and no, granting sysadmin is definitely not one of them). First, you can run an orphan fix with a script such as this one by Ted Krueger. That will map the user that already exists in the database to the login principal (thus the reason for the error we saw). Or, you can prep your environment better by using the SID syntax with the create login, as follows.
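
A sketch of both fixes follows. The ALTER USER statement is the single-user version of what Ted's script does for every orphan in the database, and the SID value shown is illustrative only:

```sql
-- Option 1: remap the orphaned database user to the existing login
USE [GimmeSA];
GO
ALTER USER [GimmeGroupAccess] WITH LOGIN = [GimmeGroupAccess];
GO

-- Option 2: prep work. On the OLD server, capture the SID of the login...
SELECT name, sid
FROM sys.server_principals
WHERE name = N'GimmeGroupAccess';

-- ...then, on the NEW server, create the login with that same SID
USE [master];
GO
CREATE LOGIN [GimmeGroupAccess]
    WITH PASSWORD = N'SuperDuperPassword$1',
         SID = 0x59AE0B177F5E4A44B3648132EA43F8E2;
GO
```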

The trick here is to look up the SID for the login on the old server first and then use that SID to create the login on the new server. This will preserve the user-to-login mappings and prevent the orphaned user issue we just saw. It will also prevent the band-aid of adding the login to the sysadmin server role.

The Wrap

In this article I have introduced you to some basics in regards to creating and synchronizing principals across different servers. Sometimes we try to shortcut the basics and apply band-aids that make absolutely no sense from either a practical point of view or a security point of view. Adhering to better practices will ease your administration burden along with improving your overall security presence.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

SQL Server Configurations – Back to Basics

One thing that SQL Server does very well is come pre-configured in a lot of ways. These pre-configured settings would be called defaults. Having default settings is not a bad thing nor is it necessarily a good thing.

For me, the defaults lie somewhere in the middle ground and they are kind of just there. You see, having defaults can be good for a horde of people. On the other hand, the default settings can be far from optimal for your specific conditions.

The real key with default settings is to understand what they are and how to get to them. This article is going to go through some of the basics around one group of these defaults. That group of settings will be accessible via the sp_configure system stored procedure. You may already know some of these basics, and that is ok.

I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Some Assembly Required…

Three dreaded words we all love to despise but have learned to deal with over the past several years – some assembly required. More and more we find ourselves needing to assemble our own furniture, bookcases, barbecue grills, and bathroom sinks. We do occasionally want some form of set it and forget it.

The problem with set it and forget it type of settings (or defaults) is as I mentioned – they don’t always work for every environment. We do occasionally need to manually adjust settings for what is optimal for that database, server, and/or environment.

When we fail to reconfigure the defaults, we could end up with a constant firefight that we just don’t ever seem to be able to win.

So how do we find some of these settings that can help us customize our environment for the better (or worse)? Let’s start taking a crack at this cool procedure called sp_configure! Ok, so maybe I oversold that a bit – but there is some coolness to it.

Looking at MSDN for sp_configure, I can see that it is a procedure to display or change global configuration settings for the current server.

If I run sp_configure without any parameters, I will get a complete result set of the configurable options via this procedure. Let’s look at how easy that is:
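
For reference, the call itself is simply:

```sql
EXEC sys.sp_configure;
GO
```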

Ok, so that was exceptionally easy. I can see that the procedure returns the name of each configurable setting along with its minimum value, maximum value, configured value, and running value. That is basic information, right? If I want a little more detailed information, guess what? I can query a catalog view to learn even more about the configurations – sys.configurations.
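
A query along these lines does the trick:

```sql
SELECT c.name,
       c.value,
       c.value_in_use,
       c.is_dynamic,
       c.is_advanced,
       c.description
FROM sys.configurations c
ORDER BY c.name;
```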

That query will also show me (in addition to what I already know from sp_configure) a description for each setting, if the setting is a dynamic setting and whether or not the setting is an advanced configuration (and thus requires “show advanced options” to be enabled). Pro-tip: The procedure just queries the catalog view anyway. Here is a snippet from the proc text.
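
The snippet itself is not reproduced here, but you can pull the procedure text for yourself and confirm it:

```sql
-- View the source of sp_configure to see it querying sys.configurations
EXEC sys.sp_helptext N'sys.sp_configure';
```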

Seeing that we have some configurations that are advanced and there is this option called “show advanced options”, let’s play a little bit with how to enable or disable that setting.
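
Something like the following will flip it on (a 0 in place of the 1 turns it back off):

```sql
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
```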

With the result (on my system) being:

We can see there that the configuration had no effect because I already had the setting enabled. Nonetheless, the attempt to change still succeeded. Let’s try it a different way.
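
The exact variations from the original run are not preserved here, but they were along these lines:

```sql
-- All but the last succeed because the shortened term still uniquely
-- identifies the "show advanced options" configuration.
EXEC sys.sp_configure 'show advanced options', 1;
EXEC sys.sp_configure 'show advanced option', 1;
EXEC sys.sp_configure 'show advanced', 1;
EXEC sys.sp_configure 'advanced', 1;
-- This one errors: 'options' also matches "user options", so it is not unique.
EXEC sys.sp_configure 'options', 1;
RECONFIGURE;
GO
```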

I ran a whole bunch of variations there for giggles. Notice how I continue to try different words or lengths of words until it finally errors? All of them have the same net effect (except the attempt that failed): they will change the configuration “show advanced options”. This is because all that is required (as portrayed in the failure message) is that the term provided is enough to make it unique. The uniqueness requirement (shortcut) is illustrated by this code block from sp_configure.
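
This is not the verbatim procedure source (use the sp_helptext call shown earlier for that), but the gist of the matching logic looks like this:

```sql
-- Approximation only: the supplied name is wrapped in wildcards for a LIKE match
DECLARE @configname nvarchar(35) = N'options';

SELECT name, value, value_in_use
FROM sys.configurations
WHERE name LIKE N'%' + @configname + N'%';
-- One row back: the shortcut works. More than one row back: sp_configure errors
-- and the matching options are listed for you.
```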

See the use of the wildcards and the “like” term? This is allowing us to shortcut the configuration name – as long as we use a unique term. If I select a term that is not unique, then the proc will output every configuration option that matches the term I used. From the example I used, I would get this output as duplicates to the term I used.

Ah, I can see the option I need! I can now just copy and paste that option (for the sake of simplicity) into my query and just proceed along my merry way. This is a great shortcut if you can’t remember the exact full config name or if you happen to be really bad at spelling.

The Wrap

In this article I have introduced you to some basics in regards to default server settings and how to quickly see or change those settings. Not every environment is able to rely on set-it-and-forget-it type defaults. Adopting the mentality that “some assembly is required” with your environments is a good approach. At a bare minimum, it will help keep you on top of your configurations. This article will also serve as a decent foundation for some near-future articles. Stay tuned!

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Common Tempdb Trace Flags – Back to Basics

Once in a while I come across something that sounds fun or interesting and decide to dive a little deeper into it. That happened to me recently and caused me to preempt my scheduled post and work on writing up something entirely different. Why? Because this seemed like fun and useful.

So what is it I am yammering on about that was fun?

I think we can probably concede that there are some best practices flying around in regards to the configuration of tempdb. One of those best practices is in regards to two trace flags within SQL Server. These trace flags are 1117 and 1118. Here is a little bit of background on the trace flags and what they do.

A caveat I have now for the use of trace flags is that I err on the same side as Kendra (author of the article just mentioned). I don’t generally like to enable trace flags unless it is very warranted for a very specific condition. As Kendra mentions, TF 1117 will impact more than just the tempdb data files. So use that one with caution.

Ancient Artifacts

With the release of SQL Server 2016, these trace flags were rumored to be a thing of the past and hence completely unnecessary. That is partially true. The trace flags are unneeded and SQL Server 2016 does behave differently, but does that mean you have to do nothing to get the benefits these trace flags used to provide?

As it turns out, these trace flags no longer do what they did in previous editions. SQL Server now pretty much has it baked into the product. Buuuuut, do you have to do anything slightly different to make it work? This was something I came across while reading this post and wanted to double check everything. After all, I was also under the belief that it was automatically enabled. So let’s create a script that checks these things for me.
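
The original script is considerably more thorough, but a much-simplified sketch of the idea looks like this:

```sql
-- Simplified sketch: check the version, check the globally enabled trace flags,
-- and generate an action script accordingly.
DECLARE @MajorVersion int =
    CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS varchar(20)), 4) AS int);

DECLARE @TraceStatus TABLE (TraceFlag int, Status int, GlobalFlag int, SessionFlag int);

INSERT INTO @TraceStatus (TraceFlag, Status, GlobalFlag, SessionFlag)
EXEC ('DBCC TRACESTATUS (-1) WITH NO_INFOMSGS');

IF @MajorVersion < 13  -- prior to SQL Server 2016
BEGIN
    SELECT 'DBCC TRACEON (1117, -1);' AS ActionScript
    WHERE NOT EXISTS (SELECT 1 FROM @TraceStatus WHERE TraceFlag = 1117 AND GlobalFlag = 1)
    UNION ALL
    SELECT 'DBCC TRACEON (1118, -1);'
    WHERE NOT EXISTS (SELECT 1 FROM @TraceStatus WHERE TraceFlag = 1118 AND GlobalFlag = 1);
END
ELSE
BEGIN
    -- 2016 and later: the trace flags no longer apply, so flag them for removal if present
    SELECT 'DBCC TRACEOFF (' + CAST(TraceFlag AS varchar(10)) + ', -1);' AS ActionScript
    FROM @TraceStatus
    WHERE TraceFlag IN (1117, 1118) AND GlobalFlag = 1;
END
```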

Holy cannoli batman – that is more than a simple script, right? Well, it may be a bit of overkill. I wanted it to work for versions before, after, and including SQL Server 2016 (when these sweeping changes went into effect). You see, I am checking for versions where the TF was required to make the change and also for versions after the change where the TF has no effect. In 2016 and later, these settings are database scoped and the TF is unnecessary.

The database scoped settings can actually be queried in 2016 more specifically with the following query.
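
A sketch of that kind of query, using the column names from the SQL Server 2016 catalog views and run in the database of interest (tempdb here):

```sql
USE [tempdb];
GO
SELECT DB_NAME() AS database_name,
       d.is_mixed_page_allocation_on,
       fg.name AS filegroup_name,
       fg.is_autogrow_all_files
FROM sys.filegroups fg
    CROSS JOIN sys.databases d
WHERE d.database_id = DB_ID();
GO
```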

In this query, I am able to determine whether mixed page allocations and autogrow of all files are enabled. These settings can be retrieved from sys.databases and sys.filegroups respectively. If I run this query on a server where the defaults were accepted during the install, I would see something like the following.

You can see here that the default settings on my install show something different than the reported behavior. While autogrow all files is enabled, mixed page allocations is disabled. This matches what we expect to see by enabling Trace Flags 1117 and 1118 – for the tempdb database at least. If I look at a user database, I will find that mixed page allocations is still disabled by default, but autogrow all files is disabled as well.

In this case, you may or may not want a user database to have all data files grow at the same time. That is a great change to have implemented in SQL Server with SQL 2016. Should you choose to enable it, you can do so on a database by database basis.
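
Both settings are exposed through ALTER DATABASE in 2016 and later; the database and filegroup names below are illustrative:

```sql
-- Enable autogrow-all-files for a filegroup of a hypothetical user database
ALTER DATABASE [SomeUserDB] MODIFY FILEGROUP [PRIMARY] AUTOGROW_ALL_FILES;
GO
-- Mixed page allocations is likewise a per-database setting now
ALTER DATABASE [SomeUserDB] SET MIXED_PAGE_ALLOCATION ON;
GO
```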

As for the trace flags? My query checks to see if maybe you enabled them on your instance or if you don’t have them enabled for the older versions of SQL Server. Then the script generates the appropriate action scripts and allows you to determine if you want to run the generated script or not. And since we are changing trace flags (potentially) I recommend that you also look at this article of mine that discusses how to audit the changing of trace flags. And since that is an XEvent based article, I recommend freshening up on XEvents with this series too!

The Wrap

In this article I have introduced you to some basics in regards to default behaviors and settings in tempdb along with some best practices. It is advisable to investigate some of these recommendations from time to time and confirm what we are really being told so we can avoid confusion and misinterpretation.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Changing Default Logs Directory – Back to Basics

Every now and then I find a topic that seems to fit perfectly into the mold of the theme of “Back to Basics”. A couple of years ago, there was a challenge to write a series of posts about basic concepts. Some of my articles in that series can be found here.

Today, my topic to discuss is in regards to altering the default logs directory location. Some may say this is no big deal and you can just use the default location used during install. Fair enough, there may not be massive need to change that location.

Maybe, just maybe, there is an overarching need to change this default. Maybe you have multiple versions of SQL Server in the enterprise and just want a consistent folder to access across all servers so you don’t have to think too much. Or possibly, you want to copy the logs from multiple servers to a common location on a central server and don’t want to have to code for a different directory on each server.

The list of reasons can go on and I am certain I would not be able to list all of the really good reasons to change this particular default. Suffice it to say, there are some really good requirements out there (and probably some really bad ones too) that mandate the changing of the default logs directory to a new standardized location.


The logs that I am referring to are not the transaction logs for the databases – oh no no no! Rather, I am referring to the error logs, the mini dumps, and the many other logs that may fall into the traditional “logs” folder during the SQL Server install. Let’s take a peek at a default log directory after the install is complete.

I picked a demo server that has a crap load of stuff available (and yeah not so fresh after install) but where the installation placed the logs by default. You can see I have traces, default XE files, some SQL logs, and some dump files. There is plenty going on with this server. A very fresh install would have similar files but not quite as many.

If I want to change the Log directory, it is a pretty easy change but it does require a service restart.

In SQL Server Configuration Manager, navigate to services then to “SQL Server Service”. Right click that service and select properties. From properties, you will need to select the “Startup Parameters” tab. Select the parameter with the “-e” and errorlog in the path. Then you can modify the path to something more appropriate for your needs and then simply click the update button. After doing that, click the ok button and bounce the SQL Service.

After you successfully bounce the service, you can confirm that the error logs have been migrated to the correct folder with a simple check. Note that this change impacts the errorlogs, the default Extended Events logging directory, the default trace directory, the dumps directory and many other things.
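
One such simple check (among others) is the following:

```sql
-- Where is the error log being written now?
SELECT SERVERPROPERTY('ErrorLogFileName') AS current_errorlog_path;

-- Or search the current error log for the startup entry that records the file location
EXEC sys.xp_readerrorlog 0, 1, N'Logging SQL Server messages in file';
```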

See how easy that was? Did that move everything over for us? As it turns out, it did not. The old directory will continue to have the SQL Agent logs. We can see this with a check from the Agent log properties like the following.

To change this, I can execute a simple stored procedure in the msdb database and then bounce the SQL Agent service.
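
One procedure that does this (and the one the SSMS Agent properties dialog calls under the covers) is sp_set_sqlagent_properties. The path below is illustrative; point it at your standardized logs directory:

```sql
USE [msdb];
GO
EXEC dbo.sp_set_sqlagent_properties
    @errorlog_file = N'C:\Database\Logs\SQLAGENT.OUT';
GO
```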

After the agent service restart, we can verify that the agent logs are now writing to the new directory, as shown here.

At this point, all that will be left in the previous folder will be the files that were written prior to the folder changes and the service restarts.

The Wrap

In this article I have introduced you to an easy method to move the logs for SQL Server and the SQL Server Agent to a custom directory that better suits your enterprise needs. This concept is a basic building block for some upcoming articles – stay tuned!

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

12 Days Of Christmas and SQL

Categories: News, Professional, SSC
Comments: No Comments
Published on: December 26, 2017

One of my all-time favorite times of the year happens to be the Christmas Season. I enjoy the season because it is supposed to remind us to try and be better people. And for me, it does help. In all honesty, it should be a better effort year round, but this is a good time of year to try and get back on track and to try and focus more on other more important things.

For me, one of the more important things is to try and help others. Focusing on other people and their needs helps them but also helps one’s self. It is because of the focus on others that I enjoy, not just Christmas Day, but also the 12 Days of Christmas.

The 12 Days of Christmas is about giving for 12 Days. Though, in this day and age, most view it as a span of 12 Days in which they are entitled to receive gifts. If we are giving for a mere 12 Days and not focusing on receiving, then wouldn’t we all be just a little bit happier? I know that when I focus more on the giving I am certainly happier.


In the spirit of the 12 Days of Christmas and Giving, I have a 12 Day series that I generally try to do each Holiday Season. The series will generally begin on Christmas day to align with the actual 12 Days of Christmas (rather than the adopted tradition of ending on Christmas). This also means that the series will generally end on the celebration of “Twelfth Night” which is January 5th.

Each annual series will include several articles about SQL Server and have a higher goal of trying to learn something more about SQL Server. Some articles may be deep technical dives, while others may prove to be more utilitarian with a script or some functionality that can be quickly put to use and frequently used. Other articles may just be for fun. In all, there will be several articles which I hope will bring some level of use for those that read while they strive to become better at this thing called SQL Server.

This page will serve as a landing page for each of the annual series and will be updated as new articles are added.

2017
  1. XE Permissions – 25 December 2017
  2. Best New(ish) SSMS Feature – 26 December 2017
  3. XE System Messages – 27 December 2017
  4. Correlate Trace and XE Events – 28 December 2017
  5. Audit Domain Group and User Permissions – 29 December 2017
  6. An Introduction to Templates – 30 December 2017
  7. Failed to Create the Audit File – 31 December 2017
  8. Correlate SQL Trace and Actions – 1 January 2018
  9. Dynamics AX Event Session – 2 January 2018
  10. Sharepoint Diagnostics and XE – 3 January 2018
  11. Change Default Logs Directory – 4 January 2018
  12. Common Tempdb Trace Flags – Back to Basics (Day of Feast) – 5 January 2018

2015
  1. Failed – 25 December 2015
  2. Failed – 26 December 2015
  3. Failed – 27 December 2015
  4. Failed – 28 December 2015
  5. Failed – 29 December 2015
  6. Log Files from Different Source – 30 December 2015
  7. Customize XEvent Log Display – 31 December 2015
  8. Filtering Logged Data – 1 January 2016
  9. Hidden GUI Gems – 2 January 2016
  10. Failed – 3 January 2016
  11. Failed – 4 January 2016
  12. A Day in the Stream – 5 January 2016

2013
  1. Las Vegas Invite – 25 December 2013
  2. SAN Outage – 26 December 2013
  3. Peer to Peer Replication – 27 December 2013
  4. Broken Broker – 28 December 2013
  5. Peer Identity – 29 December 2013
  6. Lost in Space – 30 December 2013
  7. Command N Conquer – 31 December 2013
  8. Ring in the New Year – 1 January 2014
  9. Queries Going Boom – 2 January 2014
  10. Retention of XE Session Data in a Table – 3 January 2014
  11. Purging syspolicy – 4 January 2014
  12. High CPU and Bloat in Distribution – 5 January 2014

2012 (pre-Christmas)

  1. Maint Plan Logs – 13 December 2012
  2. Service Broker Out of Control – 14 December 2012
  3. Backup, Job and Mail History Cleanup – 15 December 2012
  4. Exercise for msdb – 16 December 2012
  5. Table Compression – 17 December 2012
  6. Maintenance Plan Gravage – 18 December 2012
  7. Runaway Jobs – 19 December 2012
  8. SSRS Schedules – 20 December 2012
  9. Death and Destruction, err Deadlocks – 21 December 2012
  10. Virtual Storage – 22 December 2012
  11. Domain Setup – 23 December 2012
  12. SQL Cluster on Virtual Box – 24 December 2012

Seattle SQL Pro Workshop 2017 Schedule

Categories: News, Professional, SSC
Comments: No Comments
Published on: October 26, 2017


You may be aware of an event that some friends and I are putting together during the week of PASS Summit 2017. I have created an Eventbrite page with all the gory details here.

With everybody being in a mad scramble to get things done to pull this together, the one task we left for last was to publish a schedule. While this is coming up very late in the game, rest assured we are not forgoing some semblance of order for the day. 😉 That said, there will still be plenty of disorder / fun to be had during the day.

So the entire point of this post is to publish the schedule and have a landing page for it during the event. *

Session | Start | Duration (min) | Presenter | Topic
Registration | 8:30 AM | | All |
Intro/Welcome | 9:00 AM | 10 | Jason Brimhall |
1 | 9:10 AM | 60 | Jason Brimhall | Dolly, Footprints and a Dash of EXtra TimE
Break | 10:10 AM | 5 | |
2 | 10:15 AM | 60 | Jimmy May | Intro to Monitoring I/O: The Counters That Count
Break | 11:15 AM | 5 | |
3 | 11:20 AM | 60 | Gail Shaw | Parameter sniffing and other cases of the confused optimiser
Lunch | 12:20 PM | 60 | | Networking / RG
4 | 1:20 PM | 60 | Louis Davidson | Implementing a Hierarchy in SQL Server
Break | 2:20 PM | 5 | |
5 | 2:25 PM | 60 | Andy Leonard | Designing an SSIS Framework
Break | 3:25 PM | 5 | |
6 | 3:30 PM | 60 | Wayne Sheffield | What is this “SQL Inj/stuff/ection”, and how does it affect me?
Wrap | 4:30 PM | 30 | | Swag and Thank You
END | 5:00 PM | | | Cleanup

*This schedule is subject to change without notice.

Seattle SQL Pro Workshop 2017

Categories: News, Professional, SSC
Comments: No Comments
Published on: October 19, 2017


October is a great time of year for the SQL Server and Data professional. There are several conferences, but the biggest happens to be in the Emerald City – Seattle.

Some friends and I have come together the past few years to put on an extra day of learning leading up to this massive conference. We call it the Seattle SQL Pro Workshop. I have created an Eventbrite page with all the gory details here.

That massive conference I have mentioned – you might have heard of it as well. It is called PASS Summit and you can find out a wealth of info from the website. Granted, there are plenty of paid precon events sanctioned by PASS; we are by no means competing against them. We are trying to supplement the training and offer an extra avenue to any who could not attend the paid precons or who may be in town for only part of the day on Tuesday.

This year, we have a collision of sorts with this event. We are holding the event on Halloween – Oct 31, 2017. With it being Halloween, we welcome any who wish to attend the workshop in FULL costume.

So, what kinds of things will we cover at the event? I am glad you asked. Jimmy May will be there to talk about IO. Gail Shaw will be talking about the Query Optimizer (QO). Louis (Dr. SQL) will be taking us deep into Hierarchies. Andy Leonard will be exploring BIML and Wayne Sheffield will be showing us some SQL Injection attacks.

That is the 35,000 foot view of the sessions. You can read more about them from the EventBrite listing – HERE. What I do not yet have up on the listing is what I will be discussing.

My topic for the workshop will hopefully be something as useful and informative as the cool stuff everybody else is putting together. I will be sharing some insights about a tool from our friends over at Red-Gate that can help to change the face of the landscape in your development environments. This tool, as illustrated so nicely by my Trojan Sheep, is called SQL Clone.

I will demonstrate the use of this tool to reduce the storage footprint required in Dev, Test, Stage, QA, UAT, etc. Based on a client case study involving a 2TB database, we will see how this tool can help shrink that footprint to just under 2% – give or take. I will share some discoveries I made along the way and I even hope to show some internals from the SQL Server perspective when using this technology (can somebody say Extended Events to the Rescue?).

Why Attend?

Beyond getting some first-rate training from some really awesome community-driven data professionals, this is a prime opportunity to network with the same top notch individuals. These people are more than MVPs. They are truly technical giants in the data community.

This event gives you an opportunity to learn great stuff while at the same time you will have the chance to network on a more personal level with many peers and professionals. You will also have the opportunity to possibly solve some of your toughest work or career related problems. Believe me, the day spent with this group will be well worth your time and money!

Did I mention that the event is Free (with an optional paid lunch)?

Seattle SQL Pro Workshop 2016

Categories: News, Professional, SSC
Comments: No Comments
Published on: October 23, 2016


You may be aware of an event that some friends and I are putting together during the week of PASS Summit 2016. I have listed the event details within the EventBrite page here.

As we near the actual event, I really need to get the schedule published (epic fail in getting it out sooner).

So the entire point of this post is to publish the schedule and have a landing page for it during the event.

Session | Start | Duration (min) | Presenter | Topic
Registration | 8:30 AM | | All |
Intro/Welcome | 9:00 AM | 10 | Jason Brimhall |
1 | 9:10 AM | 60 | Grant Fritchey | Azure with RG Data Platform Studio
Break | 10:10 AM | 5 | |
2 | 10:15 AM | 60 | Tjay Belt | PowerBI from a DBA
Break | 11:15 AM | 5 | |
3 | 11:20 AM | 60 | Wayne Sheffield | SQL 2016 and Temporal Data
Lunch | 12:20 PM | 60 | | Networking / RG
4 | 1:20 PM | 60 | Chad Crawford | Impact Analysis – DB Change Impact of that Change
Break | 2:20 PM | 5 | |
5 | 2:25 PM | 60 | Gail Shaw | Why are we Waiting?
Break | 3:25 PM | 5 | |
6 | 3:30 PM | 60 | Jason Brimhall | XEvent Lessons Learned from the Day
Wrap | 4:30 PM | 30 | | Swag and Thank You
END | 5:00 PM | | | Cleanup

You Deserve to be an MVP

Categories: News, Professional, SSC
Comments: 2 Comments
Published on: July 25, 2016

I have been sitting on this article for a while now. I have been tossing around some thoughts and finally it is time to share some of those thoughts with the masses. I hope to provoke further thought on the topic of being an MVP.

I want to preface all of these thoughts first by saying that I believe there are many great people out there who are not an MVP who deserve to be an MVP. These are the types of people that do a lot for the community and strive to bring training and increased knowledge to more people in various platforms under the Microsoft banner.

Now for some obligatory information. While it is true I am an MVP, I feel obligated to remind people that I have zero (yup that is a big fat zero) influence over the MVP program. I am extremely grateful for the opportunity to retain the position of MVP along with all of the rest of the MVP community (there are a few of us out there). Not only am I grateful to the program for allowing me in, I am also grateful to all of those that nominated me.

Work – and lots of it!


One of the first things that strikes me is the nomination process for the MVP program. There are two parts to the process. The easy part comes from the person making the nomination. That said, if you are nominating somebody or if you are asking somebody to nominate you, read this guide from Jen Stirrup. Jen has listed a bunch of work that has to be done on the part of the nominator. Or is it work for the person making the nomination?

When you really start thinking about it, the nominee is really the person that needs to do a fair amount of work. Yes, it is a good amount of work to do. Then again, maybe it is not very much work for you at all.

One of the things that really bugs me about the process is all of this work. Not specifically that I get the opportunity to do it. No, more specifically that there seems to be a growing trend in the community of entitlement. I feel that far too many people who do a lot within the community feel they are entitled to being accepted into the MVP program. And of course there are others that do much less and also exhibit the same sentiment.


When you feel you deserve to be an MVP, are you prepared to do more work? I have heard from more than one source that they will not fill out all the extra information requested when they are nominated. The prevailing reason here being that they are entitled, because they do some bit of community work, to be automatically included. Another prevailing sentiment, around this extra work, is that Microsoft should already be tracking the individual and know everything there is to know about the contributions of said individual.

These sentiments couldn’t be further from reality. If you are thinking along the lines of either of these sentiments, you are NOT an MVP. There are a ton of professionals in the world doing a lot of community activities who are just as deserving of becoming an MVP. It is hardly plausible for Microsoft to track every candidate in the world. Why not tell them a bit about yourself?


When applying for a job, how do you go about applying for that job? For every job I have ever applied to, I have needed to fill out an application as well as send a resume to the employer. I hardly think any employer would hire me without knowing that I am interested in the job.

That sounds fantastic for a job, right? Surely there is no need to send a resume to become an MVP, is there? Well, technically no. However, if you treat your community work like you would treat any other experience you have, you may start to see the need for the resume just a touch more. When nominated, you are requested to provide a lot of information to Microsoft that essentially builds your resume to be reviewed for the MVP program.

One of the prevailing sentiments I have heard from more than one place is that filling out all of this information is just bragging on yourself. That sentiment is not too far from reality. Just like any resume, you have to highlight your experiences, your accomplishments and your skills. Without this kind of information, how could Microsoft possibly know anything about you? Do you have the paparazzi following you and sending the information along to Microsoft for you? If you do, then why even bother with the MVP program? Your popularity is probably on a bigger scale than the MVP program if you have your own paparazzi.

Invest in your Professional Self

The more effort you put into your candidate details the better chance you have at standing out within the review process. Think about it this way, would you turn in a piece of paper with just your name on it for a job? Or…would you take hours to invest in your personal self and produce a good resume that will stand out in the sea of resumes that have been submitted?

If you ask me to submit you as an MVP and I do, I would hope that you complete your MVP resume (candidate profile) and submit it to Microsoft. If you don’t take the time to do that, then I would find it hard to ever submit you again. The refusal to fill out that information speaks volumes to me and says either you are not interested or think too much of yourself for the MVP program.


One of the attributes of an MVP is that of leadership. A simple measure of leadership actually falls into the previous two sections we just covered. If you are contributing to the community, that would be one small form of leadership. If you are willing to follow, that is also a form of leadership. If you are able to complete your information and submit it, then that is also an attribute of leadership.

Leaders demonstrate their leadership by being able to take direction, teaching others (community work), completing tasks when necessary, and reporting back up to their superiors on successes and failures (the last two can be attached to the completion of the nomination data).

Don’t believe me about leadership being an attribute of an MVP? Take a gander at this snippet from my last renewal letter. Highlighted in red is the pertinent sentence.


You can run the phrase through a translator or take my word for it that it pertains to exceptional leaders in the technical community.

It’s not a Job though

I am sure some of the pundits out there would be clamoring that if the MVP program were an actual job, then they would perform all of the extra work. I have two observations for this: 1) it speaks to the person’s character, and 2) being an MVP really is more like a job than you may think.

The MVP program is not a paid position and probably falls more into the realm of volunteering than a paid job. Despite that, if you treat it more like a job with full-on responsibilities you will have greater success in getting accepted and you will have a greater sense of fulfillment. Additionally, you will get further along with more opportunities within the MVP program, just like a traditional job.

Just like a traditional job, there are responsibilities, non-disclosures, internal communications, and annual reviews. Did any of those terms raise your eyebrow? The community contribution paperwork does not end with becoming an MVP – that is just the job application / resume. Every year, you have to provide an annual review. This review is a recap of the entire year with your personal accomplishments and is basically a self-review that would be provided to the manager. I am sure you are familiar with the process of providing a self-review to document reasons why you should remain employed or even get a raise.

Non-traditional Job

As with a regular job, you must continue to accomplish something in order to maintain the position. The accomplishments can come in any form of community contribution such as blogs, speaking, mentoring, or podcasts (as examples). What is not often realized is that this takes time. Sometimes, it takes a lot of time. When you consider the time as a part of your effort, I hope you start to realize that being an MVP really is a lot like a part time job (and a full time job in some cases).

When we start talking about being an MVP in quantity of hours contributed and tasks accomplished, it is not hard to see it as a job. So if it really is just like a job, how much time are you willing to invest in the documentation for this award? Is it at least comparable to the time you invest in documenting your professional career when applying for a paying job? If you don’t take that kind of pride or effort in documenting your worth to your personal career development, then I dare say you need to rethink your approach and re-evaluate whether you should be an MVP candidate.

Being an MVP is not just an award – it is a commitment to do more for the community!

Adventures with NOLOCK

Categories: News, Professional, SSC
Comments: 4 Comments
Published on: June 15, 2015

Some of the beauty of being a database professional is the opportunity to deal with our friend NOLOCK.  For one reason or another this query directive (yes I am calling it a directive* and not a hint) is loved and idolized by vendors, applications, developers, and upper management alike.  The reasons for this vary from one place to the next, as I have found, but it always seems to boil down to the perception that it runs faster.

And yes, queries do sometimes run faster with this directive.  That is until they are found to be the head blocker or that they don’t run any faster because you can write good TSQL.  But we are not going to dive into those issues at this time.

A gem that I recently encountered with NOLOCK was rather welcome.  Not because of the inherent behavior or anomalies that can occur through the use of NOLOCK, but rather because of the discovery made while evaluating an execution plan.  Working with Microsoft SQL Server 2012 (SP1) – 11.0.3128.0 (X64), I came across something that I would rather see more consistently.  Let’s take a look at this example execution plan:



First is a look at the plan to see if you can see what I saw.

Read Uncommitted


And now, we can see it clear as day.  In this particular case, SQL Server decided to remind us that the use of this directive allows uncommitted reads to occur so it throws that directive into the query text of the execution plan as well.  This is awesome!  In short, it is a visual reminder that the use of the NOLOCK directive, while it may be fast at times, is a direct route to potentially bad results by telling the database engine to also read uncommitted data.

How cool is that?  Sadly, I could only reproduce it on this one version of SQL Server so far.  If you can reproduce that type of behavior, please share by posting to the comments which version and what you did.  For me, database settings and server settings had no effect on this behavior.  No trace flags were in use, so no effect there either.  One thing of note, in my testing, this did not happen when querying against a table direct but did only occur when querying against a view (complexity and settings produced no difference in effect for me).
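
If you want to try to reproduce it, here is a sketch of the sort of setup I am describing (object names assume the AdventureWorks sample database, and your mileage will almost certainly vary by version and build):

```sql
-- The key detail is that the NOLOCK directive is applied to a view
-- rather than directly to a table.
CREATE VIEW dbo.vSalesOrders
AS
SELECT soh.SalesOrderID, soh.OrderDate, soh.TotalDue
FROM Sales.SalesOrderHeader soh;
GO

SELECT TOP (100) vso.SalesOrderID, vso.TotalDue
FROM dbo.vSalesOrders vso WITH (NOLOCK);
GO
-- Then inspect the query text embedded in the actual execution plan for the
-- read uncommitted wording shown above.
```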

* I would like to make it a goal for every database professional to call this a DIRECTIVE instead of a hint.  A hint implies that you may have a choice about the use of the option specified.  And while NOLOCK does not entirely eliminate locks in the queries, it is an ORDER to the optimizer to use it.  Therefore it is more of a directive than a mere suggestion.
