What is the Merit of the Job?

Categories: News, Professional, SSC
Comments: No Comments
Published on: December 29, 2019

For the past several months I have been pondering the topic of meritocracy. Meritocracy seems to be a hot-button topic that cycles between hot and cold in various circles and climates for whatever reason. There are quite a few political fires that can be easily stoked with this topic. I haven’t been thinking of it from a political standpoint, though. Rather, I have been pondering how the rat race of meritocracy has affected me unwittingly.

What is meritocracy? Well, there are a great many ways to describe this idea, but let’s just stick to the base definition.

Meritocracy: a form of social system in which power goes to those with superior intellects or ability.

I have no doubt that there is an absolute need for meritocracy in certain situations. For example, as a sports coach, you want your best athletes on the field in order to compete. The best athletes are determined through a series of assessments and monitored efforts and performances. They have earned their way into that position based on merit. Let’s be honest here, as spectators we also want to watch the best athletes and not the scrubs, else we wouldn’t pay money to attend. In certain situations, it is a disservice not to award those with the greatest ability the merit of more playing time in an athletic competition.

Serious Flaw

When we start employing a merit system in our personal lives or in our professional goals, things can get a little dicey. One of the major attributes of a merit-based system is extreme competition. I view competition as a good thing. It keeps me on my toes and progressing personally. That said, there is a hidden undesirable impact that comes from extreme competition – and that impact affects interpersonal relationships.

As IT professionals (and the running joke even in IT is that DBAs are far worse than everybody else) we are far too often introverted. We are internally driven to achieve certain levels of success. Along with that internal drive, we far too often focus on the merits of the job that affect our career status and seldom pay adequate attention to those merits that affect our personal relationships and feelings as human beings.

You see, meritocracy is a rat race. In a meritocracy we are driven for results, and in business the results frequently come at the cost of human interactions and relationships. How well do you know the people around you? Have you made much of an effort to establish a rapport with your co-workers or clients – beyond the ones that sign the checks?

This is the serious flaw in a merit-based system. One tries to climb the ladder of success, but at the cost of being an approachable human being. Nice people don’t need to finish last. Maybe your merit review should have a line item about building relationships of trust, with a determination of how well you have succeeded at that requirement (did you exceed expectations?)!

Put a bow on it

Merit-based systems certainly have their place in business and in life. How well we govern our personal interaction with the merit-based system and how we conduct our personal growth should matter more than most other achievements in life. Interaction with people is crucial to our growth as professionals and as human beings.

Interested in learning about some deep technical information instead? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the fifth article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Where is that Mask?

Comments: 1 Comment
Published on: December 28, 2019

It is always a pleasantly alarming situation when all of a sudden a client calls because an automated routine is broken. No matter what the routine is, it’s not the best of news from a client – especially not if it is a routine you helped them migrate to a new server.

I am sure you can hear the klaxon-like sound in your head, mimicking the panic and shrill anxiety of a department head saying “it has always worked in the past!”

Not the best of news to receive, but also not the least bit a cause for panic. You remain calm, cool, and collected as you get ready to start troubleshooting the problem.

During a recent client engagement, I had the pleasure of helping them solve a similar problem. Shortly after a server migration and after having proven the process to be properly working, the automated routine started doing something entirely different and new.

It’s Hidden

You see, what once was working suddenly started masking data for a critical component of the process. Due to this change in the data, bank accounts were no longer properly syncing and it was causing a problem with the accounting department who could no longer balance the books. Luckily, the issue was found within a day and the resolution started promptly else it could have been a nightmare.

First order of business, after discovering that the process was indeed running as expected and it was just a change of data, was to figure out what had changed. Was there a code change maybe? Maybe somebody with too many permissions decided to enable a feature on the database side?

We went through and validated everything we could think of. There were no code changes in the related procs, views, or otherwise. Then we went through and validated any sort of database change, such as the enabling of encryption for this key field. There was no encryption or any other database feature enabled, but the behavior was eerily similar to some sort of data masking feature having been turned on.

I set out to confirm if data masking was enabled to any extent within the database. The easy way to do something like that is through the use of the following script.
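A minimal version of that check leans on the sys.masked_columns catalog view, which lists every column with Dynamic Data Masking enabled in the current database:

```sql
-- List any columns with Dynamic Data Masking enabled in the current database
SELECT DB_NAME()                 AS DatabaseName,
       OBJECT_NAME(mc.object_id) AS TableName,
       mc.name                   AS ColumnName,
       mc.masking_function
FROM sys.masked_columns AS mc
ORDER BY TableName, ColumnName;
```

Run it in each database of interest (or wrap it in sp_MSforeachdb if you need to sweep the whole server).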

In this particular case, the result of the query was an empty set. There was nothing enabled on the database side that could have caused this sudden change in data behavior. That said, at least we were able to truly confirm that data masking was not enabled within the database (or any database on the server, for that matter).

As it turned out, somebody from the finance team enabled the masking feature from within the application. Since it was entirely application driven, the data was entirely replaced with a masked version of the data so the critical data was overwritten and could not be un-masked. Due to this, a new revision to the process had to be created on the fly to get them back in business.

Put a bow on it

There may be times that data can be changed in ways that have negative effects on downstream automated processes. When these changes occur, how equipped are we to figure out the root cause? Today, I shared one easy script that can help eliminate one database feature as the culprit when data masking seems to have been enabled anomalously. The more tools you have at your fingertips, the better data professional you can be. Add this one to your tool belt!

Interested in more Extended Events articles? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the fourth article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Easy Audit Index Changes

Comments: No Comments
Published on: December 27, 2019

In the previous article, I showed a somewhat elaborate method that could be used should there be a need to audit changes to your indexes.

As I learned many years ago in school, the elaborate or difficult method may not always be the best method. That said, we often need to learn the more elaborate methods in order to better appreciate the simpler, more efficient methods available. This article will help demonstrate a much better method.

Auditing

When looking for an easy method, one of the first technologies to try really should be Extended Events (xevents). Through the use of xevents, we have a large tool set at our disposal. Not only is the tool set rather large, it is also an extremely flexible multi-faceted tool set giving us greater potential to solve problems with simple tools.

Let’s look at how we can solve this problem through the use of xevents.
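A session along these lines captures those events; the session name and file target here are illustrative, and the database_name predicate is left commented out so the session covers all databases by default:

```sql
CREATE EVENT SESSION AuditIndexChanges
ON SERVER
    ADD EVENT sqlserver.object_created (
        ACTION (sqlserver.username, sqlserver.database_name, sqlserver.sql_text)
        WHERE ([object_type] = 'INDEX' /* AND sqlserver.database_name = N'DBA' */)),
    ADD EVENT sqlserver.object_altered (
        ACTION (sqlserver.username, sqlserver.database_name, sqlserver.sql_text)
        WHERE ([object_type] = 'INDEX' /* AND sqlserver.database_name = N'DBA' */)),
    ADD EVENT sqlserver.object_deleted (
        ACTION (sqlserver.username, sqlserver.database_name, sqlserver.sql_text)
        WHERE ([object_type] = 'INDEX' /* AND sqlserver.database_name = N'DBA' */))
    ADD TARGET package0.event_file (SET filename = N'AuditIndexChanges')
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION AuditIndexChanges ON SERVER STATE = START;
```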

In the preceding xevent session, I am taking advantage of three events: object_created, object_altered, and object_deleted. These events will only trigger if an index definition is changed, added, or deleted. They will not trigger if the index is defragged (for instance) or the stats for the index happen to be updated (which is an issue for the stats_date method shown in this article). After deploying this session and then creating an index (shown in the following script), I can expect this session to capture a wealth of information.
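A simple index creation like this (table and column names are illustrative) is enough to fire the object_created event:

```sql
USE DBA;
GO
CREATE NONCLUSTERED INDEX IX_TestTable_Col1
    ON dbo.TestTable (Col1);
```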

In the preceding image, we can see that plenty of pertinent information has been captured. The image does not show the person who made the change, but that information is also captured with this session. Should I wish to focus this session to a specific database, that is definitely possible by removing the comment tags from the database_name predicate in the session script I have shared.

Put a bow on it

This article showed an easy method to audit for index changes. The weak link in a solution such as this really boils down to the requirement that the solution needs to be in place before the index change occurs. Otherwise, it would all be for naught.

Interested in more Extended Events articles? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the third article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Audit Index Changes

Categories: News, Professional, SSC
Comments: 3 Comments
Published on: December 26, 2019

In the previous article, I showed a quick and dirty method to try and capture the indexes that may have been just created. We all know that in the hectic life of the DBA, it is easy enough to forget the indexes created more than just a few hours in the past.

In that article, I mentioned that the solution provided was far from a complete solution to the entire problem. A bigger part of the picture involves audits to know when and by whom indexes were created, altered or dropped.

Auditing

On many an occasion I have had clients request help with tracking the changes that have occurred to their indexes. The reasons vary slightly but generally boil down to two large categories: 1) to better understand who is making changes and 2) to help ensure proper indexes are not removed due to code promotions (various apps like to wipe out custom indexes which causes performance issues).

With that in mind, the following provides a solution that will help track indexes that have changed (index definition) or that have been added or removed during the audit interval. The interval can be set per your specific requirements. For this specific solution, I have historically just set this to run once a day. The solution is shown below.
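The full solution runs considerably longer than fits here; the following is a condensed sketch of the core idea (snapshot the index definitions, then diff against the prior snapshot). Table names mirror the approach described below but are illustrative:

```sql
/* Sketch only: the full solution wraps this logic in a stored procedure,
   runs it on a schedule via an agent job, and loops through every database. */
CREATE TABLE dbo.IndexStage (
    CaptureDate  DATE          NOT NULL DEFAULT GETDATE(),
    DatabaseName SYSNAME       NOT NULL,
    TableName    SYSNAME       NOT NULL,
    IndexName    SYSNAME       NOT NULL,
    KeyColumns   NVARCHAR(MAX) NOT NULL);

CREATE TABLE dbo.IndexDefChange (
    DatabaseName SYSNAME, TableName SYSNAME,
    IndexName SYSNAME, KeyColumns NVARCHAR(MAX),
    LoggedDate DATE NOT NULL DEFAULT GETDATE());
GO
-- Snapshot current index definitions for this database
INSERT INTO dbo.IndexStage (DatabaseName, TableName, IndexName, KeyColumns)
SELECT DB_NAME(), OBJECT_NAME(i.object_id), i.name,
       STUFF((SELECT N',' + c.name
              FROM sys.index_columns AS ic
              JOIN sys.columns AS c
                ON c.object_id = ic.object_id
               AND c.column_id = ic.column_id
              WHERE ic.object_id = i.object_id
                AND ic.index_id = i.index_id
              ORDER BY ic.key_ordinal
              FOR XML PATH('')), 1, 1, N'')
FROM sys.indexes AS i
WHERE i.name IS NOT NULL;

-- Log definitions present today that are new or changed since the prior snapshot
INSERT INTO dbo.IndexDefChange (DatabaseName, TableName, IndexName, KeyColumns)
SELECT s.DatabaseName, s.TableName, s.IndexName, s.KeyColumns
FROM dbo.IndexStage AS s
WHERE s.CaptureDate = CAST(GETDATE() AS DATE)
EXCEPT
SELECT s.DatabaseName, s.TableName, s.IndexName, s.KeyColumns
FROM dbo.IndexStage AS s
WHERE s.CaptureDate < CAST(GETDATE() AS DATE);
```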

To make this solution work, I use several tables and a stored procedure. The stored procedure is scheduled via an agent job and scheduled to your requirements. The tables will stage all of the indexes for all of the databases, then log which have changed or have been added or removed. From there, a simple query to the IndexDefChange table will help me learn of which indexes may deserve some attention.

Put a bow on it

This article showed a method to audit for index changes. The weak link in a solution such as this really boils down to the requirement that the solution needs to be in place before the index change occurs. Otherwise, it would all be for naught.

Interested in more back to basics articles? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the second article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Find Index Create Date

Categories: News, Professional, SSC
Comments: 1 Comment
Published on: December 25, 2019

Have you ever been zipping right along, working on tuning a query or maybe a handful of queries? Then BAM! All of a sudden you lose the script you were working on. The script just so happened to have a few indexes that you were getting ready to create, and you would like not to have to start all over again.

Sure, there are tools out there in this day and age that are supposed to help you recover those scripts. Just suppose that maybe the tool(s) don’t have a record of your script either. Are you up a creek without a paddle at this point? Are you truly forced to start all over again?

Let’s make this predicament even more unfortunate for the DBA. The DBA doesn’t recall the object upon which the indexes were to be created – or upon which the indexes may already have been created (but the DBA needs to get those indexes to three or four more servers).

A bit Far Fetched?

Does this all seem a bit too implausible or do you know somebody that has experienced something like this? I had a DBA pose this eerily similar scenario to me because it had just happened to him. Let’s be real, crap really does happen at the most inopportune time. We just need to be prepared to deal with it and polish it in the most elegant way we know.

Today, I will show you how to potentially retrieve the indexes that were recently created and maybe, just maybe, save yourself a little time and prevent the dreaded rework. Understand, however, that this is not a bulletproof solution. It is merely a plausible method to potentially recapture the indexes recently created. I will explain in more detail shortly the issues with this method. First the script.
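A script along these lines applies STATS_DATE to sys.indexes; the LastModDate alias is the column referenced in the results. The one-day lookback is an illustrative choice, so widen it as needed:

```sql
SELECT DB_NAME()                           AS DatabaseName,
       OBJECT_NAME(i.object_id)            AS TableName,
       i.name                              AS IndexName,
       STATS_DATE(i.object_id, i.index_id) AS LastModDate
FROM sys.indexes AS i
WHERE i.name IS NOT NULL
  AND STATS_DATE(i.object_id, i.index_id) >= DATEADD(DAY, -1, GETDATE())
ORDER BY LastModDate DESC;
```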

If I run that query on a database, I might see information such as the following shows.

Interestingly enough, all of these indexes were certainly not created on the dates shown in the LastModDate column. As it turns out, the trick I am using (Stats_Date) will give us relative dates of indexes being created but it also gives us the date of the last time the index was updated (or stat was updated). This means you may end up with a ton of indexes to try and sift through to find that index you just created (and forgot the pertinent information). So, while this method does have a use case and can help you prevent rework, it doesn’t do much good for an audit requirement. Stay tuned because the next couple of articles will discuss how to better audit your indexes and get the precise information you need. Not only will you be able to find that recent index addition, but you will also be able to assuredly know when an index was created, altered, or dropped – and by whom.

Put a bow on it

This article showed a quick script to help determine indexes that were created recently. This script will help you out of a rough spot and help reduce the chance of rework. That said, you will definitely want to add a bit of auditing on indexes to your environment because, while it is helpful, it will not help you in cases where the index was created a few days in the past, nor will it determine when an index may have been dropped.

Interested in more back to basics articles? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the first article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Ad Hoc Queries Disabled

DBAs and ad hoc queries sometimes go together like oil and water. While we may prefer to avoid the ad hoc queries, sometimes it is the best method to achieve a task.

What exactly is an ad hoc query though? An ad hoc query is a query that should serve a single-use purpose (not always the case), is routinely unplanned, and quite possibly was untested (again, not always the case). Sometimes, however, the definitions of ad hoc don’t fully apply to queries in SQL Server. One such case is through the use of dynamic type queries and queries issued to linked servers via openrowset.

In my experience, a query issued against a linked server using the openrowset functionality is typically more of a prepared statement. However, it is treated as an ad hoc query. As such, you may encounter an error you were not expecting.

Msg 15281, Level 16, State 1, Line 161
SQL Server blocked access to STATEMENT ‘OpenRowset/OpenDatasource’ of component ‘Ad Hoc Distributed Queries’ because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of ‘Ad Hoc Distributed Queries’ by using sp_configure. For more information about enabling ‘Ad Hoc Distributed Queries’, search for ‘Ad Hoc Distributed Queries’ in SQL Server Books Online.

This article will help show how to resolve for this error so your queries may go on as intended.

Ad Hoc

Let’s first take a look at a pretty typical type of query I use routinely to check various things on my lab servers.
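The shape of that query is roughly as follows; the server name and the remote query are illustrative, with the remote side pulling from the Extended Events DMVs:

```sql
-- LabServer1 is an illustrative linked server / lab server name
SELECT xes.*
FROM OPENROWSET(
        'SQLNCLI',
        'Server=LabServer1;Trusted_Connection=yes;',
        'SELECT name, create_time FROM sys.dm_xe_sessions'
     ) AS xes;
```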

This is a query that I use (or something like it) to retrieve various Extended Events data from different lab servers. Notice, I use the openrowset method in order to query my linked server. If I do not have the server configured for ad hoc distributed queries then I will receive the error previously noted. The solution for that error is to enable the setting. We can do that via the following query.
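Enabling the setting is done through sp_configure (it is an advanced option, so that must be shown first):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
```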

Once enabled, the openrowset queries will work across linked servers. This needs to be enabled on the local server in order to send the ad hoc query across to the remote server. With the setting now enabled, running the query that threw the error (from above) now returns the following results.

The Wrap

This article took a look at an error that may occur depending on your use of linked servers and the use of openrowset. Documentation states that this is something that should be done infrequently and alludes to the issue being resolved through the use of linked servers. In my case, this crops up when using openrowset to query my linked servers. I find that using openrowset to query the linked server is far more reliable, efficient, and better performing. YMMV.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Configuration Manager is Corrupt

Categories: News, Professional, Security, SSC
Comments: 1 Comment
Published on: December 17, 2019

The longer one works in the IT industry, the more likely it becomes to experience a broken system in some capacity. One minute it is working and the next it is not. This is true even of systems that are properly maintained and secured. Stuff happens and pieces break here and there.

I had the distinct pleasure of experiencing this exact problem. One of the essential tools in the DBA tool chest is the SQL Server Configuration Manager. Usually everything works fine here – until it doesn’t. This is a critical component to a well configured server. If nothing else, a DBA should be well enough versed to know that services for SQL Server should be bounced and configured from here. If you were unaware of that requirement – you now know.

Lesser known is the fact that this tool uses WMI to view or change various server settings. WMI allows for a uniform method of interaction with these services and configurations. Almost all of the time, that is a seamless interaction and the DBA is none the wiser. And then it doesn’t work and the DBA grows more grey hairs.

WMIPY Issues

So you come along one day to validate some service configurations for the SQL Server services and instead of the SQL Server Configuration Manager opening as expected, you are greeted with the following error message.

Cannot connect to WMI provider. You do not have permission or the server is unreachable. Note that you can only manage SQL Server 2005 and later servers with SQL Server Configuration Manager. Invalid namespace [0x8004100e]

Your first thought might be that the issue is being caused by UAC, so you try to run the SQL Server Configuration Manager as administrator. Sadly, that attempt results in the exact same error.

What to do? What to do?

Well, the problem is that the WMI provider has become corrupt or been removed somehow. This problem can occur whether you have uninstalled a SQL Server instance or not (some documentation suggests the root cause occurs only if you uninstall a SQL Server instance). For me, this issue has arisen many times and never has an instance been uninstalled. No worries either way, because the solution is simple – run a statement similar to the following.
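The fix recompiles the provider’s MOF file with mofcomp. The path below assumes a default installation; adjust the drive and folder to match your environment:

```shell
mofcomp "%programfiles(x86)%\Microsoft SQL Server\VERSIONNUMBER\Shared\sqlmgmproviderxpsp2up.mof"
```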

There are two caveats with this script execution: 1) you must run this from an elevated command prompt and 2) you must replace “VERSIONNUMBER” with the correct version number matching your version of SQL Server from the following table.

SQLVersionName                  SQLVersionNum
Microsoft SQL Server 2005       90
Microsoft SQL Server 2008 R2    100
Microsoft SQL Server 2008       100
Microsoft SQL Server 2012       110
Microsoft SQL Server 2014       120
Microsoft SQL Server 2016       130
Microsoft SQL Server 2017       140
Microsoft SQL Server 2019       150

If you execute this command and see an error with this code in it (0x80041003) then you have not executed the command with adequate permissions or elevation. You may need to find an administrator for the box and/or execute from an elevated command prompt (e.g. “Run As Administrator”).

The Wrap

This article took a look at a rather annoying little error that can crop up out of the blue when trying to open the SQL Server Configuration Manager. Not only do we get to see that error and the cause, but we also get to see an extremely easy fix for the problem. Employing this easy fix will have you back in business administering the SQL Server services in the proper fashion in very short order.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

When Too Much is Not a Good Thing

In my previous article, I demonstrated some fun ways of locking down access for certain users to ensure they did not have data access while at the same time allowing them to review the database schema. The solution provided had the added benefit of reducing the chances that such a user could unknowingly cause a server outage.

In that solution, I showed the value of the “View Definition” permission. There are a couple of caveats to this permission that could lead to unwanted results.

It is these unwanted results that often occur due to lack of attention to detail, use of internet examples, or misunderstanding of the requirements – and almost always from granting way too much access. I am going to explore a couple of these erroneous methods and why they absolutely bug me.

You Get Everything!

Let’s start with the most common mis-step that I see. A user requests access to the database and the stand-in DBA takes the easy route. The easy route here is to grant the requester sa access, dbo access, and every other database and server role there is because it is easier to just tick the boxes.

This method of assigning permissions happens far too frequently. I have seen it in hundreds of databases for clients. I have also written about it here.

You see, once you grant somebody sa access, none of the rest of the permissions grants or denies matter. Sysadmin overrides everything! In this particular case, this would be way too much access for the user who just needs to see the schema. Be extremely careful about when and to whom you grant sysadmin access.

Next up in the realm of granting way too much access is the far too frequent practice of assigning the permissions to the public role in each database. I have written about this practice as well – here and here. This one really gets my ire. Granting permissions to the public role starts to cross into the realm of high risk and downright negligent. Let’s see how some internet examples demonstrate the solution to our problem with regards to the public role.

First, using the test user and role we created in yesterday’s article, let’s confirm that we do not have access to view definitions outside of the assigned role permissions.
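A permissions check along these lines reveals which database principals currently hold “View Definition”:

```sql
-- Who holds VIEW DEFINITION in the current database?
SELECT pr.name            AS Principal,
       pr.type_desc       AS PrincipalType,
       pe.permission_name,
       pe.state_desc
FROM sys.database_permissions AS pe
JOIN sys.database_principals  AS pr
  ON pe.grantee_principal_id = pr.principal_id
WHERE pe.permission_name = 'VIEW DEFINITION';
```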

After executing that query, I can confirm that the public role does not have the “View Definition” permission. Let’s now revoke the permission from the role and confirm that no schema definitions can be viewed.

Now, knowing that the user and the role do not have permissions, let’s go ahead and assign permissions to the public role.
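The internet examples typically boil down to a single statement, something like:

```sql
-- The "easy" internet fix: grants every user in the database VIEW DEFINITION
GRANT VIEW DEFINITION TO public;
```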

Just like magic, a user that should not be able to view schema definitions can now view information it is not supposed to see. This is a security problem. Just to confirm, let’s evaluate the principal permissions compared to the role permissions.

Keep this in mind as you go down the dark path of granting permissions to the public role. Just because it is easy doesn’t make it right. On a similar note, you should proceed cautiously with scripts from the internet that suggest granting permissions to the public role. They are wrong!

Finally, we drop down into the realm of “only slightly better” but still way over the top for the requirements. This last method uses the “VIEW ANY DEFINITION” way of granting permissions. The “ANY” keyword actually scopes the permission to every database on the instance. Let’s clean up the last example by revoking the permission from public so we have a blank slate.

With permissions revoked, I will now grant View Any Definition to the test user. Why? Well, because I don’t have that role created in every database (more on the benefits of that approach later). When setting this permission, it is server scoped, so make sure you are in the master database else you chance seeing this error.
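The grant itself is a one-liner, using the test login from this series:

```sql
USE master;  -- VIEW ANY DEFINITION is server scoped
GO
GRANT VIEW ANY DEFINITION TO testvwdef;
```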

Msg 4621, Level 16, State 10, Line 22
Permissions at the server scope can only be granted when the current database is master

Yup, that just made my blood curdle a bit. Let’s check out the perms now in the DBA database (reminder that the scope of the permission change was the master database and is a server scoped change).

As we can see here, all databases now have the View Definition permission assigned to testvwdef. If there happens to be a sensitive database, you may have exposed yourself to a risk by doing this.

Performing the same tests as previously done would reveal similar results – except on a broader scope.

There are some caveats to help reduce this risk but they do not undermine the need to be responsible or the need to avoid using the “ANY” keyword. Recall that I granted the View Any to the individual user instead of the role? That was simply due to me not having that role in each database. If we grant permissions to roles, and the role does not exist in the master database while trying to grant permissions from that scope, then we see this nifty little message.

Msg 15151, Level 16, State 1, Line 25
Cannot find the login ‘BusinessViewDef’, because it does not exist or you do not have permission.

That is one measure to help prevent this over-permissioning problem. The next possible caveat is a minor stop-gap and can be easily overlooked should the user ever be added to the database in the future. For “Any” to work in its full splendor, the principal to which it was granted must have already been granted “connect” to the database.

 

The Wrap

This article has explored various options for minimizing the risk of overextending permissions to users. With a little planning and care, we can find various ways to fulfill user requests without compromising the environment.

Every now and then, it will require extra effort in trying to get the pertinent details from the requester. That said, with that extra effort you will find satisfaction and calm knowing your environment is secure and that you are providing quality service to your customers.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Who needs data access? Not You!

Does this sound at all familiar? A user submits a request stating they must have elevated access to a database. In your attempt to gather more information about the request, the best justification offered is that so-and-so has elevated access and they are curious about the database.

The conversation doesn’t go quite as smoothly as that. In fact, it may take quite a lot of effort to squeeze that tiny bit of real information from the request.

After assembling that tidbit of information, you may find yourself choking back a scoff or three. “You need access because you want to explore the database?” Yeah I can hear the incredulity in your thoughts from here. At no point is this kind of “need” a legitimate reason to gain access to a database.

“I need elevated access!” Oh No you don’t!

Beyond the back and forth of the demand for access just to play around in the data and the DBA resisting the access in the best interest of the data and company, there may possibly be a middle ground that can be reached. I am not talking about creating a dummy database with obfuscated data. Nor am I talking about a dummy database with fake data.

Every now and then, this kind of request may cross your desk and all the requester is really trying to do is find some means of becoming familiar with the database. This can be ferreted out with some extra warnings and more discussions with the requester. Maybe the request really has nothing to do with the data and everything to do with just finding a way to get to know the schema because there is no documentation.

Given the plausibility of this need, there are many ways to grant the request – some may require more resources than others. Let’s suppose the user has absolutely no understanding of TSQL. With that minimal knowledge, you know there are likely to be many potential issues that could cause the database to be brought down and made unusable. This is terrible if the affected database is in production. It is also very bad if the database is a non-prod database. Given this potential, you know you don’t want to grant any permission that implies read or write access to the database (if there is data present).

What if we could create a database without the data? Would that work? Absolutely that would work. This would in essence be a sandbox for the user, and all responsibility for database issues could be directed back to the specific user. This is a fabulous solution if you have adequate resources for more databases/instances/servers to be created. There are multiple ways to create an empty database, but to ensure the best end-user experience, considerations for database naming, resources, and potential impact to other databases must be acknowledged.

Personally, I prefer to maintain the same database name regardless of tier (dev, test, prod etc). This also applies to schema only types of databases where limited access would be granted. Changing database names on the users only breeds confusion and complicates code promotions. If you need to provide the same sort of solution for 30-40 or more databases, you can imagine what kind of confusion that would cause on a server where the actual data-filled databases actually exist (think dbcc clonedatabase types of solutions here where the cloned database must be named differently).

What to do? What to do?

Thankfully there is still at least one more entirely viable option. We can use the power of security to achieve some semblance of what the user wants while still preventing the person from viewing the data or causing performance issues.

Limiting access for prying eyes is fairly easy while still granting the opportunity to explore the database design. In SQL Server we have a permission called “View Definition” which will help us accomplish exactly this requirement. In preparation for this, one must determine the scope of the need. Generally speaking, this should be done on a per-database approach. There is a method to grant this for all databases on the server but that is a topic for another day. Once the scope is determined, it is time to proceed with the setup.

I prefer to take advantage of Roles within the database. This simplifies the future maintenance and management of the permissions. If a new principal needs the same permissions as Joe from accounting, then I just add the new user to the same role(s) as Joe – piece of cake.

With the Role created, user added to the role, and the “View Definition” permission granted to the role, we are ready to test.
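A minimal sketch of that setup might look like the following. All of the names here (database, login, user, and role) are hypothetical examples, not the ones used in my environment:

```sql
-- All names below are hypothetical examples.
USE SandboxDB;
GO

-- A SQL login and a matching database user for the requester.
CREATE LOGIN SchemaViewer WITH PASSWORD = 'SuperDuperStrongPassword1!';
CREATE USER SchemaViewer FOR LOGIN SchemaViewer;
GO

-- A role keeps future permission maintenance simple:
-- new principals just get added to the role.
CREATE ROLE SchemaReviewRole;
ALTER ROLE SchemaReviewRole ADD MEMBER SchemaViewer;
GO

-- Grant the ability to view object definitions, database-wide, to the role.
GRANT VIEW DEFINITION TO SchemaReviewRole;
GO
```

Granting to the role rather than the user is the piece that pays off later: the next requester needs only an ALTER ROLE … ADD MEMBER.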

First order of business is to connect with the new login we created (a SQL login was created and is used in this example, but the effects are the same with a Windows login) as shown in the preceding image. After connecting, let’s try some things.

The first test I want to perform is to confirm the user has no read access to the data, ensuring this permission does not grant extra access along the way.
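The check itself might look something like this (the table name is a hypothetical example); run while connected as the restricted login, the SELECT should fail with a permission-denied error:

```sql
-- Connected as the restricted login; "dbo.Customers" is a hypothetical table.
SELECT TOP (10) *
FROM dbo.Customers;
-- Expected failure, along the lines of:
-- Msg 229: The SELECT permission was denied on the object 'Customers'...
```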

Great! Select access to objects from this permission alone does not work. Now, let’s test the ability to view object definitions. First, let’s try via the object designer.

First we see a warning about not having enough permissions to save changes, which is good. After clicking OK on that warning, a new message pops up to remind us that the table is “read only”. Perfect! The permissions we gave this user do not allow them to alter objects – just view them (as we would expect).

Seeing these results should also help confirm that we can browse the table definition directly in SSMS. Let’s take a gander from a TSQL approach.
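One common way to do this from TSQL is the system stored procedure sp_help (the table name below is a hypothetical example):

```sql
-- Returns the columns, data types, identity info, indexes, and constraints
-- for the named object. "dbo.Customers" is a hypothetical table.
EXEC sp_help 'dbo.Customers';
```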

Great, we can explore the table definitions etc with TSQL. That is of course a system stored procedure, so let’s try one more method.
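A custom query against the catalog views works too. Here is one possible sketch (no specific object names are assumed; it lists every user table):

```sql
-- Explore column metadata for all user tables via the catalog views.
SELECT  o.name        AS table_name,
        c.name        AS column_name,
        t.name        AS data_type,
        c.max_length
FROM sys.objects AS o
JOIN sys.columns AS c ON c.object_id = o.object_id
JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
WHERE o.type = 'U'          -- user tables only
ORDER BY o.name, c.column_id;
```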

Running this query, I will see results similar to this.

Knowing that we can run custom queries to explore the schema is almost comforting. It does confirm that we are able to query system objects (yes, a select works here, but this is read-only object definition data). But does this mean the user can also see system data? Let’s try!

Excellent! So it seems we have a plausible option to keep the footprint small and minimize access to the data.

The Wrap

This article has explored several options for reducing access to data for users who “need” it. With a little planning and care, we can find ways to fulfill user requests without compromising the environment.

Every now and then, it will require extra effort to get the pertinent details from the requester. That said, with that extra effort you will find satisfaction and calm, knowing your environment is secure and that you are providing quality service to your customers.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): backups, backup history, and user logins.

The Gift of the SPN

TSQL Tuesday

The second Tuesday of the month is upon us once again. That means it is time for another group blog party called TSQLTuesday. This party, started by Adam Machanic, has been going long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a roundup of all TSQLTuesdays over here.

This month, we are looking at the last opportunity for the year to contribute to this long-standing tradition. This month being right in the middle of the holiday season, we have an opportunity to explore how we have been blessed over the past year or so.

I personally like the topic. I have one comment concerning it before diving in too far: I feel it wise to reflect on how you may have been blessed in various ways on a far more regular basis; doing so just once a year is far too infrequent. Thanks to Mala (b | t) for bringing this topic up to remind us of how important it is.

In Mala’s words:

This is a time for material gift giving, for many of us. It might also be a time to consider the many gifts we have received through the year, and perhaps use this opportunity to appreciate people or situations that we were blessed with.

SPN

Service Principal Names (SPN) are used by Kerberos to help associate a service instance to a service logon account. SPNs are useful and powerful little things. That said, sometimes they can cause wonderful little issues with connectivity to SQL Server. One of the more common issues is an error message such as this:

Cannot Generate SSPI Context

As circumstances would have it, the cause of this issue is stated in a KB article as follows.

A Service Principal Name (SPN) for the SQL Server Database Engine may either be missing, misplaced, or is a duplicate to other SPNs configured in the Active Directory of the domain.
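Before reaching for any tool, one quick way to see whether a given session actually negotiated Kerberos (rather than silently falling back to NTLM, a common symptom of a missing or bad SPN) is the sys.dm_exec_connections DMV. This is a general diagnostic sketch, not a step from the KB article:

```sql
-- Show the authentication scheme for the current session.
-- KERBEROS means the SPN resolved correctly;
-- NTLM can indicate a missing or misconfigured SPN.
SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```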

I have known many people to nearly run out of hair dealing with SPN issues and the dreaded SSPI context error. Gratefully, there are ways to resolve some of these issues and make life just a touch easier. One of the easier ways is a lesser-known tool that you can download from Microsoft – here.

The tool is fairly simple to install and use. The one sticking point is knowing where to find it after it is installed. Here is the path to the executable.

%SystemDrive%\Program Files\Microsoft\Kerberos Configuration Manager for SQL Server

The link mentioned provides the path and the install instructions, along with more info on how to use the tool from the command line. Personally, I use the tool to discover the SPNs on the server in question and then to determine whether any might be missing.

Upon opening the tool, you are greeted with this functional screen.

From the welcome screen, click on the “Connect” menu option. From there, a new screen opens that prompts for the server name, service account, and password. However, if you are just connecting to the local server, none of that is required and you can simply proceed. After a few minutes, and with a bit of magic, the SPNs are evaluated and a report is generated. Along with this report, there is even an option to “Fix” any missing SPNs.

This tool really takes some of the room for error out of the equation and makes the job fairly easy. I recommend it. I have shared this with some sysadmins and they found it to be very helpful and time saving. Check it out!

Wrapping it Up

Every once in a while, an extremely valuable tool comes along. While the footprint and use frequency of this tool may not be that big, it is essential to making the DBA job easier. I recommend getting this tool – especially if you have ever run into SSPI issues.

In this article I showed how the Kerberos Configuration Manager tool for SQL Server can help troubleshoot SSPI context errors and wayward SPNs. Another exceptionally useful tool is Extended Events. If your XE skills are in need of a little tune-up, I recommend reading a bit on Extended Events to get up to date. For other basics-related articles, feel free to read here.
