Creative Database Naming

Categories: News, Professional, Scripts, SSC
Comments: 1 Comment
Published on: January 2, 2020

Every now and again, we as DBAs need a break from the daily grind of humdrum tasks. We need to find a way to have a little fun (the harmless kind, free of mal-intent).

Sometimes, this fun comes through learning opportunities. Sometimes, it comes from diving deep into SQL Server in some way; I, at least, find that enlightening and enjoyable. Sometimes, it is just fun for the sake of a good laugh. What if we could find an opportunity that combines learning with a laugh or two? BINGO!

Enter the opportunity to play with emojis, collations, and database (or object) names.

Fun with Characters

Let’s say we have a requirement to create a database for sensitive data. Due to the sensitivity of the data, it is classified confidential (for your eyes only; don’t talk about it, and plug your ears if somebody starts talking about it). It is so sensitive that an apt name for the database could be 🙈 or 🙉 or 🙊. Being smart, you know there are two more databases coming down the line, so you pick just one of those for the name rather than all three (though all three could make sense for a single database name).

Being on top of your game, you prep for all three databases in advance. Here is the script to create the three new databases.
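A minimal sketch of what that script might look like (default database options assumed):

-- Sketch: create the three "confidential" databases (default options assumed)
CREATE DATABASE [🙈];  -- see no evil
GO
CREATE DATABASE [🙉];  -- hear no evil
GO
CREATE DATABASE [🙊];  -- speak no evil
GO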

In executing the script, you receive the following results:

Msg 1801, Level 16, State 3, Line 15
Database ‘🙈’ already exists. Choose a different database name.
Msg 1801, Level 16, State 3, Line 17
Database ‘🙉’ already exists. Choose a different database name.
Msg 1801, Level 16, State 3, Line 19
Database ‘🙊’ already exists. Choose a different database name.

You look up and down through your SSMS window and even query sys.databases to verify those databases are not already present. You can confirm that none of them currently exist on the server, yet the script continues to fail. How could this be? Clearly, each of the three emojis is a very different character.

From the database review, you know there is only one other database with an emoji-style name – 👺. What could it be? Well, as it turns out, it is the collation that was chosen for the SQL Server install. The chosen collation doesn’t support a large enough range of code points, so most emojis end up getting mapped to the same thing when stored in SQL Server (a topic for a much larger article, covered very completely by Solomon Rutzky on his blog – here). The short of the issue is the use of supplementary characters, which split a code point into two surrogate code units. For most emojis, the “high” surrogate ends up being the same. Let’s take a look at it.
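One way to see this is with UNICODE(), which returns the integer value of only the first UTF-16 code unit – on a non-SC collation, that is the high surrogate (a sketch, run on a server using the default collation):

-- On SQL_Latin1_General_CP1_CI_AS, each of these returns 55357 (0xD83D):
-- the shared high surrogate, not the real code point.
SELECT UNICODE(N'🙈') AS SeeNoEvil,
       UNICODE(N'🙉') AS HearNoEvil,
       UNICODE(N'🙊') AS SpeakNoEvil,
       UNICODE(N'👺') AS DevilMask;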

Notice how each of the monkeys plus the devil mask all translate to the same code point when using SQL_Latin1_General_CP1_CI_AS (the default collation)? This is the root of the problem. If we change to a different server with a better collation (e.g. Latin1_General_100_CI_AS_SC), then we will see vastly different results.

First, let’s validate that the collation on this other instance truly displays the correct code points (and it does) instead of just the high surrogate.
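A sketch of that validation, forcing the SC-aware collation explicitly:

-- With an SC collation, UNICODE() returns the full supplementary code
-- points: 128584, 128585, 128586, and 128122 respectively.
SELECT UNICODE(N'🙈' COLLATE Latin1_General_100_CI_AS_SC) AS SeeNoEvil,
       UNICODE(N'🙉' COLLATE Latin1_General_100_CI_AS_SC) AS HearNoEvil,
       UNICODE(N'🙊' COLLATE Latin1_General_100_CI_AS_SC) AS SpeakNoEvil,
       UNICODE(N'👺' COLLATE Latin1_General_100_CI_AS_SC) AS DevilMask;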

Then let’s go ahead and create each of those three high security databases (seenoevil, speaknoevil, and hearnoevil). Et voila! We now have each of those new databases created and they are indeed unique as we desired (as you can see in the preceding image).

If you want to learn all of the hard-core nitty-gritty behind this, I recommend reading the blog by Solomon Rutzky – here. Solomon knows his 💩 about collations and can answer (or figure out how to answer) just about anything you could throw at him on the topic. His blog is a WEALTH of in-depth knowledge with all sorts of tests and proofs. Check it out!

Put a bow on it

Working as a DBA and creating databases doesn’t have to be the same old humdrum routine day in and day out. This article showed how to have a little fun while creating databases and database objects. At the same time, I showed how the collation of your server can impact your naming schemes.

Interested in learning about some deep technical information instead? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the ninth article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

 

*Post Mortem from Solomon who is the collation genius

And, that brings us to the next fun fact: being able to use more than one emoji for database names has nothing to do with whether or not SQL Server sees a surrogate pair as being a single supplementary character or merely two surrogate code points. Using SCA collations only affects the behavior of the built-in functions, even though the official documentation says that they help in sorting and comparison (the doc is wrong and I haven’t had time to submit a correction).

Being able to name more than one database with one or more emoji characters is entirely based on how the characters compare to each other, and that is a function of sort weights, which are managed separately. Sort weights do map to code points, but they can change in value depending on the locale/culture being used, as well as the sensitivities selected (i.e. case, accent, etc). The problem you run into with at least most of the version 80 collations (i.e. all Windows collations without a version number in their names, and all SQL Server collations) is simply that they never defined any sort weights for either supplementary code points or the surrogate code points used to create surrogate pairs. And with no sort weights defined, their sort value is always 0; and since they are all “0”, they equate not only to each other, but to anything else with a sort weight of “0” – even an empty string.
Starting with the version 90 collations (even without the “_SC” flag in the name), sort weights were added to the surrogate code points (not to supplementary characters) so that they could at least be distinguished from each other in sorting and comparison operations. And this is why:
  1. you were not able to create the 3 monkey DBs using SQL_Latin1_General_CP1_CI_AS

  2. you were able to create the 3 monkey DBs using Latin1_General_100_CI_AS_SC

  3. you can create those 3 monkey DBs using SQL_Latin1_General_CP850_BIN2, even with that being a SQL Server collation, because binary collations don’t use sort weights, but instead just go by each byte or code unit (depending on BIN or BIN2, respectively)

Cannot Open Backup Device

Your success as a DBA is directly related to your ability to ensure proper database backups are occurring. A whole slew of considerations comes into play when discussing what a proper database backup actually is. Drilled down to its most basic definition, a proper database backup is one that fulfills business requirements for recovery and retention, and that has been routinely tested to ensure it can be restored.

Remember, a backup that cannot be restored is factually not a backup. Why? Well, going back to the definition, a backup is a fail-safe mechanism to be used in the event of a failure. If it cannot be used (in this case, restored), then it provides no value.

What about when you run into a problem even getting a successful backup operation in the first place? For instance, this error can be a bit troubling and may cause you to lose plenty of time troubleshooting it.

Msg 3201, Level 16, State 1, Line 3
Cannot open backup device ‘\\SomeServer\C$\MSSQL14\MSSQL\DATA\Full_Backup2.bak’. Operating system error 1326(The user name or password is incorrect.).
Msg 3013, Level 16, State 1, Line 3
BACKUP DATABASE is terminating abnormally.

Sure, the error seems to report plenty of adequate information, but you may be surprised at the pain this particular error can cause.

Backup Device Permissions

If you search long enough, you will come across numerous articles and forum posts that all point in pretty much the same direction: the problem is a permissions issue. There are a few problems with that diagnosis, however. In a recent issue for a client, we could verify that permissions were indeed not at fault. To rule out permissions, we granted the service account local admin in the OS as well as sysadmin within SQL Server. In addition, we also tried a proxy account with full-blown access in Windows and in SQL.

To take it a step further, we also ensured that the special permissions for service accounts were in effect. Those permissions are:

  1. Permission to bypass traverse checking (SeChangeNotifyPrivilege)
  2. Permission to replace a process-level token (SeAssignPrimaryTokenPrivilege)
  3. Permission to adjust memory quotas for a process (SeIncreaseQuotaPrivilege)
  4. Permission to access this computer from the network (SeNetworkLogonRight)

None of this seemed to have any effect when trying to perform a database backup in SQL Server. In our case, we also ensured that the service account had full-blown access to the folders in question.

Following the traditional troubleshooting measures, I created a virtual backup device as well, only to get the same error. I tried the UNC path from a remote server and it worked perfectly fine for all accounts in question. What could be going on? Then I figured, why not try the UNC path from the server itself? Finally, a clue! Suddenly I was being prompted for my credentials, but they would not work no matter what. Just for giggles, I verified my account was not locked out; indeed, it was working just fine.

As it turns out, sometimes the problem is not really a permissions issue but rather a veiled security issue. You see, when I used the actual server name instead of the CNAME in all the cases above, suddenly everything worked. The issue turns out to be more along the lines of a Windows issue and is documented here (still valid on Windows Server 2016).

Here is the basic SQL backup test script that led us down the path to the CNAME being the issue.
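A sketch of that test script, reusing the placeholder UNC path from the error message above:

-- Test backup to the CNAME-based UNC path (server and path names are
-- the placeholders from the error above).
BACKUP DATABASE [master]
TO DISK = N'\\SomeServer\C$\MSSQL14\MSSQL\DATA\Full_Backup2.bak'
WITH INIT, COPY_ONLY, STATS = 25;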

Put a bow on it

When running into error 1326, it makes plenty of sense to create a backup dump device (only as a testing exercise) and to test connectivity to the UNC path from the local server as well as from a remote server. In my case, the problem was that the local server was blocking access to a CNAME version of the server name.

Interested in learning about some deep technical information instead? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the eighth article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Negative Port Numbers

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: December 31, 2019

As a DBA, one of the more critical tasks is to confirm proper setup of a SQL Server. In order to confirm proper setup, I like to validate tcp/ip and port settings. I do this not just through the SQL Server Configuration Manager, but also via query through Management Studio.

Occasionally, when validating the port configurations through this method, you will encounter a fun little puzzle. If you are paying attention to the results, you just may see something that could cause you to scratch your noggin.

Of course, it all depends on the query method you choose for your validations. There are two easy methods to query for the TCP/IP and port settings, and each may produce slightly different results. Let’s explore these methods.

What’s in a Port?

First let’s go with the easiest of the queries. We will query sys.dm_exec_connections with something that looks like the following.
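A sketch of that query (column selection is mine):

-- TCP connections and the port each one arrived on
SELECT ec.session_id,
       ec.net_transport,
       ec.local_net_address,
       ec.local_tcp_port
FROM sys.dm_exec_connections ec
WHERE ec.net_transport = N'TCP';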

Executing this query via sqlcmd on my server in question, I might see something like the following in the results.

As you can surmise from the image, the instance I am setting up and validating happens to have multiple IP addresses and a more complex setup because I have it listening on different ports depending on the IP address source. We will focus on the non-default IP address for the purposes of this article. In the local_tcp_port field, you can see that the ports are being reported as expected. And for the sake of simplicity, these ports are correct.

Let’s now divert our attention to the alternative option – CONNECTIONPROPERTY().
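A sketch of the equivalent check for the current connection:

-- Port and address for the current connection only
SELECT CONNECTIONPROPERTY('net_transport')     AS net_transport,
       CONNECTIONPROPERTY('local_net_address') AS local_net_address,
       CONNECTIONPROPERTY('local_tcp_port')    AS local_tcp_port;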

In the results window, I see something like this:

Take quick note of the port number I have circled in red. This doesn’t match the original query at all. In fact, it doesn’t come anywhere close to the actual port number. In addition, the port number shown here is a negative value. Obviously, a negative port is not correct, as TCP/IP ports only range from 0 to 65535. So what is happening here? Let’s change this query just a little bit and combine the two sources.
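Here is one way to combine them (a sketch):

-- Compare the DMV port to the CONNECTIONPROPERTY() port for this session
SELECT ec.local_tcp_port AS dmv_port,
       CONNECTIONPROPERTY('local_tcp_port') AS connprop_port
FROM sys.dm_exec_connections ec
WHERE ec.session_id = @@SPID;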

The results of this query give me the following.

OK, cool. So we can see that some sort of masking has been applied to conceal the port number when it is a “dynamic” port. What is the significance of 65536 though? Well, it just so happens that 2^16 is 65536, giving possible port values of 0-65535 (with port 0 reserved and unusable). Thus, we simply do a little math to figure out the actual port by adding 65536 to the negative port value. That is shown in the following code snip.
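A sketch of that math, including the conversion discussed next:

-- CONNECTIONPROPERTY() returns sql_variant, so convert explicitly before
-- doing the math; negative values get 65536 added back.
SELECT CASE
         WHEN CONVERT(int, CONNECTIONPROPERTY('local_tcp_port')) < 0
           THEN CONVERT(int, CONNECTIONPROPERTY('local_tcp_port')) + 65536
         ELSE CONVERT(int, CONNECTIONPROPERTY('local_tcp_port'))
       END AS actual_port;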

When calculating the actual value from sys.dm_exec_connections and ConnectionProperty(), ensure you perform an explicit conversion on the value from ConnectionProperty(). Despite documentation showing the port as an integer value, it is not. The data type of the port value from ConnectionProperty() is actually sql_variant, and an implicit conversion won’t work for the math.

Put a bow on it

Validating your server setup is an integral component of your duties as a SQL Server DBA. When performing those validations, it is possible to run into an intriguing difference in reported port values. That difference in value is easily rectified once you understand that the ConnectionProperty() function is applying a bit of a port mask, effectively subtracting 65536 from the actual port number value.

Interested in learning about some deep technical information instead? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the seventh article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Where is that Mask?

Comments: 1 Comment
Published on: December 28, 2019

It is always a pleasantly alarming situation when, all of a sudden, a client calls because an automated routine is broken. No matter what the routine is, it is not the best news to get from a client – especially if it is a routine you helped them migrate to a new server.

I am sure you can hear the klaxon-like sound in your head, mimicking the panic and shrill anxiety of a department head saying “it has always worked in the past!”

Not the best news to receive, but also no cause for panic. You remain calm, cool, and collected as you get ready to start troubleshooting the problem.

During a recent client engagement, I had the pleasure of helping them solve a similar problem. Shortly after a server migration and after having proven the process to be properly working, the automated routine started doing something entirely different and new.

It’s Hidden

You see, what once was working suddenly started masking data for a critical component of the process. Due to this change in the data, bank accounts were no longer properly syncing, and this caused a problem for the accounting department, who could no longer balance the books. Luckily, the issue was found within a day and the resolution started promptly; otherwise it could have been a nightmare.

The first order of business, after discovering that the process was indeed running as expected and that it was just the data that had changed, was to figure out what had changed. Was there a code change, maybe? Maybe somebody with too many permissions decided to enable a feature on the database side?

We went through and validated everything we could think of. There were no code changes in the related procs, views, or otherwise. Then we checked for any sort of database change, such as encryption being enabled for this key field. There was no encryption or other database feature enabled, but the behavior was eerily similar to some sort of data masking feature having been turned on.

I set out to confirm whether data masking was enabled to any extent within the database. An easy way to do that is with the following script.
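A sketch of such a script, relying on sys.masked_columns:

-- Any rows here mean Dynamic Data Masking is in play in this database
SELECT DB_NAME() AS database_name,
       OBJECT_SCHEMA_NAME(mc.object_id) AS schema_name,
       OBJECT_NAME(mc.object_id) AS table_name,
       mc.name AS column_name,
       mc.masking_function
FROM sys.masked_columns mc;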

In this particular case, the result of the query was an empty set. There was nothing enabled on the database side that could have caused this sudden change in data behavior. That said, at least we were able to truly confirm that data masking was not enabled within the database (or any database on the server, for that matter).

As it turned out, somebody from the finance team had enabled the masking feature from within the application. Since it was entirely application-driven, the data was replaced with a masked version, so the critical data was overwritten and could not be un-masked. Due to this, a new revision to the process had to be created on the fly to get them back in business.

Put a bow on it

There may be times when data changes in ways that have negative effects on downstream automated processes. When these changes occur, how equipped are we to figure out the root cause? Today, I shared one easy script that can help eliminate one database feature as the culprit when anomalous data masking suddenly appears. The more tools you have at your fingertips, the better a data professional you can be. Add this one to your tool belt!

Interested in more Extended Events articles? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the fourth article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Easy Audit Index Changes

Comments: No Comments
Published on: December 27, 2019

In the previous article, I showed a somewhat elaborate method that could be used should there be a need to audit changes to your indexes.

As I learned many years ago in school, the elaborate or difficult method may not always be the best method. That said, we often need to learn the more elaborate methods in order to better appreciate the simpler, more efficient methods available. This article will help demonstrate a much better method.

Auditing

When looking for an easy method, one of the first technologies to try really should be Extended Events (xevents). Through the use of xevents, we have a large tool set at our disposal. Not only is the tool set rather large, it is also an extremely flexible multi-faceted tool set giving us greater potential to solve problems with simple tools.

Let’s look at how we can solve this problem through the use of xevents.
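A sketch of such a session follows. The three events, the actions, and the event_file target are real XEvent objects; the session and file names are my own, and the commented database_name predicate is the one referenced below. If your build rejects the string form of the object_type comparison, look up the exact map value in sys.dm_xe_map_values.

CREATE EVENT SESSION [AuditIndexChanges] ON SERVER
ADD EVENT sqlserver.object_created (
    ACTION (sqlserver.database_name, sqlserver.server_principal_name, sqlserver.sql_text)
    WHERE ([object_type] = 'INDEX'
           AND [ddl_phase] = 1  -- commit only; verify values via sys.dm_xe_map_values
           /* AND [sqlserver].[database_name] = N'SomeDB' */)),
ADD EVENT sqlserver.object_altered (
    ACTION (sqlserver.database_name, sqlserver.server_principal_name, sqlserver.sql_text)
    WHERE ([object_type] = 'INDEX' AND [ddl_phase] = 1)),
ADD EVENT sqlserver.object_deleted (
    ACTION (sqlserver.database_name, sqlserver.server_principal_name, sqlserver.sql_text)
    WHERE ([object_type] = 'INDEX' AND [ddl_phase] = 1))
ADD TARGET package0.event_file (SET filename = N'AuditIndexChanges')
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [AuditIndexChanges] ON SERVER STATE = START;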

In the preceding xevent session, I am taking advantage of three events: object_created, object_altered, and object_deleted. With the index-focused predicate in place, these events fire only when an index definition is changed, added, or deleted. They will not fire when the index is defragmented (for instance) or when the index’s stats happen to be updated (which is an issue for the stats_date method shown in this article). After deploying this session and then creating an index (shown in the following script), I can expect this session to capture a wealth of information.
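A stand-in for that script (hypothetical table and column names):

-- Hypothetical test index to trigger the session
CREATE NONCLUSTERED INDEX IX_Person_LastName
    ON dbo.Person (LastName);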

In the preceding image, we can see that plenty of pertinent information has been captured. The image does not show the person who made the change, but that information is also captured with this session. Should I wish to focus this session on a specific database, that is definitely possible by removing the comment tags from the database_name predicate in the session script I have shared.

Put a bow on it

This article showed an easy method to audit for index changes. The weak link in a solution such as this really boils down to the requirement that the solution needs to be in place before the index change occurs. Otherwise, it would all be for naught.

Interested in more Extended Events articles? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the third article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Audit Index Changes

Categories: News, Professional, SSC
Comments: 3 Comments
Published on: December 26, 2019

In the previous article, I showed a quick and dirty method to try and capture the indexes that may have been just created. We all know that in the hectic life of the DBA, it is easy enough to forget the indexes created more than just a few hours in the past.

In that article, I mentioned that the solution provided was far from a complete solution to the entire problem. A bigger part of the picture involves audits to know when and by whom indexes were created, altered or dropped.

Auditing

On many an occasion I have had clients request help with tracking the changes that have occurred to their indexes. The reasons vary slightly but generally boil down to two large categories: 1) to better understand who is making changes and 2) to help ensure proper indexes are not removed due to code promotions (various apps like to wipe out custom indexes which causes performance issues).

With that in mind, the following provides a solution that will help track indexes that have changed (in definition) or that have been added or removed during the audit interval. The interval can be set per your specific requirements; historically, I have just set this to run once a day. The solution is shown below.
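A compact, single-database sketch of that solution (the table and procedure names are mine; the full version iterates all databases and also detects definition changes):

-- Staging snapshot of current indexes plus a log of detected changes
CREATE TABLE dbo.IndexStage (
    SchemaName sysname NOT NULL,
    TableName  sysname NOT NULL,
    IndexName  sysname NOT NULL,
    IndexType  nvarchar(60) NOT NULL
);
CREATE TABLE dbo.IndexDefChange (
    ChangeDate datetime2 NOT NULL DEFAULT SYSDATETIME(),
    SchemaName sysname NOT NULL,
    TableName  sysname NOT NULL,
    IndexName  sysname NOT NULL,
    ChangeType varchar(10) NOT NULL   -- ADDED / REMOVED
);
GO
CREATE PROCEDURE dbo.AuditIndexChanges
AS
BEGIN
    SET NOCOUNT ON;

    -- Current snapshot of user-table indexes
    SELECT OBJECT_SCHEMA_NAME(i.object_id) AS SchemaName,
           OBJECT_NAME(i.object_id) AS TableName,
           i.name AS IndexName,
           i.type_desc AS IndexType
    INTO #Current
    FROM sys.indexes i
    WHERE i.name IS NOT NULL
      AND OBJECTPROPERTY(i.object_id, 'IsMSShipped') = 0;

    -- New since the last snapshot
    INSERT dbo.IndexDefChange (SchemaName, TableName, IndexName, ChangeType)
    SELECT c.SchemaName, c.TableName, c.IndexName, 'ADDED'
    FROM #Current c
    WHERE NOT EXISTS (SELECT 1 FROM dbo.IndexStage s
                      WHERE s.SchemaName = c.SchemaName
                        AND s.TableName  = c.TableName
                        AND s.IndexName  = c.IndexName);

    -- Gone since the last snapshot
    INSERT dbo.IndexDefChange (SchemaName, TableName, IndexName, ChangeType)
    SELECT s.SchemaName, s.TableName, s.IndexName, 'REMOVED'
    FROM dbo.IndexStage s
    WHERE NOT EXISTS (SELECT 1 FROM #Current c
                      WHERE c.SchemaName = s.SchemaName
                        AND c.TableName  = s.TableName
                        AND c.IndexName  = s.IndexName);

    -- Replace the snapshot for the next run
    TRUNCATE TABLE dbo.IndexStage;
    INSERT dbo.IndexStage (SchemaName, TableName, IndexName, IndexType)
    SELECT SchemaName, TableName, IndexName, IndexType FROM #Current;
END;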

To make this solution work, I use several tables and a stored procedure. The stored procedure is run via an agent job on whatever schedule meets your requirements. The tables stage all of the indexes for all of the databases, then log which have changed or have been added or removed. From there, a simple query against the IndexDefChange table reveals which indexes may deserve some attention.

Put a bow on it

This article showed a method to audit for index changes. The weak link in a solution such as this really boils down to the requirement that the solution needs to be in place before the index change occurs. Otherwise, it would all be for naught.

Interested in more back to basics articles? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the second article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Find Index Create Date

Categories: News, Professional, SSC
Comments: 1 Comment
Published on: December 25, 2019

Have you ever been zipping right along, working on tuning a query or maybe a handful of queries, when BAM! All of a sudden you lose the script you were working on? The script just so happened to have a few indexes that you were getting ready to create, and you would rather not start all over again.

Sure, there are tools out there in this day and age that are supposed to help you recover those scripts. But suppose the tool(s) don’t have a record of your script either. Are you up a creek without a paddle at this point? Are you truly forced to start all over again?

Let’s make this predicament even more unfortunate for the DBA. The DBA doesn’t recall the object upon which the indexes were to be created – or the object upon which the indexes had already been created (but from which the DBA needs to script those indexes out to three or four more servers).

A Bit Far-Fetched?

Does this all seem a bit too implausible, or do you know somebody who has experienced something like this? I had a DBA pose this eerily similar scenario to me because it had just happened to him. Let’s be real: crap really does happen at the most inopportune time. We just need to be prepared to deal with it in the most elegant way we know.

Today, I will show you how to potentially retrieve the indexes that were recently created and maybe, just maybe, save yourself a little time and prevent the dreaded rework. Understand, however, that this is not a bullet-proof solution. It is merely a plausible method to potentially recapture recently created indexes. I will explain the issues with this method in more detail shortly. First, the script.
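A sketch of the script, using STATS_DATE() against sys.indexes:

-- Most recently modified (created or stats-updated) indexes first
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS SchemaName,
       OBJECT_NAME(i.object_id) AS TableName,
       i.name AS IndexName,
       STATS_DATE(i.object_id, i.index_id) AS LastModDate
FROM sys.indexes i
WHERE i.name IS NOT NULL
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY LastModDate DESC;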

If I run that query on a database, I might see information such as the following shows.

Interestingly enough, all of these indexes were certainly not created on the dates shown in the LastModDate column. As it turns out, the trick I am using (STATS_DATE) gives us relative dates for index creation, but it also gives us the date the index’s statistics were last updated. This means you may end up with a ton of indexes to sift through to find the index you just created (and whose pertinent information you forgot). So, while this method has a use case and can help you prevent rework, it doesn’t do much good for an audit requirement. Stay tuned, because the next couple of articles will discuss how to better audit your indexes and get the precise information you need. Not only will you be able to find that recent index addition, but you will also be able to know assuredly when an index was created, altered, or dropped – and by whom.

Put a bow on it

This article showed a quick script to help determine which indexes were created recently. This script can help you out of a rough spot and reduce the chance of rework. That said, you will definitely want to add some index auditing to your environment, because while this is helpful, it will not help in cases where the index was created a few days in the past, nor will it determine when an index may have been dropped.

Interested in more back to basics articles? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the first article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Ad Hoc Queries Disabled

DBAs and ad hoc queries sometimes go together like oil and water. While we may prefer to avoid ad hoc queries, sometimes an ad hoc query is the best method to achieve a task.

What exactly is an ad hoc query though? An ad hoc query is a query intended to serve a single-use purpose (not always the case), is routinely unplanned, and quite possibly was untested (again, not always the case). Sometimes, however, these definitions of ad hoc don’t fully apply to queries in SQL Server. One such case is the use of dynamic-type queries issued to linked servers via openrowset.

In my experience, queries issued against a linked server using the openrowset functionality are typically more like prepared statements. However, SQL Server treats them as ad hoc queries. As such, you may encounter an error you were not expecting.

Msg 15281, Level 16, State 1, Line 161
SQL Server blocked access to STATEMENT ‘OpenRowset/OpenDatasource’ of component ‘Ad Hoc Distributed Queries’ because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of ‘Ad Hoc Distributed Queries’ by using sp_configure. For more information about enabling ‘Ad Hoc Distributed Queries’, search for ‘Ad Hoc Distributed Queries’ in SQL Server Books Online.

This article will show how to resolve this error so your queries may go on as intended.

Ad Hoc

Let’s first take a look at a pretty typical type of query I use routinely to check various things on my lab servers.
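A sketch of that style of query; the server name, connection string, and inner query are placeholders:

-- Remote query via OPENROWSET against a lab server
SELECT src.name, src.create_date
FROM OPENROWSET('SQLNCLI',
     'Server=LabServer1;Trusted_Connection=yes;',
     'SELECT name, create_date FROM master.sys.databases') AS src;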

This is a query that I use (or something like it) to retrieve various Extended Events data from different lab servers. Notice that I use the openrowset method to query my linked server. If the server is not configured for ad hoc distributed queries, I will receive the error previously noted. The solution for that error is to enable the setting, which we can do via the following query.
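Enabling the setting (it is an advanced option, so show advanced options comes first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;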

Once enabled, openrowset queries will work across linked servers. The setting needs to be enabled on the local server in order to send the ad hoc query across to the remote server. With the setting now enabled, running the query that threw the error (from above) now returns the following results.

The Wrap

This article took a look at an error that may occur depending on your use of linked servers and openrowset. Documentation states that this is something that should be done infrequently and alludes to the issue being resolved through the use of linked servers. In my case, the error crops up precisely when using openrowset to query my linked servers. I find that using openrowset to query the linked server is far more reliable, efficient, and better performing. YMMV.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Configuration Manager is Corrupt

Categories: News, Professional, Security, SSC
Comments: 1 Comment
Published on: December 17, 2019

The longer one works in the IT industry, the more likely it becomes to experience a broken system in some capacity. One minute it is working and the next it is not. This is true even of systems that are properly maintained and secured. Stuff happens and pieces break here and there.

I had the distinct pleasure of experiencing this exact problem. One of the essential tools in the DBA tool chest is the SQL Server Configuration Manager. Usually everything works fine here – until it doesn’t. This is a critical component to a well configured server. If nothing else, a DBA should be well enough versed to know that services for SQL Server should be bounced and configured from here. If you were unaware of that requirement – you now know.

Lesser known is the fact that this tool uses WMI to view or change various server settings. WMI allows for a uniform method of interacting with these services and configurations. Almost all of the time, that interaction is seamless and the DBA is none the wiser. And then it doesn’t work, and the DBA grows more grey hairs.

WMI Provider Issues

So you come along one day to validate some service configurations for the SQL Server services and instead of the SQL Server Configuration Manager opening as expected, you are greeted with the following error message.

Cannot connect to WMI provider. You do not have permission or the server is unreachable. Note that you can only manage SQL Server 2005 and later servers with SQL Server Configuration Manager. Invalid namespace [0x8004100e]

Your first thought might be that the issue is caused by UAC, so you try to run the SQL Server Configuration Manager as administrator. Sadly, that attempt results in the exact same error.

What to do? What to do?

Well, the problem is that the WMI provider has become corrupt or has somehow been removed. This can occur whether you have uninstalled a SQL Server instance or not (some documentation suggests the root cause only arises when you uninstall an instance). For me, this issue has arisen many times and never has an instance been uninstalled. No worries either way, because the solution is simple – run a statement similar to the following.
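The statement, run from an elevated command prompt (the path assumes a default installation):

mofcomp "%programfiles(x86)%\Microsoft SQL Server\VERSIONNUMBER\Shared\sqlmgmproviderxpsp2up.mof"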

There are two caveats with this script: 1) you must run it from an elevated command prompt, and 2) you must replace “VERSIONNUMBER” with the correct version number matching your version of SQL Server from the following table.

SQL Version Name                Version Number
Microsoft SQL Server 2005       90
Microsoft SQL Server 2008 R2    100
Microsoft SQL Server 2008       100
Microsoft SQL Server 2012       110
Microsoft SQL Server 2014       120
Microsoft SQL Server 2016       130
Microsoft SQL Server 2017       140
Microsoft SQL Server 2019       150

If you execute this command and see an error containing the code 0x80041003, then you have not executed the command with adequate permissions or elevation. You may need to find an administrator for the box and/or execute from an elevated command prompt (e.g. “Run As Administrator”).

The Wrap

This article took a look at a rather annoying little error that can crop up out of the blue when trying to open the SQL Server Configuration Manager. Not only do we get to see that error and the cause, but we also get to see an extremely easy fix for the problem. Employing this easy fix will have you back in business administering the SQL Server services in the proper fashion in very short order.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

When Too Much is Not a Good Thing

In my previous article, I demonstrated some fun ways of locking down access for certain users to ensure they did not have data access while at the same time allowing them to review the database schema. The solution provided had the added benefit of reducing the chances that such a user could unknowingly cause a server outage.

In that solution, I showed the value of the “View Definition” permission. There are a couple of caveats to this permission that could lead to unwanted results.

These unwanted results often occur due to lack of attention to detail, use of internet examples, or misunderstanding of the requirements – and always from granting way too much access. I am going to explore a couple of these erroneous methods and explain why they absolutely bug me.

You Get Everything!

Let’s start with the most common misstep that I see. A user requests access to the database, and the stand-in DBA takes the easy route. The easy route here is to grant the requester sa access, dbo access, and every other database and server role there is, because it is easier to just tick the boxes.

This method of assigning permissions happens far too frequently. I have seen it in hundreds of databases for clients. I have also written about it here.

You see, once you grant somebody sa (sysadmin) access, none of the other permission grants or denies matter. Sysadmin overrides everything! In this particular case, it would be way too much access for a user who just needs to see the schema. Be extremely careful about when, and to whom, you grant sysadmin access.

Next up in the realm of granting way too much access is the far too frequent practice of assigning permissions to the public role in each database. I have written about this practice as well – here and here. This one really raises my ire. Granting permissions to the public role crosses into the realm of high-risk and downright negligent. Let’s see how some internet examples “solve” our problem using the public role.

First, using the test user and role we created in yesterday’s article, let’s confirm that we do not have access to view definitions outside of the assigned role permissions.
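One way to check is to list who holds the permission in the database (a sketch):

-- Who has VIEW DEFINITION here?
SELECT pr.name AS grantee,
       pe.permission_name,
       pe.state_desc
FROM sys.database_permissions pe
JOIN sys.database_principals pr
  ON pe.grantee_principal_id = pr.principal_id
WHERE pe.permission_name = N'VIEW DEFINITION';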

After executing that query, I can confirm that the public role does not have the “View Definition” permission. Let’s now revoke the permission from the role and confirm that no schema definitions can be viewed.
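The revoke itself might look like this (BusinessViewDef is the role from the prior article):

-- Strip the permission from the custom role
REVOKE VIEW DEFINITION FROM [BusinessViewDef];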

Now, knowing that neither the user nor the role has the permission, let’s go ahead and assign the permission to the public role.
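The internet-example anti-pattern boils down to this:

-- Don't do this: every user in the database inherits public's grants
GRANT VIEW DEFINITION TO [public];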

Just like magic, a user that should not be able to view schema definitions can now see information they are not supposed to see. This is a security problem. Just to confirm, let’s compare the permissions held by the principal to those held by the role.
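A sketch of that comparison (testvwdef and BusinessViewDef are the demo principals):

-- Compare what the user, the role, and public each hold
SELECT pr.name AS grantee,
       pe.permission_name,
       pe.state_desc
FROM sys.database_permissions pe
JOIN sys.database_principals pr
  ON pe.grantee_principal_id = pr.principal_id
WHERE pr.name IN (N'testvwdef', N'BusinessViewDef', N'public');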

Keep this in mind as you head down the dark path of granting permissions to the public role. Just because it is easy doesn’t make it right. On a similar note, you should proceed cautiously with scripts from the internet that suggest granting permissions to the public role. They are wrong!

Finally, we move down into the realm of “only slightly better” but still way over the top for the requirements. This last method uses the “VIEW ANY DEFINITION” flavor of the permission. The “ANY” keyword scopes the permission to every database on the instance. Let’s clean up the last example by revoking the permission from public so we have a blank slate.
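The cleanup:

-- Back to a blank slate
REVOKE VIEW DEFINITION FROM [public];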

With permissions revoked, I will now grant View Any Definition to the test user. Why the user? Well, because I don’t have that role created in every database (more on the benefits of that approach later). This permission is server scoped, so make sure you are in the master database when setting it, else you risk seeing this error.

Msg 4621, Level 16, State 10, Line 22
Permissions at the server scope can only be granted when the current database is master

Yup, that just made my blood curdle a bit. Let’s check out the perms now from the DBA database (a reminder: the permission change was made in the master database and is a server-scoped change).
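From master, the grant and a quick check might look like this (a sketch; testvwdef is assumed to exist as a login):

USE [master];
GO
GRANT VIEW ANY DEFINITION TO [testvwdef];
GO
-- Server-scoped permissions live in sys.server_permissions
SELECT pr.name AS grantee,
       pe.permission_name,
       pe.state_desc
FROM sys.server_permissions pe
JOIN sys.server_principals pr
  ON pe.grantee_principal_id = pr.principal_id
WHERE pe.permission_name = N'VIEW ANY DEFINITION';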

As we can see here, testvwdef now effectively has the View Definition permission in every database. If there happens to be a sensitive database on the instance, you may have exposed yourself to risk by doing this.

Performing the same tests as previously done would reveal similar results – except on a broader scope.

There are some caveats that help reduce this risk, but they do not remove the need to be responsible or the need to avoid the “ANY” keyword. Recall that I granted View Any to the individual user instead of the role? That was simply because I did not have that role in each database. If we try to grant permissions to a role that does not exist in the master database while working from that scope, we see this nifty little message.

Msg 15151, Level 16, State 1, Line 25
Cannot find the login ‘BusinessViewDef’, because it does not exist or you do not have permission.

That is one measure that helps prevent this over-permissioning problem. The next caveat is a minor stop-gap and can easily be overlooked should the user ever be added to the database in the future: for “ANY” to work in its full splendor, the principal to which it was granted must also have been granted “connect” on the database.

 

The Wrap

This article explored various options for minimizing the risk of over-extending permissions to users. With a little planning and care, we can find ways to fulfill user requests without compromising the environment.

Every now and then, it will require extra effort to get the pertinent details from the requester. That said, with that extra effort you will find satisfaction and calm, knowing your environment is secure and that you are providing quality service to your customers.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.
