Life Support 2008 – Audit Logons

With SQL Server 2008 and 2008 R2 limping along toward terminal illness (End of Life Support was July 9, 2019), it should come as no surprise that the recommendation is to migrate or upgrade affected databases and servers to newer technology.

Planning to upgrade/migrate requires a fair amount of prep work. Some of that prep work involves auditing your server for any users that may still be using the instance.

Where does one even begin in order to audit those logon events in SQL 2008 or 2008R2? Some may say to use SQL Server Audit, but that is an Enterprise-only feature in 2008 and R2. If we were on a newer version of SQL Server, Extended Events would be the easy choice. Unfortunately, XE was not adequately mature in 2008 or R2 – it simply doesn’t have the appropriate events in these versions of SQL Server. What’s left?

How about a server side trace? Yes, that was a bit difficult to say. Server side trace does have just the right events for us to use to discover who is using the server and which principals can be eradicated. A big benefit here is that a server side trace does not come with a service outage as would be required for other methods.

Server Side Trace

How do we go about creating a trace to capture these events? It is common knowledge that scripting a server side trace by hand is not very intuitive. It is also not very practical to run a Profiler session and leave it running for weeks while you do your due diligence. There is a shortcut available that allows us to run a server side trace, but it does require the use of Profiler – for just a tiny bit. You can start here to find how to do that.

Great, we have a method to create the script. What needs to go into this session? Let’s take a look at that. Ignoring the initial steps to start a Profiler session (other than to use a blank template), let’s jump to the event selection tab. From there, we will make two selections (Audit Login and Audit Login Failed in the Security Audit section) as shown in the next image.

Once the events are selected, go ahead and click run. From there you may stop the session and then continue with the instructions from the previous link on how to script a profiler session.

After scripting this session, I get a script that looks like the following.
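
The script below is a trimmed, approximate reconstruction of what Profiler generates rather than the exact scripted output. The event ids (14 = Audit Login, 20 = Audit Login Failed) and column ids can be verified against sys.trace_events and sys.trace_columns, and the output path is a placeholder to change for your environment.

```sql
DECLARE @TraceID int,
        @maxfilesize bigint = 250,
        @on bit = 1;

-- 2 = TRACE_FILE_ROLLOVER; the path is a placeholder (no .trc extension - it is added automatically)
EXECUTE sp_trace_create @TraceID OUTPUT, 2, N'C:\Database\Traces\LogonAudit', @maxfilesize, NULL;

-- Event 14 = Audit Login, Event 20 = Audit Login Failed
-- Columns: 1 = TextData, 8 = HostName, 10 = ApplicationName, 11 = LoginName, 12 = SPID, 14 = StartTime, 35 = DatabaseName
EXECUTE sp_trace_setevent @TraceID, 14, 1, @on;
EXECUTE sp_trace_setevent @TraceID, 14, 8, @on;
EXECUTE sp_trace_setevent @TraceID, 14, 10, @on;
EXECUTE sp_trace_setevent @TraceID, 14, 11, @on;
EXECUTE sp_trace_setevent @TraceID, 14, 12, @on;
EXECUTE sp_trace_setevent @TraceID, 14, 14, @on;
EXECUTE sp_trace_setevent @TraceID, 14, 35, @on;
EXECUTE sp_trace_setevent @TraceID, 20, 1, @on;
EXECUTE sp_trace_setevent @TraceID, 20, 8, @on;
EXECUTE sp_trace_setevent @TraceID, 20, 10, @on;
EXECUTE sp_trace_setevent @TraceID, 20, 11, @on;
EXECUTE sp_trace_setevent @TraceID, 20, 12, @on;
EXECUTE sp_trace_setevent @TraceID, 20, 14, @on;
EXECUTE sp_trace_setevent @TraceID, 20, 35, @on;

-- Start the trace and report its id
EXECUTE sp_trace_setstatus @TraceID, 1;
SELECT @TraceID AS TraceID;
```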

I created this script from SSMS 18.0 and find it interesting that the script says “Created by: SQL Server 2019 CTP2.4 Profiler”. Despite the very recent version of SSMS used to create this script, this script will work perfectly fine on SQL Server 2008 or R2.

Once I start the trace on a server, I am ready to do just a little bit more. I want to verify what this script means. After all, it is a bunch of numeric values. Let’s look at that with this next query and then compare it to the initial script used to create the trace. The following query requires SQL 2012 or later.
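
A sketch of that kind of query follows (my actual version is what carries the 2012+ requirement; this approximation sticks to the same trace catalog views). It translates the numeric ids into readable event and column names.

```sql
SELECT t.id     AS trace_id,
       te.name  AS event_name,
       tc.name  AS column_name
FROM sys.traces t
CROSS APPLY sys.fn_trace_geteventinfo(t.id) AS ei
INNER JOIN sys.trace_events  te ON ei.eventid  = te.trace_event_id
INNER JOIN sys.trace_columns tc ON ei.columnid = tc.trace_column_id
WHERE t.is_default = 0              -- skip the default trace
ORDER BY t.id, te.name, tc.name;
```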

This produces output similar to the following.

As you scroll through the list, you can see the Event Name along with the column name for all events/columns that have been added to the trace. I have highlighted the first event / column mappings to illustrate this relationship.

Cool! Now, I know the session does indeed contain the requisite data that I wanted so it is time to start checking to see what is happening on the server.

Now you are all set to start figuring out which logins (if any) are still connecting to the server and using the databases on that server.

The Wrap

This article has shown how to audit the logon events for a SQL 2008/R2 instance. It also happens to be an excellent follow up to another recent article – here. As you begin to plan your migration off of the SQL 2008 dinosaur, it is essential to baseline the activity and use of the server. This audit session will help you do exactly that!

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Finding Data Gaps

Published on: July 11, 2019

Quality data is a goal we all strive to attain. Nobody wants to have bad data. Yet, despite our best efforts, the problem of bad data has a way of presenting itself over and over again.

One fairly common indicator of bad data is an absence of data. Maybe you have a table in the database that is expected to hold contiguous data, yet somehow there are gaps here and there. Worse, maybe those gaps have gone entirely unnoticed.

This phenomenon is also known as the gaps and islands issue in the data realm. How do you know if you have an issue or not? This article will help you with a sample query that can find those gaps of missing data. From there, your journey is just beginning.

Finding the Gaps

I have this database that I use for mentoring DBAs of various levels (help desk on up). In the database, I have a schema dedicated to daily trivia. Since the intent is for it to be a daily trivia question, I expect to have a question for every day during the time interval in question. Every now and again, however, I forget to upload a question or I mistakenly believe I have already loaded a question for the day. Typically, I try to do this in advance so as to not be scrambling at the last minute to get a question built for the email to pick it up.

Due to the factors just explained, I end up with gaps in my daily question data. The question now becomes how to find that I have gaps. I have a couple of queries here that will save you plenty of time if you follow the same principles applied here.

First let’s start with a bit of a tricky version of the query that builds a numbers table and then spits out all of the dates that are missing a question (only for the range of data within the table that is).
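
A sketch of that query follows. The Trivia.Questions table and its QuestionDate column are stand-ins for my actual trivia schema, so adjust the names to fit your own table.

```sql
WITH e1 (n) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS t (n)),
     e2 (n) AS (SELECT 1 FROM e1 a CROSS JOIN e1 b),        -- 100 rows
     e4 (n) AS (SELECT 1 FROM e2 a CROSS JOIN e2 b),        -- 10,000 rows (about 27 years of days)
     nums (n) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 FROM e4),
     bounds AS (SELECT MIN(QuestionDate) AS StartDate, MAX(QuestionDate) AS EndDate
                FROM Trivia.Questions)
SELECT DATEADD(DAY, n.n, b.StartDate) AS MissingDate
FROM nums n
CROSS JOIN bounds b
LEFT JOIN Trivia.Questions q
       ON q.QuestionDate = DATEADD(DAY, n.n, b.StartDate)
WHERE DATEADD(DAY, n.n, b.StartDate) <= b.EndDate
  AND q.QuestionDate IS NULL
ORDER BY MissingDate;
```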

This particular numbers table trick has been around for quite a while and runs fast enough. However, if you are a performance nut, and for this particular case, there is a method that is a wee bit better (usually the dynamic numbers table just shown is sufficiently fast).
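
A sketch of that alternative, against the same stand-in table, looks something like this.

```sql
WITH bounds AS (SELECT MIN(QuestionDate) AS StartDate, MAX(QuestionDate) AS EndDate
                FROM Trivia.Questions),
     dates AS (SELECT StartDate AS TheDate, EndDate
               FROM bounds
               UNION ALL
               SELECT DATEADD(DAY, 1, TheDate), EndDate
               FROM dates
               WHERE TheDate < EndDate)
SELECT d.TheDate                    AS MissingDate,
       DATENAME(WEEKDAY, d.TheDate) AS MissingDayOfWeek   -- extra output: day of the week
FROM dates d
LEFT JOIN Trivia.Questions q ON q.QuestionDate = d.TheDate
WHERE q.QuestionDate IS NULL
ORDER BY MissingDate
OPTION (MAXRECURSION 0);            -- allow more than the default 100 recursions
```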

This method is the much maligned recursive CTE method. In my testing it runs consistently faster with a lower memory grant, but it does cause a bit more IO to be performed – a trade-off to consider. Both queries return the desired data set, which happens to be my missing question days. The only difference is that I have added an extra output to the second query to tell me the day of the week each missing question falls on. Maybe I forgot to enter a question because it was a weekend day, or maybe I opted not to create one at all because the day lands on a holiday. Let’s take a small peek at the results.

That is a pretty good swath of missing data there. If you look closely, not all days from November 10th through December 1st are missing, just a handful. Alas, it looks like I have some work cut out for me because I have some weekdays, weekends, and Holidays in there as well as nearly the entire month of December. Finding the gaps is the easy part once you have a solid query to start with and I believe this query is just the right tool for that.

What if I wanted to check for a rolling period of time instead of just checking for valid data between a min and max range of data already in the table? That is easy enough as well with some minor tweaks. Let’s look at one more query option here.
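
A sketch of that option follows; the numbers table is assembled entirely from VALUES lists (0 through 999 days) and the range is anchored to a rolling year ending today. The Trivia.Questions table remains a stand-in for my actual schema.

```sql
SELECT DATEADD(DAY, ones.n + 10 * tens.n + 100 * hundreds.n,
               DATEADD(YEAR, -1, CAST(GETDATE() AS date))) AS MissingDate
FROM       (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS ones (n)
CROSS JOIN (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS tens (n)
CROSS JOIN (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS hundreds (n)
LEFT JOIN Trivia.Questions q
       ON q.QuestionDate = DATEADD(DAY, ones.n + 10 * tens.n + 100 * hundreds.n,
                                   DATEADD(YEAR, -1, CAST(GETDATE() AS date)))
WHERE ones.n + 10 * tens.n + 100 * hundreds.n
      <= DATEDIFF(DAY, DATEADD(YEAR, -1, CAST(GETDATE() AS date)), CAST(GETDATE() AS date))
  AND q.QuestionDate IS NULL
ORDER BY MissingDate;
```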

This third code snippet illustrates a slight variation on the previous two in that it utilizes the Table Value Constructor to build the numbers table. All in all, it runs with blazing speed and produces the results I am seeking – but for a rolling year instead of a static range of data.

The Wrap

Gaps and islands will occur in your data. When they do, you have your choice of three high performing scripts to help you investigate where the gaps and islands exist in your data.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Event Files on Linux

TSQL Tuesday

The second Tuesday of the month is upon us once again. That means it is time for another group blog party called TSQLTuesday. This party, started by Adam Machanic, has now been going long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a roundup of all TSQLTuesdays over here.

This month, just about every SQL Server Data professional is being asked to get out of their comfort zone and talk about Linux (there are some out there that are very comfortable with Linux, but the vast majority are not 😉 ).

This is a topic that is near and dear to the heart of our organizer Tracy Boggiano (b | t). In Tracy’s invite, she says “While I know it takes a while to adopt new technologies; I was wondering what it would take for people to adopt SQL on Linux. Alternatively, I’m offering up for you to blog about what everyone should know when working with SQL on Linux or anything else related to SQL running on Linux.”

That pretty much leaves the door wide open, right?

Event Files

For the most part, things work the way you might expect them to work on Windows – except now it is on Linux. Sure, some things are different, but SQL Server itself is largely the same. That similarity, for the most part, extends all the way down into Extended Events (XEvents) as well. There is one significant divergence, however, and that is when it comes to specifying the event_file target for a new session. Let’s dive into that just a bit.

Let’s take a common setup for an XEvent session.
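
Something along these lines (the session name, the event choice, and the predicate are placeholders of my own; the important parts are the xp_create_subdir call and the Windows-style path handed to the event_file target):

```sql
-- Try to create the output folder the way we might on Windows
EXECUTE master.dbo.xp_create_subdir 'C:\Database\XE\';
GO

CREATE EVENT SESSION [PREEMPTIVE_OS_PIPEOPS] ON SERVER
ADD EVENT sqlos.wait_info
(
    WHERE (opcode = 1 AND duration > 0)    -- placeholder predicate; a real session would filter further
)
ADD TARGET package0.event_file
(
    SET filename = N'C:\Database\XE\PREEMPTIVE_OS_PIPEOPS.xel'   -- Windows-style path
)
WITH (STARTUP_STATE = OFF);
GO
```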

This will fail before the query really even gets out of the gate. Why? The proc xp_create_subdir cannot create the directory because it requires elevated permissions. The fix for that is easy enough – grant permissions to write to the Database directory after creating it while in sudo mode. I will get to that in just a bit. Let’s see what the errors would look like for now.

Msg 22048, Level 16, State 1, Line 15
xp_create_subdir() returned error 5, ‘Access is denied.’
Msg 25602, Level 17, State 23, Line 36
The target, “5B2DA06D-898A-43C8-9309-39BBBE93EBBD.package0.event_file”, encountered a configuration error during initialization. Object cannot be added to the event session. The operating system returned error 5: ‘Access is denied.
‘ while creating the file ‘C:\Database\XE\PREEMPTIVE_OS_PIPEOPS_0_132072025269680000.xel’.

Let’s resolve the folder issue. I will create the Database folder (from a console session on the server), and then I will take ownership of that folder.

From there, it is also advisable to grant the SQL group access to this folder via the chgrp command. Once that is done, re-running the entire session creation will magically work – including that Windows-based create subdir proc.

Alternative #2

Maybe we just want to do things via the default method. If so, we can do this and it just works.
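
In other words, specify only a file name (no path) for the event_file target and the files land in the default log directory (/var/opt/mssql/log on Linux). A sketch, using the same placeholder event:

```sql
CREATE EVENT SESSION [PREEMPTIVE_OS_PIPEOPS_Default] ON SERVER
ADD EVENT sqlos.wait_info
(
    WHERE (opcode = 1 AND duration > 0)
)
ADD TARGET package0.event_file
(
    SET filename = N'PREEMPTIVE_OS_PIPEOPS_Default.xel'   -- no path: lands in the default log directory
)
WITH (STARTUP_STATE = OFF);
```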

Alternative #3

And of course, there is always this option. Unfortunately, this means keeping multiple session scripts in source control in order to maintain the differences between Windows and Linux as illustrated in these two examples.
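
That option is to hard-code the Linux-style path, roughly as follows (the /Database/XE directory here assumes the folder created and permissioned earlier):

```sql
CREATE EVENT SESSION [PREEMPTIVE_OS_PIPEOPS_Linux] ON SERVER
ADD EVENT sqlos.wait_info
(
    WHERE (opcode = 1 AND duration > 0)
)
ADD TARGET package0.event_file
(
    SET filename = N'/Database/XE/PREEMPTIVE_OS_PIPEOPS_Linux.xel'   -- Linux-style path
)
WITH (STARTUP_STATE = OFF);
```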

With all three sessions now on the server, I can query my target paths to confirm the correct paths have been used.
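
A query along these lines pulls the configured filename for each event_file target out of the session metadata:

```sql
SELECT es.name   AS session_name,
       esf.name  AS field_name,
       esf.value AS field_value
FROM sys.server_event_sessions es
INNER JOIN sys.server_event_session_targets est
        ON es.event_session_id = est.event_session_id
INNER JOIN sys.server_event_session_fields esf
        ON est.event_session_id = esf.event_session_id
       AND est.target_id = esf.object_id
WHERE est.name = 'event_file'
  AND esf.name = 'filename'
ORDER BY es.name;
```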

This produces the following results for me.

Perfect, everything looks to be working and properly configured. Well, except for that session that is using the default directory – gag.

Wrapping it Up

Extended Events is a powerful tool with plenty of ease of use and flexibility. This flexibility allows the DBA to capably monitor the server for any issue, be it small or large. This article demonstrated how to create an Event Session on SQL Server that is running on Linux. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

Is That Database in Use?

Does anybody know who or what is using this database? Better yet, is the database being used by anything at all? Have you ever asked those types of questions of your staff in an environment you have inherited?

As silly as it might sound, these are legitimate questions. It is not uncommon to have a relic database hanging around after an application has long since been retired. The database just continues to live on via life support in the dark recesses of your database server. Everybody has forgotten about it yet it consumes precious resources to do little more than exist.

But how do you go about discovering if the database is truly required or if it is just chewing up space and CPU cycles? The answer is to start by discovering whether there is any activity in the database at all. Extended Events is an ideal method to audit your database and discover if there is truly activity in the database or not.

DB In Use?

Depending on the version of SQL Server you are using, the method is going to be a little bit different. Each method I use still relies on Extended Events (XE), they just require some slight changes and subtle nuances (particularly for those pesky 2008 and R2 instances out there). Let’s focus on the 2008 method this time around and we will revisit the updated version in a later article.

2008/R2

Let’s first start with the XE session that works well for discovering the frequency of use a database may be encountering.
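
The session below is a sketch of what I use (the session name is my own); the notes I mention next are inlined as comments. It relies on the 2008-era asynchronous_bucketizer target to count transactions per database_id.

```sql
CREATE EVENT SESSION [DBInUse] ON SERVER
ADD EVENT sqlserver.database_transaction_begin  -- fires when a transaction starts; completion is irrelevant here
(
    ACTION
    (
        sqlserver.database_id,
        sqlserver.database_context              -- deprecated in later versions; kept for extra troubleshooting
    )
)
ADD TARGET package0.asynchronous_bucketizer     -- renamed to histogram in 2012+
(
    SET filtering_event_name = 'sqlserver.database_transaction_begin',
        source_type = 1,                        -- 1 = bucket on an action rather than an event field
        source = 'sqlserver.database_id'        -- action source requires two-part (package.action) naming in 2008/R2
)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [DBInUse] ON SERVER STATE = START;
```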

Inline with the script, I left some interesting notes. First, the note in the target section is of particular interest because it lays out a requirement for this kind of target in 2008/R2: when using the action source type, the package name must be specified along with the action in two-part naming format.

Next, in the actions, I have a note about sqlserver.database_context. This particular action is deprecated in later versions. Do I need it in this session? No. I put it in there for additional troubleshooting/exploration.

Lastly, I have a note about the event name. I chose database_transaction_begin because this is fired when a database transaction starts. I don’t really care if the transaction completes. I just want to know if an attempt was made to use that database and the data therein.

If I comment out the deprecated action (database_context), I could actually use this session (as constructed) on later versions of SQL Server. It doesn’t matter that the asynchronous_bucketizer has been replaced by the histogram target; the session will still create and work properly. SQL Server knows to automatically swap the deprecated target for the appropriate one when creating an XE session on a newer server.

Since this is 2008/R2, we have to use TSQL in order to parse the data. The following query will do that for us!
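
A query in this spirit shreds the bucketizer XML into a per-database count (the session name matches the sketch above):

```sql
SELECT DB_NAME(slot.value('(value)[1]', 'int')) AS database_name,
       slot.value('(@count)[1]', 'bigint')      AS transaction_count
FROM
(
    SELECT CAST(st.target_data AS xml) AS target_data
    FROM sys.dm_xe_sessions s
    INNER JOIN sys.dm_xe_session_targets st
            ON s.address = st.event_session_address
    WHERE s.name = 'DBInUse'
      AND st.target_name = 'asynchronous_bucketizer'
) AS tgt
CROSS APPLY tgt.target_data.nodes('//Slot') AS n (slot)
ORDER BY transaction_count DESC;
```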

After executing that query, I will see results listing the activity of each database since the session was started. Here’s an example.

Here we can see there are indeed some databases that are still in use on this server. If we are looking to retire the instance, or migrate relevant databases to a new server, we have better information about how to go about planning that work. Based on this data, I would be able to retire the ProtossZealot database but would be required to do something with the ZergRush and BroodWar databases. In the case of the ProtossZealot database that is not being used, we now have evidence to present back to the team that the database is not used. It is now up to those insisting on keeping it around to justify its existence and document why it must remain intact.

Wrapping it Up

Extended Events is a powerful tool with plenty of ease of use and flexibility. This flexibility allows the DBA to capably monitor the server for any issue, be it small or large. This article demonstrates how to use Extended Events to determine if a database is being used by someone or something. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

Join a List of Values

Published on: July 2, 2019

There are multiple ways of retrieving data from a table when you have a list of values. Sometimes, the method is straightforward and sometimes we just can’t see the right solution.

Some of the methods to retrieve data from a table (based on a list of values) include one or more of the following: joining to another table, throwing an IN clause at it, dumping the values into a temp table, using sub-queries, and using a table value constructor.

Let’s focus on the table value constructor. This one seems to trip a few people up from time to time. First off, what is it?

Table Value Constructor

The table value constructor is basically a virtual table, not too different from a CTE or a subquery (in that they are all virtual tables of sorts). The table value constructor, however, can be combined with either of those other constructs; it is a set of row expressions that get put into this virtual table in a single DML statement.

So, this virtual table contains a set of values that we define. Let’s go ahead and take a look at an example set of data that we can cultivate into a table value constructor.

Now, let’s say that I want to get all of the trace events that might correlate to those specific values, all I need to do is add this list of values to a query in the appropriate place. First, let’s see what it looks like when I add it via a CTE.
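
In the sketch below, the list of values is a hypothetical stand-in (a handful of keywords matched with LIKE); the shape of the query is what matters.

```sql
WITH SearchTerms (term) AS
(
    SELECT term
    FROM (VALUES ('LOCK'), ('LATCH'), ('FILE'), ('MEMORY')) AS tvc (term)   -- table value constructor
)
SELECT te.trace_event_id,
       te.name AS event_name,
       st.term
FROM SearchTerms st
INNER JOIN sys.trace_events te
        ON te.name LIKE '%' + st.term + '%'
ORDER BY st.term, te.name;
```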

In this case, I have thrown the table value constructor into a CTE. There could be a few good reasons to do it this way such as it being easier to read or maybe some corporate style policy. For me, I like the readability and the tight grouping of the join clauses without needing to scroll. Running the query in this case would produce results similar to the following.

Now, what if I really prefer to not use a CTE? Easy enough, let’s look at this next example. I will use the same list of values as I search for various wait_types.
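
Using the same stand-in list of values:

```sql
SELECT ws.wait_type,
       ws.waiting_tasks_count,
       ws.wait_time_ms,
       st.term
FROM (VALUES ('LOCK'), ('LATCH'), ('FILE'), ('MEMORY')) AS st (term)   -- aliased like a table, column named after the alias
INNER JOIN sys.dm_os_wait_stats ws
        ON ws.wait_type LIKE '%' + st.term + '%'
ORDER BY st.term, ws.wait_time_ms DESC;
```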

Notice here that I have a little bit of a different query. I only have one table to join to after the table value constructor. That said, it works just the same as the CTE example. Once the data is constructed, I alias it like a table, then I define the column names. This query will now return results similar to the following.

Both of these examples have been pretty easy in building a single attribute array for the table value constructor. What if we had the need for multiple columns in our list of data?
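
A sketch with a second (hypothetical) value added to each row of the list:

```sql
SELECT ws.wait_type,
       ws.waiting_tasks_count,
       st.term,
       st.category
FROM (VALUES ('LOCK',   'Locking'),
             ('LATCH',  'Latching'),
             ('FILE',   'IO'),
             ('MEMORY', 'Memory')) AS st (term, category)   -- second value and second column definition added
INNER JOIN sys.dm_os_wait_stats ws
        ON ws.wait_type LIKE '%' + st.term + '%'
ORDER BY st.category, ws.wait_type;
```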

Notice the addition of an extra value (separated by a comma) within each set of parentheses and the additional column definition added after the alias for the table value constructor. This is a pretty simple modification. Upon execution, I can expect results such as the following.

The Wrap

Being able to take a list of data and construct it into a virtual table for further querying and data analysis is a pretty cool thing to do. The setup and syntax are fairly simple – it just takes a bit of practice and effort.

This article demonstrated three different examples of using a table value constructor to join a list of data to other tables. See what other kinds of examples you can come up with and share them.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

It’s an Honor: Microsoft MVP

Published on: July 1, 2019

Today (July 1st) is the notification day for all Microsoft MVPs. Microsoft changed the process a couple of cycles back so that all awardees are notified on the same day, concentrating all of that anticipation into a single emotional day. I received my notification early this morning that I was awarded the MVP again for the 2019-2020 cycle. This is my 6th award overall.

I am humbled to be among so many great professionals and honored at the same time for having been chosen. Are you curious about this MVP thing? Well, it’s sorta like being named MVP for a sports league while at the same time being a little bit different. Here is what Microsoft has to say about the MVP. I personally think you may get more out of reading about the MVP here (a little bias might be involved).

This is a community award for your efforts to promote and help others work with Microsoft products. I was awarded for my efforts in the Data Platform. Many others receive the award for their efforts in AI, PowerShell, or Azure (as examples). Generally, the people that are awarded are seen as being passionate about the Microsoft product. These people spend a lot of time, effort, energy, and sometimes money to help share their enthusiasm for the product.

Microsoft is doing a fantastic job at recognizing people for their efforts in promoting the MS brand. Not everybody that promotes that brand gets to be an MVP – sadly. If you didn’t make it this time, keep trying. I just advise that the goal be more about the community than the award. The MVP award is great, but giving to the community is the real reward.

I want to congratulate all those awarded a Microsoft MVP award past, present and future. I also want to thank all of those who have given to the community regardless of whether an award was received or not. There are many of you out there that have been a positive influence whether you know it or not – just for giving to the community and helping somebody else. THANK YOU!

Detail Job History – Back to Basics

Recently, I covered the need to understand job failure frequency and to know the value of your SQL Agent jobs. You can read the specifics in the article – here.

Sometimes, just knowing the frequency of the job failure is good enough. Sometimes, more information is helpful. Having more information is particularly helpful when meeting with the business to discuss the validity of the job in question.

What do you do in times like this? The most basic answer to that question is – get more data. But that barely scratches the surface. The real question being asked there is how do you go about gathering that data?

There are two methods to gather the data – the hard way and the easy way. Do you like to work hard? Or would you rather work more efficiently?

Efficiency Matters

As was discussed in the previous article, I prefer to do things just a little bit less manually where possible. The consistency of a script matters, but it also is just so much faster than doing things the hard, manual, iterative way. So let’s build a little bit on the script from the previous article.
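
A sketch of that expanded script follows; it joins the job step detail (subsystem and command) from msdb to the aggregated outcome counts.

```sql
USE msdb;
GO

SELECT j.name   AS job_name,
       js.step_id,
       js.step_name,
       js.subsystem,                       -- e.g. TSQL, CmdExec, SSIS
       js.command,
       CASE jh.run_status
            WHEN 0 THEN 'Failed'
            WHEN 1 THEN 'Succeeded'
            WHEN 2 THEN 'Retry'
            WHEN 3 THEN 'Canceled'
            ELSE 'In Progress'
       END     AS run_status,
       COUNT(*) AS status_count
FROM dbo.sysjobs j
INNER JOIN dbo.sysjobsteps js
        ON j.job_id = js.job_id
INNER JOIN dbo.sysjobhistory jh
        ON js.job_id = jh.job_id
       AND js.step_id = jh.step_id
GROUP BY j.name, js.step_id, js.step_name, js.subsystem, js.command, jh.run_status
ORDER BY j.name, js.step_id, run_status;
```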

And here is a sample of the output.

With this script, I can quickly show which step is failing, what the command is for that step, what kind of process is running on that step, any passwords (in the event of a password-protected SSIS package), and of course the failure frequency. This is golden information at your fingertips. There is no need to click through the GUI to gather this information. You can get it quickly and easily in one fell swoop.

The Wrap

An important part of any DBAs job is to ensure database related jobs are running prim and proper. Sometimes that just doesn’t happen. When jobs are being overlooked, it is useful to be able to gather data related to consistency of job success or failure. This script will help you in your investigation efforts. In addition, I also recommend this article in your agent job audit efforts.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Database File Changes

Data professionals around the globe frequently find themselves occupied with figuring out why and when a file (data or log) for a database changed in size – whether that change was a growth or a shrink, and whether or not the change was expected to happen.

I have written previously about the need to monitor these changes and how to do it more efficiently (there is even a fail-safe method that occasionally works). As SQL Server improves, so does our ability to capture these types of events.

Given that SQL Server has undergone many enhancements, let’s take a look at the enhancements related to capturing the database file size changes.

Database File Related Events

If you read the articles mentioned previously, you will have encountered a previous article that includes an Extended Event session that can be used for tracking file changes. That session is really good enough in most cases. One major change that I would suggest right off the top involves the asynchronous file target. While that target will work on all editions of SQL Server since 2008, the name of the target was changed in SQL Server 2012 to event_file. No big deal there. The XEM file is also no longer necessary, so that piece can just be wiped out.

That said, what else has changed? Let’s cover some deprecated events that may get you frustrated if you encounter them. The following events should be avoided because they will do nothing (a couple of them have been covered in this previous article).

  • sqlserver.databases_log_growth – Databases log growth
  • sqlserver.databases_log_file_size_changed – Databases log file size changed
  • sqlserver.databases_data_file_size_changed – Databases data file size change
  • sqlserver.auto_shrink_log – Auto shrink log ended.

Each of the preceding events has been replaced by a single event called database_file_size_change.

And then there is this one, which I can’t seem to get to generate any events but which might possibly still be valid. In short, don’t be surprised one way or the other if it does something.

  • sqlserver.auto_shrink_log_caught_exception – Auto shrink log caught an exception.

Great, we have some events we can avoid. Now let’s look at some events we should consider.

  • sqlserver.database_file_size_change – Occurs when any of the data or log files for a database changes size. Use this event to monitor changes to the size of database files.
  • sqlserver.databases_shrink_data_movement – Databases shrink data movement
  • sqlserver.databases_log_file_used_size_changed – Databases log file used size changed (this one gets noisy – use judiciously).
  • sqlserver.databases_log_shrink – Databases log shrink

Now that we have some events that are viable as well as some events to avoid, let’s put a session together.
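
Here is a sketch of such a session; the session and file names are my own, and the noisy log_file_used_size_changed event is left commented out.

```sql
CREATE EVENT SESSION [FileSizeChanges] ON SERVER
ADD EVENT sqlserver.database_file_size_change
(
    ACTION (sqlserver.sql_text, sqlserver.client_hostname, sqlserver.username)
),
ADD EVENT sqlserver.databases_shrink_data_movement,
ADD EVENT sqlserver.databases_log_shrink
-- ADD EVENT sqlserver.databases_log_file_used_size_changed   -- valid, but very noisy
ADD TARGET package0.event_file
(
    SET filename = N'FileSizeChanges.xel'
)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [FileSizeChanges] ON SERVER STATE = START;
```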

All we need to do at this juncture is test the session.
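
My test was along these lines – a contrived sketch that assumes a scratch database named FileSizeTest with AUTO_SHRINK enabled, pumps in filler data to force growth, and then frees the space so a shrink has something to do.

```sql
-- Assumes a scratch database named FileSizeTest exists with AUTO_SHRINK enabled
USE FileSizeTest;
GO

CREATE TABLE dbo.FillerTable (id int IDENTITY(1, 1), filler char(8000) NOT NULL DEFAULT 'x');
GO

-- Pump in enough rows to force one or more data file growths
INSERT INTO dbo.FillerTable DEFAULT VALUES;
GO 20000

-- Release the space so autoshrink (or a manual shrink) has something to do
TRUNCATE TABLE dbo.FillerTable;
DBCC SHRINKFILE (FileSizeTest, 8);   -- assumes the logical data file name matches the database name
```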

Let’s drill into the database_file_size_change event and take a closer look.

There are a few things going on with this event. As previously mentioned, this event captures multiple different types of size-related changes. If you look at the sections that I have circled in red, you will note that there is a flag that tracks whether the size change was automatic. It doesn’t matter if it was a growth or a shrink; both can be automatic. Combined with that flag, the negative value in the size_change_kb field is how we can tell this particular change was an automatic shrink rather than a growth.

Next up, we have the green highlighted section. I have three distinct timestamps circled and separated by a dotted red line. Note the time difference between each of the groups. Each group of events is separated by 30 minutes. As it turns out, if you have Autoshrink enabled on your database, the timer is a 30 minute interval. Yes! Autoshrink just so happened to be enabled on this database – for testing purposes.

Wrapping it Up

Extended Events is a powerful tool with plenty of ease of use and flexibility. This flexibility allows the DBA to capably monitor the server for any issue, be it small or large. This article demonstrated how to use Extended Events to monitor for file size changes, and the same principles can be applied to other events you may need to investigate. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

In addition, through the power of XE, we were able to capture a quick glimpse into some internals related to the database engine. In this case, we were able to see autoshrink in action and discover the frequency that it runs on when enabled.

Job History – Back to Basics

How necessary is that SQL Server Agent job that you have running on the server? I ask that question of my clients on a routine basis.

Sometimes, I will ask that question as a routine part of a health check for the server. Other times, it might be due to a migration or upgrade that is required for the server. Generally, the answer to the question will be one of two things: a) “Yes it is necessary.” or b) “What is that job doing?”.

Believe it or not, both answers will typically spawn more questions. You see, I don’t usually ask that question unless the job is failing on a regular basis. You wouldn’t believe how many jobs exist out there that are scheduled and just fail every time as well.

When I encounter a situation like this, it means it is time to have a discussion. In order to have that discussion, one needs to have done a bit of homework in order to better understand the situation. For me, part of this homework involves running the numbers and figuring out the frequency of the job’s failure or success.

Data Gathering

For me, I like to understand how often a job has executed and what the frequency of failure is for that quantity of executions. If I see a job that has not succeeded in 60 consecutive executions, it is probably a safe bet that the job is not needed. Why? Well, if nobody has noticed the job hasn’t been working for that long, the likelihood of the job providing any use to the business is slim to none. In this case, I would present a case to the business as to why it should be removed.

But, how do I get to that point? Well, you could go through the job history for each job one by one and run some manual analytics. Or, you could take advantage of a script. I prefer the script route because it is faster, more reliable and a lot less mundane.
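
A rough sketch of the kind of script I mean – it counts the job outcome rows (step_id = 0) in msdb, grouped by status:

```sql
USE msdb;
GO

SELECT j.name   AS job_name,
       CASE jh.run_status
            WHEN 0 THEN 'Failed'
            WHEN 1 THEN 'Succeeded'
            WHEN 2 THEN 'Retry'
            WHEN 3 THEN 'Canceled'
            ELSE 'In Progress'
       END      AS run_status,
       COUNT(*) AS execution_count
FROM dbo.sysjobs j
INNER JOIN dbo.sysjobhistory jh
        ON j.job_id = jh.job_id
WHERE jh.step_id = 0                 -- job outcome rows only, not individual steps
GROUP BY j.name, jh.run_status
ORDER BY j.name, run_status;
```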

Running that script against my sandbox, I may see something such as the following.

Here you will note that the “wtf” job has two entries: one entry for “Succeeded” (in green) and one entry for “Failed” (in red). Each row receives a count of the number of executions.

This is the type of information I can use in a meeting to discuss with the business. This is no longer a discussion of opinion, but rather one that is now based on facts and data. It now becomes very easy to demonstrate to the business that a job has failed 60/60 times and nobody noticed it or cared enough about the consistent failures to do anything about it. Imagine if the failing job happens to be the database backups. I wonder what the action items for that job failure might include.

The Wrap

An important part of any DBAs job is to ensure database related jobs are running prim and proper. Sometimes that just doesn’t happen. When jobs are being overlooked, it is useful to be able to gather data related to consistency of job success or failure. This script will help you in your investigation efforts. In addition, I also recommend this article in your agent job audit efforts.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Public Role Permissions – Back to Basics

Managing permissions in the database environment is an important task. Setting permissions correctly is paramount to a successful audit and is one of your best defenses against improper/unwanted access. Yet, in spite of the importance, security is often overlooked, neglected, improperly configured or just flat out ignored. Let’s not forget the times that security is intentionally misconfigured so certain individuals are permitted backdoor access.

Security, just like performance tuning, is a perpetual (and sometimes iterative) task. There is no excuse for setting your security and forgetting it. It must be routinely reviewed.

While performing a script review for a client, I was reminded of the need to also review their security setup. The reminder was more of a slap in the face as I noticed that the developer had built some permissions assignments into the upgrade scripts. Unfortunately, we were not permitted to alter any of the scripts due to them being from a third party vendor (and that vendor also refused to fix the problems with the scripts, but I digress).

What could be wrong with this?

I want you to digest that for just a moment. This is an example of the permissions this particular vendor insists on setting for the public role. What could possibly be wrong with that? Let’s examine a couple of the permissions like “Control” and “View Change Tracking”.

View Change Tracking

This permission is an elevated permission that is required in order to use the change tracking functions. It is a permission to guard carefully, for the following reasons:

  1. Change tracking records contain the PK value for rows that have been deleted. If sensitive information was deleted that a user should not be able to access, the user would be able to reverse engineer the data from the change tracking data.
  2. A user may be denied access to a column that contains sensitive data. If data in the column is changed, the data would be stored in the change tracking and a user can determine the values that were updated for the sensitive data.

Control

I am going to take this one direct from the Microsoft documentation.

Confers ownership-like capabilities on the grantee. The grantee effectively has all defined permissions on the securable. A principal that has been granted CONTROL can also grant permissions on the securable. Because the SQL Server security model is hierarchical, CONTROL at a particular scope implicitly includes CONTROL on all the securables under that scope. For example, CONTROL on a database implies all permissions on the database, all permissions on all assemblies in the database, all permissions on all schemas in the database, and all permissions on objects within all schemas within the database.

Now digest that a bit. Once digested, consider what the public role does to user access in a database. The public role permissions are inherited by all users of the database whether the users have been granted the permission explicitly or not. You should only grant permissions to the public role that you honestly believe ALL users should have. If you are serious about your role, the occasions for granting permissions to the public role should be limited to: a) never, b) when you want to have a data breach, or c) when you are testing in a sandbox to improve your skills.

Check for Perms

When you are uncertain of which permissions have been assigned to the public role, or you just haven’t reviewed your permissions real-estate in some time, it is best to pull out a script and start the process. As luck would have it, I have a few scripts that can help with that (here or here) and I have a new one that I am sharing now.

Let’s start with a basic query that will display all of the permissions assigned to the public role in a specific database.
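
A sketch of that query:

```sql
SELECT USER_NAME(dp.grantee_principal_id) AS principal_name,
       dp.state_desc,                        -- GRANT or DENY
       dp.permission_name,
       SCHEMA_NAME(o.schema_id)             AS schema_name,
       o.name                               AS object_name
FROM sys.database_permissions dp
LEFT JOIN sys.objects o
       ON dp.major_id = o.object_id
WHERE USER_NAME(dp.grantee_principal_id) = 'public'
ORDER BY dp.state_desc, o.name;
```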

There is nothing super special about this query. Looking at it, it queries the permissions for the public role specifically. I display whether the permission is a “Deny” or a “Grant”, then list the permission name, and then the schema and the object.

Let’s take that script and evolve it now. I am going to plan for the worst and expect that some permissions have been applied that shouldn’t have been, courtesy of some vendor upgrade script (because – well, history). Since I am expecting the worst, I am going to add some script-generating code that will revoke the unwanted permissions. And since expecting the worst also means that revoking the permissions might break something, I will add some code that can generate the appropriate “Grant” statements as well.
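
The evolved sketch:

```sql
SELECT USER_NAME(dp.grantee_principal_id) AS principal_name,
       dp.state_desc,
       dp.permission_name,
       SCHEMA_NAME(o.schema_id)            AS schema_name,
       o.name                              AS object_name,
       -- script to remove the unwanted permission
       'REVOKE ' + dp.permission_name COLLATE Latin1_General_CI_AS_KS_WS
           + ISNULL(' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name), '')
           + ' FROM [public];'             AS revoke_script,
       -- script to put the permission back if something breaks
       'GRANT ' + dp.permission_name COLLATE Latin1_General_CI_AS_KS_WS
           + ISNULL(' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name), '')
           + ' TO [public];'               AS grant_script
FROM sys.database_permissions dp
LEFT JOIN sys.objects o
       ON dp.major_id = o.object_id
WHERE USER_NAME(dp.grantee_principal_id) = 'public'
ORDER BY dp.state_desc, o.name;
```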

That looks better. I have a way of identifying the unwanted permissions as well as an easy script I can execute to remove the unwanted permissions. Note the use of the collate in the final two columns. As it turns out, permission_name from sys.database_permissions has a column collation of Latin1_General_CI_AS_KS_WS. Since I ran into some errors (shown below), it is easier to direct the DB engine to use the collation that matches the permission_name column.

Msg 451, Level 16, State 1, Line 11
Cannot resolve collation conflict between “SQL_Latin1_General_CP850_CS_AS” and “Latin1_General_CI_AS_KS_WS” in add operator occurring in SELECT statement column 5.
Msg 451, Level 16, State 1, Line 11
Cannot resolve collation conflict between “SQL_Latin1_General_CP850_CS_AS” and “Latin1_General_CI_AS_KS_WS” in add operator occurring in SELECT statement column 6.

Alas, this is still not quite as efficient a script as I would like. I may have hundreds of databases on the instance and need to evaluate all of them. Time for the bigger guns.
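
One way to do that is sketched below with the undocumented (but handy) sp_MSforeachdb procedure; a cursor over sys.databases works just as well.

```sql
IF OBJECT_ID('tempdb..#PublicPerms') IS NOT NULL
    DROP TABLE #PublicPerms;

CREATE TABLE #PublicPerms
(
    database_name   sysname,
    state_desc      nvarchar(60),
    permission_name nvarchar(128),
    schema_name     sysname,
    object_name     sysname
);

EXECUTE sp_MSforeachdb
'USE [?];
 INSERT INTO #PublicPerms (database_name, state_desc, permission_name, schema_name, object_name)
 SELECT DB_NAME(),
        dp.state_desc,
        dp.permission_name,
        SCHEMA_NAME(o.schema_id),
        o.name
 FROM sys.database_permissions dp
 INNER JOIN sys.objects o
         ON dp.major_id = o.object_id
 WHERE USER_NAME(dp.grantee_principal_id) = ''public''
   AND o.is_ms_shipped = 0;';

SELECT *
FROM #PublicPerms
ORDER BY database_name, object_name;
```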

That will take care of all of the permissions for the public role in all of the databases, with a slight caveat: I am only checking against objects that are not flagged as is_ms_shipped. Now, isn’t there also a public role at the server scope? Indeed there is! Let’s also capture those permissions.
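
A sketch for the server scope:

```sql
SELECT SUSER_NAME(sp.grantee_principal_id) AS principal_name,
       sp.state_desc,
       sp.permission_name,
       sp.class_desc,
       e.name AS endpoint_name              -- populated for endpoint-scoped permissions such as CONNECT
FROM sys.server_permissions sp
LEFT JOIN sys.endpoints e
       ON sp.class_desc = 'ENDPOINT'
      AND sp.major_id = e.endpoint_id
WHERE SUSER_NAME(sp.grantee_principal_id) = 'public'
ORDER BY sp.class_desc, sp.permission_name;
```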

Now, I feel many times better about what could possibly be going wrong with the public role.

If you are in a tightly controlled environment or you are just sick of people doing this sort of thing to your servers, there are more extreme measures that can be taken. You can read about it here or here.

The Wrap

It is amazing what some people will do that just doesn’t make sense. Granting permissions to the public role is one of these cases. That behavior also explains why there are documents and procedures for hardening the public role (here and here).

If necessary, I recommend locking down your public role. It will make your job a little easier and help you rest better at night.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.
