Who needs data access? Not You!

Does this sound at all familiar? A user submits a request stating they must have elevated access to a database. In your attempt to gather more information about the request, you learn they must have access because so-and-so has elevated access and they are curious about the database.

The conversation doesn’t go quite as smoothly as that. In fact, it may take quite a lot of effort to squeeze that tiny bit of real information from the request.

After assembling that tidbit of information, you may find yourself choking back a scoff or three. “You need access because you want to explore the database?” Yeah, I can hear the incredulity in your thoughts from here. At no point is this kind of “need” a legitimate reason to gain access to a database.

“I need elevated access!” Oh No you don’t!

Beyond the back and forth of the demand for access just to play around in the data and the DBA resisting the access in the best interest of the data and company, there may be a middle ground that can be reached. I am not talking about creating a dummy database with obfuscated data. Nor am I talking about a dummy database with fake data.

Every now and then, this kind of request may cross your desk and all the requester is really trying to do is find some means of becoming familiar with the database. This can be ferreted out with some extra warnings and more discussions with the requester. Maybe the request really has nothing to do with the data and everything to do with just finding a way to get to know the schema because there is no documentation.

Given the plausibility of this need, there are many ways to grant the request – some may require more resources than others. Let’s suppose the user has absolutely no understanding of TSQL. With that minimal knowledge, you know there are likely to be many potential issues that could cause the database to be brought down and left unusable. This is terrible if the affected database is in production, and it is still very bad if the database is a non-prod database. Given this potential, you know you don’t want to grant any permission that implies read or write access in the database (if there is data present).

What if we could create a database without the data? Would that work? Absolutely. This would in essence be a sandbox for the user, and all responsibility for database issues could be directed back to that specific user. This is a fabulous solution if you have adequate resources for more databases/instances/servers to be created. There are multiple ways to create an empty database, but to ensure the best end-user experience, considerations for database naming, resources, and the potential impact on other databases must be acknowledged.

Personally, I prefer to maintain the same database name regardless of tier (dev, test, prod, etc.). This also applies to schema-only types of databases where limited access would be granted. Changing database names on the users only breeds confusion and complicates code promotions. If you need to provide the same sort of solution for 30-40 or more databases, you can imagine what kind of confusion that would cause on a server where the real, data-filled databases exist (think dbcc clonedatabase types of solutions here, where the cloned database must be named differently).
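
For reference, a minimal sketch of that kind of clone approach follows; the database names are placeholders. DBCC CLONEDATABASE copies only the schema and statistics (no user data) and forces a differently named, read-only clone onto the same instance:

```sql
-- Hypothetical example: clone the schema (no user data) of a source database
-- into a differently named, read-only clone on the same instance.
DBCC CLONEDATABASE (AdventureWorks, AdventureWorks_Clone);
```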

What to do? What to do?

Thankfully, there is still at least one more entirely viable option. We can use the power of security to achieve some semblance of what the user wants while still preventing the person from viewing the data or causing performance issues.

Limiting access for prying eyes is fairly easy while still granting the opportunity to explore the database design. In SQL Server we have a permission called “View Definition” which will help us accomplish exactly this requirement. In preparation for this, one must determine the scope of the need. Generally speaking, this should be done on a per-database approach. There is a method to grant this for all databases on the server but that is a topic for another day. Once the scope is determined, it is time to proceed with the setup.

I prefer to take advantage of Roles within the database. This simplifies the future maintenance and management of the permissions. If a new principal needs the same permissions as Joe from accounting, then I just add the new user to the same role(s) as Joe – piece of cake.

With the Role created, user added to the role, and the “View Definition” permission granted to the role, we are ready to test.
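
Here is a minimal sketch of that setup, assuming placeholder names for the database, login, role, and password (a SQL login is used to match the example):

```sql
USE SomeDatabase;   -- placeholder database name
GO

-- Create the SQL login and the corresponding database user (placeholder names)
CREATE LOGIN CuriousUser WITH PASSWORD = 'SuperSecret123!$';
GO
CREATE USER CuriousUser FOR LOGIN CuriousUser;
GO

-- Create a database role to hold the schema-browsing permission
CREATE ROLE SchemaViewer;
GO
ALTER ROLE SchemaViewer ADD MEMBER CuriousUser;
GO

-- Grant the ability to view object definitions (no read/write on the data)
GRANT VIEW DEFINITION TO SchemaViewer;
GO
```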

First order of business is to connect with that new login we created (a SQL login was created and is used in this example, but the effects are the same with a Windows login) as shown in the preceding image. After connecting, let’s try some things.

The first test I want to perform is to confirm the user has no read access to data to ensure this permission does not grant extra access along the way.
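
A simple test, using a placeholder table name, looks something like this and should fail with a permission-denied error:

```sql
-- Connected as the new login, attempt to read data from a user table.
-- With only VIEW DEFINITION granted, this should fail with
-- "The SELECT permission was denied on the object..."
SELECT TOP (10) *
FROM dbo.SomeTable;   -- placeholder table name
```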

Great! Select access to objects from this permission alone does not work. Now, let’s test the ability to view object definitions. First, let’s try via the object designer.

First we see a warning about not having enough permissions to save changes, which is good. After clicking OK on that warning, a new message pops up to remind us that the table is “read only”. Perfect! The permission we gave this user does not allow them to alter objects – just view them (as we would expect).

Seeing these results should also help confirm that we can browse the table definition directly in SSMS. Let’s take a gander from a TSQL approach.
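
One easy option is a system stored procedure such as sp_help (again, the table name is a placeholder):

```sql
-- View the definition (columns, indexes, constraints) of a table
EXEC sp_help 'dbo.SomeTable';   -- placeholder table name
```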

Great, we can explore the table definitions etc with TSQL. That is of course a system stored procedure, so let’s try one more method.
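
A custom query against the catalog views works just as well; here is a sketch:

```sql
-- Browse table and column metadata directly from the catalog views
SELECT  t.name      AS TableName
      , c.name      AS ColumnName
      , ty.name     AS DataType
      , c.max_length
      , c.is_nullable
FROM sys.tables AS t
    INNER JOIN sys.columns AS c
        ON c.object_id = t.object_id
    INNER JOIN sys.types AS ty
        ON ty.user_type_id = c.user_type_id
ORDER BY t.name, c.column_id;
```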

Running this query, I will see results similar to this.

Knowing that we can run custom queries to explore the schema is almost comforting. It does confirm that we are able to query system objects (yes, a select works here, but this is read-only object definition data). But does this mean the user can also see system data? Let’s try!

Excellent! So it seems we have a plausible option to keep the footprint small and minimize access to the data.

The Wrap

This article has explored various different options for reducing the access to data for users who “need” it. With a little planning and care, we can find various ways to fulfill user requests without compromising the environment.

Every now and then, it will require extra effort in trying to get the pertinent details from the requester. That said, with that extra effort you will find satisfaction and calm knowing your environment is secure and that you are providing quality service to your customers.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

The Gift of the SPN

TSQL Tuesday

The second Tuesday of the month is upon us once again. That means it is time for another group blog party called TSQLTuesday. This party that was started by Adam Machanic has now been going for long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a nice roundup of all TSQLTuesdays over here.

This month, we are looking at the last opportunity for the year to contribute to this long-standing tradition. This month being right in the middle of the holiday season, we have an opportunity to explore how we have been blessed over the past year or so.

I personally like the topic. I have one comment concerning the topic before diving in too far. I feel it wise to reflect upon how you may have been blessed in various ways on a far more regular basis. Doing so just once a year is far too infrequent. Thanks to Mala (b | t) for bringing this topic up to remind us of how important it is.

In Mala’s words:

This is a time for material gift giving, for many of us. It might also be a time to consider the many gifts we have received through the year, and perhaps use this opportunity to appreciate people or situations that we were blessed with.

SPN

Service Principal Names (SPNs) are used by Kerberos to help associate a service instance with a service logon account. SPNs are useful and powerful little things. That said, sometimes they can cause wonderful little issues with connectivity to SQL Server. One of the more common issues is an error message such as this:

Cannot Generate SSPI Context

As circumstances would have it, the cause of this issue is stated in a KB article as follows.

A Service Principal Name (SPN) for the SQL Server Database Engine may either be missing, misplaced, or is a duplicate to other SPNs configured in the Active Directory of the domain.

I have known many people to run out of hair dealing with SPN issues and the dreaded SSPI context error. Gratefully, there are ways to resolve some of these issues and make life just a touch easier. One of the easier ways is a lesser-known tool that you can download from Microsoft – here.

The tool is fairly simple to install and use. The one sticking point for using it is that you need to know where to find it so you can use it after it is installed. Here is the path to the executable.

%SystemDrive%\Program Files\Microsoft\Kerberos Configuration Manager for SQL Server

The link mentioned provides the path and the install instructions along with more info on how to use the tool from the command line. Personally, I use the tool to discover the SPNs on the server in question and then determine whether I might be missing any.
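
As a point of comparison, you can also list and register SPNs by hand with the built-in setspn utility. This is just a sketch, and the domain, account, host, and port are placeholders:

```
REM List the SPNs currently registered to the SQL Server service account
setspn -L CONTOSO\sqlservice

REM Register a missing SPN for a default instance (requires rights in AD)
setspn -S MSSQLSvc/sqlbox.contoso.com:1433 CONTOSO\sqlservice
```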

Upon opening the tool, you are greeted with this functional screen.

From the welcome screen, click on the “Connect” menu option. From there a new screen will open that prompts for the server name, service account and password. However, if you are just connecting to the local server then none of that is required and you can just proceed. After a few minutes, and with a bit of magic, the SPNs are evaluated and a report is generated. Along with this report, there is even an option to “Fix” the missing SPNs if they exist.

This tool really takes some of the room for error out of the equation and makes the job fairly easy. I recommend it. I have shared this with some sysadmins and they found it to be very helpful and time saving. Check it out!

Wrapping it Up

Every once in a while there is an extremely valuable tool that comes along. While the footprint and use frequency of this tool may not be that big, the tool is essential to making the DBA job easier to do. I recommend getting this tool – especially if you have ever run into SSPI issues.

In this article I showed the Kerberos Configuration Manager Tool for SQL Server to help troubleshoot SSPI Context errors and wayward SPNs. Another exceptionally useful tool is Extended Events. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

Why oh Why? Effects of Antivirus on a DB Server

TSQL Tuesday

*shrug* my post is late due to technical issues.

The second Tuesday of the month is upon us once again. That means it is time for another group blog party called TSQLTuesday. This party that was started by Adam Machanic has now been going for long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a nice roundup of all TSQLTuesdays over here.

This month, we are looking at the 10-year anniversary of this long-standing blog party / tradition. For the 10-year party, everybody has the opportunity to elaborate on something questionable they have seen being done as it relates to SQL Server.

This is a topic that I know has sparked a lot of commentary by the host, Wayne Sheffield (b | t). In Wayne’s invite, he says, “Not too long ago, I ran across a situation where I was scratching my head, wondering why something had been implemented the way it had been (you can read about it here). And that gave me the idea for this T-SQL Tuesday topic.”

Take a moment to read about his “aha, this is odd” moment and then you can totally see the tone that was intended for this month’s topic.

Dubious Backups

My topic (be it ever so late) is about some rather large frustrations with phantom backup processes that were interfering with the normal backup operations and production performance, and that were running at random intervals throughout the day – sometimes as many as 12 times during normal business hours.

Every single time these backups kicked off, we would get I/O frozen alerts and failed logins until the I/O was unfrozen. Sometimes it would last as little as seven seconds and sometimes for an uncomfortable minute or so.

In working with the sysadmins, they kept insisting that it must have been something configured within SQL Server. Each time they insisted, we would have to rehash the scheduled backups in SQL Server as well as validate any Windows scheduled tasks. Here is what some of those backups looked like from within SQL Server.
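
The original queries are not reproduced here, but a couple of sketches along these lines against the msdb backup history tables surface the same information (the database name matches this scenario):

```sql
-- Query 1: how many FULL backups of the DBA database are landing each day?
SELECT  bs.database_name
      , CAST(bs.backup_start_date AS DATE) AS BackupDate
      , COUNT(*)                           AS FullBackups
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = 'DBA'
    AND bs.type = 'D'              -- D = full backup
GROUP BY bs.database_name, CAST(bs.backup_start_date AS DATE)
ORDER BY BackupDate DESC;

-- Query 2: details for those backups, including who ran them and where they went
SELECT  bs.database_name
      , bs.backup_start_date
      , bs.backup_finish_date
      , bs.user_name                               -- not the SQL Service account here
      , bs.is_snapshot
      , bmf.physical_device_name AS BackupPath     -- a GUID means a virtual device
      , bmf.device_type                            -- 7 = virtual device
FROM msdb.dbo.backupset AS bs
    INNER JOIN msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
WHERE bs.database_name = 'DBA'
    AND bs.type = 'D'
ORDER BY bs.backup_start_date DESC;
```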

Each of those queries produces results similar to the following.

The preceding image clearly shows that there have been multiple full backups of the DBA database each day and that there is a variable quantity of these backups each day.

In the preceding image we see more details about these full backups that have been occurring. We know that the backups are not coming from the SQL Service account. We also know that these are not backups initiated from within SQL Server but rather ones coming from an outside entity that is causing the backups to be sent to a virtual device. We know a virtual device is the target due to the GUID for the BackupPath instead of an actual file path.

If I take this a step further and try to trace it via an Extended Events session, I can further confirm that the backups are from an outside entity and that it appears to be something using the Windows Volume Shadow Copy Service (VSS) – or something like that. Let’s look closer at that.
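
For reference, this is a minimal sketch of the kind of XEvents session that can catch these backup statements and the application issuing them; it is not the exact session used at the time, and the session name and file target are arbitrary:

```sql
-- Capture batches containing BACKUP DATABASE along with who/what issued them
CREATE EVENT SESSION [PhantomBackups] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION ( sqlserver.client_app_name
           , sqlserver.client_hostname
           , sqlserver.username )
    WHERE sqlserver.like_i_sql_unicode_string(batch_text, N'%BACKUP DATABASE%')
)
ADD TARGET package0.event_file ( SET filename = N'PhantomBackups' )
WITH ( STARTUP_STATE = ON );
GO
ALTER EVENT SESSION [PhantomBackups] ON SERVER STATE = START;
GO
```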

League of Shadows

Here we see the properties for the Volume Shadow Copy Service for all volumes (including those involved with SQL Server). It is intriguing to see that, despite the service being disabled for every volume, something still appears to be consuming storage against the VSS quota. This supports the idea that something is causing the backups to be performed via VSS somehow, but it is not the actual Windows service because it is disabled for each volume.

Let’s investigate further. Time to break out some command line queries.
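
The first check was against the Task Scheduler. A command along these lines dumps every scheduled task, with its last and next run times, for comparison:

```
REM List every scheduled task in verbose list format, including run times
schtasks /query /fo LIST /v
```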

Running that command, I can compare all scheduled tasks and their run times against the actual backups that were occurring and determine if any might be related to the backups. As it turned out, none of the tasks in Task Scheduler ran at any time remotely close to these backups.

Next, let’s look at the VSS configurations more closely. I can query the providers, shadows and shadowstorage.
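
Those checks map to the built-in vssadmin utility, roughly as follows:

```
REM Inspect the VSS configuration: registered providers, existing shadow
REM copies, and shadow storage associations (quota usage)
vssadmin list providers
vssadmin list shadows
vssadmin list shadowstorage
```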

Looking at the providers first to determine if there is something there, we might be introduced to something like this output.

Maybe there is an aha for you at this moment. Two services are certainly worth investigating: “CommVault VSS Hardware Provider Service” and “Galaxy VSS Provider Service” (both are related to CommVault). In this particular case, we exhausted those avenues and were able to determine that neither was related to this issue.

Let’s try the shadows.

A couple of things of note here. The timestamps from these VSS shadows are consistent with the FULL database backup timestamps in SQL Server. Next, we see that these are designed to serve as an “ApplicationRollback” and Differential backup. So, something is taking an application-consistent differential backup every so often for some reason yet unknown. Despite being a VSS differential backup, it is not a differential backup inside SQL Server – it is a FULL backup. This configuration can be detrimental to your recovery plan if you are unaware of it (some ramifications here).

So, what is causing this? Well, one more clue actually helped us figure it out. The issues started on August 20th. On that day, a new antivirus software was rolled out to the servers. This software, unbeknownst to the sysadmins, was taking VSS snaps to try to protect against ransomware – supposedly. I have to say, that is a pretty dumb thing to do to a database server. A legitimate backup / recovery plan would suffice in the case of a hijacked OS, without the added burden of the snapshots from the AV software or the added stress of having something screw with the backup/recovery plan.

This is a prime example of using a wrecking ball to hammer in a screw. All in the guise of security and protection. Please stop the insanity.

Making matters more difficult is the fact that the GUIDs provided by the VSS storage and what we see listed as the virtual device in SQL Server do not correlate directly to each other. You have to base your assertions off of timestamps.

If your sysadmins insist on having a tool (whether it be Antivirus or a backup solution) that causes VSS snaps to occur which could foul up the database recovery plan, then I recommend you insist on having access to that tool.

Wrapping it Up

I am not a fan of tools that interfere with the database recovery plans that may be requisite to properly meet RTO and RPO. At best, these phantom backups cause undue headache in troubleshooting. At worst, they make it impossible to recover in the event of a database-related disaster. Sadly, the database is regarded as the red-headed stepchild and these tools ignore best practices for the database world. Forcing a database server to fit in the same mold as a file or print server is flat-out absurd, and these tools make no effort to educate the sysadmins about the piss-poor practices they are forcing down the throat of your database server.

In this article I showed various tools in the command line as well as within SQL Server to troubleshoot these problematic backups. One of the tools was an XEvent Session. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

Life Support 2008 – Audit Logons

With SQL Server 2008 and 2008R2 limping along and becoming terminally ill (End of Life Support was July 9, 2019), it should come as no surprise that it is recommended to migrate/upgrade affected databases/servers to newer technology.

Planning to upgrade/migrate requires a fair amount of prep work. Some of that prep work involves auditing your server for any users that may still be using the instance.

Where does one even begin in order to audit those logon events in SQL 2008 or 2008R2? Some may say to use a SQL Server Audit, but that is an Enterprise-only feature in 2008 and R2. If we were on a newer version of SQL Server, Extended Events would be the easy choice. Unfortunately, XE was not adequately mature in 2008 or R2 – it just doesn’t have the appropriate events in those versions of SQL Server. What’s left?

How about a server side trace? Yes, that was a bit difficult to say. Server side trace does have just the right events for us to use to discover who is using the server and which principals can be eradicated. A big benefit here is that a server side trace does not come with a service outage as would be required for other methods.

Server Side Trace

How do we go about creating a trace to capture these events? It is common knowledge that using a script to create a server side trace is not very intuitive. It is also not very plausible to run a Profiler session and leave it running for weeks while you do your due diligence. There is a shortcut available that allows us to run a server side trace but it does require the use of Profiler – for just a tiny bit. You can start here to find how to do that.

Great, we have a method to create the script. What needs to go into this session? Let’s take a look at that. Ignoring the initial steps to start a Profiler session (other than to use a blank template), let’s jump to the event selection tab. From there, we will make two selections (Audit Login and Audit Login Failed in the Security Audit section) as shown in the next image.

Once the events are selected, go ahead and click run. From there you may stop the session and then continue with the instructions from the previous link on how to script a profiler session.

After scripting this session, I get a script that looks like the following.
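
The full generated script is quite a bit longer; a trimmed-down sketch of its shape, with the header comment removed, only a handful of column mappings retained, and a placeholder file path, looks something like this:

```sql
-- Trimmed sketch of the generated server side trace script.
-- Event 14 = Audit Login, Event 20 = Audit Login Failed.
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
set @maxfilesize = 5

-- supply your own output file path (no .trc extension needed)
exec @rc = sp_trace_create @TraceID output, 0, N'C:\Traces\AuditLogons', @maxfilesize, NULL
if (@rc != 0) goto error

-- Set the events and a handful of columns
declare @on bit
set @on = 1
exec sp_trace_setevent @TraceID, 14, 11, @on  -- Audit Login, LoginName
exec sp_trace_setevent @TraceID, 14, 8,  @on  -- Audit Login, HostName
exec sp_trace_setevent @TraceID, 14, 10, @on  -- Audit Login, ApplicationName
exec sp_trace_setevent @TraceID, 14, 14, @on  -- Audit Login, StartTime
exec sp_trace_setevent @TraceID, 20, 1,  @on  -- Audit Login Failed, TextData
exec sp_trace_setevent @TraceID, 20, 11, @on  -- Audit Login Failed, LoginName
exec sp_trace_setevent @TraceID, 20, 8,  @on  -- Audit Login Failed, HostName
exec sp_trace_setevent @TraceID, 20, 10, @on  -- Audit Login Failed, ApplicationName
exec sp_trace_setevent @TraceID, 20, 14, @on  -- Audit Login Failed, StartTime

-- Start the trace
exec sp_trace_setstatus @TraceID, 1

-- display trace id for future references
select TraceID = @TraceID
goto finish

error:
select ErrorCode = @rc

finish:
go
```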

I created this script from SSMS 18.0 and find it interesting that the script says “Created by: SQL Server 2019 CTP2.4 Profiler”. Despite the very recent version of SSMS used to create this script, this script will work perfectly fine on SQL Server 2008 or R2.

Once I start the trace on a server, I am ready to do just a little bit more. I want to verify what this script means. After all, it is a bunch of numeric values. Let’s look at that with this next query and then compare it to the initial script used to create the trace. The following query requires SQL 2012 or later.
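
A query along these lines will translate the numeric event and column IDs back into names (adjust the trace id to match the one returned when the trace was created):

```sql
-- Map the numeric event/column IDs in the running trace back to their names
DECLARE @TraceID INT = 2;   -- the id returned by sp_trace_create

SELECT  te.name  AS EventName
      , tc.name  AS ColumnName
FROM sys.fn_trace_geteventinfo(@TraceID) AS ei
    INNER JOIN sys.trace_events AS te
        ON te.trace_event_id = ei.eventid
    INNER JOIN sys.trace_columns AS tc
        ON tc.trace_column_id = ei.columnid
ORDER BY te.name, tc.name;
```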

This produces output similar to the following.

As you scroll through the list, you can see the Event Name along with the column name for all events/columns that have been added to the trace. I have highlighted the first event / column mappings to illustrate this relationship.

Cool! Now, I know the session does indeed contain the requisite data that I wanted so it is time to start checking to see what is happening on the server.

Now you are all set to go to start figuring out which logins (if any) are still connecting to the server and using databases on that server.
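
When you are ready to see who has been connecting, the trace file can be read straight back into a result set; this is a sketch, and the path is a placeholder for wherever the trace file was written:

```sql
-- Summarize who has been logging on (and from where) per the captured trace
SELECT  LoginName
      , HostName
      , ApplicationName
      , COUNT(*)        AS LogonEvents
      , MIN(StartTime)  AS FirstSeen
      , MAX(StartTime)  AS LastSeen
FROM sys.fn_trace_gettable(N'C:\Traces\AuditLogons.trc', DEFAULT)
WHERE LoginName IS NOT NULL
GROUP BY LoginName, HostName, ApplicationName
ORDER BY LastSeen DESC;
```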

The Wrap

This article has shown how to audit the logon events for a SQL 2008/R2 instance. It also happens to be an excellent follow up to another recent article – here. As you begin to plan your migration off of the SQL 2008 dinosaur, it is essential to baseline the activity and use of the server. This audit session will help you do exactly that!

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Finding Data Gaps


Quality data is a goal we all strive to attain. Nobody wants to have bad data. Yet, despite our best efforts, the problem of bad data has its way of presenting itself over and over again.

One fairly common indicator of bad data is an absence of data. Maybe you have a table in the database that is expected to hold contiguous data, yet somehow there appears to be an absence of data here or there. Maybe that absence in the contiguous data hasn’t even been noticed.

This phenomenon is also known as the gaps and islands issue in the data realm. How do you know if you have an issue or not? This article will help you with a sample query that can find those gaps of missing data. From there, your journey is just beginning.

Finding the Gaps

I have this database that I use for mentoring DBAs of various levels (help desk on up). In the database, I have a schema dedicated to daily trivia. Since the intent is for it to be a daily trivia question, I expect to have a question for every day during the time interval in question. Every now and again, however, I forget to upload a question or I mistakenly believe I have already loaded a question for the day. Typically, I try to do this in advance so as to not be scrambling at the last minute to get a question built for the email to pick it up.

Due to the factors just explained, I end up with gaps in my daily question data. The question now becomes how to find that I have gaps. I have a couple of queries here that will save you plenty of time if you follow the same principles applied here.

First, let’s start with a bit of a tricky version of the query that builds a numbers table and then spits out all of the dates that are missing a question (only for the range of data already within the table, that is).
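
The table and column names below are hypothetical stand-ins for the trivia schema (Trivia.DailyTrivia with a QuestionDate column), but the pattern is the point:

```sql
-- Find dates with no question between the earliest and latest dates present
DECLARE @StartDate DATE, @EndDate DATE;

SELECT @StartDate = MIN(QuestionDate)
     , @EndDate   = MAX(QuestionDate)
FROM Trivia.DailyTrivia;          -- hypothetical table

;WITH nums AS (
    -- dynamic numbers table: one row per day in the range
    SELECT TOP (DATEDIFF(DAY, @StartDate, @EndDate) + 1)
           n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
    FROM sys.all_columns AS ac1
        CROSS JOIN sys.all_columns AS ac2
), alldays AS (
    SELECT CheckDate = DATEADD(DAY, n, @StartDate)
    FROM nums
)
SELECT ad.CheckDate
FROM alldays AS ad
    LEFT OUTER JOIN Trivia.DailyTrivia AS dt
        ON dt.QuestionDate = ad.CheckDate
WHERE dt.QuestionDate IS NULL
ORDER BY ad.CheckDate;
```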

This particular numbers table trick has been around for quite a while and runs fast enough. However, if you are a performance nut, there is a method for this particular case that is a wee bit better (usually the dynamic numbers table just shown is sufficiently fast).

This method is the much-maligned recursive CTE method. In my testing it runs consistently faster with a lower memory grant but does cause a bit more IO to be performed. There is some trade-off to be considered there. Both queries return the desired data set, which happens to be my missing question days. Only, I have added an extra output in the second query to let me know the day of the week that the missing question occurred on. Maybe I forgot to enter it because it was a weekend day, or maybe I opted to not create one at all because the day lands on a holiday. A sketch of this second variant follows; after that, let’s take a small peek at the results.
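
Again, the table and column names are hypothetical; note the MAXRECURSION hint, which the recursive approach needs for ranges longer than 100 days:

```sql
-- Recursive CTE variant: same gap check, plus the weekday name of each missing date
DECLARE @StartDate DATE, @EndDate DATE;

SELECT @StartDate = MIN(QuestionDate)
     , @EndDate   = MAX(QuestionDate)
FROM Trivia.DailyTrivia;          -- hypothetical table

;WITH alldays AS (
    SELECT CheckDate = @StartDate
    UNION ALL
    SELECT DATEADD(DAY, 1, CheckDate)
    FROM alldays
    WHERE CheckDate < @EndDate
)
SELECT ad.CheckDate
     , DATENAME(WEEKDAY, ad.CheckDate) AS MissingDayOfWeek
FROM alldays AS ad
    LEFT OUTER JOIN Trivia.DailyTrivia AS dt
        ON dt.QuestionDate = ad.CheckDate
WHERE dt.QuestionDate IS NULL
ORDER BY ad.CheckDate
OPTION (MAXRECURSION 0);
```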

That is a pretty good swath of missing data there. If you look closely, not all days from November 10th through December 1st are missing, just a handful. Alas, it looks like I have some work cut out for me because I have some weekdays, weekends, and Holidays in there as well as nearly the entire month of December. Finding the gaps is the easy part once you have a solid query to start with and I believe this query is just the right tool for that.

What if I wanted to check for a rolling period of time instead of just checking for valid data between a min and max range of data already in the table? That is easy enough as well with some minor tweaks. Let’s look at one more query option here.
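
Here is a sketch of that variation, once more against the hypothetical trivia table, checking the 365 days leading up to today:

```sql
-- Table Value Constructor variant: build the numbers from stacked VALUES rows
DECLARE @EndDate   DATE = GETDATE();
DECLARE @StartDate DATE = DATEADD(YEAR, -1, @EndDate);

;WITH ones (n) AS (
    SELECT n FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS v(n)
), nums AS (
    SELECT TOP (DATEDIFF(DAY, @StartDate, @EndDate) + 1)
           n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
    FROM ones AS o1
        CROSS JOIN ones AS o2
        CROSS JOIN ones AS o3   -- 10 x 10 x 10 = 1000 rows, plenty for a year
)
SELECT CheckDate = DATEADD(DAY, n, @StartDate)
     , DATENAME(WEEKDAY, DATEADD(DAY, n, @StartDate)) AS MissingDayOfWeek
FROM nums
WHERE NOT EXISTS ( SELECT 1
                   FROM Trivia.DailyTrivia AS dt    -- hypothetical table
                   WHERE dt.QuestionDate = DATEADD(DAY, n, @StartDate) )
ORDER BY CheckDate;
```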

This third code snippet illustrates a slight variation over the previous two in that it utilizes the Table Value Constructor to build the numbers table. All in all, it runs with blazing high speed and produces the results I am seeking – but for a rolling year instead of a static range of data.

The Wrap

Gaps and islands will occur in your data. When they do, you have your choice of three high-performing scripts to help you investigate where the gaps and islands exist in your data.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.
