When Too Much is Not a Good Thing

In my previous article, I demonstrated some fun ways of locking down access for certain users to ensure they did not have data access while at the same time allowing them to review the database schema. The solution provided had the added benefit of reducing the chances that such a user could unknowingly cause a server outage.

In that solution, I showed the value of the “View Definition” permission. There are a couple of caveats to this permission that could lead to unwanted results.

These unwanted results often occur due to a lack of attention to detail, reliance on internet examples, or a misunderstanding of the requirements, and they always involve granting way too much access. I am going to explore a couple of these erroneous methods and explain why they absolutely bug me.

You Get Everything!

Let's start with the most common misstep that I see. A user requests access to the database and the stand-in DBA takes the easy route. The easy route here is to grant the requester sa access, dbo access, and every other database and server role there is, because it is easier to just tick all the boxes.

This method of assigning permissions happens far too frequently. I have seen it in hundreds of databases for clients. I have also written about it here.

You see, once you grant somebody sa access, none of the other permission grants or denies matter. Sysadmin overrides everything! In this particular case, that is way too much access for a user who just needs to see the schema. Be extremely careful about when and to whom you grant sysadmin access.

Next up in the realm of granting way too much access is the far too frequent practice of assigning permissions to the public role in each database. I have written about this practice as well – here and here. This one really raises my ire. Granting permissions to the public role crosses into the realm of high risk and downright negligence. Let's see how some internet examples demonstrate the solution to our problem with regards to the public role.

First, using the test user and role we created in yesterday's article, let's confirm that we do not have access to view definitions outside of the assigned role permissions.
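
A quick check along these lines can be done with a query similar to the following (a minimal sketch, assuming the testvwdef user and BusinessViewDef role from that article and a database named DBA):

USE [DBA];
GO
-- list any explicit VIEW DEFINITION grants or denies in the database
SELECT pr.name AS grantee, pr.type_desc, pe.permission_name, pe.state_desc
FROM sys.database_permissions AS pe
JOIN sys.database_principals AS pr
    ON pr.principal_id = pe.grantee_principal_id
WHERE pe.permission_name = N'VIEW DEFINITION'
   OR pr.name IN (N'testvwdef', N'BusinessViewDef', N'public');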

After executing that query, I can confirm that the public role does not have the "View Definition" permission. Let's now revoke the permission from the role and confirm that no schema definitions can be viewed.
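
The revoke itself is a one-liner; a sketch of it (again assuming the BusinessViewDef role in the DBA database) looks like this, after which reconnecting as the test user should show no table definitions:

USE [DBA];
GO
REVOKE VIEW DEFINITION FROM [BusinessViewDef];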

Now, knowing that the user and the role do not have permissions, let’s go ahead and assign permissions to the public role.
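
This is exactly the kind of internet-example grant I am talking about; a minimal sketch of it would be:

USE [DBA];
GO
GRANT VIEW DEFINITION TO [public];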

Just like magic, a user that should not be able to view schema definitions can now view information it is not supposed to see. This is a security problem. Just to confirm, let’s evaluate the principal permissions compared to the role permissions.

Keep this in mind as you go down the dark path of granting permissions to the public role. Just because it is easy doesn't make it right. On a similar note, you should proceed cautiously with scripts from the internet that suggest granting permissions to the public role. They are wrong!

Finally, we drop down into the realm of "only slightly better" but still way over the top for the requirements. This last method uses the "VIEW ANY DEFINITION" way of granting permissions. The "ANY" keyword scopes the permission to every database on the instance. Let's clean up the last example by revoking the permission from public so we have a blank slate.

With permissions revoked, I will now grant View Any Definition to the test user. Why? Well, because I don't have that role created in every database (more on the benefits of that approach later). This permission is server scoped, so make sure you are in the master database or else you risk seeing this error.
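
For reference, here is a sketch of the grant, showing both the form that triggers the error below (issued from a user database) and the form that works (issued from master); testvwdef here is the login rather than the database user:

-- issued from a user database, this triggers the error shown below
USE [DBA];
GO
GRANT VIEW ANY DEFINITION TO [testvwdef];
GO
-- issued from master, the same grant succeeds
USE [master];
GO
GRANT VIEW ANY DEFINITION TO [testvwdef];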

Msg 4621, Level 16, State 10, Line 22
Permissions at the server scope can only be granted when the current database is master

Yup, that just made my blood curdle a bit. Let's check out the perms now in the DBA database (reminder that the permission change was issued in the master database and it is a server-scoped change).

As we can see here, all databases now have the View Definition permission assigned to testvwdef. If there happens to be a sensitive database, you may have exposed yourself to a risk by doing this.

Performing the same tests as previously done would reveal similar results – except on a broader scope.

There are some caveats that help reduce this risk, but they do not undermine the need to be responsible or the need to avoid using the "ANY" keyword. Recall that I granted View Any Definition to the individual user instead of the role? That was simply because I do not have that role created in each database. If we try to grant the permission to a role that does not exist in the master database (the scope from which the grant must be made), then we see this nifty little message.

Msg 15151, Level 16, State 1, Line 25
Cannot find the login ‘BusinessViewDef’, because it does not exist or you do not have permission.

That is one measure to help prevent this over-permissioning problem. The next possible caveat is a minor stop-gap and can be easily overlooked should the user ever be added to the database in the future. For “Any” to work in its full splendor, the principal to which it was granted must have already been granted “connect” to the database.

 

The Wrap

This article has explored various options for minimizing the risk of over-extending permissions to users. With a little planning and care, we can find ways to fulfill user requests without compromising the environment.

Every now and then, it will require extra effort in trying to get the pertinent details from the requester. That said, with that extra effort you will find satisfaction and calm knowing your environment is secure and that you are providing quality service to your customers.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Who needs data access? Not You!

Does this sound at all familiar? A user submits a request stating they must have elevated access to a database. In your attempt to gather more information about the request, you learn that they must have access because so-and-so has elevated access and they are curious about the database.

The conversation doesn’t go quite as smoothly as that. In fact, it may take quite a lot of effort to squeeze that tiny bit of real information from the request.

After assembling that tidbit of information, you may find yourself choking back a scoff or three. “You need access because you want to explore the database?” Yeah I can hear the incredulity in your thoughts from here. At no point is this kind of “need” a legitimate reason to gain access to a database.

“I need elevated access!” Oh No you don’t!

Beyond the back and forth of the demand for access just to play around in the data, and the DBA resisting that access in the best interest of the data and the company, there may be a middle ground that can be reached. I am not talking about creating a dummy database with obfuscated data. Nor am I talking about a dummy database with fake data.

Every now and then, this kind of request may cross your desk and all the requester is really trying to do is find some means of becoming familiar with the database. This can be ferreted out with some extra warnings and more discussions with the requester. Maybe the request really has nothing to do with the data and everything to do with just finding a way to get to know the schema because there is no documentation.

Given the plausibility of this need, there are many ways to grant the request – some may require more resources than others. Let's suppose the user has absolutely no understanding of TSQL. With that minimal knowledge, you know there are likely to be many potential issues that could cause the database to be brought down and made unusable. This is terrible if the affected database is in production. It is also very bad if the database is a non-prod database. Given this potential, you know you don't want to grant any permission that implies read or write access in the database (if there is data present).

What if we could create a database without the data? Would that work? Absolutely that would work. This would in essence be a sandbox for the user, and all responsibility for database issues could be directed back to that specific user. This is a fabulous solution if you have adequate resources for more databases/instances/servers to be created. There are multiple ways to create an empty database, but to ensure the best end-user experience, considerations for database naming, resources, and potential impact to other databases must be acknowledged.

Personally, I prefer to maintain the same database name regardless of tier (dev, test, prod etc). This also applies to schema only types of databases where limited access would be granted. Changing database names on the users only breeds confusion and complicates code promotions. If you need to provide the same sort of solution for 30-40 or more databases, you can imagine what kind of confusion that would cause on a server where the actual data-filled databases actually exist (think dbcc clonedatabase types of solutions here where the cloned database must be named differently).

What to do? What to do?

Thankfully there is still at least one more entirely viable option that remains. We can use the power of security to achieve some semblance of what the user wants while still preventing the person from viewing the data or causing performance issues.

Limiting access for prying eyes is fairly easy while still granting the opportunity to explore the database design. In SQL Server we have a permission called “View Definition” which will help us accomplish exactly this requirement. In preparation for this, one must determine the scope of the need. Generally speaking, this should be done on a per-database approach. There is a method to grant this for all databases on the server but that is a topic for another day. Once the scope is determined, it is time to proceed with the setup.

I prefer to take advantage of Roles within the database. This simplifies the future maintenance and management of the permissions. If a new principal needs the same permissions as Joe from accounting, then I just add the new user to the same role(s) as Joe – piece of cake.
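
A minimal sketch of that setup might look like the following (using the testvwdef login and BusinessViewDef role names that the follow-up article refers back to; the password and database name are placeholders):

USE [master];
GO
CREATE LOGIN [testvwdef] WITH PASSWORD = N'SuperSecret!Placeholder1';   -- placeholder password
GO
USE [DBA];
GO
CREATE USER [testvwdef] FOR LOGIN [testvwdef];
CREATE ROLE [BusinessViewDef];
ALTER ROLE [BusinessViewDef] ADD MEMBER [testvwdef];   -- use sp_addrolemember on pre-2012 instances
GRANT VIEW DEFINITION TO [BusinessViewDef];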

With the Role created, user added to the role, and the “View Definition” permission granted to the role, we are ready to test.

First order of business is to connect with that new login we created (a SQL login was created and is used in this example, but the effects are the same with a Windows login) as shown in the preceding image. Once connected, let's try some things.

The first test I want to perform is to confirm the user has no read access to data to ensure this permission does not grant extra access along the way.
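
Something as simple as the following will do (a sketch; the table name is hypothetical, and the expected outcome is error 229, the SELECT permission denied message):

USE [DBA];
GO
SELECT TOP (10) *
FROM dbo.SomeTable;   -- hypothetical table; any user table in the database will do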

Great! Select access to objects from this permission alone does not work. Now, let’s test the ability to view object definitions. First, let’s try via the object designer.

First we see a warning about not having enough permissions to save changes, which is good. After clicking OK on that warning, a new message pops up to remind us that the table is "read only". Perfect! The permissions we gave this user do not allow them to alter objects – just view them (as we would expect).

Seeing these results should also help confirm that we can browse the table definition directly in SSMS. Let's take a gander from a TSQL approach.
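
One of the system stored procedures will do the trick here; sp_help is one example (the table name is again hypothetical):

EXEC sp_help N'dbo.SomeTable';   -- returns column, index, and constraint metadata for the table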

Great, we can explore the table definitions etc with TSQL. That is of course a system stored procedure, so let’s try one more method.
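
Querying the catalog views directly is another way to get at the same information; a sketch along those lines:

SELECT  s.name  AS schema_name,
        t.name  AS table_name,
        c.name  AS column_name,
        ty.name AS data_type,
        c.max_length,
        c.is_nullable
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.columns AS c ON c.object_id = t.object_id
JOIN sys.types   AS ty ON ty.user_type_id = c.user_type_id
ORDER BY s.name, t.name, c.column_id;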

Running this query, I will see results similar to this.

Knowing that we can run custom queries to explore the schema is almost comforting. It does confirm that we are able to query system objects (yes, a select works here, but this is read-only object definition data). But does this mean the user can also see system data? Let's try!
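
One illustrative check (an assumption on my part, not necessarily the exact test used originally) is to poke at a DMV that requires VIEW SERVER STATE:

-- still connected as the test login; without VIEW SERVER STATE this returns
-- a permission error rather than any server-level data
SELECT TOP (10) *
FROM sys.dm_os_wait_stats;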

Excellent! So it seems we have a plausible option to keep the footprint small and minimize access to the data.

The Wrap

This article has explored various options for reducing data access for users who "need" it. With a little planning and care, we can find ways to fulfill user requests without compromising the environment.

Every now and then, it will require extra effort in trying to get the pertinent details from the requester. That said, with that extra effort you will find satisfaction and calm knowing your environment is secure and that you are providing quality service to your customers.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

The Gift of the SPN

TSQL Tuesday

The second Tuesday of the month is upon us once again. That means it is time for another group blog party called TSQLTuesday. This party that was started by Adam Machanic has now been going for long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a nice roundup of all TSQLTuesdays over here.

This month, we are looking at the last opportunity of the year to contribute to this long-standing tradition. With this month falling right in the middle of the holiday season, we have an opportunity to explore how we have been blessed over the past year or so.

I personally like the topic. I have one comment concerning the topic before diving in too far. I feel it wise to reflect upon how you may have been blessed in various ways on a far more regular basis. Doing so just once a year is far too infrequent. Thanks to Mala (b | t) for bringing this topic up to remind us of how important it is.

In Mala's words:

This is a time for material gift giving, for many of us. It might also be a time to consider the many gifts we have received through the year, and perhaps use this opportunity to appreciate people or situations that we were blessed with.

SPN

Service Principal Names (SPNs) are used by Kerberos to help associate a service instance with a service logon account. SPNs are useful and powerful little things. That said, sometimes they can cause wonderful little issues with connectivity to SQL Server. One of the more common issues is an error message such as this:

Cannot Generate SSPI Context

As circumstances would have it, the cause of this issue is stated in a KB article as follows.

A Service Principal Name (SPN) for the SQL Server Database Engine may either be missing, misplaced, or is a duplicate to other SPNs configured in the Active Directory of the domain.

I have known many people to run out of hair dealing with SPN issues and the dreaded SSPI context error. Gratefully, there are ways to resolve some of these issues and make life just a touch easier. One of the easier ways is a lesser-known tool that you can download from Microsoft – here.

The tool is fairly simple to install and use. The one sticking point for using it is that you need to know where to find it so you can use it after it is installed. Here is the path to the executable.

%SystemDrive%:\Program Files\Microsoft\Kerberos Configuration Manager for SQL Server

The link mentioned provides the path and the install instructions, along with more info on how to use the tool from the command line. Personally, I use the tool to discover the SPNs on the server in question and then to see if I might be missing any.

Upon opening the tool, you are greeted with this functional screen.

From the welcome screen, click on the “Connect” menu option. From there a new screen will open that prompts for the server name, service account and password. However, if you are just connecting to the local server then none of that is required and you can just proceed. After a few minutes, and with a bit of magic, the SPNs are evaluated and a report is generated. Along with this report, there is even an option to “Fix” the missing SPNs if they exist.

This tool really takes some of the room for error out of the equation and makes the job fairly easy. I recommend it. I have shared this with some sysadmins and they found it to be very helpful and time saving. Check it out!

Wrapping it Up

Every once in a while there is an extremely valuable tool that comes along. While the footprint and use frequency of this tool may not be that big, the tool is essential to making the DBA job easier to do. I recommend getting this tool – especially if you have ever run into SSPI issues.

In this article I showed the Kerberos Configuration Manager Tool for SQL Server to help troubleshoot SSPI Context errors and wayward SPNs. Another exceptionally useful tool is Extended Events. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

Why oh Why? Effects of Antivirus on a DB Server

TSQL Tuesday

*shrug* my post is late due to technical issues.

The second Tuesday of the month is upon us once again. That means it is time for another group blog party called TSQLTuesday. This party that was started by Adam Machanic has now been going for long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a nice roundup of all TSQLTuesdays over here.

This month, we are looking at the 10-year anniversary of this long-standing blog party / tradition. For the 10-year party, everybody has the opportunity to elaborate on something questionable they have seen being done as it relates to SQL Server.

This is a topic that I know has sparked a lot of commentary by the host Wayne Sheffield (b | t). In Wayne's invite, he says "Not too long ago, I ran across a situation where I was scratching my head, wondering why something had been implemented the way it had been (you can read about it here). And that gave me the idea for this T-SQL Tuesday topic."

Take a moment to read about his "aha, this is odd" moment and then you can totally see the tone that was intended for this month's topic.

Dubious Backups

My topic (be it ever so late) is about some rather large frustrations with phantom backup processes that were interfering with normal backup operations and production performance, and that were running at random intervals throughout the day, sometimes as many as 12 times during normal business hours.

Every single time these backups kicked off, we would get I/O frozen alerts and failed logins until the I/O was unfrozen. Sometimes it would last as little as seven seconds and sometimes it would last for an uncomfortable minute or so.

In working with the sysadmins, they kept insisting that it must have been something configured within SQL Server. Each time they insisted, we would have to rehash the scheduled backups in SQL Server as well as validate any Windows scheduled tasks. Here is what some of those backups looked like from within SQL Server.
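
Those checks boil down to queries against the backup history in msdb; a minimal sketch of one such query (the DBA database name comes from the examples that follow):

SELECT  bs.database_name,
        bs.backup_start_date,
        bs.backup_finish_date,
        bs.type,                              -- D = full, I = differential, L = log
        bs.is_snapshot,
        bs.user_name,
        bmf.physical_device_name AS BackupPath,
        bmf.device_type                       -- 7 = virtual device
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bmf.media_set_id = bs.media_set_id
WHERE bs.database_name = N'DBA'
ORDER BY bs.backup_start_date DESC;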

Each of those queries produces results similar to the following.

The preceding image clearly shows that there have been multiple full backups of the DBA database each day and that there is a variable quantity of these backups each day.

In the preceding image we see more details about these full backups that have been occurring. We know that the backups are not coming from the SQL Service account. We also know that these are not SQL-initiated backups; rather, they are coming from an outside entity that is causing the backups to be sent to a virtual device. We know a virtual device is the target due to the GUID in the BackupPath instead of an actual file path.

If I take this a step further and try to trace it via an Extended Events session, I can further confirm that the backups are from an outside entity and that it appears to be something using the Windows shadow copy service – or something like that. Let's look closer at that.
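
One possible way to set that trap (a sketch, not the exact session used here, and it assumes the "I/O is frozen" and "I/O was resumed" messages 3197 and 3198 surface through the error_reported event) is something like this:

CREATE EVENT SESSION [PhantomBackups] ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (
        sqlserver.client_app_name,   -- frequently reveals the VSS requestor
        sqlserver.client_hostname,
        sqlserver.username,
        sqlserver.database_name
    )
    WHERE ([error_number] = (3197) OR [error_number] = (3198))
)
ADD TARGET package0.event_file (SET filename = N'PhantomBackups')
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [PhantomBackups] ON SERVER STATE = START;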

League of Shadows

Here we see the properties for the volume shadow copy service for all volumes (including those involved with SQL Server). It is intriguing to see that, despite the service being disabled for every volume, there still appears to be something consuming storage against the VSS quota. This supports the idea that something is causing the backups to be performed via VSS, but it is not the actual Windows service, because that is disabled for each volume.

Let’s investigate further. Time to break out some command line queries.

Running that command, I can compare all scheduled tasks and their run times against the actual backups that were occurring and determine if any might be related to the backups. As it turned out, none of the tasks in Task Scheduler ran at any time remotely close to these backups.

Next, let’s look at the VSS configurations more closely. I can query the providers, shadows and shadowstorage.

Looking at the providers first to determine if there is something there, we might be introduced to something like this output.

Maybe there is an aha for you at this moment. Two services are certainly worth investigating: “CommVault VSS Hardware Provider Service” and “Galaxy VSS Provider Service” (both are related to CommVault). In this particular case, we exhausted those avenues and were able to determine that neither was related to this issue.

Let’s try the shadows.

A couple of things are of note here. The timestamps from these VSS shadows are consistent with FULL database backup timestamps in SQL Server. Next, we see that these are designed to serve as an "ApplicationRollback" and differential backup. So, something is taking an application-consistent differential backup every so often for some reason yet unknown. Despite being a VSS differential backup, it is not a differential backup inside SQL Server – it is a FULL backup. This configuration can be detrimental to your recovery plan if you are unaware of it (some ramifications here).

So, what is causing this? Well, one more clue actually helped us figure it out. The issues started on August 20th. On that day, a new antivirus software was rolled out to the servers. This software, unbeknownst to the sysadmins, was taking VSS snaps to try to protect against ransomware – supposedly. I have to say, that is a pretty dumb thing to do to a database server. A legitimate backup/recovery plan would suffice in the case of a hijacked OS, without the added burden of snapshots from the AV software or the added stress of having something screw with the backup/recovery plan.

This is a prime example of using a wrecking ball to hammer in a screw. All in the guise of security and protection. Please stop the insanity.

Making matters more difficult is the fact that the GUIDs provided by the VSS storage and what we see listed as the virtual device in SQL Server do not correlate directly to each other. You have to base your assertions off of timestamps.

If your sysadmins insist on having a tool (whether it be Antivirus or a backup solution) that causes VSS snaps to occur which could foul up the database recovery plan, then I recommend you insist on having access to that tool.

Wrapping it Up

I am not a fan of tools that interfere with the database recovery plans that may be requisite to properly meet RTO and RPO. At best these phantom backups cause undue headache in troubleshooting. At worst, they make it impossible to recover in the event of a database related disaster. Sadly, the database is regarded as the red-headed step child and these tools ignore best practices for the database world. Forcing a database server to fit in the same mold as a file or print server is flat out absurd and these tools make no effort to educate the sysadmins of the piss-poor practices they are forcing down the throat of your database server.

In this article I showed various tools in the command line as well as within SQL Server to troubleshoot these problematic backups. One of the tools was an XEvent Session. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

Life Support 2008 – Audit Logons

With SQL Server 2008 and 2008R2 limping along and becoming terminally ill (End of Life Support was July 9, 2019), it should come as no surprise that it is recommended to migrate/upgrade affected databases/servers to newer technology.

Planning to upgrade/migrate requires a fair amount of prep work. Some of that prep work involves auditing your server for any users that may still be using the instance.

Where does one even begin in order to audit those logon events in SQL 2008 or 2008R2? Some may say to use a SQL Server Audit, but that feature is an Enterprise-only feature for 2008 and R2. If we were on a newer version of SQL Server, Extended Events would be the easy choice. Unfortunately, XE was not adequately mature in 2008 or R2; it just doesn't have the appropriate events to use in those versions of SQL Server. What's left?

How about a server side trace? Yes, that was a bit difficult to say. Server side trace does have just the right events for us to use to discover who is using the server and which principals can be eradicated. A big benefit here is that a server side trace does not come with a service outage as would be required for other methods.

Server Side Trace

How do we go about creating a trace to capture these events? It is common knowledge that using a script to create a server side trace is not very intuitive. It is also not very plausible to run a Profiler session and leave it running for weeks while you do your due diligence. There is a shortcut available that allows us to run a server side trace but it does require the use of Profiler – for just a tiny bit. You can start here to find how to do that.

Great, we have a method to create the script. What needs to go into this session? Let’s take a look at that. Ignoring the initial steps to start a profiler session (other than to use a blank template), let’s jump to the event selection tab. From there, we will make two selections (Logon and Login Failed in the Security Audit section) as shown in the next image.

Once the events are selected, go ahead and click run. From there you may stop the session and then continue with the instructions from the previous link on how to script a profiler session.

After scripting this session, I get a script that looks like the following.

I created this script from SSMS 18.0 and find it interesting that the script says “Created by: SQL Server 2019 CTP2.4 Profiler”. Despite the very recent version of SSMS used to create this script, this script will work perfectly fine on SQL Server 2008 or R2.

Once I start the trace on a server, I am ready to do just a little bit more. I want to verify what this script means. After all, it is a bunch of numeric values. Let's look at that with this next query and then compare it to the initial script used to create the trace. The following query requires SQL 2012 or later.
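
A sketch of the kind of query that does this mapping (not necessarily the exact one referenced above, and the trace id is an assumption; use the id returned when the trace was created):

DECLARE @traceid int = 2;   -- assumed trace id; substitute the id from sp_trace_create

SELECT  t.id     AS trace_id,
        te.name  AS event_name,
        tc.name  AS column_name
FROM sys.traces AS t
CROSS APPLY sys.fn_trace_geteventinfo(t.id) AS gei
JOIN sys.trace_events  AS te ON te.trace_event_id  = gei.eventid
JOIN sys.trace_columns AS tc ON tc.trace_column_id = gei.columnid
WHERE t.id = @traceid
ORDER BY te.name, tc.name;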

This produces output similar to the following.

As you scroll through the list, you can see the Event Name along with the column name for all events/columns that have been added to the trace. I have highlighted the first event / column mappings to illustrate this relationship.

Cool! Now, I know the session does indeed contain the requisite data that I wanted so it is time to start checking to see what is happening on the server.

Now you are all set to go to start figuring out which logins (if any) are still connecting to the server and using databases on that server.
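
When it comes time to review the collected data, fn_trace_gettable will read the trace files back as a rowset; a sketch along these lines (the file path is an assumption, so substitute the path used when the trace was scripted):

SELECT  t.LoginName,
        t.HostName,
        t.ApplicationName,
        t.DatabaseName,
        t.EventClass,          -- 14 = Audit Login, 20 = Audit Login Failed
        COUNT(*)         AS occurrences,
        MAX(t.StartTime) AS last_seen
FROM sys.fn_trace_gettable(N'C:\Traces\AuditLogons.trc', DEFAULT) AS t
GROUP BY t.LoginName, t.HostName, t.ApplicationName, t.DatabaseName, t.EventClass
ORDER BY t.LoginName, last_seen DESC;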

The Wrap

This article has shown how to audit the logon events for a SQL 2008/R2 instance. It also happens to be an excellent follow up to another recent article – here. As you begin to plan your migration off of the SQL 2008 dinosaur, it is essential to baseline the activity and use of the server. This audit session will help you do exactly that!

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Finding Data Gaps


Quality data is a goal we all strive to attain. Nobody wants to have bad data. Yet, despite our best efforts, the problem of bad data has its way of presenting itself over and over again.

One fairly common indicator of bad data is an absence of data. Maybe you have a table in the database with what is expected to be contiguous data, yet somehow there appears to be an absence of data here or there. Maybe that gap in the contiguous data hasn't even been noticed.

This phenomenon is also known as the gaps and islands issue in the data realm. How do you know if you have an issue or not? This article will help you with a sample query that can find those gaps of missing data. From there, your journey is just beginning.

Finding the Gaps

I have this database that I use for mentoring DBAs of various levels (help desk on up). In the database, I have a schema dedicated to daily trivia. Since the intent is for it to be a daily trivia question, I expect to have a question for every day during the time interval in question. Every now and again, however, I forget to upload a question or I mistakenly believe I have already loaded a question for the day. Typically, I try to do this in advance so as to not be scrambling at the last minute to get a question built for the email to pick it up.

Due to the factors just explained, I end up with gaps in my daily question data. The question now becomes how to find that I have gaps. I have a couple of queries here that will save you plenty of time if you follow the same principles applied here.

First let’s start with a bit of a tricky version of the query that builds a numbers table and then spits out all of the dates that are missing a question (only for the range of data within the table that is).
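
A sketch of that approach, assuming a hypothetical Trivia.Questions table with one row per QuestionDate, might look like this:

DECLARE @StartDate date, @EndDate date;

SELECT @StartDate = MIN(QuestionDate),
       @EndDate   = MAX(QuestionDate)
FROM Trivia.Questions;          -- hypothetical schema/table name

;WITH Nums AS (
    -- dynamic numbers table: one row per day in the range
    SELECT TOP (DATEDIFF(DAY, @StartDate, @EndDate) + 1)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS N
    FROM sys.all_columns AS ac1
    CROSS JOIN sys.all_columns AS ac2
)
SELECT DATEADD(DAY, n.N, @StartDate) AS MissingQuestionDate
FROM Nums AS n
LEFT JOIN Trivia.Questions AS q
    ON q.QuestionDate = DATEADD(DAY, n.N, @StartDate)
WHERE q.QuestionDate IS NULL
ORDER BY MissingQuestionDate;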

This particular numbers table trick has been around for quite a while and runs fast enough. However, if you are a performance nut, and for this particular case, there is a method that is a wee bit better (usually the dynamic numbers table just shown is sufficiently fast).
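
That alternative looks something like the following sketch (same hypothetical table, with the day-of-week output mentioned below added to the results):

DECLARE @StartDate date, @EndDate date;

SELECT @StartDate = MIN(QuestionDate),
       @EndDate   = MAX(QuestionDate)
FROM Trivia.Questions;

;WITH Dates AS (
    SELECT @StartDate AS TheDate
    UNION ALL
    SELECT DATEADD(DAY, 1, TheDate)
    FROM Dates
    WHERE TheDate < @EndDate
)
SELECT d.TheDate AS MissingQuestionDate,
       DATENAME(WEEKDAY, d.TheDate) AS MissingDayOfWeek
FROM Dates AS d
LEFT JOIN Trivia.Questions AS q
    ON q.QuestionDate = d.TheDate
WHERE q.QuestionDate IS NULL
ORDER BY d.TheDate
OPTION (MAXRECURSION 0);        -- allow ranges longer than the default 100 days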

This method is the much maligned recursive CTE method. In my testing it runs consistently faster with a lower memory grant but does cause a bit more IO to be performed. Some trade-off to be considered there. Both queries are returning the desired data-set which happens to be my missing question days. Only, I have added an extra output in the second query to let me know the day of the week that the missing question occurred on. Maybe I forgot to enter it because it was a weekend day or maybe I opted to not create one at all because the day lands on a Holiday. Let’s take a small peek at the results.

That is a pretty good swath of missing data there. If you look closely, not all days from November 10th through December 1st are missing, just a handful. Alas, it looks like I have some work cut out for me because I have some weekdays, weekends, and Holidays in there as well as nearly the entire month of December. Finding the gaps is the easy part once you have a solid query to start with and I believe this query is just the right tool for that.

What if I wanted to check for a rolling period of time instead of just checking for valid data between a min and max range of data already in the table? That is easy enough as well with some minor tweaks. Let’s look at one more query option here.
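
Here is a sketch of that variation, using a table value constructor to build the numbers and checking a rolling year back from today:

DECLARE @EndDate   date = CAST(GETDATE() AS date),
        @StartDate date = DATEADD(YEAR, -1, CAST(GETDATE() AS date));

;WITH Ones(N) AS (
    SELECT N FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS v(N)
),
Nums(N) AS (
    SELECT (h.N * 100) + (t.N * 10) + o.N       -- 0 through 999, plenty for a year
    FROM Ones AS o
    CROSS JOIN Ones AS t
    CROSS JOIN Ones AS h
)
SELECT DATEADD(DAY, n.N, @StartDate) AS MissingQuestionDate,
       DATENAME(WEEKDAY, DATEADD(DAY, n.N, @StartDate)) AS MissingDayOfWeek
FROM Nums AS n
LEFT JOIN Trivia.Questions AS q
    ON q.QuestionDate = DATEADD(DAY, n.N, @StartDate)
WHERE DATEADD(DAY, n.N, @StartDate) <= @EndDate
  AND q.QuestionDate IS NULL
ORDER BY MissingQuestionDate;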

This third code snippet illustrates a slight variation over the previous two in that it utilizes the Table Value Constructor to build the numbers table. All in all, it runs with blazing high speed and produces the results I am seeking – but for a rolling year instead of a static range of data.

The Wrap

Gaps and islands will occur in your data. When it does, you have your choice of three high performing scripts to help you investigate where the gaps and islands exist in your data.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Event Files on Linux

TSQL Tuesday

The second Tuesday of the month is upon us once again. That means it is time for another group blog party called TSQLTuesday. This party that was started by Adam Machanic has now been going for long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a nice roundup of all TSQLTuesdays over here.

This month, just about every SQL Server Data professional is being asked to get out of their comfort zone and talk about Linux (there are some out there that are very comfortable with Linux, but the vast majority are not 😉 ).

This is a topic that is near and dear to the heart of our organizer Tracy Boggiano (b | t). In Tracy's invite, she says "While I know it takes a while to adopt new technologies, I was wondering what it would take for people to adopt SQL on Linux. Alternatively, I'm offering up for you to blog about what everyone should know when working with SQL on Linux or anything else related to SQL running on Linux."

That pretty much leaves the door wide open, right?

Event Files

For the most part, things work the way you might expect them to work in Windows – except it is on Linux. Sure, some things are different, but SQL Server itself is largely the same. That similarity, for the most part, boils all the way down into Extended Events (XEvents) as well. There is one significant divergence, however, and that is when it comes to specifying your event_file target for a new session. Let's dive into that just a bit.

Let’s take a common setup for an XEvent session.
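
A sketch of such a setup, written the way one might write it on Windows (the session name and path are illustrative; the real session presumably filtered the wait_info event to the PREEMPTIVE_OS_PIPEOPS wait type, which I have left out to keep the sketch version-neutral):

EXECUTE master.dbo.xp_create_subdir N'C:\Database\XE';
GO
CREATE EVENT SESSION [PreemptiveOSPipeOps] ON SERVER
ADD EVENT sqlos.wait_info (
    ACTION (sqlserver.sql_text, sqlserver.session_id)
)
ADD TARGET package0.event_file (
    SET filename = N'C:\Database\XE\PREEMPTIVE_OS_PIPEOPS_0.xel'
)
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [PreemptiveOSPipeOps] ON SERVER STATE = START;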

This will fail before the query really even gets out of the gate. Why? The proc xp_create_subdir cannot create the directory because it requires elevated permissions. The fix for that is easy enough – grant permissions to write to the Database directory after creating it while in sudo mode. I will get to that in just a bit. Let’s see what the errors would look like for now.

Msg 22048, Level 16, State 1, Line 15
xp_create_subdir() returned error 5, ‘Access is denied.’
Msg 25602, Level 17, State 23, Line 36
The target, “5B2DA06D-898A-43C8-9309-39BBBE93EBBD.package0.event_file”, encountered a configuration error during initialization. Object cannot be added to the event session. The operating system returned error 5: ‘Access is denied.
‘ while creating the file ‘C:\Database\XE\PREEMPTIVE_OS_PIPEOPS_0_132072025269680000.xel’.

Let’s resolve the folder issue. I will create the Database folder (from a console session on the server), and then I will take ownership of that folder.

From there, it is also advisable to grant permissions on this folder to the SQL group via the chgrp command. Once done, re-running the entire session creation will magically work – including that Windows-based create subdir proc.

Alternative #2

Maybe we just want to do things via the default method. If so, we can do this and it just works.
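
A sketch of the default approach: leave the path off of the filename entirely, and the file lands in the instance's default log directory (/var/opt/mssql/log on Linux):

CREATE EVENT SESSION [PreemptiveOSPipeOps_Default] ON SERVER
ADD EVENT sqlos.wait_info (
    ACTION (sqlserver.sql_text, sqlserver.session_id)
)
ADD TARGET package0.event_file (
    SET filename = N'PreemptiveOSPipeOps_Default.xel'    -- no path, so the default log directory is used
)
WITH (STARTUP_STATE = ON);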

Alternative #3

And of course, there is always this option. Unfortunately, this means keeping multiple session scripts in source control in order to maintain the differences between Windows and Linux as illustrated in these two examples.
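
That option is simply hard-coding a Linux-style path; a sketch of it (the directory is an assumption based on the folder created earlier):

CREATE EVENT SESSION [PreemptiveOSPipeOps_Linux] ON SERVER
ADD EVENT sqlos.wait_info (
    ACTION (sqlserver.sql_text, sqlserver.session_id)
)
ADD TARGET package0.event_file (
    SET filename = N'/Database/XE/PREEMPTIVE_OS_PIPEOPS_0.xel'   -- assumed Linux path for the folder created above
)
WITH (STARTUP_STATE = ON);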

With all three sessions now on the server, I can query my target paths to confirm the correct paths have been used.
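
A query along these lines will do that (a sketch using the session metadata views):

SELECT  es.name   AS session_name,
        est.name  AS target_name,
        esf.name  AS field_name,
        esf.value AS configured_path
FROM sys.server_event_sessions AS es
JOIN sys.server_event_session_targets AS est
    ON est.event_session_id = es.event_session_id
JOIN sys.server_event_session_fields AS esf
    ON esf.event_session_id = es.event_session_id
   AND esf.object_id = est.target_id
WHERE est.name = N'event_file'
  AND esf.name = N'filename';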

This produces the following results for me.

Perfect, everything looks to be working and properly configured. Well, except for that session that is using the default directory – gag.

Wrapping it Up

Extended Events is a powerful tool with plenty of ease of use and flexibility. This flexibility allows the DBA to capably monitor the server for any issue, be it small or large. This article demonstrated how to create an Event Session on SQL Server that is running on Linux. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

Is That Database in Use?

Does anybody know who or what is using this database? Better yet, is the database being used by anything at all? Have you ever asked those types of questions of your staff in an environment you have inherited?

As silly as it might sound, these are legitimate questions. It is not uncommon to have a relic database hanging around after an application has long since been retired. The database just continues to live on via life support in the dark recesses of your database server. Everybody has forgotten about it yet it consumes precious resources to do little more than exist.

But how do you go about discovering if the database is truly required or if it is just chewing up space and CPU cycles? The answer is to start by discovering if there is any activity in the database at all. Extended Events is an ideal method to audit your database and discover if there is truly activity on the database or not.

DB In Use?

Depending on the version of SQL Server you are using, the method is going to be a little bit different. Each method I use still relies on Extended Events (XE), they just require some slight changes and subtle nuances (particularly for those pesky 2008 and R2 instances out there). Let’s focus on the 2008 method this time around and we will revisit the updated version in a later article.

2008/R2

Let’s first start with the XE session that works well for discovering the frequency of use a database may be encountering.
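
A sketch of such a session (the session name is illustrative, and the inline comments mirror the notes discussed next):

CREATE EVENT SESSION [DBInUse] ON SERVER
ADD EVENT sqlserver.database_transaction_begin (   -- fires when a database transaction starts
    ACTION (
        sqlserver.database_id,
        sqlserver.database_context   -- deprecated in later versions; included purely for extra exploration
    )
)
ADD TARGET package0.asynchronous_bucketizer (
    /* for an action-based source on 2008/R2, the source must be referenced in
       two-part naming format: package.action */
    SET slots = 32,
        filtering_event_name = N'sqlserver.database_transaction_begin',
        source_type = 1,
        source = N'sqlserver.database_id'
)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [DBInUse] ON SERVER STATE = START;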

Inline in the script, I left some interesting notes. First, the note in the target section is of particular interest because it lays out a requirement for this kind of target in 2008/R2. When using the action source type, the package name must be specified along with the action in two-part naming format.

Next, in the actions, I have a note about sqlserver.database_context. This particular action is deprecated in later versions. Do I need it in this session? No. I put it in there for additional troubleshooting/exploration.

Lastly, I have a note about the event name. I chose database_transaction_begin because this is fired when a database transaction starts. I don’t really care if the transaction completes. I just want to know if an attempt was made to use that database and the data therein.

If I comment out the deprecated action (database_context), I could actually use this session on later versions (as constructed) of SQL Server. It doesn’t matter that the asynchronous_bucketizer has been replaced by the histogram target, the session will still create and work properly. SQL Server knows to automatically update the deprecated targets with the appropriate target when creating an XE Session on a server.

Since this is 2008/R2, we have to use TSQL in order to parse the data. The following query will do that for us!
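
A sketch of that shredding query (on 2012 and later, the target is named histogram and the XML root node is HistogramTarget rather than BucketizerTarget):

SELECT  DB_NAME(xed.slot_data.value('(value)[1]', 'int')) AS database_name,
        xed.slot_data.value('(@count)[1]', 'int')         AS use_count
FROM (
    SELECT CAST(st.target_data AS xml) AS target_data
    FROM sys.dm_xe_sessions AS s
    JOIN sys.dm_xe_session_targets AS st
        ON st.event_session_address = s.address
    WHERE s.name = N'DBInUse'
      AND st.target_name = N'asynchronous_bucketizer'
) AS tgt
CROSS APPLY tgt.target_data.nodes('BucketizerTarget/Slot') AS xed(slot_data)
ORDER BY use_count DESC;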

After executing that query, I will see results listing the activity of each database since the session was started. Here’s an example.

Here we can see there are indeed some databases that are still in use on this server. If we are looking to retire the instance, or migrate relevant databases to a new server, we have better information about how to go about planning that work. Based on this data, I would be able to retire the ProtossZealot database but would be required to do something with the ZergRush and BroodWar databases. In the case of the ProtossZealot database that is not being used, we now have evidence to present back to the team that the database is not used. It is now up to those insisting on keeping it around to justify its existence and document why it must remain intact.

Wrapping it Up

Extended Events is a powerful tool with plenty of ease of use and flexibility. This flexibility allows the DBA to capably monitor the server for any issue, be it small or large. This article demonstrates how to use Extended Events to determine if a database is being used by someone or something. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

In addition, through the power of XE, we were able to capture a quick glimpse into some internals related to the database engine. In this case, we were able to see autoshrink in action and discover the frequency that it runs on when enabled.

Join a List of Values


There are multiple ways of retrieving data from a table when you have a list of values. Sometimes, the method is straight-forward and sometimes we just can’t see the right solution.

Some of the methods to retrieve data from a table (based on a list of values) might include one or more of the following: a join to another table, throwing an IN clause at it, dumping the values into a temp table, sub-queries, and a table value constructor.

Let’s focus on the table value constructor. This one seems to trip a few people up from time to time. First off, what is it?

Table Value Constructor

The table value constructor is basically a virtual table, not too different from a CTE or a subquery (in that they are all virtual tables of sorts). The table value constructor, however, can be combined with either of those other constructs; it is a set of row expressions that get put into this virtual table in a single DML statement.

So, this virtual table contains a set of values that we define. Let’s go ahead and take a look at an example set of data that we can cultivate into a table value constructor.
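
As a trivial illustration (the values themselves are hypothetical), a table value constructor can be queried directly, just like a table:

SELECT v.SearchTerm
FROM (VALUES (N'backup'), (N'restore'), (N'checkpoint'), (N'recovery')) AS v(SearchTerm);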

Now, let’s say that I want to get all of the trace events that might correlate to those specific values, all I need to do is add this list of values to a query in the appropriate place. First, let’s see what it looks like when I add it via a CTE.
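
A sketch of that CTE version, reusing the hypothetical search terms above and joining them to the trace event names:

;WITH SearchTerms(SearchTerm) AS (
    SELECT v.SearchTerm
    FROM (VALUES (N'backup'), (N'restore'), (N'checkpoint'), (N'recovery')) AS v(SearchTerm)
)
SELECT  te.trace_event_id,
        te.name AS event_name,
        st.SearchTerm
FROM sys.trace_events AS te
JOIN SearchTerms AS st
    ON te.name LIKE N'%' + st.SearchTerm + N'%'
ORDER BY te.name;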

In this case, I have thrown the table value constructor into a CTE. There could be a few good reasons to do it this way such as it being easier to read or maybe some corporate style policy. For me, I like the readability and the tight grouping of the join clauses without needing to scroll. Running the query in this case would produce results similar to the following.

Now, what if I really prefer to not use a CTE? Easy enough, let’s look at this next example. I will use the same list of values as I search for various wait_types.
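
Here is a sketch of that form, with the table value constructor sitting directly in the FROM clause and joined to the wait stats DMV:

SELECT  ws.wait_type,
        ws.waiting_tasks_count,
        ws.wait_time_ms
FROM (VALUES (N'backup'), (N'restore'), (N'checkpoint'), (N'recovery')) AS v(SearchTerm)
JOIN sys.dm_os_wait_stats AS ws
    ON ws.wait_type LIKE N'%' + v.SearchTerm + N'%'
ORDER BY ws.wait_time_ms DESC;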

Notice here that I have a little bit of a different query. I only have one table to join to after the table value constructor. That said, it works just the same as the CTE example. Once the data is constructed, I alias it like a table, then I define the column names. This query will now return results similar to the following.

Both of these examples have been pretty easy, each building a single-attribute list for the table value constructor. What if we had the need for multiple columns in our list of data?
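
A sketch of the multi-column version (the second column is a made-up category just to illustrate the shape):

SELECT  ws.wait_type,
        v.SearchTerm,
        v.Category,
        ws.wait_time_ms
FROM (VALUES (N'backup',     N'IO'),
             (N'restore',    N'IO'),
             (N'checkpoint', N'IO'),
             (N'recovery',   N'Engine')) AS v(SearchTerm, Category)
JOIN sys.dm_os_wait_stats AS ws
    ON ws.wait_type LIKE N'%' + v.SearchTerm + N'%'
ORDER BY ws.wait_time_ms DESC;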

Notice the addition of an extra value (separated by a comma) within each set of parentheses and the additional column definition added after the alias for the table value constructor. This is a pretty simple modification. Upon execution, I can expect results such as the following.

The Wrap

Being able to take a list of data and construct it into a virtual table for further querying and data analysis is a pretty cool thing to do. The setup and syntax are fairly simple – it just takes a bit of practice and effort.

This article demonstrated three different examples of using a table value constructor to join a list of data to other tables. See what other kinds of examples you can come up with and share them.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

It’s an Honor: Microsoft MVP


Today (July 1st) is the notification day for all Microsoft MVPs. Microsoft changed it a couple of cycles back so that everyone is notified on the same day, making it a single emotional day for all candidates. I received my notification early this morning that I was awarded the MVP again for the 2019-2020 cycle. This is my 6th award overall.

I am humbled to be among so many great professionals and honored at the same time for having been chosen. Are you curious about this MVP thing? Well, it's sort of like being named MVP for a sports league while at the same time being a little bit different. Here is what Microsoft has to say about the MVP. I personally think you may get more out of reading about the MVP here (a little bias might be involved).

This is a community award for your efforts to promote and help others work with Microsoft products. I was awarded for my efforts in the Data Platform. Many others receive the award for their efforts in AI, Powershell, or Azure (as examples). Generally, the people that are awarded are seen as being passionate about the Microsoft Product. These people spend a lot of time, effort, energy, and sometimes money to help share their enthusiasm for the product.

Microsoft is doing a fantastic job at recognizing people for their efforts in promoting the MS brand. Not everybody that promotes that brand gets to be an MVP – sadly. If you didn’t make it this time, keep trying. I just advise that the goal be more about the community than the award. The MVP award is great, but giving to the community is the real reward.

I want to congratulate all those awarded a Microsoft MVP award past, present and future. I also want to thank all of those who have given to the community regardless of whether an award was received or not. There are many of you out there that have been a positive influence whether you know it or not – just for giving to the community and helping somebody else. THANK YOU!
