Life Support 2008 – Audit Logons

With SQL Server 2008 and 2008R2 limping along and becoming terminally ill (End of Life Support was July 9, 2019), it should come as no surprise that it is recommended to migrate/upgrade affected databases/servers to newer technology.

Planning to upgrade/migrate requires a fair amount of prep work. Some of that prep work involves auditing your server for any users that may still be using the instance.

Where does one even begin in order to audit those logon events in SQL 2008 or 2008R2? Some may say to use SQL Server Audit, but that is an Enterprise-only feature in 2008 and R2. If we were on a newer version of SQL Server, Extended Events would be the easy choice. Unfortunately, XE was not adequately mature in 2008 and R2 – it simply lacks the appropriate events in those versions of SQL Server. What's left?

How about a server side trace? Yes, that was a bit difficult to say. Server side trace does have just the right events for us to use to discover who is using the server and which principals can be eradicated. A big benefit here is that a server side trace does not come with a service outage as would be required for other methods.

Server Side Trace

How do we go about creating a trace to capture these events? It is common knowledge that using a script to create a server side trace is not very intuitive. It is also not very plausible to run a Profiler session and leave it running for weeks while you do your due diligence. There is a shortcut available that allows us to run a server side trace but it does require the use of Profiler – for just a tiny bit. You can start here to find how to do that.

Great, we have a method to create the script. What needs to go into this session? Let’s take a look at that. Ignoring the initial steps to start a profiler session (other than to use a blank template), let’s jump to the event selection tab. From there, we will make two selections (Logon and Login Failed in the Security Audit section) as shown in the next image.

Once the events are selected, go ahead and click run. From there you may stop the session and then continue with the instructions from the previous link on how to script a profiler session.

After scripting this session, I get a script that looks like the following.
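Your generated script will vary with the columns you selected; a sketch of what it looks like follows (the output path is a stand-in; event 14 is Audit Login and event 20 is Audit Login Failed):

```sql
-- Profiler stamps a "Created by: ..." header at the top of the generated script
DECLARE @rc int, @TraceID int, @maxfilesize bigint, @on bit;
SET @maxfilesize = 5;  -- 5 MB per rollover file
SET @on = 1;

-- Option 2 = TRACE_FILE_ROLLOVER; SQL Server appends the .trc extension
EXEC @rc = sp_trace_create @TraceID OUTPUT, 2, N'C:\Database\AuditLogons', @maxfilesize, NULL;
IF (@rc != 0)
BEGIN
    RAISERROR('sp_trace_create failed with code %d', 16, 1, @rc);
    RETURN;
END;

/* Columns: 1 TextData, 6 NTUserName, 8 HostName, 10 ApplicationName,
   11 LoginName, 12 SPID, 14 StartTime, 35 DatabaseName */
EXEC sp_trace_setevent @TraceID, 14, 1, @on;
EXEC sp_trace_setevent @TraceID, 14, 6, @on;
EXEC sp_trace_setevent @TraceID, 14, 8, @on;
EXEC sp_trace_setevent @TraceID, 14, 10, @on;
EXEC sp_trace_setevent @TraceID, 14, 11, @on;
EXEC sp_trace_setevent @TraceID, 14, 12, @on;
EXEC sp_trace_setevent @TraceID, 14, 14, @on;
EXEC sp_trace_setevent @TraceID, 14, 35, @on;
EXEC sp_trace_setevent @TraceID, 20, 1, @on;
EXEC sp_trace_setevent @TraceID, 20, 6, @on;
EXEC sp_trace_setevent @TraceID, 20, 8, @on;
EXEC sp_trace_setevent @TraceID, 20, 10, @on;
EXEC sp_trace_setevent @TraceID, 20, 11, @on;
EXEC sp_trace_setevent @TraceID, 20, 12, @on;
EXEC sp_trace_setevent @TraceID, 20, 14, @on;
EXEC sp_trace_setevent @TraceID, 20, 35, @on;

EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace
SELECT TraceID = @TraceID;
```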

I created this script from SSMS 18.0 and find it interesting that the script says “Created by: SQL Server 2019 CTP2.4 Profiler”. Despite the very recent version of SSMS used to create this script, this script will work perfectly fine on SQL Server 2008 or R2.

Once I start the trace on a server, I am ready to do just a little bit more. I want to verify what this script means. After all, it is a bunch of numeric values. Let's look at that with this next query and then compare it to the initial script used to create the trace. The following query requires SQL 2012 or later.
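A minimal sketch of that verification query looks like this – it simply joins the trace metadata views to translate the numbers into names:

```sql
-- Map the numeric event/column IDs in the trace definition to readable names
SELECT t.id AS TraceID,
       te.name AS EventName,
       tc.name AS ColumnName
FROM sys.traces t
CROSS APPLY sys.fn_trace_geteventinfo(t.id) tei
INNER JOIN sys.trace_events te
    ON tei.eventid = te.trace_event_id
INNER JOIN sys.trace_columns tc
    ON tei.columnid = tc.trace_column_id
WHERE t.is_default = 0  -- skip the default trace
ORDER BY te.name, tc.name;
```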

This produces output similar to the following.

As you scroll through the list, you can see the Event Name along with the column name for all events/columns that have been added to the trace. I have highlighted the first event / column mappings to illustrate this relationship.

Cool! Now, I know the session does indeed contain the requisite data that I wanted so it is time to start checking to see what is happening on the server.

Now you are all set to go to start figuring out which logins (if any) are still connecting to the server and using databases on that server.

The Wrap

This article has shown how to audit the logon events for a SQL 2008/R2 instance. It also happens to be an excellent follow-up to another recent article – here. As you begin to plan your migration off of the SQL 2008 dinosaur, it is essential to baseline the activity and use of the server. This audit session will help you do exactly that!

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Finding Data Gaps

Categories: News, Professional, Security, SSC
Published on: July 11, 2019

Quality data is a goal we all strive to attain. Nobody wants to have bad data. Yet, despite our best efforts, bad data has a way of presenting itself over and over again.

One fairly common indicator of bad data is an absence of data. Maybe you have a table in the database that is expected to hold contiguous data, yet here and there some of that data is missing. Worse, the gap may not have been noticed at all.

This phenomenon is also known as the gaps and islands issue in the data realm. How do you know if you have an issue or not? This article will help you with a sample query that can find those gaps of missing data. From there, your journey is just beginning.

Finding the Gaps

I have this database that I use for mentoring DBAs of various levels (help desk on up). In the database, I have a schema dedicated to daily trivia. Since the intent is a daily trivia question, I expect to have a question for every day during the interval in question. Every now and again, however, I forget to upload a question or I mistakenly believe I have already loaded one for the day. Typically, I try to do this in advance so as to not be scrambling at the last minute to get a question built for the email to pick it up.

Due to the factors just explained, I end up with gaps in my daily question data. The question now becomes how to find that I have gaps. I have a couple of queries here that will save you plenty of time if you follow the same principles applied here.

First let’s start with a bit of a tricky version of the query that builds a numbers table and then spits out all of the dates that are missing a question (only for the range of data within the table that is).
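Here is a sketch of that first approach. The Trivia.DailyQuestion table and its QuestionDate column are stand-in names for illustration – swap in your own table.

```sql
DECLARE @StartDate date, @EndDate date;

SELECT @StartDate = MIN(QuestionDate), @EndDate = MAX(QuestionDate)
FROM Trivia.DailyQuestion;

WITH Tally (N) AS
(
    -- generate one number per day in the range, starting at 0
    SELECT TOP (DATEDIFF(DAY, @StartDate, @EndDate) + 1)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
)
SELECT DATEADD(DAY, t.N, @StartDate) AS MissingDate
FROM Tally t
LEFT OUTER JOIN Trivia.DailyQuestion q
    ON q.QuestionDate = DATEADD(DAY, t.N, @StartDate)
WHERE q.QuestionDate IS NULL
ORDER BY MissingDate;
```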

This particular numbers table trick has been around for quite a while and runs fast enough. However, if you are a performance nut, there is a method for this particular case that is a wee bit better (usually the dynamic numbers table just shown is sufficiently fast).
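Here is a sketch of that alternative, using the same stand-in table names:

```sql
DECLARE @StartDate date, @EndDate date;

SELECT @StartDate = MIN(QuestionDate), @EndDate = MAX(QuestionDate)
FROM Trivia.DailyQuestion;

WITH Dates AS
(
    SELECT @StartDate AS TheDate
    UNION ALL
    SELECT DATEADD(DAY, 1, TheDate)
    FROM Dates
    WHERE TheDate < @EndDate
)
SELECT d.TheDate AS MissingDate,
       DATENAME(WEEKDAY, d.TheDate) AS MissingDayName  -- the extra output
FROM Dates d
LEFT OUTER JOIN Trivia.DailyQuestion q
    ON q.QuestionDate = d.TheDate
WHERE q.QuestionDate IS NULL
ORDER BY d.TheDate
OPTION (MAXRECURSION 0);  -- lift the default 100-level recursion cap
```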

This method is the much maligned recursive CTE method. In my testing it runs consistently faster with a lower memory grant, but it does cause a bit more IO to be performed – a trade-off to be considered. Both queries return the desired data set, which happens to be my missing question days. Only, I have added an extra output in the second query to tell me the day of the week each missing question fell on. Maybe I forgot to enter it because it was a weekend day, or maybe I opted to not create one at all because the day lands on a holiday. Let's take a small peek at the results.

That is a pretty good swath of missing data there. If you look closely, not all days from November 10th through December 1st are missing, just a handful. Alas, it looks like I have my work cut out for me because the gaps include weekdays, weekends, and holidays, as well as nearly the entire month of December. Finding the gaps is the easy part once you have a solid query to start with, and I believe this query is just the right tool for that.

What if I wanted to check for a rolling period of time instead of just checking for valid data between a min and max range of data already in the table? That is easy enough as well with some minor tweaks. Let’s look at one more query option here.
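Here is a sketch of that variation, once again with the same stand-in table names:

```sql
DECLARE @EndDate date = GETDATE();
DECLARE @StartDate date = DATEADD(YEAR, -1, @EndDate);

WITH Tally (N) AS
(
    SELECT TOP (DATEDIFF(DAY, @StartDate, @EndDate) + 1)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
    FROM (VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
                 (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) a (n)
    CROSS JOIN (VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
                       (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) b (n)  -- 400 rows, > 366
)
SELECT DATEADD(DAY, t.N, @StartDate) AS MissingDate,
       DATENAME(WEEKDAY, DATEADD(DAY, t.N, @StartDate)) AS MissingDayName
FROM Tally t
LEFT OUTER JOIN Trivia.DailyQuestion q
    ON q.QuestionDate = DATEADD(DAY, t.N, @StartDate)
WHERE q.QuestionDate IS NULL
ORDER BY MissingDate;
```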

This third code snippet illustrates a slight variation over the previous two in that it utilizes the Table Value Constructor to build the numbers table. All in all, it runs with blazing speed and produces the results I am seeking – but for a rolling year instead of a static range of data.

The Wrap

Gaps and islands will occur in your data. When they do, you have your choice of three high performing scripts to help you investigate where the gaps and islands exist in your data.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Event Files on Linux

TSQL Tuesday

The second Tuesday of the month is upon us once again. That means it is time for another group blog party called TSQLTuesday. This party, started by Adam Machanic, has been going long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a roundup of all TSQLTuesdays over here.

This month, just about every SQL Server Data professional is being asked to get out of their comfort zone and talk about Linux (there are some out there that are very comfortable with Linux, but the vast majority are not 😉 ).

This is a topic that is near and dear to the heart of our organizer Tracy Boggiano (b | t). In Tracy's invite, she says "While I know it takes a while to adopt new technologies; I was wondering what it would take for people to adopt SQL on Linux. Alternatively I'm offering up for you to blog about what everyone should know when working with SQL on Linux or anything else related to SQL running on Linux."

That pretty much leaves the door wide open, right?

Event Files

For the most part, things work the way you might expect them to work in Windows – except it is on Linux. Sure, some things are different, but SQL Server itself is largely the same. That similarity, for the most part, boils all the way down into Extended Events (XEvents) as well. There is one significant divergence, however, and that is when it comes to specifying your event_file target for a new session. Let's dive into that just a bit.

Let’s take a common setup for an XEvent session.
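Here is a sketch of such a session. The session name and file path line up with the error output below; the event and predicate are merely placeholders for whatever you intend to capture.

```sql
EXECUTE master.dbo.xp_create_subdir 'C:\Database\XE\';
GO

CREATE EVENT SESSION [PREEMPTIVE_OS_PIPEOPS] ON SERVER
ADD EVENT sqlos.wait_info
    (ACTION (sqlserver.sql_text)
     WHERE ([duration] > 0))  -- placeholder predicate
ADD TARGET package0.event_file
    (SET filename = N'C:\Database\XE\PREEMPTIVE_OS_PIPEOPS.xel')
WITH (STARTUP_STATE = ON);
GO
```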

This will fail before the query really even gets out of the gate. Why? The proc xp_create_subdir cannot create the directory because it requires elevated permissions. The fix for that is easy enough – grant permissions to write to the Database directory after creating it while in sudo mode. I will get to that in just a bit. Let’s see what the errors would look like for now.

Msg 22048, Level 16, State 1, Line 15
xp_create_subdir() returned error 5, ‘Access is denied.’
Msg 25602, Level 17, State 23, Line 36
The target, “5B2DA06D-898A-43C8-9309-39BBBE93EBBD.package0.event_file”, encountered a configuration error during initialization. Object cannot be added to the event session. The operating system returned error 5: ‘Access is denied.’ while creating the file ‘C:\Database\XE\PREEMPTIVE_OS_PIPEOPS_0_132072025269680000.xel’.

Let’s resolve the folder issue. I will create the Database folder (from a console session on the server), and then I will take ownership of that folder.
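Here is a sketch of those console steps, assuming the Windows-style C:\ path maps to / on Linux and that the service account is mssql:

```bash
# create the folder and take ownership for the SQL Server service account
sudo mkdir -p /Database/XE
sudo chown mssql /Database /Database/XE
```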

From there, it is also advisable to grant permissions on this folder to the SQL group via the chgrp command (mssql as the group name is an assumption here):
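```bash
sudo chgrp mssql /Database /Database/XE
```

Once done, re-running the entire session creation will magically work – including that Windows-based create subdir proc.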

Alternative #2

Maybe we just want to do things via the default method. If so, we can do this and it just works.
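A sketch of that approach – omit the path and the file lands in the default log directory (/var/opt/mssql/log on Linux):

```sql
CREATE EVENT SESSION [PREEMPTIVE_OS_PIPEOPS_Default] ON SERVER
ADD EVENT sqlos.wait_info
    (ACTION (sqlserver.sql_text)
     WHERE ([duration] > 0))  -- placeholder predicate
ADD TARGET package0.event_file
    (SET filename = N'PREEMPTIVE_OS_PIPEOPS_Default.xel')  -- no path specified
WITH (STARTUP_STATE = ON);
```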

Alternative #3

And of course, there is always this option. Unfortunately, this means keeping multiple session scripts in source control in order to maintain the differences between Windows and Linux as illustrated in these two examples.
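A sketch of that Linux-native variant:

```sql
CREATE EVENT SESSION [PREEMPTIVE_OS_PIPEOPS_Linux] ON SERVER
ADD EVENT sqlos.wait_info
    (ACTION (sqlserver.sql_text)
     WHERE ([duration] > 0))  -- placeholder predicate
ADD TARGET package0.event_file
    (SET filename = N'/Database/XE/PREEMPTIVE_OS_PIPEOPS_Linux.xel')  -- Linux path
WITH (STARTUP_STATE = ON);
```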

With all three sessions now on the server, I can query my target paths to confirm the correct paths have been used.
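A sketch of that check against the session metadata views:

```sql
-- Read the configured file path for each event_file target
SELECT es.name AS SessionName,
       esf.value AS ConfiguredFilename
FROM sys.server_event_sessions es
INNER JOIN sys.server_event_session_targets est
    ON es.event_session_id = est.event_session_id
INNER JOIN sys.server_event_session_fields esf
    ON est.event_session_id = esf.event_session_id
   AND est.target_id = esf.object_id
WHERE est.name = 'event_file'
  AND esf.name = 'filename';
```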

This produces the following results for me.

Perfect, everything looks to be working and properly configured. Well, except for that session that is using the default directory – gag.

Wrapping it Up

Extended Events is a powerful tool with plenty of ease of use and flexibility. This flexibility allows the DBA to capably monitor the server for any issue, be it small or large. This article demonstrated how to create an Event Session on SQL Server that is running on Linux. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

Is That Database in Use?

Does anybody know who or what is using this database? Better yet, is the database being used by anything at all? Have you ever asked those types of questions of your staff in an environment you have inherited?

As silly as it might sound, these are legitimate questions. It is not uncommon to have a relic database hanging around after an application has long since been retired. The database just continues to live on via life support in the dark recesses of your database server. Everybody has forgotten about it yet it consumes precious resources to do little more than exist.

But how do you go about discovering if the database is truly required or if it is just chewing up space and CPU cycles? The answer is to start by discovering if there is any activity in the database at all. Extended Events is an ideal method to audit your database and discover if there is truly activity on the database or not.

DB In Use?

Depending on the version of SQL Server you are using, the method is going to be a little bit different. Each method I use still relies on Extended Events (XE); they just require some slight changes and subtle nuances (particularly for those pesky 2008 and R2 instances out there). Let's focus on the 2008 method this time around and we will revisit the updated version in a later article.

2008/R2

Let’s first start with the XE session that works well for discovering the frequency of use a database may be encountering.
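The following is a sketch of that session, reconstructed from the notes described next (the session name, DBInUse, is a stand-in):

```sql
CREATE EVENT SESSION DBInUse ON SERVER
ADD EVENT sqlserver.database_transaction_begin
(
    ACTION
    (
        sqlserver.database_id,
        sqlserver.database_context,  /* deprecated in later versions; not
                                        required here, kept for exploration */
        sqlserver.sql_text
    )
)
ADD TARGET package0.asynchronous_bucketizer
(
    /* when using the action source type, the action must be specified in
       two-part (package.action) naming format in 2008/R2 */
    SET source_type = 1,  -- 1 = action, 0 = event column
        source = 'sqlserver.database_id'
)
WITH (STARTUP_STATE = ON);
```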

Inline in the script, I left some interesting notes. First, the note in the target section is of particular interest because it lays out a requirement for this kind of target in 2008/R2. When using the action source type, the package name must be specified along with the action in two-part naming format.

Next, in the actions, I have a note about sqlserver.database_context. This particular action is deprecated in later versions. Do I need it in this session? No. I put it in there for additional troubleshooting/exploration.

Lastly, I have a note about the event name. I chose database_transaction_begin because this is fired when a database transaction starts. I don’t really care if the transaction completes. I just want to know if an attempt was made to use that database and the data therein.

If I comment out the deprecated action (database_context), I could actually use this session on later versions (as constructed) of SQL Server. It doesn’t matter that the asynchronous_bucketizer has been replaced by the histogram target; the session will still create and work properly. SQL Server knows to automatically update the deprecated targets with the appropriate target when creating an XE Session on a server.

Since this is 2008/R2, we have to use TSQL in order to parse the data. The following query will do that for us!
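Here is a sketch of that query (the session name matches the sketch above):

```sql
-- Shred the bucketizer XML into per-database transaction counts
SELECT d.name AS DatabaseName,
       slot.value('./@count', 'bigint') AS TransactionCount
FROM
(
    SELECT CONVERT(xml, st.target_data) AS target_data
    FROM sys.dm_xe_sessions s
    INNER JOIN sys.dm_xe_session_targets st
        ON s.address = st.event_session_address
    WHERE s.name = 'DBInUse'
      AND st.target_name = 'asynchronous_bucketizer'
) AS tab
CROSS APPLY target_data.nodes('//Slot') bucket (slot)
INNER JOIN sys.databases d
    ON d.database_id = slot.value('./value[1]', 'int')
ORDER BY TransactionCount DESC;
```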

After executing that query, I will see results listing the activity of each database since the session was started. Here’s an example.

Here we can see there are indeed some databases that are still in use on this server. If we are looking to retire the instance, or migrate relevant databases to a new server, we have better information about how to go about planning that work. Based on this data, I would be able to retire the ProtossZealot database but would be required to do something with the ZergRush and BroodWar databases. In the case of the ProtossZealot database that is not being used, we now have evidence to present back to the team that the database is not used. It is now up to those insisting on keeping it around to justify its existence and document why it must remain intact.

Wrapping it Up

Extended Events is a powerful tool with plenty of ease of use and flexibility. This flexibility allows the DBA to capably monitor the server for any issue, be it small or large. This article demonstrates how to use Extended Events to determine if a database is being used by someone or something. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

In addition, through the power of XE, we were able to capture a quick glimpse into some internals related to the database engine – in this case, evidence of exactly which databases are (and are not) being used, and how frequently.

Join a List of Values

Published on: July 2, 2019

There are multiple ways of retrieving data from a table when you have a list of values. Sometimes the method is straightforward, and sometimes we just can't see the right solution.

Some of the methods to retrieve data from a table (based on a list of values) might include one or more of the following: a join to another table, throwing an IN clause at it, dumping the values into a temp table, sub-queries, or a table value constructor.

Let’s focus on the table value constructor. This one seems to trip a few people up from time to time. First off, what is it?

Table Value Constructor

The table value constructor is basically a virtual table, not too different from a CTE or a subquery (in that they are all virtual tables of sorts). The table value constructor, however, can be combined with either of those other types; it is a set of row expressions that get put into this virtual table in a single DML statement.

So, this virtual table contains a set of values that we define. Let’s go ahead and take a look at an example set of data that we can cultivate into a table value constructor.
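Here is a simple single-column example (the values shown are stand-in trace event IDs):

```sql
-- A single-column list of values expressed as a table value constructor
SELECT el.event_id
FROM (VALUES (10), (12), (14), (15), (20)) el (event_id);
```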

Now, let’s say that I want to get all of the trace events that might correlate to those specific values, all I need to do is add this list of values to a query in the appropriate place. First, let’s see what it looks like when I add it via a CTE.
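A sketch of that CTE flavor:

```sql
WITH eventlist (event_id) AS
(
    SELECT event_id
    FROM (VALUES (10), (12), (14), (15), (20)) v (event_id)
)
SELECT te.trace_event_id, te.name AS EventName
FROM eventlist el
INNER JOIN sys.trace_events te
    ON te.trace_event_id = el.event_id
ORDER BY te.trace_event_id;
```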

In this case, I have thrown the table value constructor into a CTE. There could be a few good reasons to do it this way such as it being easier to read or maybe some corporate style policy. For me, I like the readability and the tight grouping of the join clauses without needing to scroll. Running the query in this case would produce results similar to the following.

Now, what if I really prefer to not use a CTE? Easy enough, let’s look at this next example. I will use the same list of values as I search for various wait_types.
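A sketch of that form follows. Since sys.dm_os_wait_stats keys on wait type names, this sketch uses a handful of assumed wait_type strings for the list:

```sql
SELECT ws.wait_type, ws.waiting_tasks_count, ws.wait_time_ms
FROM (VALUES (N'PAGEIOLATCH_SH'),
             (N'PAGEIOLATCH_EX'),
             (N'WRITELOG'),
             (N'SOS_SCHEDULER_YIELD')) wl (wait_type)
INNER JOIN sys.dm_os_wait_stats ws
    ON ws.wait_type = wl.wait_type;
```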

Notice here that I have a little bit of a different query. I only have one table to join to after the table value constructor. That said, it works just the same as the CTE example. Once the data is constructed, I alias it like a table, then I define the column names. This query will now return results similar to the following.

Both of these examples have been pretty easy in building a single attribute array for the table value constructor. What if we had the need for multiple columns in our list of data?
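Here is a sketch with a second value added to each row:

```sql
-- Each row now carries two values: an event ID and a short label
SELECT el.event_id, el.short_name, te.name AS EventName
FROM (VALUES (14, 'Login'),
             (15, 'Logout'),
             (20, 'Login Failed')) el (event_id, short_name)
INNER JOIN sys.trace_events te
    ON te.trace_event_id = el.event_id;
```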

Notice the addition of an extra value (separated by a comma) within each set of parentheses and the additional column definition added after the alias for the table value constructor. This is a pretty simple modification. Upon execution, I can expect results such as the following.

The Wrap

Being able to take a list of data and construct it into a virtual table for further querying and data analysis is a pretty cool thing to do. The setup and syntax are fairly simple – it just takes a bit of practice and effort.

This article demonstrated three different examples of using a table value constructor to join a list of data to other tables. See what other kinds of examples you can come up with and share them.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

It’s an Honor: Microsoft MVP

Published on: July 1, 2019

Today (July 1st) is the notification day for all Microsoft MVPs. Microsoft changed it a couple of cycles back so that everybody is notified on the same day, making it a single emotional day for all candidates. I received my notification early this morning that I was awarded the MVP again for the 2019-2020 cycle. This is my 6th award overall.

I am humbled to be among so many great professionals, and honored at the same time for having been chosen. Are you curious about this MVP thing? Well, it's sorta like being named MVP for a sports league, while at the same time a little bit different. Here is what Microsoft has to say about the MVP. I personally think you may get more out of reading about the MVP here (a little bias might be involved).

This is a community award for your efforts to promote and help others work with Microsoft products. I was awarded for my efforts in the Data Platform. Many others receive the award for their efforts in AI, Powershell, or Azure (as examples). Generally, the people that are awarded are seen as being passionate about the Microsoft Product. These people spend a lot of time, effort, energy, and sometimes money to help share their enthusiasm for the product.

Microsoft is doing a fantastic job at recognizing people for their efforts in promoting the MS brand. Not everybody that promotes that brand gets to be an MVP – sadly. If you didn’t make it this time, keep trying. I just advise that the goal be more about the community than the award. The MVP award is great, but giving to the community is the real reward.

I want to congratulate all those awarded a Microsoft MVP award past, present and future. I also want to thank all of those who have given to the community regardless of whether an award was received or not. There are many of you out there that have been a positive influence whether you know it or not – just for giving to the community and helping somebody else. THANK YOU!
