Audit Logons with Extended Events

Some time ago, I wrote an article for SQL Server 2008 to help determine the use of the server since SQL Server 2008 was reaching End of Life. In that article, I shared a reasonable use of a server side trace to capture all of the logon events to the server. After all, you have to find out the source of connections and who is using the server if you need to migrate it to a newer SQL Server version. You can read that article here.

Soon after, from various sources, I received requests on how to perform a logon audit using the more preferred, robust, venerable, awesome tool called Extended Events (XEvents). In response, I would share a login audit session to each person. In this article, I will share my login audit solution and give a brief explanation. I use a solution like this on more than 90% of my client servers and I find it highly useful.

Auditing Logons

The events to capture SQL Server logon/logoff activities were not a part of the original release of XEvents in 2008. The requisite events did not become available until SQL Server 2012. Why mention SQL Server 2008 at all, given it has reached its End of Life, you may ask? Well, as luck would have it, there are still plenty of 2008/R2 instances out there in the world that have yet to be upgraded, so it is useful to continue pointing it out. This session, and this information, does NOT apply to anything prior to SQL Server 2012.

The events in XEvents that we need happen to be called: sqlserver.login and sqlserver.logout. Here is a little more info on those events along with a query that can retrieve the details being shown here.
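A query along these lines (a sketch; the author's exact query may differ) will pull those event descriptions from the XEvent metadata:

SELECT xp.name AS package_name
      ,xo.name AS event_name
      ,xo.description
  FROM sys.dm_xe_objects xo
  JOIN sys.dm_xe_packages xp
    ON xp.guid = xo.package_guid
 WHERE xo.object_type = 'event'
   AND xo.name IN ('login', 'logout');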

As shown in the preceding image, there are two events of interest that are needed for auditing logon events – sort of. These events are only useful to capture the successful connection (or connection pooling reuse connections) and the logoff events. To take this a step further and capture the failed logins, we can also use an Event that was available in 2008, error_reported. The use of the logout event would be an optional event in this case but could be of use during the troubleshooting under certain circumstances.

Focusing on the login and error_reported events, I have two very useful XEvent Sessions I like to use for my client servers. The first part of the audit logon solution is the audit of successful logins with this XEvent Session.
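A minimal sketch of such a session follows; the session name, actions, file target settings, and options are my own choices rather than the author's exact script.

CREATE EVENT SESSION [AuditSuccessfulLogins]
ON SERVER
ADD EVENT sqlserver.login
(
    ACTION
    (
         sqlserver.client_app_name
        ,sqlserver.client_hostname
        ,sqlserver.nt_username
        ,sqlserver.server_principal_name
        ,sqlserver.session_id
    )
)
ADD TARGET package0.event_file
(
    SET filename = N'AuditSuccessfulLogins'  -- lands in the default LOG directory unless a full path is supplied
       ,max_file_size = 50
       ,max_rollover_files = 4
)
WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS
     ,MAX_DISPATCH_LATENCY = 30 SECONDS
     ,STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [AuditSuccessfulLogins] ON SERVER STATE = START;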

Next, I like to use a separate session for the failed logins. This makes it easy to keep the audit files separate and only share the necessary files to others (e.g. only the failed logins instead of giving them both successful and failed which may be way too much information). In addition, I like this method because it makes searching the audit files easier and more focused.
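Here is a sketch of the failed-login session, assuming the classic “Login failed for user” error number 18456 as the predicate (again, names and target settings are my own choices):

CREATE EVENT SESSION [AuditFailedLogins]
ON SERVER
ADD EVENT sqlserver.error_reported
(
    ACTION
    (
         sqlserver.client_app_name
        ,sqlserver.client_hostname
        ,sqlserver.nt_username
        ,sqlserver.server_principal_name
    )
    WHERE error_number = 18456   -- "Login failed for user ..."
)
ADD TARGET package0.event_file
(
    SET filename = N'AuditFailedLogins'
       ,max_file_size = 50
       ,max_rollover_files = 4
)
WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS
     ,MAX_DISPATCH_LATENCY = 30 SECONDS
     ,STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [AuditFailedLogins] ON SERVER STATE = START;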

In this second session, I have the predicate defined such that I will only get the failed login events. Funny thing about this: I can often catch 3rd party vendors trying to log in to client systems in very odd ways (including putting the password into the user name box, which in turn causes the password to be stored in clear text in the error log).

Could these sessions be combined into a single all-purpose session? Sure! That is up to your discretion. I prefer the separate sessions for ease of management and review.

The Extra Mile

Let’s say we wanted to go ahead and add the logout event to our session. We can then proceed with a slight modification to the successful login session so that it would look something like the following.
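Assuming the session sketched earlier, the modification might be as simple as this:

ALTER EVENT SESSION [AuditSuccessfulLogins] ON SERVER
ADD EVENT sqlserver.logout
(
    ACTION
    (
         sqlserver.client_app_name
        ,sqlserver.client_hostname
        ,sqlserver.server_principal_name
        ,sqlserver.session_id
    )
);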

And, to show what might occur with this session, I can see data similar to this for the logout event.

See how easy that is!

The Wrap

This article has shown how to audit the logon events for SQL Server 2012 and beyond through the use of XEvents. It also happens to be an excellent follow up to another recent article – here. An essential function of any DBA is to have a clear understanding of the activity (logons) that is occurring on the servers. This audit session will help you do exactly that!

Through the power of XEvents, we can accomplish a great many things and reach extensive insights into our database footprint. This post is just one of many in the XE Series, of which you can read more – here.

In addition, this post is similar to many in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

CRM Data Source Connection Error

Working through some security modernization recently with a client, I ran into a fun little nugget. What we were trying to accomplish was to make the service accounts being used more secure through the use of Managed Service Accounts.
Making the change was fairly easy and all of our services seemed to start up swimmingly. SQL Server was running perfectly fine. Even when we got to SSRS and applied the change to the service account via Reporting Services Configuration Manager, things went smoothly.

After ensuring the services all started came the task of validation. From a sysadmin perspective, we often figure the validation process means looking through the tools we know; sometimes we don’t know what to do within the application itself to validate functionality, so for that we more or less rely on the end-users. While waiting for the end-users to perform their validations, we resort to the tools known best to us – SSMS and the like.

In the case of testing reports, one may attempt to use the Report Manager URL (or Report Server URL). When opening reports, you may see that everything is working correctly through that tool, so you continue to wait for the end-user to validate. Eventually they say it all looks good, only to report back a week later that nothing is working, and now you are stuck trying to figure out what to do to resolve the problem.

SSRS Service Account Needs PrivUserGroup

This is particularly interesting given everything worked just fine before the service account change and given that the only thing that changed was the service account (via Reporting Services Configuration Manager). Nothing changed with the execution account (an execution account was not actually employed in this configuration at any point).

We also determined that nothing was necessary to be changed with the CRM’s “SRS Data Connector”. What is the next best thing? Well, time to take a look into the logs and try to find some clues there. Perusing the logs, I was able to find a couple of key indicators. Here are the two enlightening errors.

First error:

“Cannot create a connection to data source ‘CRM’.”

Better, more verbose exception that broke the case wide open:

“Immediate caller <SQL Server Reporting Services Service Account> has insufficient privilege to run report as user <SID>”

Googling for these errors revealed a lot of information about them in the event an Execution Account for SSRS was being used. That scenario didn’t apply, but it did give a hint at what to look at next. Comparing the previous service account’s permissions in Active Directory to the new service account’s permissions, we found the “PrivUserGroup” permission was missing. This was the exact permission mentioned in all of the other articles. Adding that permission immediately solved the issue for the end-users and their ability to run reports via the CRM application.

The Wrap

This article takes us to the edge with a couple of CRM related errors after changing the service account to a more secure Managed Service Account. Despite the CRM reports working properly within Report Manager (via SSRS), the reports would fail in CRM.

Comparing the permissions for the old service account versus the new service account shed a bright light on the problem indicating a missing permission. PrivUserGroup is essential to proper report functionality and should be added to either your execution account or your SSRS service account.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Ad Hoc Queries Disabled

DBAs and ad hoc queries sometimes go together like oil and water. While we may prefer to avoid the ad hoc queries, sometimes it is the best method to achieve a task.

What exactly is an ad hoc query though? An ad hoc query is a query that should serve a single-use purpose (not always the case), is routinely unplanned, and quite possibly is untested (again, not always the case). Sometimes, however, the definitions of ad hoc don’t fully apply to queries in SQL Server. One such case is the use of dynamic queries and queries issued to linked servers via openrowset.

In my experience, queries issued against a linked server using the openrowset functionality are typically more of a prepared statement. However, they are treated as ad hoc queries. As such, you may encounter an error you may not have been expecting.

Msg 15281, Level 16, State 1, Line 161
SQL Server blocked access to STATEMENT ‘OpenRowset/OpenDatasource’ of component ‘Ad Hoc Distributed Queries’ because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of ‘Ad Hoc Distributed Queries’ by using sp_configure. For more information about enabling ‘Ad Hoc Distributed Queries’, search for ‘Ad Hoc Distributed Queries’ in SQL Server Books Online.

This article will help show how to resolve this error so your queries may go on as intended.

Ad Hoc

Let’s first take a look at a pretty typical type of query I use routinely to check various things on my lab servers.
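Here is a sketch of that kind of query; the server name and connection string are hypothetical, and the inner query simply lists the running XEvent sessions on the remote lab server:

SELECT x.name AS SessionName
      ,x.target_name AS TargetName
  FROM OPENROWSET('SQLNCLI'
                 ,'Server=LabServer2;Trusted_Connection=yes;'
                 ,'SELECT s.name, t.target_name
                     FROM sys.dm_xe_sessions s
                     JOIN sys.dm_xe_session_targets t
                       ON t.event_session_address = s.address') x;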

This is a query that I use (or something like it) to retrieve various Extended Events data from different lab servers. Notice that I use the openrowset method in order to query my linked server. If I do not have the server configured for ad hoc distributed queries, then I will receive the error previously noted. The solution for that error is to enable the setting. We can do that via the following query.
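Something along these lines does the trick (the setting is advanced, so show advanced options must be on first):

EXECUTE sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXECUTE sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
GO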

Once enabled, the openrowset queries will work across linked servers. This needs to be enabled on the local server in order to send the ad hoc query across to the remote server. With the setting now enabled, running the query that threw the error (from above) now returns the following results.

The Wrap

This article took a look at an error that may occur depending on your use of linked servers and the use of openrowset. Documentation states that this is something that should be done infrequently and alludes to the issue being resolved through the use of linked servers. In my case, this crops up when using openrowset to query my linked servers. I find that using openrowset to query the linked server is far more reliable, efficient, and better performing. YMMV.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Configuration Manager is Corrupt

Published on: December 17, 2019

The longer one works in the IT industry, the more likely it becomes to experience a broken system in some capacity. One minute it is working and the next it is not. This is true even of systems that are properly maintained and secured. Stuff happens and pieces break here and there.

I had the distinct pleasure of experiencing this exact problem. One of the essential tools in the DBA tool chest is the SQL Server Configuration Manager. Usually everything works fine here – until it doesn’t. This is a critical component of a well configured server. If nothing else, a DBA should be well enough versed to know that SQL Server services should be bounced and configured from here. If you were unaware of that requirement – you now know.

Lesser known is the fact that this tool uses WMI to view or change various server settings. WMI provides a uniform method for interacting with these services and configurations. Almost all of the time, that is a seamless interaction and the DBA is none the wiser. And then it doesn’t work and the DBA grows more grey hairs.

WMI Provider Issues

So you come along one day to validate some service configurations for the SQL Server services and instead of the SQL Server Configuration Manager opening as expected, you are greeted with the following error message.

Cannot connect to WMI provider. You do not have permission or the server is unreachable. Note that you can only manage SQL Server 2005 and later servers with SQL Server Configuration Manager. Invalid namespace [0x8004100e]

Your first thought might be that the issue is being caused by UAC, so you try to run the SQL Server Configuration Manager as administrator. Sadly, that attempt results in the exact same error.

What to do? What to do?

Well, the problem is that the WMI provider has become corrupt or been removed somehow. This problem can occur whether you have uninstalled a SQL Server instance or not (some documentation suggests the root cause is only an uninstalled SQL Server instance). For me, this issue has arisen many times and never has an instance been uninstalled. No worries either way, because the solution is simple – run a statement similar to the following.
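The fix re-registers the provider with a command like the following (run from a command prompt; on some installs the file lives under %programfiles% rather than %programfiles(x86)%):

mofcomp "%programfiles(x86)%\Microsoft SQL Server\VERSIONNUMBER\Shared\sqlmgmproviderxpsp2up.mof"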

There are two caveats with this script execution: 1) you must run this from an elevated command prompt and 2) you must replace “VERSIONNUMBER” with the correct version number matching your version of SQL Server from the following table.

SQLVersionName                  SQLVersionNum
Microsoft SQL Server 2005       90
Microsoft SQL Server 2008 R2    100
Microsoft SQL Server 2008       100
Microsoft SQL Server 2012       110
Microsoft SQL Server 2014       120
Microsoft SQL Server 2016       130
Microsoft SQL Server 2017       140
Microsoft SQL Server 2019       150

If you execute this command and see an error containing the code 0x80041003, then you have not executed the command with adequate permissions or elevation. You may need to find an administrator for the box and/or execute from an elevated command prompt (e.g. “Run As Administrator”).

The Wrap

This article took a look at a rather annoying little error that can crop up out of the blue when trying to open the SQL Server Configuration Manager. Not only do we get to see that error and the cause, but we also get to see an extremely easy fix for the problem. Employing this easy fix will have you back in business administering the SQL Server services in the proper fashion in very short order.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

When Too Much is Not a Good Thing

In my previous article, I demonstrated some fun ways of locking down access for certain users to ensure they did not have data access while at the same time allowing them to review the database schema. The solution provided had the added benefit of reducing the chances that such a user could unknowingly cause a server outage.

In that solution, I showed the value of the “View Definition” permission. There are a couple of caveats to this permission that could lead to unwanted results.

It is these unwanted results that often occur due to a lack of attention to detail, use of internet examples, misunderstanding of the requirements, and always from granting way too much access. I am going to explore a couple of these erroneous methods and why they absolutely bug me.

You Get Everything!

Let’s start with the most common mis-step that I see. A user requests access to the database and the stand-in DBA takes the easy route. The easy route here is to grant the requester sa access, dbo access, and every other database and server role there is because it is easier to just tick the boxes.

This method of assigning permissions happens far too frequently. I have seen it in hundreds of databases for clients. I have also written about it here.

You see, once you grant somebody sa access, none of the rest of the permissions grants or denies matter. Sysadmin overrides everything! In this particular case, this would be way too much access for the user who just needs to see the schema. Be extremely careful about when and to whom you grant sysadmin access.

Next up in the realm of granting way too much access is the far too frequent practice of assigning permissions to the public role in each database. I have written about this practice as well – here and here. This one really gets my ire. Granting permissions to the public role crosses into the realm of high risk and downright negligence. Let’s see how some internet examples demonstrate the solution to our problem with regards to the public role.

First, using the test user and role we created in yesterday’s article, let’s confirm that we do not have access to view definitions outside of the assigned role permissions.
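Here is a sketch of one way to perform that check, using the role and user names that appear later in this article (BusinessViewDef and testvwdef):

SELECT dpr.name AS PrincipalName
      ,dpr.type_desc AS PrincipalType
      ,dpe.permission_name
      ,dpe.state_desc
  FROM sys.database_permissions dpe
  JOIN sys.database_principals dpr
    ON dpr.principal_id = dpe.grantee_principal_id
 WHERE dpe.permission_name = 'VIEW DEFINITION'
    OR dpr.name IN ('public', 'BusinessViewDef', 'testvwdef');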

After executing that query, I can confirm that the public role does not have the “View Definition” permission. Let’s now revoke the permission from the role and confirm that no schema definitions can be viewed.
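The revoke itself is a one-liner (assuming the BusinessViewDef role from yesterday’s article):

REVOKE VIEW DEFINITION FROM [BusinessViewDef];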

Now, knowing that the user and the role do not have permissions, let’s go ahead and assign permissions to the public role.
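This is the sort of grant those internet examples hand out (shown here only to demonstrate the problem):

GRANT VIEW DEFINITION TO [public];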

Just like magic, a user that should not be able to view schema definitions can now view information it is not supposed to see. This is a security problem. Just to confirm, let’s evaluate the principal permissions compared to the role permissions.

Keep this in mind as you go down the dark path of granting permissions to the public role. Just because it is easy doesn’t make it right. On a similar note, you should proceed cautiously with scripts from the internet that suggest granting permissions to the public role. They are wrong!

Finally, we drop down into the realm of “only slightly better” but still way over the top for the requirements. This last method uses the “VIEW ANY DEFINITION” way of granting permissions. The “ANY” keyword actually scopes the permissions to every database on the instance. Let’s clean up the last example by revoking the permission from public so we have a blank slate.
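Cleaning up is just the reverse of the earlier grant:

REVOKE VIEW DEFINITION FROM [public];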

With permissions revoked, I will now grant View Any Definition to the test user. Why? Well, because I don’t have that role created in every database (more on the benefits of that approach later). When setting this permission, it is server scoped, so make sure you are in the master database; otherwise you chance seeing this error.
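For illustration, the server-scoped grant issued from master might look like this sketch; issue it from any other database context and you will instead be greeted with the error that follows.

USE [master];
GO
GRANT VIEW ANY DEFINITION TO [testvwdef];
GO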

Msg 4621, Level 16, State 10, Line 22
Permissions at the server scope can only be granted when the current database is master

Yup, that just made my blood curdle a bit. Let’s check out the perms now in the DBA database (reminder that the scope of the permission change was the master database and is a server scoped change).
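One way to check (a sketch for a single database; the author’s check appears to sweep every database) is to impersonate the login and list its effective permissions:

EXECUTE AS LOGIN = 'testvwdef';

SELECT entity_name
      ,permission_name
  FROM fn_my_permissions(NULL, 'DATABASE');

REVERT;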

As we can see here, all databases now have the View Definition permission assigned to testvwdef. If there happens to be a sensitive database, you may have exposed yourself to a risk by doing this.

Performing the same tests as previously done would reveal similar results – except on a broader scope.

There are some caveats to help reduce this risk but they do not undermine the need to be responsible or the need to avoid using the “ANY” keyword. Recall that I granted the View Any to the individual user instead of the role? That was simply due to me not having that role in each database. If we grant permissions to roles, and the role does not exist in the master database while trying to grant permissions from that scope, then we see this nifty little message.

Msg 15151, Level 16, State 1, Line 25
Cannot find the login ‘BusinessViewDef’, because it does not exist or you do not have permission.

That is one measure to help prevent this over-permissioning problem. The next possible caveat is a minor stop-gap and can be easily overlooked should the user ever be added to the database in the future. For “Any” to work in its full splendor, the principal to which it was granted must have already been granted “connect” to the database.

 

The Wrap

This article has explored various options for minimizing the risk of over-extending permissions to users. With a little planning and care, we can find various ways to fulfill user requests without compromising the environment.

Every now and then, it will require extra effort in trying to get the pertinent details from the requester. That said, with that extra effort you will find satisfaction and calm knowing your environment is secure and that you are providing quality service to your customers.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Who needs data access? Not You!

Does this sound at all familiar? A user submits a request stating they must have elevated access to a database. In your attempt to gather more information about the request, the justification amounts to this: they must have access because so-and-so has elevated access and they are curious about the database.

The conversation doesn’t go quite as smoothly as that. In fact, it may take quite a lot of effort to squeeze that tiny bit of real information from the request.

After assembling that tidbit of information, you may find yourself choking back a scoff or three. “You need access because you want to explore the database?” Yeah, I can hear the incredulity in your thoughts from here. At no point is this kind of “need” a legitimate reason to gain access to a database.

“I need elevated access!” Oh No you don’t!

Beyond the back and forth of the demand for access just to play around in the data and the DBA resisting the access in the best interest of the data and company, there may possibly be a middle ground that can be reached. I am not talking about creating a dummy database with obfuscated data. Nor am I talking about a dummy database with fake data.

Every now and then, this kind of request may cross your desk and all the requester is really trying to do is find some means of becoming familiar with the database. This can be ferreted out with some extra warnings and more discussions with the requester. Maybe the request really has nothing to do with the data and everything to do with just finding a way to get to know the schema because there is no documentation.

Given the plausibility of this need, there are many ways to grant the request – some may require more resources than others. Let’s suppose the user has absolutely no understanding of TSQL. With that minimal knowledge, you know there are likely to be many potential issues that could cause the database to be brought down and become unusable. This is terrible if the affected database is in production. It is also very bad if the database is a non-prod database. Given this potential, you know you don’t want to grant any permission that implies read or write permissions in the database (if there is data present).

What if we could create a database without the data? Would that work? Absolutely that would work. This would in essence be a sandbox for the user, and all responsibility for database issues could be directed back to the specific user. This is a fabulous solution if you have adequate resources for more databases/instances/servers to be created. There are multiple ways to create an empty database, but to ensure the best end-user experience, considerations for database naming, resources, and potential impact to other databases must be acknowledged.

Personally, I prefer to maintain the same database name regardless of tier (dev, test, prod etc). This also applies to schema only types of databases where limited access would be granted. Changing database names on the users only breeds confusion and complicates code promotions. If you need to provide the same sort of solution for 30-40 or more databases, you can imagine what kind of confusion that would cause on a server where the actual data-filled databases actually exist (think dbcc clonedatabase types of solutions here where the cloned database must be named differently).

What to do? What to do?

Thankfully there is still at least one more entirely viable option. We can use the power of security to achieve some semblance of what the user wants while still preventing the person from viewing the data or causing performance issues.

Limiting access for prying eyes is fairly easy while still granting the opportunity to explore the database design. In SQL Server we have a permission called “View Definition” which will help us accomplish exactly this requirement. In preparation for this, one must determine the scope of the need. Generally speaking, this should be done on a per-database approach. There is a method to grant this for all databases on the server but that is a topic for another day. Once the scope is determined, it is time to proceed with the setup.

I prefer to take advantage of Roles within the database. This simplifies the future maintenance and management of the permissions. If a new principal needs the same permissions as Joe from accounting, then I just add the new user to the same role(s) as Joe – piece of cake.
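Here is a sketch of that setup, using the login, user, and role names from this pair of articles (the database name and password are assumptions):

USE [master];
GO
CREATE LOGIN [testvwdef] WITH PASSWORD = N'SuperSecret$12345';   -- hypothetical password
GO
USE [DBA];                                                       -- assumed demo database
GO
CREATE USER [testvwdef] FOR LOGIN [testvwdef];
GO
CREATE ROLE [BusinessViewDef];
GO
ALTER ROLE [BusinessViewDef] ADD MEMBER [testvwdef];
GO
GRANT VIEW DEFINITION TO [BusinessViewDef];
GO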

With the Role created, user added to the role, and the “View Definition” permission granted to the role, we are ready to test.

First order of business is to connect with that new login we created (a SQL login was created and is used in this example, but the effects are the same with a Windows login) as shown in the preceding image. After connecting, let’s try some things.

The first test I want to perform is to confirm the user has no read access to data to ensure this permission does not grant extra access along the way.
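Any user table will do for this test; the table name here is hypothetical:

SELECT TOP (10) *
  FROM dbo.SomeTable;
-- Expect failure: Msg 229, The SELECT permission was denied on the object 'SomeTable'...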

Great! Select access to objects from this permission alone does not work. Now, let’s test the ability to view object definitions. First, let’s try via the object designer.

First we see a warning about not having enough permissions to save changes, which is good. After clicking OK on that warning, a new message pops up to remind us that the table is “read only”. Perfect! The permissions we gave this user do not allow them to alter objects – just view them (as we would expect).

Seeing these results should also help confirm that we can browse the table definition directly in SSMS. Let’s take a gander from a TSQL approach.
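For example, sp_help against that same hypothetical table:

EXECUTE sp_help 'dbo.SomeTable';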

Great, we can explore the table definitions etc with TSQL. That is of course a system stored procedure, so let’s try one more method.
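A sketch of a custom metadata query against the catalog views:

SELECT s.name AS SchemaName
      ,o.name AS ObjectName
      ,o.type_desc
      ,c.name AS ColumnName
      ,ty.name AS DataType
      ,c.max_length
  FROM sys.objects o
  JOIN sys.schemas s
    ON s.schema_id = o.schema_id
  JOIN sys.columns c
    ON c.object_id = o.object_id
  JOIN sys.types ty
    ON ty.user_type_id = c.user_type_id
 WHERE o.is_ms_shipped = 0
 ORDER BY SchemaName, ObjectName, c.column_id;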

Running this query, I will see results similar to this.

Knowing that we can run custom queries to explore the schema is almost comforting. It does confirm that we are able to query system objects (yes, a select works here, but this is read-only object definition data). But does this mean the user can also see system data? Let’s try!
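As a hypothetical test, a DMV that exposes server-wide data works nicely; without VIEW SERVER STATE (which View Definition does not grant), the user gets back only their own session:

SELECT session_id
      ,login_name
      ,host_name
      ,program_name
  FROM sys.dm_exec_sessions;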

Excellent! So it seems we have a plausible option to keep the footprint small and minimize access to the data.

The Wrap

This article has explored various different options for reducing the access to data for users who “need” it. With a little planning and care, we can find various ways to fulfill user requests without compromising the environment.

Every now and then, it will require extra effort in trying to get the pertinent details from the requester. That said, with that extra effort you will find satisfaction and calm knowing your environment is secure and that you are providing quality service to your customers.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Life Support 2008 – Audit Logons

With SQL Server 2008 and 2008R2 limping along and becoming terminally ill (End of Life Support was July 9, 2019), it should come as no surprise that it is recommended to migrate/upgrade affected databases/servers to newer technology.

Planning to upgrade/migrate requires a fair amount of prep work. Some of that prep work involves auditing your server for any users that may still be using the instance.

Where does one even begin in order to audit those logon events in SQL 2008 or 2008R2? Some may say to use a SQL Server Audit but that feature is an Enterprise only feature for 2008 and R2. If we were on a newer version of SQL Server, Extended Events would be the easy choice. Unfortunately, XE was not adequately mature on 2008 or R2. XE just doesn’t have the appropriate events to use in these versions of SQL Server. What’s left?

How about a server side trace? Yes, that was a bit difficult to say. Server side trace does have just the right events for us to use to discover who is using the server and which principals can be eradicated. A big benefit here is that a server side trace does not come with a service outage as would be required for other methods.

Server Side Trace

How do we go about creating a trace to capture these events? It is common knowledge that using a script to create a server side trace is not very intuitive. It is also not very practical to run a Profiler session and leave it running for weeks while you do your due diligence. There is a shortcut available that allows us to run a server side trace, but it does require the use of Profiler – for just a tiny bit. You can start here to find how to do that.

Great, we have a method to create the script. What needs to go into this session? Let’s take a look at that. Ignoring the initial steps to start a profiler session (other than to use a blank template), let’s jump to the event selection tab. From there, we will make two selections (Logon and Login Failed in the Security Audit section) as shown in the next image.

Once the events are selected, go ahead and click run. From there you may stop the session and then continue with the instructions from the previous link on how to script a profiler session.

After scripting this session, I get a script that looks like the following.
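What follows is a heavily condensed sketch of what the generated script boils down to; the file path, rollover setting, and column choices are my own simplifications of the Profiler output:

-- Created by: SQL Server 2019 CTP2.4 Profiler (header from the generated script)
DECLARE @TraceID int
       ,@maxfilesize bigint = 50
       ,@on bit = 1;

EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\AuditLogons', @maxfilesize, NULL;  -- 2 = file rollover

-- Event 14 = Audit Login, Event 20 = Audit Login Failed
-- Columns: 1 = TextData, 8 = HostName, 10 = ApplicationName, 11 = LoginName, 14 = StartTime
EXEC sp_trace_setevent @TraceID, 14, 1, @on;
EXEC sp_trace_setevent @TraceID, 14, 8, @on;
EXEC sp_trace_setevent @TraceID, 14, 10, @on;
EXEC sp_trace_setevent @TraceID, 14, 11, @on;
EXEC sp_trace_setevent @TraceID, 14, 14, @on;
EXEC sp_trace_setevent @TraceID, 20, 1, @on;
EXEC sp_trace_setevent @TraceID, 20, 8, @on;
EXEC sp_trace_setevent @TraceID, 20, 10, @on;
EXEC sp_trace_setevent @TraceID, 20, 11, @on;
EXEC sp_trace_setevent @TraceID, 20, 14, @on;

EXEC sp_trace_setstatus @TraceID, 1;   -- start the trace

SELECT @TraceID AS TraceID;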

I created this script from SSMS 18.0 and find it interesting that the script says “Created by: SQL Server 2019 CTP2.4 Profiler”. Despite the very recent version of SSMS used to create this script, this script will work perfectly fine on SQL Server 2008 or R2.

Once I start the trace on a server, I am ready to do just a little bit more. I want to verify what this script means. After all, it is a bunch of numeric values. Let’s look at that with this next query and then compare it to the initial script used to create the trace. The following query requires SQL 2012 or later.
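Here is a sketch of one way to decode those numeric values back into names (the author’s exact query may differ):

SELECT t.id AS TraceID
      ,te.name AS EventName
      ,tc.name AS ColumnName
  FROM sys.traces t
 CROSS APPLY sys.fn_trace_geteventinfo(t.id) ei
  JOIN sys.trace_events te
    ON te.trace_event_id = ei.eventid
  JOIN sys.trace_columns tc
    ON tc.trace_column_id = ei.columnid
 WHERE t.is_default = 0;   -- skip the default trace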

This produces output similar to the following.

As you scroll through the list, you can see the Event Name along with the column name for all events/columns that have been added to the trace. I have highlighted the first event / column mappings to illustrate this relationship.

Cool! Now, I know the session does indeed contain the requisite data that I wanted so it is time to start checking to see what is happening on the server.

Now you are all set to go to start figuring out which logins (if any) are still connecting to the server and using databases on that server.

The Wrap

This article has shown how to audit the logon events for a SQL 2008/R2 instance. It also happens to be an excellent follow up to another recent article – here. As you begin to plan your migration off of the SQL 2008 dinosaur, it is essential to baseline the activity and use of the server. This audit session will help you do exactly that!

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Finding Data Gaps

Published on: July 11, 2019

Quality data is a goal we all strive to attain. Nobody wants to have bad data. Yet, despite our best efforts, the problem of bad data has its way of presenting itself over and over again.

One fairly common indicator of bad data is an absence of data. Maybe you have a table in the database with what is expected to be contiguous data, yet somehow there appears to be an absence of data here or there. Maybe there is a gap in the contiguous data and it hasn’t been noticed.

This phenomenon is also known as the gaps and islands issue in the data realm. How do you know if you have an issue or not? This article will help you with a sample query that can find those gaps of missing data. From there, your journey is just beginning.

Finding the Gaps

I have this database that I use for mentoring DBAs of various levels (help desk on up). In the database, I have a schema dedicated to daily trivia. Since the intent is for it to be a daily trivia question, I expect to have a question for every day during the time interval in question. Every now and again, however, I forget to upload a question or I mistakenly believe I have already loaded a question for the day. Typically, I try to do this in advance so as not to be scrambling at the last minute to get a question built for the email to pick it up.

Due to the factors just explained, I end up with gaps in my daily question data. The question now becomes how to find that I have gaps. I have a couple of queries here that will save you plenty of time if you follow the same principles applied here.

First, let’s start with a bit of a tricky version of the query that builds a numbers table and then spits out all of the dates that are missing a question (only for the range of data within the table, that is).
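Here is a sketch of that first query; the Trivia.DailyQuestion table and its QuestionDate column are hypothetical stand-ins for my trivia schema:

DECLARE @StartDate date
       ,@EndDate   date;

SELECT @StartDate = MIN(QuestionDate)
      ,@EndDate   = MAX(QuestionDate)
  FROM Trivia.DailyQuestion;

;WITH E1(n)    AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) v(n))
     ,E2(n)    AS (SELECT 1 FROM E1 a CROSS JOIN E1 b)      -- 100 rows
     ,E4(n)    AS (SELECT 1 FROM E2 a CROSS JOIN E2 b)      -- 10,000 rows
     ,Tally(n) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 FROM E4)
SELECT DATEADD(DAY, t.n, @StartDate) AS MissingDate
  FROM Tally t
  LEFT JOIN Trivia.DailyQuestion q
    ON q.QuestionDate = DATEADD(DAY, t.n, @StartDate)
 WHERE t.n <= DATEDIFF(DAY, @StartDate, @EndDate)
   AND q.QuestionDate IS NULL
 ORDER BY MissingDate;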

This particular numbers table trick has been around for quite a while and runs fast enough. However, if you are a performance nut, and for this particular case, there is a method that is a wee bit better (usually the dynamic numbers table just shown is sufficiently fast).
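Here is a sketch of the recursive CTE variation against the same hypothetical table, with the day of the week added to the output:

DECLARE @StartDate date
       ,@EndDate   date;

SELECT @StartDate = MIN(QuestionDate)
      ,@EndDate   = MAX(QuestionDate)
  FROM Trivia.DailyQuestion;

;WITH DateRange AS
(
    SELECT @StartDate AS CheckDate
    UNION ALL
    SELECT DATEADD(DAY, 1, CheckDate)
      FROM DateRange
     WHERE CheckDate < @EndDate
)
SELECT d.CheckDate                    AS MissingDate
      ,DATENAME(WEEKDAY, d.CheckDate) AS MissingDayOfWeek
  FROM DateRange d
  LEFT JOIN Trivia.DailyQuestion q
    ON q.QuestionDate = d.CheckDate
 WHERE q.QuestionDate IS NULL
 ORDER BY d.CheckDate
OPTION (MAXRECURSION 0);   -- allow more than the default 100 recursions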

This method is the much maligned recursive CTE method. In my testing it runs consistently faster with a lower memory grant but does cause a bit more IO to be performed. Some trade-off to be considered there. Both queries return the desired data set, which happens to be my missing question days. Only, I have added an extra output in the second query to let me know the day of the week that the missing question occurred on. Maybe I forgot to enter it because it was a weekend day, or maybe I opted to not create one at all because the day lands on a holiday. Let’s take a small peek at the results.

That is a pretty good swath of missing data there. If you look closely, not all days from November 10th through December 1st are missing, just a handful. Alas, it looks like I have some work cut out for me because I have some weekdays, weekends, and Holidays in there as well as nearly the entire month of December. Finding the gaps is the easy part once you have a solid query to start with and I believe this query is just the right tool for that.

What if I wanted to check for a rolling period of time instead of just checking for valid data between a min and max range of data already in the table? That is easy enough as well with some minor tweaks. Let’s look at one more query option here.
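Here is a sketch of the rolling-window variation, building the numbers table from a Table Value Constructor and checking the last 365 days:

DECLARE @EndDate   date = GETDATE()
       ,@StartDate date = DATEADD(DAY, -365, GETDATE());

;WITH Digits(d) AS (SELECT d FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) v(d))
     ,Tally(n)  AS
(
    SELECT ones.d + (tens.d * 10) + (hundreds.d * 100)    -- 0 through 999
      FROM Digits ones
     CROSS JOIN Digits tens
     CROSS JOIN Digits hundreds
)
SELECT DATEADD(DAY, t.n, @StartDate)                    AS MissingDate
      ,DATENAME(WEEKDAY, DATEADD(DAY, t.n, @StartDate)) AS MissingDayOfWeek
  FROM Tally t
  LEFT JOIN Trivia.DailyQuestion q
    ON q.QuestionDate = DATEADD(DAY, t.n, @StartDate)
 WHERE t.n <= DATEDIFF(DAY, @StartDate, @EndDate)
   AND q.QuestionDate IS NULL
 ORDER BY MissingDate;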

This third code snippet illustrates a slight variation over the previous two in that it utilizes the Table Value Constructor to build the numbers table. All in all, it runs with blazing high speed and produces the results I am seeking – but for a rolling year instead of a static range of data.

The Wrap

Gaps and islands will occur in your data. When it does, you have your choice of three high performing scripts to help you investigate where the gaps and islands exist in your data.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Join a List of Values

Published on: July 2, 2019

There are multiple ways of retrieving data from a table when you have a list of values. Sometimes, the method is straight-forward and sometimes we just can’t see the right solution.

Some of the methods to retrieve data from a table (based on a list of values) might include one or multiple of the following: join to another table, throwing an in clause at it, dumping the values into a temp table, sub-queries, and a table value constructor.

Let’s focus on the table value constructor. This one seems to trip a few people up from time to time. First off, what is it?

Table Value Constructor

The table value constructor is basically like a virtual table, not too different from a CTE or a subquery (in that they are all virtual tables of sorts). The table value constructor, however, can be combined with either of those other types and is a set of row expressions that get put into this virtual table in a single DML statement.

So, this virtual table contains a set of values that we define. Let’s go ahead and take a look at an example set of data that we can cultivate into a table value constructor.
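For the sake of the examples that follow, picture a short list of trace event IDs (the specific numbers here are hypothetical):

14, 15, 20, 137, 148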

Now, let’s say that I want to get all of the trace events that might correlate to those specific values, all I need to do is add this list of values to a query in the appropriate place. First, let’s see what it looks like when I add it via a CTE.
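Here is a sketch of that approach, wrapping the values in a CTE and joining to the trace metadata views:

;WITH TraceIDs (trace_event_id) AS
(
    SELECT trace_event_id
      FROM (VALUES (14),(15),(20),(137),(148)) v(trace_event_id)
)
SELECT te.trace_event_id
      ,te.name AS EventName
      ,tc.name AS CategoryName
  FROM TraceIDs t
  JOIN sys.trace_events te
    ON te.trace_event_id = t.trace_event_id
  JOIN sys.trace_categories tc
    ON tc.category_id = te.category_id;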

In this case, I have thrown the table value constructor into a CTE. There could be a few good reasons to do it this way such as it being easier to read or maybe some corporate style policy. For me, I like the readability and the tight grouping of the join clauses without needing to scroll. Running the query in this case would produce results similar to the following.

Now, what if I really prefer to not use a CTE? Easy enough, let’s look at this next example. I will use the same list of values as I search for various wait_types.
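For this sketch I have swapped the hypothetical values to wait_type names so the single join target (sys.dm_os_wait_stats) makes sense:

SELECT ws.wait_type
      ,ws.waiting_tasks_count
      ,ws.wait_time_ms
  FROM (VALUES ('PAGEIOLATCH_SH')
              ,('WRITELOG')
              ,('CXPACKET')
              ,('SOS_SCHEDULER_YIELD')) wt(wait_type)
  JOIN sys.dm_os_wait_stats ws
    ON ws.wait_type = wt.wait_type;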

Notice here that I have a little bit of a different query. I only have one table to join to after the table value constructor. That said, it works just the same as the CTE example. Once the data is constructed, I alias it like a table, then I define the column names. This query will now return results similar to the following.

Both of these examples have been pretty easy in building a single attribute array for the table value constructor. What if we had the need for multiple columns in our list of data?
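Extending the previous sketch with a second, made-up column (a wait category of my own labeling):

SELECT ws.wait_type
      ,wt.wait_category
      ,ws.wait_time_ms
  FROM (VALUES ('PAGEIOLATCH_SH',      'IO')
              ,('WRITELOG',            'IO')
              ,('CXPACKET',            'Parallelism')
              ,('SOS_SCHEDULER_YIELD', 'CPU')) wt(wait_type, wait_category)
  JOIN sys.dm_os_wait_stats ws
    ON ws.wait_type = wt.wait_type;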

Notice the addition of an extra value (separated by a comma) within each set of parentheses and the additional column definition added after the alias for the table value constructor. This is a pretty simple modification. Upon execution, I can expect results such as the following.

The Wrap

Being able to take a list of data and construct it into a virtual table for further querying and data analysis is a pretty cool thing to do. The setup and syntax are fairly simple – it just takes a bit of practice and effort.

This article demonstrated three different examples of using a table value constructor to join a list of data to other tables. See what other kinds of examples you can come up with and share them.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Detail Job History – Back to Basics

Recently, I covered the need to understand job failure frequency and knowing the value of your SQL Agent jobs. You can read the specifics in the article – here.

Sometimes, just knowing the frequency of the job failure is good enough. Sometimes, more information is helpful. Having more information is particularly helpful when meeting with the business to discuss the validity of the job in question.

What do you do in times like this? The most basic answer to that question is – get more data. But that barely scratches the surface. The real question being asked there is how do you go about gathering that data?

There are two methods to gather the data – the hard way and the easy way. Do you like to work hard? Or would you rather work more efficiently?

Efficiency Matters

As was discussed in the previous article, I prefer to do things just a little bit less manually where possible. The consistency of a script matters, but it also is just so much faster than doing things the hard, manual, iterative way. So let’s build a little bit on the script from the previous article.
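Here is a sketch of the kind of query involved, pulling the failing step, its subsystem and command, and the failure counts out of msdb (the author’s script surely differs in the details):

SELECT j.name                 AS JobName
      ,js.step_id
      ,js.step_name
      ,js.subsystem           -- TSQL, SSIS, CmdExec, etc.
      ,js.command             -- the command text for the step
      ,COUNT(jh.instance_id)  AS FailureCount
      ,MAX(msdb.dbo.agent_datetime(jh.run_date, jh.run_time)) AS LastFailure
  FROM msdb.dbo.sysjobs j
  JOIN msdb.dbo.sysjobsteps js
    ON js.job_id = j.job_id
  JOIN msdb.dbo.sysjobhistory jh
    ON jh.job_id  = j.job_id
   AND jh.step_id = js.step_id
 WHERE jh.run_status = 0      -- 0 = failed
 GROUP BY j.name, js.step_id, js.step_name, js.subsystem, js.command
 ORDER BY FailureCount DESC;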

And here is a sample of the output.

With this script, I have the ability to quickly show which step is failing, what the command is for that step, what kind of process is running on that step, any passwords (in the event of an SSIS password), and of course the failure frequency. This is golden information at your fingertips. There is no need to click through the GUI to gather this information. You can get it quickly and easily in one fell swoop.

The Wrap

An important part of any DBAs job is to ensure database related jobs are running prim and proper. Sometimes that just doesn’t happen. When jobs are being overlooked, it is useful to be able to gather data related to consistency of job success or failure. This script will help you in your investigation efforts. In addition, I also recommend this article in your agent job audit efforts.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.
