When Too Much is Not a Good Thing

In my previous article, I demonstrated some fun ways of locking down access for certain users to ensure they did not have data access while at the same time allowing them to review the database schema. The solution provided had the added benefit of reducing the chances that such a user could unknowingly cause a server outage.

In that solution, I showed the value of the “View Definition” permission. There are a couple of caveats to this permission that could lead to unwanted results.

It is these unwanted results that often occur due to a lack of attention to detail, the use of internet examples, or a misunderstanding of the requirements; and they always stem from granting way too much access. I am going to explore a couple of these erroneous methods and explain why they absolutely bug me.

You Get Everything!

Let’s start with the most common misstep that I see. A user requests access to the database and the stand-in DBA takes the easy route. The easy route here is to grant the requester sa access, dbo access, and every other database and server role there is, because it is easier to just tick all the boxes.

This method of assigning permissions happens far too frequently. I have seen it in hundreds of databases for clients. I have also written about it here.

You see, once you grant somebody sa access, none of the other permission grants or denies matter. Sysadmin overrides everything! In this particular case, that would be way too much access for a user who just needs to see the schema. Be extremely careful about when and to whom you grant sysadmin access.

Next up in the realm of granting way too much access is the far too frequent practice of assigning permissions to the public role in each database. I have written about this practice as well – here and here. This one really raises my ire. Granting permissions to the public role crosses into the realm of high risk and downright negligence. Let’s see how some internet examples demonstrate the solution to our problem with regards to the public role.

First, using the test user and role we created in yesterday’s article, let’s confirm that we do not have access to view definitions outside of the assigned role permissions.
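In sketch form, that check might look like the following (the testvwdef user and BusinessViewDef role names are assumptions carried forward from that article):

-- List any VIEW DEFINITION grants for the test principals and the public role
SELECT pr.name AS principal_name
	, dp.state_desc
	, dp.permission_name
FROM sys.database_principals pr
	LEFT OUTER JOIN sys.database_permissions dp
		ON dp.grantee_principal_id = pr.principal_id
		AND dp.permission_name = 'VIEW DEFINITION'
WHERE pr.name IN ('testvwdef', 'BusinessViewDef', 'public');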

After executing that query, I can confirm that the public role does not have the “View Definition” permission. Let’s now revoke the permission to the role and confirm no schema definitions could be viewed.
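Something like this does the trick (the table name is just a stand-in for an object in the demo database):

-- Revoke the permission from the role
REVOKE VIEW DEFINITION FROM BusinessViewDef;
GO

EXECUTE AS USER = 'testvwdef';
-- With no permission on the object, metadata visibility hides it entirely,
-- so this returns NULL
SELECT OBJECT_ID('HumanResources.Employee') AS visible_object_id;
REVERT;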

Now, knowing that the user and the role do not have permissions, let’s go ahead and assign permissions to the public role.
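The grant itself is a one-liner, which is part of why it is so tempting:

-- Every user in the database inherits this
GRANT VIEW DEFINITION TO public;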

Just like magic, a user that should not be able to view schema definitions can now view information it is not supposed to see. This is a security problem. Just to confirm, let’s evaluate the principal permissions compared to the role permissions.
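A quick query such as this sketch shows that the user and the role still have nothing of their own while public now carries the grant:

SELECT pr.name AS principal_name
	, pr.type_desc
	, dp.state_desc
	, dp.permission_name
FROM sys.database_permissions dp
	INNER JOIN sys.database_principals pr
		ON dp.grantee_principal_id = pr.principal_id
WHERE dp.permission_name = 'VIEW DEFINITION';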

Keep this in mind as you go down the dark path of granting permissions to the public role. Just because it is easy doesn’t make it right. On a similar note, you should proceed cautiously with scripts from the internet that suggest granting permissions to the public role. They are wrong!

Finally, we drop down into the realm of “only slightly better” but still way over the top for the requirements. This last method uses the “VIEW ANY DEFINITION” way of granting permissions. The “ANY” keyword actually scopes the permissions to every database on the instance. Let’s clean up the last example by revoking the permission from public so we have a blank slate.
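The cleanup is simply the mirror of the grant:

REVOKE VIEW DEFINITION FROM public;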

With permissions revoked, I will now grant View Any Definition to the test user. Why? Well, because I don’t have that role created in every database (more on the benefits of that approach later). This permission is server scoped, so make sure you are in the master database; otherwise, you chance seeing this error.
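For illustration, here is the grant being attempted from a user database instead of master (a sketch):

USE AdventureWorks;
GO
GRANT VIEW ANY DEFINITION TO testvwdef;
GO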

Msg 4621, Level 16, State 10, Line 22
Permissions at the server scope can only be granted when the current database is master

Yup, that just made my blood curdle a bit. Let’s check out the perms now in the DBA database (a reminder that the permission change was made from the master database and is a server scoped change).
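Executed from the proper scope, the grant and the follow-up check look something like this sketch (it assumes the testvwdef login has a user mapped in the DBA database):

USE master;
GO
GRANT VIEW ANY DEFINITION TO testvwdef;
GO

USE DBA;
GO
EXECUTE AS USER = 'testvwdef';
SELECT entity_name
	, permission_name
FROM fn_my_permissions(NULL, 'DATABASE')
WHERE permission_name = 'VIEW DEFINITION';
REVERT;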

As we can see here, all databases now have the View Definition permission assigned to testvwdef. If there happens to be a sensitive database, you may have exposed yourself to a risk by doing this.

Performing the same tests as previously done would reveal similar results – except on a broader scope.

There are some caveats to help reduce this risk but they do not undermine the need to be responsible or the need to avoid using the “ANY” keyword. Recall that I granted the View Any to the individual user instead of the role? That was simply due to me not having that role in each database. If we grant permissions to roles, and the role does not exist in the master database while trying to grant permissions from that scope, then we see this nifty little message.
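For illustration, that failed attempt looks like this (a sketch):

USE master;
GO
-- BusinessViewDef exists only as a database role, not as a login
GRANT VIEW ANY DEFINITION TO BusinessViewDef;
GO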

Msg 15151, Level 16, State 1, Line 25
Cannot find the login ‘BusinessViewDef’, because it does not exist or you do not have permission.

That is one measure to help prevent this over-permissioning problem. The next possible caveat is a minor stop-gap and can be easily overlooked should the user ever be added to the database in the future. For “Any” to work in its full splendor, the principal to which it was granted must have already been granted “connect” to the database.
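In sketch form, satisfying that prerequisite is as simple as creating the user (the database name here is a stand-in, and CREATE USER implicitly grants CONNECT):

USE SensitiveDB;
GO
CREATE USER testvwdef FOR LOGIN testvwdef;
GO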


The Wrap

This article has explored various options for minimizing the risk of over-extending permissions granted to users. With a little planning and care, we can find various ways to fulfill user requests without compromising the environment.

Every now and then, it will require extra effort in trying to get the pertinent details from the requester. That said, with that extra effort you will find satisfaction and calm knowing your environment is secure and that you are providing quality service to your customers.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Who needs data access? Not You!

Does this sound at all familiar? A user submits a request stating they must have elevated access to a database. In your attempt to gather more information about the request, you learn they must have access because so-and-so has elevated access and they are curious about the database.

The conversation doesn’t go quite as smoothly as that. In fact, it may take quite a lot of effort to squeeze that tiny bit of real information from the request.

After assembling that tidbit of information, you may find yourself choking back a scoff or three. “You need access because you want to explore the database?” Yeah, I can hear the incredulity in your thoughts from here. At no point is this kind of “need” a legitimate reason to gain access to a database.

“I need elevated access!” Oh No you don’t!

Beyond the back and forth of the demand for access just to play around in the data and the DBA resisting the access in the best interest of the data and company, there may possibly be a middle ground that can be reached. I am not talking about creating a dummy database with obfuscated data. Nor am I talking about a dummy database with fake data.

Every now and then, this kind of request may cross your desk and all the requester is really trying to do is find some means of becoming familiar with the database. This can be ferreted out with some extra warnings and more discussions with the requester. Maybe the request really has nothing to do with the data and everything to do with just finding a way to get to know the schema because there is no documentation.

Given the plausibility of this need, there are many ways to grant the request – some may require more resources than others. Let’s suppose the user has absolutely no understanding of TSQL. With that minimal knowledge, you know there are likely to be many potential issues that could cause the database to be brought down and made unusable. This is terrible if the affected database is in production. It is also very bad if the database is a non-prod database. Given this potential, you know you don’t want to grant any permission that implies read or write access in the database (if there is data present).

What if we could create a database without the data? Would that work? Absolutely that would work. This would in essence be a sandbox for the user, and all responsibility for database issues could be directed back to the specific user. This is a fabulous solution if you have adequate resources for more databases/instances/servers to be created. There are multiple ways to create an empty database, but to ensure the best end-user experience, considerations for database naming, resources, and potential impact to other databases must be acknowledged.

Personally, I prefer to maintain the same database name regardless of tier (dev, test, prod etc). This also applies to schema only types of databases where limited access would be granted. Changing database names on the users only breeds confusion and complicates code promotions. If you need to provide the same sort of solution for 30-40 or more databases, you can imagine what kind of confusion that would cause on a server where the actual data-filled databases actually exist (think dbcc clonedatabase types of solutions here where the cloned database must be named differently).
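For illustration, here is what that clone approach looks like (database names are placeholders); note the forced name change that breeds the confusion just described:

-- Creates a schema-and-statistics-only copy of the source database;
-- the clone must carry a different name on the same instance
DBCC CLONEDATABASE (AdventureWorks, AdventureWorks_Schema);
GO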

What to do? What to do?

Thankfully, there is still at least one more entirely viable option. We can use the power of security to achieve some semblance of what the user wants while still preventing the person from viewing the data or causing performance issues.

Limiting access for prying eyes is fairly easy while still granting the opportunity to explore the database design. In SQL Server we have a permission called “View Definition” which will help us accomplish exactly this requirement. In preparation for this, one must determine the scope of the need. Generally speaking, this should be done on a per-database approach. There is a method to grant this for all databases on the server but that is a topic for another day. Once the scope is determined, it is time to proceed with the setup.

I prefer to take advantage of Roles within the database. This simplifies the future maintenance and management of the permissions. If a new principal needs the same permissions as Joe from accounting, then I just add the new user to the same role(s) as Joe – piece of cake.
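For reference, here is a minimal sketch of that setup (the password is a placeholder, and the principal names match those used throughout these demos):

USE master;
GO
CREATE LOGIN testvwdef WITH PASSWORD = 'SuperDuperPassword$1', CHECK_POLICY = ON;
GO

USE AdventureWorks;  -- or whichever database the requester needs to explore
GO
CREATE USER testvwdef FOR LOGIN testvwdef;
GO
CREATE ROLE BusinessViewDef;
GO
ALTER ROLE BusinessViewDef ADD MEMBER testvwdef;  -- use sp_addrolemember prior to SQL 2012
GO
GRANT VIEW DEFINITION TO BusinessViewDef;
GO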

With the Role created, user added to the role, and the “View Definition” permission granted to the role, we are ready to test.

First order of business is to connect with that new login we created (a SQL login was created and is used in this example, but the effects are the same with a Windows login) as shown in the preceding image. After connecting, let’s try some things.

The first test I want to perform is to confirm the user has no read access to data to ensure this permission does not grant extra access along the way.
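Connected as the test user, a simple select like this should fail with a permission denied error (the table name assumes an AdventureWorks style sample database):

-- Should fail with error 229: The SELECT permission was denied...
SELECT TOP 10 *
FROM HumanResources.Employee;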

Great! Select access to objects from this permission alone does not work. Now, let’s test the ability to view object definitions. First, let’s try via the object designer.

First we see a warning about not having enough permissions to save changes, which is good. After clicking OK on that warning, a new message pops up to remind us that the table is “read only”. Perfect! The permission we gave this user does not allow them to alter objects – just view them (as we would expect).

Seeing these results should also help confirm that we can browse the table definition directly in SSMS. Let’s take a gander from a TSQL approach.
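Connected as the test user, the old standby works nicely (object name assumed, as before):

EXEC sp_help 'HumanResources.Employee';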

Great, we can explore the table definitions etc with TSQL. That is of course a system stored procedure, so let’s try one more method.
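A custom query along these lines (a sketch) works just as well:

SELECT o.name AS object_name
	, c.name AS column_name
	, t.name AS data_type
	, c.max_length
FROM sys.objects o
	INNER JOIN sys.columns c
		ON o.object_id = c.object_id
	INNER JOIN sys.types t
		ON c.user_type_id = t.user_type_id
WHERE o.is_ms_shipped = 0
ORDER BY o.name, c.column_id;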

Running this query, I will see results similar to this.

Knowing that we can run custom queries to explore the schema is almost comforting. It does confirm that we are able to query system objects (yes a select works here but this is read only object definition data). But does this mean the user can also see system data? Let’s try!
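One hedged way to test that is to query a DMV that exposes activity data and requires VIEW SERVER STATE, which this user does not have:

-- Should fail with a permissions error rather than return data
SELECT *
FROM sys.dm_exec_requests;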

Excellent! So it seems we have a plausible option to keep the footprint small and minimize access to the data.

The Wrap

This article has explored various options for reducing the access to data for users who “need” it. With a little planning and care, we can find various ways to fulfill user requests without compromising the environment.

Every now and then, it will require extra effort in trying to get the pertinent details from the requester. That said, with that extra effort you will find satisfaction and calm knowing your environment is secure and that you are providing quality service to your customers.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Life Support 2008 – Audit Logons

With SQL Server 2008 and 2008R2 limping along and becoming terminally ill (End of Life Support was July 9, 2019), it should come as no surprise that it is recommended to migrate/upgrade affected databases/servers to newer technology.

Planning to upgrade/migrate requires a fair amount of prep work. Some of that prep work involves auditing your server for any users that may still be using the instance.

Where does one even begin in order to audit those logon events in SQL 2008 or 2008R2? Some may say to use a SQL Server Audit but that feature is an Enterprise only feature for 2008 and R2. If we were on a newer version of SQL Server, Extended Events would be the easy choice. Unfortunately, XE was not adequately mature on 2008 or R2. XE just doesn’t have the appropriate events to use in these versions of SQL Server. What’s left?

How about a server side trace? Yes, that was a bit difficult to say. Server side trace does have just the right events for us to use to discover who is using the server and which principals can be eradicated. A big benefit here is that a server side trace does not come with a service outage as would be required for other methods.

Server Side Trace

How do we go about creating a trace to capture these events? It is common knowledge that using a script to create a server side trace is not very intuitive. It is also not very plausible to run a Profiler session and leave it running for weeks while you do your due diligence. There is a shortcut available that allows us to run a server side trace but it does require the use of Profiler – for just a tiny bit. You can start here to find how to do that.

Great, we have a method to create the script. What needs to go into this session? Let’s take a look at that. Ignoring the initial steps to start a profiler session (other than to use a blank template), let’s jump to the event selection tab. From there, we will make two selections (Logon and Login Failed in the Security Audit section) as shown in the next image.

Once the events are selected, go ahead and click run. From there you may stop the session and then continue with the instructions from the previous link on how to script a profiler session.

After scripting this session, I get a script that looks like the following.
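The following is a trimmed sketch of what that scripting process produces (the real output includes one sp_trace_setevent call per event/column pair; the file path is an assumption):

/* Created by: SQL Server 2019 CTP2.4 Profiler */

DECLARE @rc INT;
DECLARE @TraceID INT;
DECLARE @maxfilesize BIGINT;
DECLARE @on BIT;
SET @maxfilesize = 5;
SET @on = 1;

EXEC @rc = sp_trace_create @TraceID OUTPUT, 0, N'C:\Database\AuditLogon', @maxfilesize, NULL;
IF (@rc != 0) GOTO error;

-- Audit Login (event 14): TextData (1), LoginName (11), StartTime (14)
EXEC sp_trace_setevent @TraceID, 14, 1, @on;
EXEC sp_trace_setevent @TraceID, 14, 11, @on;
EXEC sp_trace_setevent @TraceID, 14, 14, @on;

-- Audit Login Failed (event 20): same columns
EXEC sp_trace_setevent @TraceID, 20, 1, @on;
EXEC sp_trace_setevent @TraceID, 20, 11, @on;
EXEC sp_trace_setevent @TraceID, 20, 14, @on;

-- Start the trace
EXEC sp_trace_setstatus @TraceID, 1;
SELECT TraceID = @TraceID;
GOTO finish;

error:
SELECT ErrorCode = @rc;

finish:
GO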

I created this script from SSMS 18.0 and find it interesting that the script says “Created by: SQL Server 2019 CTP2.4 Profiler”. Despite the very recent version of SSMS used to create this script, this script will work perfectly fine on SQL Server 2008 or R2.

Once I start the trace on a server, I am ready to do just a little bit more. I want to verify what this script means. After all, it is a bunch of numeric values. Let’s look at that with this next query and then compare it to the initial script used to create the trace. The following query requires SQL 2012 or later.
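Here is a sketch of that kind of verification query, mapping the numeric ids back to event and column names:

SELECT t.id AS trace_id
	, te.name AS event_name
	, tc.name AS column_name
FROM sys.traces t
	CROSS APPLY sys.fn_trace_geteventinfo(t.id) ti
	INNER JOIN sys.trace_events te
		ON ti.eventid = te.trace_event_id
	INNER JOIN sys.trace_columns tc
		ON ti.columnid = tc.trace_column_id
WHERE t.is_default = 0;  -- skip the default trace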

This produces output similar to the following.

As you scroll through the list, you can see the Event Name along with the column name for all events/columns that have been added to the trace. I have highlighted the first event / column mappings to illustrate this relationship.

Cool! Now, I know the session does indeed contain the requisite data that I wanted so it is time to start checking to see what is happening on the server.
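Reading the captured events is a matter of pointing fn_trace_gettable at the trace file (the path matches the sketch above):

SELECT te.name AS event_name
	, td.LoginName
	, td.HostName
	, td.ApplicationName
	, td.StartTime
FROM sys.fn_trace_gettable(N'C:\Database\AuditLogon.trc', DEFAULT) td
	INNER JOIN sys.trace_events te
		ON td.EventClass = te.trace_event_id
ORDER BY td.StartTime;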

Now you are all set to go to start figuring out which logins (if any) are still connecting to the server and using databases on that server.

The Wrap

This article has shown how to audit the logon events for a SQL 2008/R2 instance. It also happens to be an excellent follow up to another recent article – here. As you begin to plan your migration off of the SQL 2008 dinosaur, it is essential to baseline the activity and use of the server. This audit session will help you do exactly that!

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Is That Database in Use?

Does anybody know who or what is using this database? Better yet, is the database being used by anything at all? Have you ever asked those types of questions of your staff in an environment you have inherited?

As silly as it might sound, these are legitimate questions. It is not uncommon to have a relic database hanging around after an application has long since been retired. The database just continues to live on via life support in the dark recesses of your database server. Everybody has forgotten about it yet it consumes precious resources to do little more than exist.

But how do you go about discovering if the database is truly required or if it is just chewing up space and cpu cycles? The answer is to start by discovering if there is any activity in the database at all. Extended Events is an ideal method to audit your database and discover if there is truly activity on the database or not.

DB In Use?

Depending on the version of SQL Server you are using, the method is going to be a little bit different. Each method I use still relies on Extended Events (XE); they just require some slight changes and subtle nuances (particularly for those pesky 2008 and R2 instances out there). Let’s focus on the 2008 method this time around and we will revisit the updated version in a later article.

2008/R2

Let’s first start with the XE session that works well for discovering the frequency of use a database may be encountering.
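Here is a reconstruction of that kind of session (the session name is an assumption; the notes discussed next are embedded as comments):

CREATE EVENT SESSION DBInUse
ON SERVER
ADD EVENT sqlserver.database_transaction_begin
(
	ACTION
	(
		sqlserver.database_id
		/* database_context is deprecated in later versions; kept here
		   for additional troubleshooting/exploration */
		, sqlserver.database_context
	)
)
ADD TARGET package0.asynchronous_bucketizer
(
	/* note: with the action source type on 2008/R2, the package name must
	   be specified with the action in two-part naming format */
	SET filtering_event_name = 'sqlserver.database_transaction_begin'
		, source_type = 1  -- 1 = action, 0 = event
		, source = 'sqlserver.database_id'
)
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION DBInUse ON SERVER STATE = START;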

Within the script, I left some interesting notes. First, the note in the target section is of particular interest because it lays out a requirement for this kind of target in 2008/R2. When using the action source type, the package name must be specified along with the action in two-part naming format.

Next, in the actions, I have a note about sqlserver.database_context. This particular action is deprecated in later versions. Do I need it in this session? No. I put it in there for additional troubleshooting/exploration.

Lastly, I have a note about the event name. I chose database_transaction_begin because this is fired when a database transaction starts. I don’t really care if the transaction completes. I just want to know if an attempt was made to use that database and the data therein.

If I comment out the deprecated action (database_context), I could actually use this session on later versions (as constructed) of SQL Server. It doesn’t matter that the asynchronous_bucketizer has been replaced by the histogram target, the session will still create and work properly. SQL Server knows to automatically update the deprecated targets with the appropriate target when creating an XE Session on a server.

Since this is 2008/R2, we have to use TSQL in order to parse the data. The following query will do that for us!
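Here is a sketch of that parsing query, shredding the bucketizer XML slot by slot (DB_NAME resolves the bucketed database_id):

SELECT DB_NAME(slot.value('(value)[1]', 'INT')) AS database_name
	, slot.value('(@count)[1]', 'BIGINT') AS transaction_count
FROM
(
	SELECT CAST(st.target_data AS XML) AS target_data
	FROM sys.dm_xe_sessions s
		INNER JOIN sys.dm_xe_session_targets st
			ON s.address = st.event_session_address
	WHERE s.name = 'DBInUse'
		AND st.target_name = 'asynchronous_bucketizer'
) tab
CROSS APPLY tab.target_data.nodes('//Slot') bucket(slot)
ORDER BY transaction_count DESC;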

After executing that query, I will see results listing the activity of each database since the session was started. Here’s an example.

Here we can see there are indeed some databases that are still in use on this server. If we are looking to retire the instance, or migrate relevant databases to a new server, we have better information about how to go about planning that work. Based on this data, I would be able to retire the ProtossZealot database but would be required to do something with the ZergRush and BroodWar databases. In the case of the unused ProtossZealot database, we now have evidence to present back to the team that the database is not used. It is now up to those insisting on keeping it around to justify its existence and document why it must remain intact.

Wrapping it Up

Extended Events is a powerful tool with plenty of ease of use and flexibility. This flexibility allows the DBA to capably monitor the server for any issue, be it small or large. This article demonstrates how to use Extended Events to determine if a database is being used by someone or something. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

In addition, through the power of XE, we were able to capture a quick glimpse into some internals related to the database engine. In this case, we were able to see autoshrink in action and discover the frequency that it runs on when enabled.

Detail Job History – Back to Basics

Recently, I covered the need to understand job failure frequency and knowing the value of your SQL Agent jobs. You can read the specifics in the article – here.

Sometimes, just knowing the frequency of the job failure is good enough. Sometimes, more information is helpful. Having more information is particularly helpful when meeting with the business to discuss the validity of the job in question.

What do you do in times like this? The most basic answer to that question is – get more data. But that barely scratches the surface. The real question being asked there is how do you go about gathering that data?

There are two methods to gather the data – the hard way and the easy way. Do you like to work hard? Or would you rather work more efficiently?

Efficiency Matters

As was discussed in the previous article, I prefer to do things just a little bit less manually where possible. The consistency of a script matters, but it is also just so much faster than doing things the hard, manual, iterative way. So let’s build a little bit on the script from the previous article.
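A hedged sketch of that evolved script follows; it joins the execution counts to the step level detail in msdb (run_status 0 is Failed, 1 is Succeeded):

SELECT j.name AS job_name
	, js.step_id
	, js.step_name
	, js.subsystem
	, js.command
	, CASE jh.run_status
		WHEN 0 THEN 'Failed'
		WHEN 1 THEN 'Succeeded'
		WHEN 2 THEN 'Retry'
		WHEN 3 THEN 'Canceled'
		ELSE 'In Progress'
	  END AS run_status
	, COUNT(*) AS execution_count
FROM msdb.dbo.sysjobs j
	INNER JOIN msdb.dbo.sysjobsteps js
		ON j.job_id = js.job_id
	INNER JOIN msdb.dbo.sysjobhistory jh
		ON js.job_id = jh.job_id
		AND js.step_id = jh.step_id  -- per-step rows only (outcome rows use step_id 0)
GROUP BY j.name, js.step_id, js.step_name, js.subsystem, js.command, jh.run_status
ORDER BY j.name, js.step_id;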

And here is a sample of the output.

With this script, I have the ability to quickly show which step is failing, what the command is for that step, what kind of process is running on that step, any passwords (in the event of an SSIS password), and of course the failure frequency. This is golden information at your fingertips. There is no need to click through the GUI to gather this information. You can get it quickly and easily in one fell swoop.

The Wrap

An important part of any DBAs job is to ensure database related jobs are running prim and proper. Sometimes that just doesn’t happen. When jobs are being overlooked, it is useful to be able to gather data related to consistency of job success or failure. This script will help you in your investigation efforts. In addition, I also recommend this article in your agent job audit efforts.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Job History – Back to Basics

How necessary is that SQL Server Agent job that you have running on the server? I ask that question of my clients on a routine basis.

Sometimes, I will ask that question as a routine part of a health check for the server. Other times, it might be due to a migration or upgrade that is required for the server. Generally, the answer to the question will be one of two things: a) “Yes, it is necessary.” or b) “What is that job doing?”.

Believe it or not, both answers will typically spawn more questions. You see, I don’t usually ask that question unless the job is failing on a regular basis. You wouldn’t believe how many jobs exist out there that are scheduled and just fail every time as well.

When I encounter a situation like this, it means it is time to have a discussion. In order to have that discussion, one needs to have done a bit of homework in order to better understand the situation. For me, part of this homework involves running the numbers and figuring out the frequency of the job’s failure or success.

Data Gathering

For me, I like to understand how often a job has executed and what the frequency of failure is for that quantity of executions. If I see a job that has not completed successfully in 60 consecutive executions, it is probably a safe bet that the job is not needed. Why? Well, if nobody has noticed that the job hasn’t been working for that long, the likelihood of the job providing any use to the business is slim to none. In this case, I would present a case to the business as to why it should be removed.

But, how do I get to that point? Well, you could go through the job history for each job one by one and run some manual analytics. Or, you could take advantage of a script. I prefer the script route because it is faster, more reliable and a lot less mundane.
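A minimal sketch of such a script follows (the overall outcome row for a job carries step_id 0 in sysjobhistory):

SELECT j.name AS job_name
	, CASE jh.run_status
		WHEN 0 THEN 'Failed'
		WHEN 1 THEN 'Succeeded'
		WHEN 2 THEN 'Retry'
		WHEN 3 THEN 'Canceled'
		ELSE 'In Progress'
	  END AS run_status
	, COUNT(*) AS execution_count
FROM msdb.dbo.sysjobs j
	INNER JOIN msdb.dbo.sysjobhistory jh
		ON j.job_id = jh.job_id
WHERE jh.step_id = 0  -- overall job outcome rows only
GROUP BY j.name, jh.run_status
ORDER BY j.name, run_status;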

Running that script against my sandbox, I may see something such as the following.

Here you will note that the “wtf” job has two entries: one entry for “Succeeded” (in green) and one entry for “Failed” (in red). Each row receives a count of the number of executions.

This is the type of information I can use in a meeting to discuss with the business. This is no longer a discussion of opinion, but rather one that is now based on facts and data. It now becomes very easy to demonstrate to the business that a job has failed 60/60 times and nobody noticed it or cared enough about the consistent failures to do anything about it. Imagine if the failing job happens to be the database backups. I wonder what the action items for that job failure might include.

The Wrap

An important part of any DBAs job is to ensure database related jobs are running prim and proper. Sometimes that just doesn’t happen. When jobs are being overlooked, it is useful to be able to gather data related to consistency of job success or failure. This script will help you in your investigation efforts. In addition, I also recommend this article in your agent job audit efforts.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Public Role Permissions – Back to Basics

Managing permissions in the database environment is such an important task. Setting permissions correctly is paramount to a successful audit and is one of your best defenses against improper/unwanted access. Yet, in spite of that importance, security is often overlooked, neglected, improperly configured, or just flat out ignored. Let’s not forget the times that security is intentionally misconfigured so certain individuals are permitted backdoor access.

Security, just like performance tuning, is a perpetual (and sometimes iterative) task. There is no excuse for setting your security and forgetting it. It must be routinely reviewed.

While performing a script review for a client, I was reminded of the need to also review their security setup. The reminder was more of a slap in the face, as I noticed that the developer had built some permissions assignments into the upgrade scripts. Unfortunately, we were not permitted to alter any of the scripts because they came from a third party vendor (and the vendor likewise refused to fix the problems with the scripts, but I digress).

What could be wrong with this?

I want you to digest that for just a moment. This is an example of the permissions this particular vendor insists on setting for the public role. What could possibly be wrong with that? Let’s examine a couple of the permissions like “Control” and “View Change Tracking”.

View Change Tracking

This permission is an elevated permission that is required in order to use the change tracking functions. The permission is considered elevated for the following reasons:

  1. Change tracking records contain the PK value for rows that have been deleted. If sensitive information was deleted that a user should not be able to access, the user would be able to reverse engineer the data from the change tracking data.
  2. A user may be denied access to a column that contains sensitive data. If data in the column is changed, the data would be stored in the change tracking and a user can determine the values that were updated for the sensitive data.

Control

I am going to take this one directly from the Microsoft documentation.

Confers ownership-like capabilities on the grantee. The grantee effectively has all defined permissions on the securable. A principal that has been granted CONTROL can also grant permissions on the securable. Because the SQL Server security model is hierarchical, CONTROL at a particular scope implicitly includes CONTROL on all the securables under that scope. For example, CONTROL on a database implies all permissions on the database, all permissions on all assemblies in the database, all permissions on all schemas in the database, and all permissions on objects within all schemas within the database.

Now digest that a bit. Once digested, consider what the public role does to user access in a database. The public role permissions are inherited by all users of the database, whether the users have been granted the permission directly or not. You should only grant permissions to the public role that you honestly believe ALL users should have. If you are being serious in your role, then the number of times you grant permissions to the public role should be either a) never, b) when you want to have a data breach, or c) when you are testing in a sandbox to improve your skills.

Check for Perms

When you are uncertain of which permissions have been assigned to the public role, or you just haven’t reviewed your permissions real-estate in some time, it is best to pull out a script and start the process. As luck would have it, I have a few scripts that can help with that (here or here) and I have a new one that I am sharing now.

Let’s start with a basic query that will display all of the permissions assigned to the public role in a specific database.
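Here is a reconstruction of that basic query:

SELECT dp.state_desc
	, dp.permission_name
	, SCHEMA_NAME(o.schema_id) AS schema_name
	, o.name AS object_name
FROM sys.database_permissions dp
	INNER JOIN sys.database_principals pr
		ON dp.grantee_principal_id = pr.principal_id
	LEFT OUTER JOIN sys.objects o
		ON dp.major_id = o.object_id
WHERE pr.name = 'public';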

There is nothing super special about this query. Looking at it, it is querying the permissions for the public role specifically. I display where the permission is a “Deny” or “Grant”. Then we list the permission name and then the schema and the object.

Let’s take that script and evolve it now. I am going to plan for the worst and expect that some permissions have been applied that shouldn’t have been, thanks to some vendor upgrade script (because – well, history). Since I am expecting the worst, I am going to add some script-generating code that will revoke the unwanted permissions. And, still expecting the worst (namely, that revoking the permissions will break something), I will also add some code that can generate the appropriate “Grant” statements.
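The evolved version might look something like this sketch; the final two columns build the “Revoke” and “Grant” statements, and the COLLATE clauses exist for the reason explained next:

SELECT dp.state_desc
	, dp.permission_name
	, SCHEMA_NAME(o.schema_id) AS schema_name
	, o.name AS object_name
	, 'REVOKE ' + dp.permission_name + ' ON '
		+ QUOTENAME(SCHEMA_NAME(o.schema_id)) COLLATE Latin1_General_CI_AS_KS_WS
		+ '.' + QUOTENAME(o.name) COLLATE Latin1_General_CI_AS_KS_WS
		+ ' FROM [public];' AS revoke_script
	, 'GRANT ' + dp.permission_name + ' ON '
		+ QUOTENAME(SCHEMA_NAME(o.schema_id)) COLLATE Latin1_General_CI_AS_KS_WS
		+ '.' + QUOTENAME(o.name) COLLATE Latin1_General_CI_AS_KS_WS
		+ ' TO [public];' AS grant_script
FROM sys.database_permissions dp
	INNER JOIN sys.database_principals pr
		ON dp.grantee_principal_id = pr.principal_id
	INNER JOIN sys.objects o
		ON dp.major_id = o.object_id
WHERE pr.name = 'public';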

That looks better. I have a way of identifying the unwanted permissions as well as an easy script I can execute to remove the unwanted permissions. Note the use of the collate in the final two columns. As it turns out, permission_name from sys.database_permissions has a column collation of Latin1_General_CI_AS_KS_WS. Since I ran into some errors (shown below), it is easier to direct the DB engine to use the collation that matches the permission_name column.

Msg 451, Level 16, State 1, Line 11
Cannot resolve collation conflict between “SQL_Latin1_General_CP850_CS_AS” and “Latin1_General_CI_AS_KS_WS” in add operator occurring in SELECT statement column 5.
Msg 451, Level 16, State 1, Line 11
Cannot resolve collation conflict between “SQL_Latin1_General_CP850_CS_AS” and “Latin1_General_CI_AS_KS_WS” in add operator occurring in SELECT statement column 6.

Alas, this is still not quite as efficient of a script as I would like. I may have hundreds of databases on the instance and need to evaluate all of them. Time for the bigger guns.
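A sketch of the bigger guns, using the undocumented (but widely known) sp_MSforeachdb to sweep every database:

DECLARE @sql NVARCHAR(MAX);
SET @sql = N'USE [?];
SELECT DB_NAME() AS database_name
	, dp.state_desc
	, dp.permission_name
	, SCHEMA_NAME(o.schema_id) AS schema_name
	, o.name AS object_name
FROM sys.database_permissions dp
	INNER JOIN sys.database_principals pr
		ON dp.grantee_principal_id = pr.principal_id
	INNER JOIN sys.objects o
		ON dp.major_id = o.object_id
WHERE pr.name = ''public''
	AND o.is_ms_shipped = 0;';

EXEC sys.sp_MSforeachdb @sql;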

That will take care of all of the permissions for the public role in all of the databases, with a slight caveat: I am only checking against the objects that are not flagged as is_ms_shipped. Now, isn’t there also a public role at the server scope? Indeed there is! Let’s also capture those permissions.
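The server scoped equivalent queries the server catalogs instead:

SELECT pr.name AS principal_name
	, sp.state_desc
	, sp.permission_name
	, sp.class_desc
FROM sys.server_permissions sp
	INNER JOIN sys.server_principals pr
		ON sp.grantee_principal_id = pr.principal_id
WHERE pr.name = 'public';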

Now, I feel many times better about what could possibly be going wrong with the public role.

If you are in a tightly controlled environment or you are just sick of people doing this sort of thing to your servers, there are more extreme measures that can be taken. You can read about it here or here.

The Wrap

It is amazing what some people will do that just doesn’t make sense. Granting permissions to the public role is one of these cases. That behavior also explains why there are documents and procedures for hardening the public role (here and here).

If necessary, I recommend locking down your public role. It will make your job a little easier and give you better rest at night.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Easy Open Event Log Files

One of the beauties of using a tool such as Extended Events (XEvents) is the versatility and ease of use. XEvents can be used for so many things to help an aspiring DBA or Developer do their job better.

There has long been a sticking point about allowing various people access to production servers. Part of the sticking point is the developer who believes that access to read and evaluate trace files on prod is mandatory (yes, I have heard many times of this being used effectively by developers to gain prod access). But is that prod access truly necessary? Without an adequate method to provide the developer access to the trace data, the DBA loses confidence (from management) and standing ground for their side of the argument. How can a middle ground be met?

XEvents to the Rescue

With the use of XEvent tracing, a DBA is given a new realm of possibilities. This particular realm of possibilities becomes available when the XEvent trace (session) is created using a file target. Through the use of a file target, and the flexibility of XEvents, I have shown the beginnings of how a developer might be able to access the trace data easily.

That is merely the beginning though. There are a few more layers to making life easier for both the developer and DBA in regards to fetching event trace data. This article will show a few methods that will help make life just a bit easier.

Let’s start with the basics. After an XEvent session is created, it is important to know where the event file is being stored. If you don’t happen to know where it is, that is not a problem. The file source is easy enough to find and I have detailed it in this article.
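If you prefer to grab it straight from TSQL, the session metadata exposes the path for running sessions with a file target (a sketch):

SELECT s.name AS session_name
	, CAST(t.target_data AS XML).value('(EventFileTarget/File/@name)[1]', 'NVARCHAR(260)') AS file_path
FROM sys.dm_xe_sessions s
	INNER JOIN sys.dm_xe_session_targets t
		ON s.address = t.event_session_address
WHERE t.target_name = 'event_file';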

Method 1

As shown in the previous article, the first quick method to open an XEvent trace file is from within SSMS as shown here.

After clicking on “Merge Extended Event Files…”, a new window titled “Merge Extended Event Files” will open. From the new window, follow these steps.

Method 2

Now, that is an extremely simple method to open and view a trace file for XEvents. That said, would you believe there are other equally easy methods? Let’s look at the next method that is very simple as well.

As was previously mentioned, you will need to know the file location first and then navigate to that location. So let’s do that. My common location is C:\Database\XE. Once I have navigated to the folder, I need to choose my file to be opened and follow the prompts as shown here.

A new window will pop up. By default, SSMS will be showing. If not, you can select SSMS and enable the option to always use SSMS for this file type by clicking the check box before clicking OK.

Method 3

If the prior two methods feel unreasonably easy, then there is this next method that will spice things up a bit. It does require a bit of coordination. Once you have identified the file to be opened, you can elect to perform a drag and drop operation of the file onto SSMS. Once done properly, the file will open in SSMS and you can start evaluating the data.

Method 4

This is just as simple as the previous method but requires a skosh less coordination. This method involves a double-click. A double-click on the file will open the file up in SSMS.

Method 5

If you paid close attention to the screenshots in method 2, you will have noticed that there was a menu option called “Open” that I did not discuss. If you select open from the context menu at that point, then the xel file will open in SSMS just like any of the other methods mentioned thus far.

Wrapping it Up

Extended Events is a powerful tool with plenty of ease of use and flexibility. This flexibility allows the DBA to better service the needs of the developers when the developers need access to the trace data. This article demonstrated another use for Extended Events. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

Implicit Conversion Insights with XE

Implicit

In a recent article, I covered a little bit about implicit conversions and an error that may crop up from time to time. It is one thing to understand what an error means, it is an entirely different problem to understand what is causing the error in the first place.

Seldom does a DBA have the opportunity to get out in front of infrequent or random errors such as implicit conversions. More often than not, it is the privilege of the DBA to find out about the problem after the fact from a developer or, worse yet, an end-user.

What if there was an easy method to track when these errors occurred? Even better, it would be ideal to understand what query caused the error as well as which types of queries are merely causing implicit conversion warnings. Better still, what if you had a way to understand which type of conversion problem is at hand?

Implicit Playground

To start this playground, let’s begin with a simple query based on the AdventureWorks database.
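The article’s exact script is not shown here, so consider this hypothetical stand-in; comparing the nvarchar NationalIDNumber column to an int is enough to produce the kind of warnings discussed below:

SELECT e.BusinessEntityID
	, e.NationalIDNumber
	, e.JobTitle
FROM HumanResources.Employee e
WHERE e.NationalIDNumber = 112457891;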

This query will yield a plan similar to the following.

If I hover the select operator, I will see a little window like the following.

I have a few items highlighted in this window. In the larger red box, I am outlining much of the type conversion warning. Inside of that, there is a blue box highlighting the “convert_implicit” warning. If I look further into the error message, there is text about query plan choice and “CardinalityEstimate” or “SeekPlan” for the different convert_implicit warnings. Another way of looking at it is as follows.

This seems a lot more friendly than that little context window. Here we have an easy to follow layout of what is happening. As I drill down into the warnings, I can see there are three convert_implicit warnings and they all trigger a “planaffectingconvert” type of warning. As we look closer at this, I can see there are two distinct types of convert issues.

There is one for the conversion from nvarchar to int in the HumanResources.Employee table for the NationalIDNumber field (green).

Two warnings state that an index seek has been disabled due to the conversions (orange and purple).

Curious to see what all of the possible “ConvertIssue” values could be for these implicit conversions? That is easy enough with the following query.
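A sketch of that query, peeking at the XE maps DMV:

SELECT mv.map_key
	, mv.map_value
FROM sys.dm_xe_map_values mv
WHERE mv.name = 'plan_affecting_convert_type';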

Which shows us the following results.

That is a grand total of three. Having covered the cause of two of these already, all that is left is the type called “unknown”. I think we can safely presume that the cause of this one is something else that is undocumented and hence “unknown”.

So far, so good. We have seen how these warnings can crop up in execution plans. We have also seen how to query for the “ConvertIssue” types, or “plan_affecting_convert_type”, from the Extended Events (XE) maps system DMV.

Right about now, the light bulb should be popping up for you. If we can query the XE metadata for the different convert issue types, then does that mean there is a way we can use XE to track these things and review them at our own convenience? Yes there is!

XE

Without further ado, let’s look at an XE session that can be useful in discovering more information regarding implicit conversions in the database environment.
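What follows is a hedged reconstruction of that session (the session name, file path, and the choice of commented out fire-hose events are assumptions):

CREATE EVENT SESSION ImplicitConversion
ON SERVER
ADD EVENT sqlserver.error_reported
(
	ACTION
	(
		sqlserver.client_app_name
		, sqlserver.database_name
		, sqlserver.sql_text
	)
	WHERE error_number = 257  -- implicit conversion error
),
ADD EVENT sqlserver.plan_affecting_convert
(
	ACTION
	(
		sqlserver.client_app_name
		, sqlserver.database_name
		, sqlserver.sql_text
	)
)
/* uncomment to turn this into a fire-hose style session
, ADD EVENT sqlserver.sql_statement_completed
(
	ACTION (sqlserver.sql_text)
)
*/
ADD TARGET package0.event_file
(SET filename = N'C:\Database\XE\ImplicitConversion.xel')
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION ImplicitConversion ON SERVER STATE = START;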

This session is pretty comprehensive, with the ability to be very much like a fire-hose session if the commented out events are uncommented prior to session creation. Running this on a busy server (for less than 1 minute) for a client with a third party vendor application produces results that look like the following.

As we can see here, the session is trapping when I have any of those pesky implicit conversion errors (red box outline) as mentioned in the previous article, as well as the convert_implicit warnings (purple box outline) that were demonstrated in the execution plans in this article.

How am I trapping the implicit conversion error? That is done via the following piece of the event session shown above.
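From the sketch above:

ADD EVENT sqlserver.error_reported
(
	ACTION
	(
		sqlserver.client_app_name
		, sqlserver.database_name
		, sqlserver.sql_text
	)
	WHERE error_number = 257  -- implicit conversion error
)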

Error number 257 is the message id for the implicit conversion error. When filtering on that ID for this event, I will receive only the implicit conversion errors that match that number.

As for the implicit conversion warnings, we have the following that accomplishes that requirement.
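Again from the sketch above:

ADD EVENT sqlserver.plan_affecting_convert
(
	ACTION
	(
		sqlserver.client_app_name
		, sqlserver.database_name
		, sqlserver.sql_text
	)
)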

The event, plan_affecting_convert should seem eerily familiar at this point. We have seen terminology like that in the execution plans as well as in the query for the map values.

The combination of these events will provide great insight into the issues revolving around any implicit conversions you may be having in your database environment.

Wrapping it Up

Implicit conversions are a fail-safe for when bad design or code (or both) crops up in your environment. When this fail-safe is being used in your environment, it is helpful to gain further insights via Extended Events. This article demonstrated another use for Extended Events. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.


Implicit Conversion Fail

Implicit

Every now and again, you may run into an error about implicit conversions in your queries. Implicit conversions are not uncommon in the computing world and can be viewed as a kind of fail-safe for when we don’t quite follow decent practices while designing the database, writing queries, or both.

Despite this little fail-safe, there are times when a nasty little error will pop up and cause a bit of consternation.

Implicit conversion from data type %ls to %ls is not allowed. Use the CONVERT function to run this query.

What Went Wrong

Unlike many other errors in SQL Server, this error message makes some sense. The major components of what went wrong are present, and you are given a decent idea of which conversion attempt failed. When this particular error happens, you can bet there are issues with some TSQL code somewhere for sure. In addition, you can bet there is likely a problem with the database design as well. Yay! More work for your back burner.

First, this error comes with an error id of 257 and we can see the message text via the following query. This id is important for when we want to monitor for this problem in the future.
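A sketch of that query:

SELECT m.message_id
	, m.severity
	, m.text
FROM sys.messages m
WHERE m.message_id = 257
	AND m.language_id = 1033;  -- English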

Let’s see how we can recreate this problem.
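The original demo script is not shown here, so here is a hypothetical stand-in that raises the same error; it loads a temp table with random data and then forces the date column into an integer context:

CREATE TABLE #Implicit
(
	ImplicitID INT NOT NULL
	, SomeDate DATETIME NOT NULL
);

-- Load some random sample data
INSERT INTO #Implicit (ImplicitID, SomeDate)
SELECT TOP 100
	ABS(CHECKSUM(NEWID())) % 1000
	, DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 365, '2019-01-01')
FROM sys.objects;

-- datetime to int must be an explicit conversion, so this raises error 257
DECLARE @ImplicitID INT;
SELECT @ImplicitID = SomeDate
FROM #Implicit;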

Which will produce the following.

This is a prime example of a bad query producing an error. Obviously, I am using the wrong data type to try and query the temp table. The ImplicitID column is an integer and I am trying to query it using a date. The quick fix would be to query the appropriate date column if I must use a date in my query, or I can use an integer to query the ImplicitID column.
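In sketch form, either tweak works:

-- Option 1: keep the date, but compare it to the date column
SELECT ImplicitID, SomeDate
FROM #Implicit
WHERE SomeDate = '2019-12-01';

-- Option 2: use an integer against the integer column
DECLARE @ImplicitID INT;
SELECT @ImplicitID = ImplicitID
FROM #Implicit
WHERE ImplicitID = 1;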

After this minor tweak, the query works and I start to see results. Given the random nature of the data in this query, the results will vary from batch to batch.

Wrapping it Up

Implicit conversions are a fail-safe for when bad design or code (or both) crops up in your environment. Sometimes, this fail-safe is inadequate and needs further assistance. Sometimes, that may be an explicit conversion and sometimes that means an appropriate rewrite of the query to use the appropriate columns and data types in the queries. This was an introductory article into the world of implicit conversions. There will be a follow-up or two about implicit conversions and monitoring for them. In preparation for the upcoming articles, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

