Incorrect Syntax – What?

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: November 17, 2017

Intuitive error messages are very helpful when trying to troubleshoot a problem. On the other hand, when we receive an error that is vague, we sometimes find ourselves wanting to throw the laptop out the window. What do we do, though, when the error message really does look to be pretty intuitive but we can’t figure out where the problem is?

Today, I am going to cover one of these falsely intuitive errors. As it turns out, this is a problem that I had actually forgotten about since I had not seen it in many years. The error is shown here.
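The screenshot is not reproduced here, but the message is the classic syntax complaint (the line number will vary):

Msg 102, Level 15, State 1, Line 3
Incorrect syntax near '.'.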

This error looks pretty straightforward, right? It tells me there is incorrect syntax somewhere in my query. Running into a syntax error somewhere along the line happens to all of us at one time or another. Because of that, I would not be terribly alarmed by the error and would just proceed to fixing the syntax problem. Here is the query that is throwing that error.
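The original query appeared as an image; here is a likely reconstruction based on the objects named later in this post (the exact column list is an assumption):

SELECT s.object_id,
       s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp;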

Now is the time to start pulling out your hair. There is no syntax error in the query. Go ahead and look it over 10-15 times. I know I did.

Ok, if there is no syntax error, then what could possibly be the real problem? Is the database corrupt? Maybe a system table is corrupt? Grasping at straws here, but could it possibly even be some sort of royally screwed up permissions?

Everything seems to be checking out properly. There is no corruption whatsoever. The laptop is soon to be launched at this point, right? Ok, maybe not launched, because this is a simple query. But had this been a production-related query that was rather intense and complicated, something really might get launched as the frustration mounts.

Simple Solution

Well, since the error does not fit the crime, maybe this is one of those really odd errors that truly means something else. In this case, if we dissect the query we will probably notice that there is a DMO being called here. If I query sys.stats directly there is no problem, but joining to sys.dm_db_stats_properties I get the error. If I run the query from the master database, the query runs without error. In fact, running the query from within a connection to most databases succeeds.

Now we have two clues to the problem. It only fails in some databases and only fails when the DMO is used in the query. Let’s take a look at database properties and see what we can find. In this scenario I am finding this little nugget in the affected databases.

This database happens to be in SQL Server 2000 compatibility mode. I wonder why that may be and proceed to investigate as well as fix it. Let’s see what happens after changing the compatibility mode to something more recent (and yes, this is an OLD version of SQL Server and the whole thing should be updated anyway, but that is a discussion for another time).
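The repro script is not reproduced here; a minimal sketch of it follows, assuming an instance old enough to still allow compatibility level 80 (SQL Server 2008/2008 R2 with the DMO available) and a placeholder database named Demo:

ALTER DATABASE Demo SET COMPATIBILITY_LEVEL = 80;
GO
USE Demo;
GO
-- fails with: Incorrect syntax near '.'
SELECT s.name AS stats_name, sp.last_updated, sp.rows, sp.modification_counter
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp;
GO
ALTER DATABASE Demo SET COMPATIBILITY_LEVEL = 100;
GO
-- succeeds and returns the stats with their properties
SELECT s.name AS stats_name, sp.last_updated, sp.rows, sp.modification_counter
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp;
GO
-- confirm the database is now at the expected compatibility level
SELECT name, compatibility_level FROM sys.databases WHERE name = N'Demo';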

If we step through this whole script, we will see that it reproduces the entire scenario from break to fix. The first thing is to set the compatibility mode to 80 (SQL Server 2000), then run the query (which results in the error). Next, I change the compatibility mode to 100 (SQL Server 2008) and rerun the query (no error this time), and I get all of the stats with properties that I was hoping to get. I conclude by confirming that I am in the correct compatibility mode.

Conclusion

Errors will happen as we work with TSQL – that can be expected. Sometimes we type too fast or just miss something here or there. When we get a syntax error, it really sets our mind to focusing on it being a mistake that we made. In this case, however, the error is thrown because the DMO is not accessible in compatibility mode 80, and it surfaces as a syntax problem.

Due to the straightforward error – we may spend a little too much time looking at the wrong thing. Keep your mind open to the error being elsewhere than your code should you see a syntax error near “.” in your endeavors.

SQL Server Haunt 2017

Comments: No Comments
Published on: November 1, 2017

Halloween is a great time of year. It is unfortunate that it is just one day of the year. That said, I do like to think of the month of October as Halloween Month. I have several posts over the years that geek out over the cross-over between Halloween and SQL Server.

The undead of Halloween are now upon us. Among the hordes of data zombies roaming the streets in lab-coats and fishnet stockings, few of us are still scrambling to remove the hexes we have looming over our data.

As chance would have it, these hexadecimals, err hexes, have a more profound effect on us than we first thought. Many may have yet to even recognize the impact of the hexes placed along with the monsters that now lurk in the data after having been summoned via those hexes.

DB and Fun Related

Seeing as I am a really big fan of this holiday, I have written a few Halloween posts over the years. If you are interested in the previous Halloween posts, here is a list of a few of them:

All Halloween posts

That list is my Halloween treat this year. Now for a bit of a trick with a very strong warning. Because of this warning, I am not posting any code showing how to perform the trick.

Warning

The contents of this post are for the intent of HUMOR!

Freddy Krueger


I can hear you clamoring from half a world away right now – “There is nothing called ‘Freddy Krueger’ in SQL Server. What in the world are you talking about?”

You would be very accurate in your exclamation there. But this is not necessarily a strict exercise in feature names within SQL Server. I want you to think a little further outside the norms for a while.

Do you currently or have you ever needed to shred XML? XML shredding via TSQL can be a monstrously bloody killer to your database performance. As it turns out, Mr. Krueger was also a monstrously bloody shredder.
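For anybody who has not had the pleasure, shredding just means turning XML nodes into rows and columns – a quick sketch (the XML shape here is made up purely for illustration):

DECLARE @x xml = N'<orders><order id="1" amount="9.99" /><order id="2" amount="19.99" /></orders>';

SELECT o.value('@id', 'int')              AS order_id,
       o.value('@amount', 'decimal(9,2)') AS amount
FROM @x.nodes('/orders/order') AS t(o);   -- each matching node becomes a row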

Jason Voorhees

Yet another beast that is not truly in SQL Server, or is it? A not-so-new but new feature in SQL Server is called JSON. This feature does actually perform better than XML in some regards. That said, we do have a very common problem between the two of these features – blobs.

If you are not familiar with what that means – here you go. A blob is an overly large item being stored in the database. If you wish, you could correlate that to the other well-known Halloween beast – “The Blob”.

Over time, this blob acts like sludge and just slows down your database queries. In addition, like the creature, the blob in your database tends to continue to grow in size and is seemingly never able to be put in check.

Skeletons

When I find skeletons, I have to be honest, I don’t find them terribly frightening. When talking about skeletons in your database, I am even less frightened.

Then again, when I run into the situation as described recently, in this post, I may get a bit of a startle and get just a wee bit concerned.

Overall though, I am rarely startled or frightened by any skeletons in the database. These are really just the supporting structures of a nice secure database and are called “schemas”. See, not really all that frightening here if we think about it just a bit.

This next one however, might be a little harder and should be nearly enough to cause some heart pain.

Warlocks

Surely there are no wizarding type of people in the database, right? Warlocks? I know for absolute certainty that there is no such feature or anything remotely close to a warlock, witch or wizard within the database. That is unless my database is about mystical creatures and people.

Alas, I urge you again to expand the box of perception a little bit and become just a tiny bit imaginative. This one, truth be told, does require a fair amount of explanation and imagination though.

The problem comes in part from some magical data issues that can occur due to this particular feature. In addition, this also comes from the wonderful grammatical errors from various blog posts and forums out there misspelling “which” as “witch”. Since “manwich” is really close to “man-witch”, I am calling it a warlock.

Now, since I am calling it a warlock, that leads us to the next strong hint about the feature. “Lock” in this case is the key. Now which magical, imaginative feature might there be that is related to “lock”? That would be the “nolock” directive and all of the data quality issues that it presents. Here is a really really good recap (by Aaron Bertrand) on this feature along with reference to it being “magic” – at this site.
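To make the magic concrete, here is what the hint looks like in use (the table name is a placeholder), along with the reason it earns its reputation:

SELECT p.PaymentID, p.Amount
FROM dbo.Payments p WITH (NOLOCK);  -- reads uncommitted data: rows may be dirty, duplicated, or missed entirely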

Pirates

This is probably the easiest of the day by far. For all the data loving geeks out there, SQL Server has this pirate-flavored way for you to get your drool on. This feature is called “R”. Yup – just like what a pirate says, matey.

R is a tool to be used by data scientists or data geeks in general to try and throw together many different flavors of statistical analysis about your data.
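On SQL Server 2016 and later, with R Services installed and external scripts enabled, invoking R looks something like this (the R script here is a trivial stand-in):

EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- data.frame(avg_val = mean(InputDataSet$val));',
    @input_data_1 = N'SELECT CAST(object_id AS float) AS val FROM sys.objects;';
-- InputDataSet and OutputDataSet are the default data frame names for the input and output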

Split Brain

Finally, (at least for this Halloween) we have this condition that is real within SQL Server. While treatable with long hours and heavy medication, it is something to be feared.

This condition is something rare but it is very real. The split brain syndrome is pretty much a multiple identity personality disorder in your database. If you have multiple nodes in a cluster, mirror or availability group, it is possible for more than one of those nodes to believe it is the master node and then for different transactions to become hardened in each of those nodes.

When this happens, you will not be able to use bleach to clean up the mess. Instead, you will be required to spend a grundle of time with your database cuddling it and nursing it back to data consistency and good mental health.

Last but not least, HAPPY HALLOWEEN!

What’s that SSIS Password

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: October 30, 2017

A recurring theme over the past several weeks (there are always recurring themes it seems) has been an issue that relates to SSIS. Clients will call for help with an SSIS package that has started failing and they are stuck trying to figure it out. They are all stuck at the same spot in the process – trying to get the package open. As it turns out somebody has decided to password protect the package in each case and the client is unable to open the package because they don’t know the password.

Inevitably they are all stuck at the following screen:

Granted, this screenshot is of the password prompt for a project password – it really is the same issue if it is a package password. This seems like pretty good security, right? If nobody knows the password then nobody can alter the package, right? At least that seems to be the prevailing course of thought.

Of course this introduces various other issues, one of which I alluded to already – a failing package. What if the requirements change? What if the package needs to be migrated? What if the package needs to be documented from stem to stern? Even better – what if this type of security is not all it is purported to be?

SSIS Security

Microsoft has provided various “protection” levels for safeguarding sensitive information within a package. These levels can be set from within the GUI or from the dtutil utility. Here are those settings.

  • DontSaveSensitive (0) – Suppresses the values of sensitive properties in the package when the package is saved.
  • EncryptSensitiveWithUserKey (1) – Uses a key based on the current user profile to encrypt only the values of sensitive properties in the package.
  • EncryptSensitiveWithPassword (2) – Uses DPAPI to encrypt the values of sensitive properties in the package based on a user-generated password.
  • EncryptAllWithPassword (3) – Encrypts the entire package based on a user-generated password.
  • EncryptAllWithUserKey (4) – Uses a key based on the current user profile to encrypt the package. Only the user that encrypted the package can open or run it.
  • ServerStorage (5) – Relies on SQL Server database roles for protection. Only valid if the package is saved to msdb and not to the file system.

Based on these descriptions, a look at the package contents (via notepad) would look something like this:
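The screenshot is not reproduced here; in the older dtsx format the level shows up as a property element like the following (2012-and-later projects store it instead as a DTS:ProtectionLevel attribute on the DTS:Executable element):

<DTS:Property DTS:Name="ProtectionLevel">2</DTS:Property>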

The package will have that ProtectionLevel value added to the xml of the package with the selected value based on the descriptions from the previous table. In the case of this image, it appears that the package in question has been set to “EncryptSensitiveWithPassword” which correlates to a value of 2. This is also what is causing that password prompt to be displayed when opening the package.

So What’s that SSIS Password?

First, a bit of a birdwalk. Some will say you don’t need that password. They say you can go ahead and just change the “ProtectionLevel” value in the project/package xml and all will be rosy good. That may or may not work. Try it at your own risk. That said, if your ProtectionLevel is at a value of 2, there is an alternative method by which you could retrieve that password. I am going to show you just how to do that.

This method assumes that the package is being executed via a SQL Agent job and that you also have access to view job properties on the server in question.

Let’s just jump straight to a script to help us with that.
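A sketch under those assumptions follows. It hunts through SSIS job steps for the /DECRYPT switch that dtexec uses to pass the package password (the string parsing is deliberately crude and assumes an unquoted password):

SELECT j.name AS job_name,
       s.step_name,
       s.command,
       -- grab the token immediately following '/DECRYPT '
       SUBSTRING(s.command,
                 CHARINDEX(N'/DECRYPT ', s.command) + 9,
                 CHARINDEX(N' ', s.command + N' ', CHARINDEX(N'/DECRYPT ', s.command) + 9)
                     - (CHARINDEX(N'/DECRYPT ', s.command) + 9)) AS probable_password
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobsteps s
    ON s.job_id = j.job_id
WHERE s.subsystem = N'SSIS'
  AND s.command LIKE N'%/DECRYPT %';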

Take note here that I am only querying the msdb database. There is nothing exceedingly top secret here – yet. Most DBAs should be extremely familiar with these tables and functions that I am using here.

What does this show me though? If I have a package that is being run via Agent Job in msdb, then the sensitive information needs to be decrypted somehow. So, in order to do that decryption the password needs to be passed to the package. As it turns out, the password will be stored in the msdb database following the “DECRYPT” switch for the dtutil utility. Since I happen to have a few of these packages already available, when I run this particular query, I will see something like the following in my results.

Now, let’s go ahead and take one of those packages and test this out. I will just take the password that was listed for one of the packages, find the path of that package and then try to open the package using the password I just uncovered. Let’s keep it simple and say I am trying to open the first package called “Sports Adobe Ticket Sales Data Upload”.

After finding the package in question, I enter the password – shown here.

After I click the “OK” button, I am greeted with the following.

Tada! I now have full access to the package fully decrypted. If I need to change the ProtectionLevel, I can easily do that now as well. From here, I am set to proceed with troubleshooting the failure or any of the other reasons I noted for needing to gain access to the package.

 

Conclusion

Occasionally it becomes necessary to access an SSIS package that may be encrypted with a password. This is easily accomplished (unfortunately) if the package happens to be run via a job in SQL Server. While it is possible to gain access to the package, it would be far better to not need to circumvent the security at all. Instead, it would seem a much better idea to properly document the necessary passwords, etc., in a common vault so the appropriate people could access the package in the event of an emergency.

Drop That Schema

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: October 27, 2017

An often underutilized, or maybe even misused, feature of SQL Server is the database object schema. In the event of the latter, there is an occasional requirement for change. When this need arises, it can turn into a bit of a problem. That is, of course, if we are not well prepared. In this article, I am going to explore one possible change – the dropping of a schema.

What is a Schema?

A schema is a distinct namespace to facilitate the separation, management, and ownership of database objects. It removes the tight coupling of database objects and owners to improve the security administration of database objects (Source: Technet).

Much like a skeleton is a distinct system that helps support the structural integrity of certain biological entities, a schema helps to support a distinct functioning and structure within a database. This is, of course, a very simplistic description and analogy, but it works.

Another way to look at a schema is almost like a blueprint of what has been or will be created within the database. Like many blueprints, a database schema can map out all sorts of “sub-systems” at our disposal within the database. Some of these may include functions, procedures, tables, views and so forth.

What happens when you try to take out the entire blueprint without understanding the relationship to all of the subsystems? In the case of the schema it just so happens that you will be presented with an error message informing you that you are trying to do something that is either ill-advised or not supported or both.

Here is an example of such an error.

Msg 3729, Level 16, State 1, Line 2
Cannot drop schema ‘Maintenance’ because it is being referenced by object ‘AutoStatsHistory’.

This error message proves to be helpful. I obviously have some objects bound to the schema that need to be blown away before I can drop the schema. If I only have a few objects, this may not be too terribly annoying to do one by one. But what if I have 100 or more objects? Now that becomes an entirely different story.

Drop that Schema!

I have run into this very issue where there are far too many objects in the schema to be able to drop one by one. Add to the problem that I am looking to do this via script. Due to the need to drop the schema and the (albeit self-imposed) requirement of doing it via script, I came up with the following, which will cover most cases that I have encountered.
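A simplified sketch of the approach follows. It only generates the DROP statements (it does not execute them), ordering dependent object types ahead of tables; it ignores Service Broker objects, and cross-table foreign keys may still need handling first:

DECLARE @SchemaName sysname = N'Maintenance';  -- the schema from the error above

SELECT cmds.sqltext
FROM (
      SELECT CASE o.type
               WHEN 'V'  THEN N'DROP VIEW '
               WHEN 'P'  THEN N'DROP PROCEDURE '
               WHEN 'FN' THEN N'DROP FUNCTION '
               WHEN 'IF' THEN N'DROP FUNCTION '
               WHEN 'TF' THEN N'DROP FUNCTION '
               WHEN 'SN' THEN N'DROP SYNONYM '
               WHEN 'U'  THEN N'DROP TABLE '
             END + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name) + N';' AS sqltext,
             CASE o.type WHEN 'V' THEN 1 WHEN 'P' THEN 2 WHEN 'FN' THEN 3
                         WHEN 'IF' THEN 3 WHEN 'TF' THEN 3 WHEN 'SN' THEN 4
                         WHEN 'U' THEN 5 END AS droporder
      FROM sys.objects o
      JOIN sys.schemas s
          ON o.schema_id = s.schema_id
      WHERE s.name = @SchemaName
        AND o.type IN ('V', 'P', 'FN', 'IF', 'TF', 'SN', 'U')
        AND o.parent_object_id = 0  -- constraints/triggers go away with their parents
      UNION ALL
      SELECT N'DROP SCHEMA ' + QUOTENAME(@SchemaName) + N';', 99
     ) cmds
ORDER BY cmds.droporder;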

And a sample of the output:

As you can see here in the output, I have set the script to generate a series of drop statements for each of the dependent objects within the schema. In addition, the drop statements are ordered to remove objects that may have dependencies on other objects first. A big key here is that this script does not drop the objects for you. Rather it just creates the scripts to do the work. You must review the output and then execute the scripts separately. That means you will be entirely responsible for the results.

Once all of the objects are out of the way, the last statement in the sequence is to finally drop the schema.

You may notice that there are a few things not included in this script. The most notable may be that the Service Broker related objects are not accounted for in this script. I leave that for a later revision.

Conclusion

Occasionally it becomes necessary to remove a schema from the database for one reason or another (e.g. somebody decided to do both Dev and Prod in the same database separated only by schemas). Dropping a schema can become a little bit of a pain without the right tools. This script will help get you on your way to a scriptable solution (to help with documentation of what was changed) and a little less pain while doing it.

SQL Server Fixed Role Permissions

Roles and Permissions

Some of my recent articles have been focused on permissions and security. There is good reason for that – security is important and all too often it is misunderstood.

You can catch up with a couple of those articles here and here.

It is very important to understand who has what level of access within the server and databases on that server. Sometimes we see users being granted server or database access through the fixed roles available in SQL Server. How exactly do you know what permissions those individuals have via role membership? This article will help to reveal the permissions granted to the various roles and maybe a gotcha or two.

Finding Permissions

There is ample documentation on what the permissions are for each of the various fixed server and fixed database roles in SQL Server. Some of that documentation can be found here and here and here. With all of that documentation, you may be surprised to hear that it is not quite as easy to find the permissions of these roles via queries from within SQL Server – with a caveat. I am going to discuss some documented means to retrieve the permissions for the various roles and also discuss the pitfalls of these solutions.

What are the fixed roles again? Just in case you did not see them in the links from the previous paragraph, I will list the various fixed roles here.

Fixed Server Roles     Fixed Database Roles
public                 public
sysadmin               db_owner
securityadmin          db_accessadmin
serveradmin            db_securityadmin
setupadmin             db_ddladmin
processadmin           db_backupoperator
diskadmin              db_datareader
dbcreator              db_datawriter
bulkadmin              db_denydatareader
                       db_denydatawriter

These default roles do have a unique set of permissions for each role. As noted, the permissions are documented well enough. Sometimes, it is preferable to just query the system to retrieve a list of the permissions for each role. This is especially true if one is in need of providing documentation for an auditor on who has what permission.

When trying to query for a list of permissions, one may feel as though they have fallen off their rocker just as the granny in this pic to the left.

Never fear, however, for there is a method to find the permissions of the fixed roles. Let’s take a look at what it takes to query the permissions associated to each fixed role.

System Queries

Unlike most principals, where one can query the various system catalogs to retrieve the permissions assigned to the principal, the fixed roles do not expose the assigned permissions in the same way. With fixed roles, there are two stored procedures that have been created to retrieve the permissions. These stored procedures are sp_dbfixedrolepermission and sp_srvrolepermission.

Immediately I have a “rocker moment” for each of these stored procedures. In the documentation there is a note that states the following.

This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.

When peeking into the secret sauce behind each of these stored procedures, there is nothing extraordinary to how the permissions are retrieved. In fact, both procedures reference a particular object in addition to the catalog view specific to the type of role (e.g. server_principals or database_principals). This second object is called sys.role_permissions. Wait, I said there is no direct view or table to query with the permissions, right?

Let’s try to query that table.
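An attempt as simple as the following will do (the object is only resolvable over the DAC):

SELECT * FROM sys.role_permissions;  -- fails from a regular connection; works via the DAC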

Boom! We have just had another “rocker” moment. As it turns out, this table is the secret sauce to the fixed role permissions being accessible via query. This table can be queried from the stored procedures and can be queried directly – if it is queried from a DAC connection. Most will probably not connect to the DAC just to query the role permissions, so what can we do?

Here is a basic script showing what I have done.
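A minimal sketch of the idea (each procedure returns a role name and a permission, so an INSERT…EXEC into a shared temp table works):

CREATE TABLE #RolePermission (
    RoleName   sysname,
    Permission nvarchar(128),
    RoleScope  varchar(10) NULL
);

INSERT INTO #RolePermission (RoleName, Permission)
    EXEC sys.sp_srvrolepermission;          -- fixed server roles
UPDATE #RolePermission SET RoleScope = 'server' WHERE RoleScope IS NULL;

INSERT INTO #RolePermission (RoleName, Permission)
    EXEC sys.sp_dbfixedrolepermission;      -- fixed database roles
UPDATE #RolePermission SET RoleScope = 'database' WHERE RoleScope IS NULL;

SELECT RoleScope, RoleName, Permission
FROM #RolePermission
ORDER BY RoleScope, RoleName, Permission;
-- keep #RolePermission around to join against role membership data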

In this script, I have taken the results from each of the stored procedures and dumped them into a temp table. Using this temp table, I can now join to this table to get a more complete list of the permissions in effect for various principals. Once that more complete list is made, then it can be handed to the auditors to satisfy them for at least a week before they ask again ;).

Now it is time for yet another “rocker” moment. Look carefully at the output from these stored procedures. Remember the notice that they will be removed (i.e. they are on the deprecation list)? It seems there is good reason to remove them from SQL Server – the permissions in sys.role_permissions are not being maintained. That is correct! There are permissions listed in the output of these procedures that are no longer applicable!

If the list is not entirely accurate, then what can be done to get an accurate list of permissions? As it turns out, it seems one may need to code a solution that has the permissions hard coded in the script – very similar to what these system stored procedures were doing.

Recap

Capturing fixed role permissions is possible through the use of two system stored procedures. Just like the red telephone booths, these stored procedures are soon to be a thing of the past. These stored procedures are deprecated and may be just as reliable as those old telephone booths.

Too bad there is not a better means to trap the permissions from these fixed roles. It would be really nice to be able to view them just the same as can be done with the other principals (users and logins).

Now go forth and Audit your roles.

PS

What is up with that weird granny pic? Well, it was a challenge from Grant Fritchey to use the image in a technical blog post. You can read the challenge invite over here. And yeah, I know it is some sort of Dr. Who thing.

SQL Server Permissions – Database Roles


EZ PZ Permission Squeezee

Given the critical level of importance related to permissions, one may think it is a concept that is well understood by all who are given the charge of protecting the data.

There is a great responsibility when the keys to the data kingdom are granted to a DBA. The DBA is responsible for ensuring the integrity and security of the data. To ensure the security of the data, the DBA has the responsibility of granting, revoking, or denying access, at various levels, to the data and objects within the database.

Despite the high visibility and critical nature of this concept, understanding permissions and assigning permissions does not seem to be as straightforward as it should be. Evidence of this is something I see far too frequently in the wild, as illustrated by the following image.

This screenshot is only a part of the problem that I wish to break down and discuss in this article.

SQL Server Permissions

A fundamental component of SQL Server is the security layer. A principal player in SQL Server security is, fittingly, the principal. In a previous article, I outlined the different flavors of principals while focusing primarily on the users and logins. You can brush up on that article here. While I touched lightly, in that article, on the concept of roles, I will expound on the roles a bit more here – but primarily in the scope of the effects on user permissions due to membership in various default roles.

Let’s reset back to the driving issue in the introduction. Frequently, I see what I would call a gross misunderstanding of permissions by way of how people assign permissions and role membership within SQL Server. The assignment of role membership does not stop with database roles. Rather, it is usually combined with a misconfiguration of the server role memberships as well. This misunderstanding can really be broken down into one of the following errors:

  • The belief that a login cannot access a database unless added specifically to the database.
  • The belief that a login must be added to every database role.
  • The belief that a login must be added to the sysadmin role to access resources in a database.

The experienced professional will likely note that there is a direct conflict between a few of these beliefs. That said, all too often I see all three of these misconceptions implemented in every instance for nearly every user.

Let’s start looking at these misconceptions. To investigate these problems, I will create a login. After creating the login, I will add that login as a member to the sysadmin role. Once the login is added to the sysadmin role, I will then run some simple tests within my DBA database.

Sysadmin

The creation of a server principal (login) and adding the principal to the sysadmin role is fairly simple. The next couple of screenshots are followed by a quick script that will perform the same actions.

As was promised, here is the script that will do the same thing as illustrated in the GUI above.
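A minimal equivalent would look like this (the login name matches the examples below; the password is a placeholder):

USE master;
GO
CREATE LOGIN superuser WITH PASSWORD = N'Sup3rDup3r$ecret';  -- placeholder password
GO
EXEC sp_addsrvrolemember @loginame = 'superuser', @rolename = 'sysadmin';
-- on SQL Server 2012 and later this can also be written as:
-- ALTER SERVER ROLE sysadmin ADD MEMBER superuser;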

With the user now in place, let’s test. The primary test here would be that a server principal cannot access the database since explicit database permissions have not been granted. Here is the first batch of tests that I will run.
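A sketch that matches the description below (the DBA database name and the exact filters are assumptions based on the rest of the article):

USE DBA;  -- the author's utility database
GO
EXECUTE AS LOGIN = 'superuser';
GO
SELECT * FROM sys.objects WHERE name = 'MySuperTable';          -- the test table (empty result for now)
SELECT * FROM sys.database_principals WHERE name = 'superuser'; -- no database user yet (empty result)
SELECT * FROM fn_my_permissions(NULL, 'SERVER');                -- effective server-level permissions
EXEC sp_helprotect @username = 'superuser';                     -- errors: no matching rows
GO
REVERT;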

The first statement allows me to impersonate the superuser login. From the impersonated connection, I first check to see if I can query the sys.objects system catalog. Then I test the database_principals system catalog. Next in line is to check the list of permissions that have been granted to the superuser account. Each of these queries executes successfully without error. Here is a screen grab for these first three tests.

Notice the first two queries returned an empty set. This is not a failure, rather evidence that the select statement ran successfully. In the third result set, we can see that the superuser account has all sorts of server level permissions. In the result set there was not a single database level permission.

The last query that utilized sp_helprotect returned the following error:

Msg 15330, Level 11, State 1, Procedure sys.SP_HELPROTECT, Line 302
There are no matching rows on which to report.

This is confirmation that there is no database user called superuser.

So I can query a database without my server principal being given direct access to the database (it is worth reiterating here that this principal is in the sysadmin server role), but can I do other things such as create objects? Let’s test that with the following script.
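A sketch of that script (the table definition is an assumption; the table name comes from the error messages later in the article):

USE DBA;
GO
EXECUTE AS LOGIN = 'superuser';
GO
IF EXISTS (SELECT 1 FROM sys.objects WHERE name = 'MySuperTable' AND type = 'U')
    DROP TABLE dbo.MySuperTable;
GO
CREATE TABLE dbo.MySuperTable (
    SuperID   int IDENTITY(1,1) NOT NULL,
    SuperName varchar(64) NULL
);
GO
SELECT name, create_date FROM sys.tables WHERE name = 'MySuperTable';  -- confirm creation
GO
REVERT;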

This script is straightforward. All it does is check for a table. If that table exists, then drop it and recreate it. The last little bit will check to confirm the successful creation of the table. This script succeeds, as illustrated in the following image.

That should be pretty convincing that if you add a server principal to the sysadmin server role then that user has access to the databases. These tests have illustrated that it is not necessary to add a server principal as a database principal when that server principal is in the sysadmin role (an erroneous configuration often seen). If the database principal is not necessary in this case, then what will happen if a database principal does exist?

Database Principal in Every Database Role

The next logical step in the sequence is to create a database principal for the already created superuser server principal. Once created, we will test to see what effects if any can be observed by having this database principal in every database role as well as the sysadmin role. This will help to test the first two bullet items from the list of common configurations I have seen in the wild. Let’s start with the script that will help create the principal to be used during the next iteration of tests.
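A sketch of that script, building on the login created earlier (on pre-2012 instances, sp_addrolemember would be used in place of ALTER ROLE):

USE DBA;
GO
CREATE USER superuser FOR LOGIN superuser;
GO
ALTER ROLE db_owner          ADD MEMBER superuser;
ALTER ROLE db_accessadmin    ADD MEMBER superuser;
ALTER ROLE db_securityadmin  ADD MEMBER superuser;
ALTER ROLE db_ddladmin       ADD MEMBER superuser;
ALTER ROLE db_backupoperator ADD MEMBER superuser;
ALTER ROLE db_datareader     ADD MEMBER superuser;
ALTER ROLE db_datawriter     ADD MEMBER superuser;
ALTER ROLE db_denydatareader ADD MEMBER superuser;
ALTER ROLE db_denydatawriter ADD MEMBER superuser;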

The script basically creates a database principal and then adds that principal to each of the default fixed database roles available in SQL Server. Those roles are easily viewed in the first image in this article and are also listed here for ease of reference (intentionally ignoring the public role).

  • db_owner
  • db_accessadmin
  • db_securityadmin
  • db_ddladmin
  • db_backupoperator
  • db_datareader
  • db_datawriter
  • db_denydatareader
  • db_denydatawriter

The tests for this round of changes will be just like in the previous section. Here is the script to be used for the first batch of tests.

The major differences between this version of the test and the previous iteration are that the table is still in existence (I did not drop it, but that will come shortly) and that I have created a database principal, so the first two queries will show a single row each instead of an empty result set. The next significant difference is the last query, which utilizes sp_helprotect. Instead of an error like the first time, this execution gives me the following results.

Next I will rerun the test to create an object with the following script:

This script will produce the same results as in the first example. The table, since it was already there, will be dropped and then recreated. After it is recreated, the validation script will find the table and return a single row.

This series of tests has just shown that a database principal tied to a login which is added to the sysadmin role and then added to all database roles will still be able to do pretty much everything a sysadmin can do. To this point, I have shown how bullet points one and three are not sound in reasoning. With that, there is still the test to prove (true or false) the final bullet point that a principal should be added to every database role.

Before testing the validity of the configuration, I want you to look closely at the list of database roles.

  • db_owner
  • db_accessadmin
  • db_securityadmin
  • db_ddladmin
  • db_backupoperator
  • db_datareader
  • db_datawriter
  • db_denydatareader
  • db_denydatawriter

Just reading the names of these roles should be a good indicator that adding a user to every database role is not a desired configuration. I am sure the question is bubbling up now in regards to some of those “but how is it working in the previous examples?”. The response to that is very easy: “The user is a sysadmin!”.

To test this last bit of the configuration, it is time to remove the server principal from the sysadmin role. I will do that via the following script.
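A one-statement sketch of that script:

EXEC sp_dropsrvrolemember @loginame = 'superuser', @rolename = 'sysadmin';
-- on SQL Server 2012 and later: ALTER SERVER ROLE sysadmin DROP MEMBER superuser;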

With the user removed from the sysadmin role, it is time to repeat the tests.

The results are significantly different this time.

Msg 229, Level 14, State 5, Line 132
The SELECT permission was denied on the object ‘objects’, database ‘mssqlsystemresource’, schema ‘sys’.
Msg 229, Level 14, State 5, Line 136
The SELECT permission was denied on the object ‘database_principals’, database ‘mssqlsystemresource’, schema ‘sys’.

I just tried to query two objects and was refused due to a denial in each case. What if I try to query that table I created previously?

This will give me the following error.

Msg 229, Level 14, State 5, Line 141
The SELECT permission was denied on the object ‘MySuperTable’, database ‘DBA’, schema ‘dbo’.

Well, that is not the desired effect – or is it? Remember, I added the superuser principal to every role, and that includes the “deny” roles.

How about if I try to drop and recreate that table?

Again, no dice!

Msg 229, Level 14, State 5, Line 157
The SELECT permission was denied on the object ‘objects’, database ‘mssqlsystemresource’, schema ‘sys’.

With this constant blocker of being denied cropping up, let’s take a look at relevant permissions related to the database roles to which I added the superuser principal. To evaluate those permissions, I am going to use the following script, which relies on the results of a system stored procedure called sp_dbfixedrolepermission. (Word of advice: this procedure is deprecated, and I will be writing more on that in a future article.)
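A sketch of that script, filtered to the roles relevant to the grants and denies in play here:

CREATE TABLE #DbRolePerms (DbFixedRole sysname, Permission nvarchar(128));

INSERT INTO #DbRolePerms
    EXEC sys.sp_dbfixedrolepermission;

SELECT DbFixedRole, Permission
FROM #DbRolePerms
WHERE DbFixedRole IN (N'db_owner', N'db_datareader', N'db_datawriter',
                      N'db_denydatareader', N'db_denydatawriter')
ORDER BY DbFixedRole, Permission;

DROP TABLE #DbRolePerms;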

Looking at the results of that script, one can start to see why we are having so many problems executing a select.

In this result set, one can see that by virtue of the db_owner role and the db_datareader and db_datawriter roles, we have permissions that are being GRANTed. Immediately after that, we hit the DENY. This behavior is to be expected. A DENY permission will override a GRANT except when the principal is a member of the sysadmin server role.

Seeing this in action, the next logical step is to remove the role membership from the two “deny” roles (in this scenario where all are checked) for starters. Once done, go ahead and re-run these tests; you will see that things work differently. While you are at it, take it a step further and rerun the tests after removing superuser from the db_owner role. Think about it: do you really need db_owner along with all of the “grant” related roles, given that db_owner already includes all of those same permissions?

One last bit of homework is for you to run the tests another time from the point of when the database principal was created. Only this last batch of tests, impersonate the database principal instead of the server principal. You can do that by doing the following:
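Presumably something like this, using the database principal created earlier:

EXECUTE AS USER = 'superuser';  -- impersonate the database principal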

Instead of using the following impersonation statement:
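That is, the statement used throughout the earlier batches:

EXECUTE AS LOGIN = 'superuser';  -- impersonate the server principal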

You may be interested by the results.

Recap

I have taken you on a journey through three common misconfigurations with principals and permissions. It really is quite unnecessary to add a principal to the sysadmin fixed server role as well as every fixed database role. Even if the principal is not a member of the sysadmin role, it is foolish to add it to every database role. There just is no good logic in setting up permissions in that manner.

Take a moment to focus and you can say EZ PZ permission squeezee.

Finding Deprecated Uses in SQL Server

 


How well do you know your environment? You probably know all of the jobs that are running, the frequency that indexes need to be rebuilt, and even which users have which level of access to each object in the SQL Server instance. Do you know that your applications are accessing deprecated datatypes over 300 million times a week? What if your TSQL constructs are a bit archaic? Do you know that the code needs to be updated? Do you know how to find deprecated uses in SQL Server?

In this article, I will explore how to use Extended Events to track feature use and abuse. To be more precise, I will share how this tool can help you better understand all of the ways that your applications have been abusing your database by continuing to employ the use of deprecated features, syntax, or constructs in general. In case you are a bit behind in your exploration of XEvents, I have the perfect solution for you – my series on the topic that is continually growing. You can explore the full list of articles in the series by visiting the table of contents – here.

Audit Deprecated Uses

I would dare say that most data professionals think there is some use of deprecated constructs, datatypes or features within their environment. I would double down on that and say that most do not know just how bad it really may be. To find just how bad it really is, we need to audit for the use and abuse of these deprecation events.

Right here would be a good time to point out that your mileage may vary. Some items that are deprecated are more painful than others. Some may be deprecated and may have been on the list for 10+ years at this point. The point is, know your environment and then use good judgement to determine which items in your results need the most attention to fix and update. Why? Well, things really may break, especially if you are looking to upgrade to a new version of SQL Server. Just because an item is still available in your current edition, that does not ensure it will still be available in a future release of SQL Server.

Now for the juicy stuff. As I mentioned, to find when and where a deprecated feature or syntax is employed, there are a couple of neat little events within Extended Events that can help track each time one is accessed or employed. How do we do that? Use the deprecation_announcement and deprecation_final_support events. To help create sessions to track these events, I have the following script to create an “audit” session to do exactly that.
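A sketch for SQL Server 2012 and later (the session name and file path are assumptions):

CREATE EVENT SESSION AuditDeprecatedUses ON SERVER
ADD EVENT sqlserver.deprecation_announcement (
    ACTION (sqlserver.sql_text, sqlserver.database_name,
            sqlserver.client_hostname, sqlserver.client_app_name, sqlserver.username)
),
ADD EVENT sqlserver.deprecation_final_support (
    ACTION (sqlserver.sql_text, sqlserver.database_name,
            sqlserver.client_hostname, sqlserver.client_app_name, sqlserver.username)
)
ADD TARGET package0.event_file (
    SET filename = N'C:\Database\XE\AuditDeprecatedUses.xel'
)
WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
      MAX_DISPATCH_LATENCY = 5 SECONDS,
      STARTUP_STATE = ON);
GO
ALTER EVENT SESSION AuditDeprecatedUses ON SERVER STATE = START;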

 

And just in case you are using 2008 or 2008R2, use this version instead.
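A sketch of the 2008/2008 R2 variation (note the renamed file target and the database_id action in place of database_name):

CREATE EVENT SESSION AuditDeprecatedUses ON SERVER
ADD EVENT sqlserver.deprecation_announcement (
    ACTION (sqlserver.sql_text, sqlserver.database_id, sqlserver.client_hostname, sqlserver.username)
),
ADD EVENT sqlserver.deprecation_final_support (
    ACTION (sqlserver.sql_text, sqlserver.database_id, sqlserver.client_hostname, sqlserver.username)
)
ADD TARGET package0.asynchronous_file_target (
    SET filename = N'C:\Database\XE\AuditDeprecatedUses.xel'
)
WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, STARTUP_STATE = ON);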

There are slight differences between these two sessions. First, in the 2008 version of the script, I have to provide database ids instead of names. That is a shortcoming of the 2008 and 2008R2 implementations of Extended Events. Additionally, the file target is different between the two (recall that they renamed the file target). And lastly, there are a few actions that I included in the 2012 version of the script that are not available in 2008 and R2.

With the session in place, I am now going to run through some sample scripts that will generate deprecation events. I am sticking with my 2014 instance for this segment. That is important to note because different events may occur for different versions of SQL Server. Additionally, the parse script I will share will require a slight change for 2008 and R2 (again related to the file target name).
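A few statements along these lines should fire the events (run against a scratch database):

CREATE TABLE dbo.DeprecationDemo (OldCol text NULL);   -- the text data type is deprecated
GO
SELECT name FROM sysobjects;                           -- SQL 2000-era system table (compatibility view)
GO
SELECT OldCol FROM dbo.DeprecationDemo (NOLOCK);       -- table hint without the WITH keyword is deprecated syntax
GO
DROP TABLE dbo.DeprecationDemo;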

Now to take a peek at the data with this next script.
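A sketch of the parse script (2012+ function shown; the payload field names feature, feature_id, and message are what the deprecation events carry):

SELECT ed.value('(event/@name)[1]', 'varchar(64)') AS event_name,
       ed.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
       ed.value('(event/data[@name="feature"]/value)[1]', 'varchar(256)') AS feature,
       ed.value('(event/data[@name="feature_id"]/value)[1]', 'int') AS feature_id,
       ed.value('(event/data[@name="message"]/value)[1]', 'varchar(max)') AS deprecation_message,
       ed.value('(event/action[@name="sql_text"]/value)[1]', 'varchar(max)') AS sql_text,
       ed.value('(event/action[@name="client_hostname"]/value)[1]', 'varchar(128)') AS client_hostname,
       ed.value('(event/action[@name="username"]/value)[1]', 'varchar(128)') AS username
FROM (
      SELECT CAST(event_data AS xml) AS ed
      FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\AuditDeprecatedUses*.xel', NULL, NULL, NULL)
     ) AS tab;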

Now at long last, I can see what kind of data I am generating (they are really wide so I am just posting a snip).


With this, I can see the feature_id along with the description and even the tsql that generated the event. What you don’t see in this is that I also trap the source machine and the user name. If there is an application name included in the connection string, I also trap that. These pieces of data can prove critical to efficiently troubleshooting and finding the source of these events.

From here, one might wish to explore all of the events generated from this session in order to ensure the environment is properly prepared for upgrade. Most tools do not evaluate the code thoroughly enough to trap all of these events. Instead they do a cursory look through stored procedures or at the data types. As we all should know, not every piece of SQL code is actually stored in the database, nor is it necessarily cached at the time of analysis. This is the type of thing that requires a long-running trace to prove that you are that rockstar DBA.

Conclusion

In the article today, I have shown how it is possible to see the deprecation alerts that may be generated in your environment. This data is what can help set you apart as a rockstar when it comes time for that migration. If you have yet to read my series on Extended Events, I highly recommend it. You can find that series here.

Database Dropped

What do you do when a developer comes to you and asks, “Where did the database go?  The database was there one minute, and the next it was not.”  Only one thing could be worse than the feeling of losing a database on your watch, and that would be losing a production database. It’s like magic—it’s there, and then it disappears. To compound the issue, when asking people if they know what might have happened, all will typically deny, deny, deny.

What do you do when you run into that missing database situation and the inevitable denial that will ensue?  This is when an audit can save the day.  Through an audit, you can discover who dropped the database and when it happened.  Then you have hard data to take back to the team to again ask what happened.  Taking the info from a previous article of mine, we can alter the script I published there and re-use it for our needs here.
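A sketch of that altered script (the filter and event names follow the description below):

DECLARE @path nvarchar(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;  -- locate the default trace file

SELECT gt.DatabaseName,
       gt.StartTime,
       gt.LoginName,
       gt.HostName,
       gt.ApplicationName,
       te.name AS EventName
FROM sys.fn_trace_gettable(@path, DEFAULT) gt
JOIN sys.trace_events te
    ON gt.EventClass = te.trace_event_id
WHERE gt.ObjectType = 16964                        -- database object
  AND te.name IN ('Object:Created', 'Object:Deleted')
ORDER BY gt.StartTime DESC;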

This script will now query the default trace to determine when a database was dropped or created.  I am limiting this result set through the use of this filter: ObjectType = 16964.  In addition to that, I have also trimmed the result-set down to just look for drop or create events.

This is the type of information that is already available within the default trace.  What if you wished to not be entirely dependent on the default trace for that information?  As luck would have it, you don’t need to be.  Instead you can use the robust tool called Extended Events.  If you would like to be able to take advantage of Extended Events to track this information, I recommend you read my follow-up article here.

This has been a republication of my original content first posted here.

Events By Feature

Within the world of SQL Server there are a few things one can be certain of – things will change. This is true of the features in SQL Server. Additionally, Extended Events is constantly evolving, which underscores this constant change.

With all of this change occurring within SQL Server, sometimes it is difficult to figure out how to troubleshoot or track issues that relate to new features. Within the need to figure out how to troubleshoot the new features, there is the need to understand what tools are out there to help troubleshoot.

It is no big secret that Profiler cannot help you trace any of the new features. To help perform the task of tracing events related to new features one must take advantage of the power of Extended Events. Even knowing that you can use XEvents, there is still a need to know what is related to the feature.

In this article, I will show how you can discover the events related to specific features. In addition, I will provide enough base information to help you perform other discovery type queries on your own as you continue your exploration of XEvents. In case you are a bit behind in your exploration of XEvents, I have the perfect solution for you – my series on the topic that is continually growing. You can explore the full list of articles in the series by visiting the table of contents – here.

Events and Features

More and more I am being asked how to track which events belong to which features. I have also been seeing more people ask for a way to list the new features supported by XEvents. Thankfully there is adequate information within the XEvent metadata to help retrieve this type of information. The downside is that there is a bit of homework that must be done across a few versions of SQL Server to help produce the desired information. I have done that work and pulled the information into the following query to help make it easier for anybody else wishing to dive in and compare features and events across versions of SQL Server.
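The full cross-version aggregate is not reproduced here; as a rough single-instance building block (the join shape follows the standard XE metadata pattern and should be treated as a sketch), each event can be mapped to its feature keyword like this:

SELECT p.name AS package_name,
       mv.map_value AS feature_keyword,
       o.name AS event_name
FROM sys.dm_xe_packages p
JOIN sys.dm_xe_objects o
    ON o.package_guid = p.guid
   AND o.object_type = 'event'
JOIN sys.dm_xe_object_columns oc
    ON oc.object_name = o.name
   AND oc.object_package_guid = o.package_guid
   AND oc.name = 'KEYWORD'                -- the readonly column that tags each event with a feature area
JOIN sys.dm_xe_map_values mv
    ON mv.name = oc.type_name
   AND mv.object_package_guid = oc.type_package_guid
   AND mv.map_key = oc.column_value
ORDER BY mv.map_value, o.name;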

In the full version of that query, I provided an aggregation of the events and features across multiple editions of SQL Server. This aggregate of data is necessary to help see how the features are changing and how the events are changing within the features that remain constant from one version to the next.

Once that data is aggregated, I then provide a couple of sample queries that show what can be done with the data. I need to stress here that I have not provided an exhaustive set of queries to explore this data seven ways to Sunday. I have left those types of exercises up to the reader to explore and experiment.

The first example query shows how to pull the new features that can be “Traced” in the current version of SQL Server on which the query is being executed. The second example provides a difference count between versions of SQL Server to help illustrate the evolution of XEvents within SQL Server. The third query is a simple query to list out the number of events for each feature in your version of SQL Server.

From here, one might wish to explore all of the events that are related to a specific feature. This would be easily accomplished by querying out the data from the #presel temp table based on the feature name. Orrrr…one could query the desired feature by following the guidelines in this article.

Conclusion

In the article today, I have shown how it is possible to see the new events and how they relate to the various features within SQL Server. Being able to correlate events that can trap information about new features can and will help you evolve into that rock-star DBA you are trying to become!

Finding the Right Path

I have a fairly large backlog of articles that are in progress or that are planned for my series about Extended Events. Despite the backlog and planned articles, every now and then something else comes up that bumps things around. This article is an example of bumping the schedule around. You can see some of the backlog and the full list of articles in the series by visiting the table of contents – here.

I bring that up for a couple of reasons. First and foremost being that the topic was recently raised as a “need” by some colleagues. The second being that I see the need and how it was lacking in coverage by anything I had already written.

What is this gaping hole in the coverage of Extended Events? To be honest, it is not a very complicated topic or very difficult gap to fill. It’s just something that has been overlooked. The gap boils down to this: how does one consistently find the correct path to the Extended Event Log file (XEL file)?

Filling the Gap

The gap I will be working to fill in this article deals with consistently finding the file path for Extended Event (XE) sessions. This gap arises due to a few different things, such as the ability to define a target in different manners, being able to move the logs directory, or even the fact that a target may not be added to the session (let alone a file target). These causes can all contribute to a bit of frustration and may pose as different symptoms when trying to get the file path.

One additional complication is tied to the running state of a session. For that complication, I talked briefly about it in my article about better practices. The state of the session could have an impact and could cause frustration when trying to retrieve the file path. All of these things are considerations that must be made when trying to retrieve the file path.

To find the file path, let’s start with some basics. The default path for XEL files is in the log directory of the instance. In addition to this default behavior, the instance stores metadata about running sessions as well as deployed sessions, and that metadata is accessible from DMVs and system catalogs.

Log Path

The first basic to tackle is the log path for the instance. I can query for the log path of the error log for the instance and rely on that as the path of my xel files. Should I choose this method, then I could execute the following query.
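A sketch of such a query (ErrorLogFileName is a documented SERVERPROPERTY value; the string math simply strips the file name to leave the directory):

DECLARE @errorlog nvarchar(260) = CAST(SERVERPROPERTY('ErrorLogFileName') AS nvarchar(260));

SELECT LEFT(@errorlog,
            LEN(@errorlog) - CHARINDEX(N'\', REVERSE(@errorlog)) + 1) AS instance_log_path;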

Executing that query on my SQL 2014 instance produces the following results:


Now the obvious problems with this method come from the fact that relying on this data is relying upon an assumption that you have set all of your XE Sessions to use the default log path. If you have declared your sessions to use a file target and did not specify a path, then the assumption is safe. However, if you are detail oriented, you probably have been somewhat explicit in how you define your event file target. This brings us to the next topic – defining the path.

Defining The Path

While a bit of bird-walk, it is necessary to cover this topic at this juncture. This brief discussion will help to understand some of the other issues with retrieving the path consistently.

When defining the path of the event file, there is a bit of flexibility in how one can define the file to be used. You can either declare the file as just the file name, or you can define the file as the folder path along with the file name. Both methods are completely legitimate. Unfortunately, this flexibility is what causes some of the pain with retrieving the file path consistently.

Let’s take a look at two quick, and acceptable, ways to add an event file to an XE Session. I will be re-using a session from a previous article for these examples.
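A trimmed sketch of the first method (the event choice is a placeholder; the point is the fully qualified filename):

CREATE EVENT SESSION AuditSelectFullPath ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
)
ADD TARGET package0.event_file (
    SET filename = N'C:\Database\XE\AuditSelectFullPath.xel'  -- entire path declared
)
WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, STARTUP_STATE = ON);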

This is a pretty standard format I use for creating my sessions. The section I want to highlight, though, is the setting of the filename near the end of the script. I will generally define the entire path for my xel files when I create a session – just as I have done in this example. Defining this path helps me to know exactly where I am putting the session files for starters. Equally as important is that this path is easier to retrieve from metadata because I have explicitly defined it.

Take this next example of the same session but with one minor difference.
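The same sketch, with only the filename changed:

CREATE EVENT SESSION AuditSelectShortPath ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
)
ADD TARGET package0.event_file (
    SET filename = N'AuditSelectShortPath.xel'  -- filename only; no folder path
)
WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, STARTUP_STATE = ON);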

The minor difference in this example is just in how the filename was defined. It is technically accurate and acceptable to only use the filename instead of the path and filename as I did in the previous example. The problem here comes from the retrieval of the path from metadata.

Getting the Path

In the previous two examples, I showed two methods of defining the filename within a session. In the former example, this means I can more easily find the path. However, if I use the script in the beginning of the article to find my xel file paths, then I have made an inaccurate assumption. This leads to an inconsistency in how the file path is fetched. In the latter example, the assumptions concerning the default log path would be valid and could be applied here.

Since the assumptions do not prove to be consistent or accurate across the board, we need to both evaluate how the data for each would look and we would need to see how to retrieve this path more consistently. First up is looking at how the data for each of these example sessions would be stored in metadata.
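A sketch of that metadata query (the join between targets and fields follows the catalog's key columns):

SELECT es.name AS session_name,
       esf.name AS field_name,
       esf.value AS field_value
FROM sys.server_event_sessions es
JOIN sys.server_event_session_targets est
    ON es.event_session_id = est.event_session_id
JOIN sys.server_event_session_fields esf
    ON est.event_session_id = esf.event_session_id
   AND est.target_id = esf.object_id
WHERE es.name LIKE 'AuditSelect%'
  AND esf.name = 'filename';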

In the preceding query, I am looking at the deployed session metadata for two sessions, both of which start with “AuditSelect”. In the sys.server_event_session_fields view, there is an attribute called name that contains the value “filename”. This value will only exist if the session has an event file target defined for the session. Running the query will yield the following results (on my system).

eventfile_meta

In this result set, I can see there is one of each of the two event file definition methods I described in the previous section. In the green highlight you will see that I have the entire filepath. In the red highlight, you will only see the filename. Nothing special has been done to this data in the return as you can verify from the posted script. These are the actual stored values. So this would indeed seem like we have a bit of a problem, right?

Well, this is where we get to be a little bit imaginative and employ a script such as the following:
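A sketch of that script. It derives the instance log directory (the default dump location), restricts itself to deployed sessions that actually have an event_file target via the joins, and prepends the log path only when a bare filename was stored:

DECLARE @errorlog nvarchar(260) = CAST(SERVERPROPERTY('ErrorLogFileName') AS nvarchar(260));
DECLARE @logpath  nvarchar(260) = LEFT(@errorlog,
                                       LEN(@errorlog) - CHARINDEX(N'\', REVERSE(@errorlog)) + 1);

SELECT es.name AS session_name,
       CASE
           WHEN CAST(esf.value AS nvarchar(260)) LIKE N'%[\/]%'
               THEN CAST(esf.value AS nvarchar(260))          -- full path was declared
           ELSE @logpath + CAST(esf.value AS nvarchar(260))   -- bare filename: assume the default log directory
       END AS xel_path
FROM sys.server_event_sessions es                             -- deployed sessions only
JOIN sys.server_event_session_targets est
    ON es.event_session_id = est.event_session_id
   AND est.name = 'event_file'                                -- sessions with a file target only
JOIN sys.server_event_session_fields esf
    ON est.event_session_id = esf.event_session_id
   AND est.target_id = esf.object_id
WHERE esf.name = 'filename';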

Wow, that is considerably longer than the first example in the article. That is true! This script accounts for a few of the conditions that cause inconsistencies (but not yet all of them) in retrieving the file path for a deployed event session. The key here is to understand that this will fetch the path whether you entered a complete path or not while creating the session. I also threw a check in there to confirm that the session has a file target, as well as a check to ensure the session is deployed to the server.

Looking at the output, I get results that are somewhat friendly and easy to understand without any of the guesswork.


What if the session has never been started and did not have the full path declared? Well, that is one of the inconsistencies I am working on still.

There is an alternative to this method as well. For the time being, this method would also be recommended in the event the session being researched happens to be one of the system sessions that is “private”. This next code chunk will show two different methods to parse the file path from running session metadata in the DMVs.
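A sketch of those two methods (the first reads the configured value from the running session's object columns; the second pulls the file actually in use out of the event_file target's state XML):

-- Method 1: configured filename from the running session metadata
SELECT s.name AS session_name,
       CAST(soc.column_value AS nvarchar(260)) AS configured_filename
FROM sys.dm_xe_sessions s
JOIN sys.dm_xe_session_object_columns soc
    ON s.address = soc.event_session_address
WHERE soc.object_name = 'event_file'
  AND soc.column_name = 'filename';

-- Method 2: the file currently being written, from the target state XML
SELECT s.name AS session_name,
       CAST(st.target_data AS xml).value('(EventFileTarget/File/@name)[1]', 'nvarchar(260)') AS current_file
FROM sys.dm_xe_sessions s
JOIN sys.dm_xe_session_targets st
    ON s.address = st.event_session_address
WHERE st.target_name = 'event_file';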

Conclusion

In the article today, I have shown some of the internals to retrieving file paths for Extended Event Sessions. I dove into metadata to pull out the path for the session and discussed some concerns for some of these methods. In the end, you have a few viable options to help retrieve the file path in a more consistent fashion.
