T-SQL Tuesday #108: New Horizons Beyond SQL Server

Comments: No Comments
Published on: November 13, 2018

There comes a point in one’s career when a change is requisite. Big or small there always seems to be a tipping point that mandates some sort of change. Maybe the change is an entirely new career field. Maybe the change is adapting to the ever improving features of a specific software or product. Maybe, that change means learning a tangential technology.

This is precisely the goal Malathi Mahadevan (b | t) seems to have envisioned for the 108th installment of TSQL Tuesday.

If you are interested in reading the original invite, you can find that here.

“So the challenge for this T-SQL Tuesday is – pick one thing you want to learn that is not SQL Server. Write down ways and means to learn it and add it as another skill to your resume. If you are already learning it or know it – explain how you got there and how it has helped you. Your experience may help many others looking for guidance on this.”

Personally, I am not one to settle, so learning and improving are important. New technologies, changes in technologies, new features, tangential technologies – they are ways to continue to learn and improve – most of the time. Sometimes, a new technology offers a good change of pace and offers an exit from something that is becoming too standard, while providing an entrance to something different, difficult, exciting and expanding.

Through the year (2018), I created a few goals for myself around some of these new or different technologies:

  1. Become proficient at MySQL (maybe even certify who knows)
  2. Become proficient at PowerShell
  3. Work towards the TCM (I got too busy with 1 and 2 to devote any time to this one)

Proficient is sort of a vague term because it can have a different meaning to different people. For me, I will describe what I have been doing to become proficient in both PoSh and MySQL.

PowerShell

A shout out is absolutely necessary for Adam Machanic (twitter) for picking the right blog meme that has been able to survive so long in the SQLFamily. This party has helped many people figure out fresh topics as well as enabled them to continue to learn.

I have dabbled in PowerShell over the years – just enough to “be dangerous” as some may say. I wouldn’t call it proficient and probably not even dangerous really. I was able to muddle my way through being able to create some basic scripts to perform certain tasks.

This rudimentary ability just isn’t enough to be able to put the skill on a resume (imho). It certainly wasn’t enough skill to be able to manage a large group of servers and perform various enterprise type tasks. My objective was to be able to create a set of tools for myself that I could use and have repeatable success at great ease.

I sat down with my first set of various tasks I wanted to be able to perform and worked on them as I set time aside to learn PoSh better – every day. It took several weeks and by no means am I on the same level as Rob Sewell (b | t) or Chrissy LeMaire (b | t) or Sean McCown (b | t). That said, I do feel I am far more capable in PoSh now than I was last year.

This is a skill that I plan to continue to hone. I am learning every time I pick it up and try to do something different. It is good that I am able to keep learning. Next year, I plan on being able to say I am more proficient than I am now. I am also hopeful to be able to be good enough to properly contribute to the dbaChecks project. I do also hope to share some of the scripts I have created on my blog as well.

Here are some of my first dabbles with PowerShell that I have integrated into other blog posts. Yes, they are very rudimentary.

MySQL

Learning MySQL is probably not too big of a stretch to be honest. I certainly think it is far more similar to SQL Server in many regards than PoSh. After all, it is just another little DBMS and does happen to be rather popular.

I wanted to pick up MySQL so I could support clients that have it installed here, there, and everywhere in their environments. Many clients have more than one DBMS platform and it is good to understand and be able to administer multiple platforms with a high level of competence. Unfortunately, MySQL comes with a fair amount of gotchas. There are serious limitations depending on version and flavor. Some clients may be stuck on a rather old version of MariaDB (akin to SQL 2000). This means developing a broad set of scripts and skills quickly and on the fly.

I have a ways to go in my learning due to the varied flavors of MySQL but I am getting there. I do feel pretty comfortable hopping in and troubleshooting performance issues and doing a quick health assessment at this point. I would call that proficient. Similar to what I said about PoSh, I plan on being able to say next year that I am more proficient. More opportunity with this platform is what lends itself to better proficiency.

The Wrap

I believe in continuous integration / improvement when it comes to personal growth and development. It is necessary to keep your personal skills sharp as well as keep yourself marketable.

Oh, and if you are interested in some of my community contributions (which according to Jens Vestargaard is an awesome contribution), read this series I have published.

Audit Database Offline Events

Categories: News, Professional, Scripts, SQLBP, SSC
Comments: No Comments
Published on: November 13, 2018

Knowing when an event occurred within the database environment is a very important thing. Being able to act quickly when certain events occur is equally as important. Sometimes, we may not find out about an event for a few days or weeks and then we are asked to figure out the who, when, why and how of the event. Hopefully there was enough foresight in place, ahead of time, to allow us to travel back in time and answer those questions.

Recently while networking at a large database related conference, I had the chance to discuss some of this with a semi-desperate DBA. In his environment, apparently a database can be taken offline at random times without the involvement of the DBA team. He was looking for a solution to prevent it from happening. For me, the simplest solution is to revoke permissions from the person doing it. If you can’t revoke the permissions, then at least capture the pertinent information to document who and when the databases are being taken offline.

Auditing and Facts

I am a huge fan of auditing the database environment to gather facts. There is comfort that can be gained in knowing that if a change happens to a database or server under your purview, that you can report some facts back up the chain when a post-mortem is required. I recently shared another example of such an auditing scenario – here.

There is a technique in that article that may be of interest for you if you are restricted to implementing this kind of audit. I won’t go into detail about the poor man’s audit techniques in this article, so it is worth reading that previous article.

When looking to find data about a database being taken offline, one could capture the data from the default trace or from the SQL Server Error Log. While this article will show how to capture this data from both of those sources, it is important to understand that the data just might not be available in either location depending on settings and activity of the server combined with how old the offline event was.

Let’s first take a look at the default trace. That one should be easy enough.
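Something along these lines does the trick. This is a minimal sketch; EventClass 164 (Object:Altered) and ObjectType 16964 (Database) are the documented default trace values for an ALTER against a database.

DECLARE @TracePath nvarchar(260);

SELECT @TracePath = t.path
FROM sys.traces AS t
WHERE t.is_default = 1;

SELECT gt.StartTime
     , gt.DatabaseName
     , gt.LoginName
     , gt.HostName
     , gt.ApplicationName
     , gt.SPID
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS gt
WHERE gt.EventClass = 164      -- Object:Altered
  AND gt.ObjectType = 16964    -- Database
ORDER BY gt.StartTime DESC;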

And a snippet of some possible output looks like the following:

If you look closely at the output, you will probably notice that I don’t get a lot of information. I find out that there was an ALTER event against a database, but I do not know what the change was, nor do I see the text of the SQL statement that caused it. So, at best, I get a partial view of the picture by looking at just the default trace.

With that in mind, let’s look at the error log and see if that provides better information.
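A rough sketch of that scan follows. It leans on the undocumented (but long-standing) xp_enumerrorlogs and xp_readerrorlog procedures to enumerate every retained log file and search each one for the term.

IF OBJECT_ID('tempdb..#ErrorLogs') IS NOT NULL
    DROP TABLE #ErrorLogs;

CREATE TABLE #ErrorLogs
(
    LogDate     datetime,
    ProcessInfo varchar(64),
    LogText     varchar(max)
);

IF OBJECT_ID('tempdb..#LogList') IS NOT NULL
    DROP TABLE #LogList;

CREATE TABLE #LogList
(
    ArchiveNumber int,
    LogDate       datetime,
    LogSizeBytes  bigint
);

-- enumerate every error log file currently retained by the instance
INSERT INTO #LogList (ArchiveNumber, LogDate, LogSizeBytes)
EXECUTE master.dbo.xp_enumerrorlogs;

DECLARE @CurrentLog int = 0
      , @LastLog    int;

SELECT @LastLog = MAX(ArchiveNumber)
FROM #LogList;

-- scan each log file for the search term "offline"
WHILE @CurrentLog <= @LastLog
BEGIN
    INSERT INTO #ErrorLogs (LogDate, ProcessInfo, LogText)
    EXECUTE master.dbo.xp_readerrorlog @CurrentLog, 1, N'offline';

    SET @CurrentLog += 1;
END;

SELECT LogDate, ProcessInfo, LogText
FROM #ErrorLogs
ORDER BY LogDate DESC;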

This is pretty cool here. I am checking all of the log files available on SQL Server and scanning for a search term of “offline.”

Ok, this shows a little bit of promise. I can see that the database was indeed set to offline (or “option OFFLINE” was set to ON) with a spid and a timestamp. Unfortunately, the data shown here is far from being super useful in a post-mortem. The error log does not show who took the database offline.

What if there was a way to combine these two queries into a single result set and better correlate the results into something highly useful? Something like that seems like it could be possible given we have a spid and timestamp in each result set.

Let’s try it and see.
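Here is a rough sketch of that correlation. It assumes the #ErrorLogs temp table from the error log scan above is still populated, and it simply trims both timestamps to the second so they line up.

DECLARE @TracePath nvarchar(260);

SELECT @TracePath = t.path
FROM sys.traces AS t
WHERE t.is_default = 1;

SELECT gt.StartTime
     , gt.DatabaseName
     , gt.LoginName
     , gt.HostName
     , gt.ApplicationName
     , gt.SPID
     , el.LogText
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS gt
INNER JOIN #ErrorLogs AS el
    ON el.ProcessInfo = 'spid' + CAST(gt.SPID AS varchar(10))
   AND CONVERT(varchar(19), el.LogDate, 120) = CONVERT(varchar(19), gt.StartTime, 120)
WHERE gt.EventClass = 164      -- Object:Altered
  AND gt.ObjectType = 16964    -- Database
  AND el.LogText LIKE '%offline%'
ORDER BY gt.StartTime DESC;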

If I use the timestamp from each result and the spid from each result set, I can surely join the two results together. As you can see, that is exactly what I did. I had to format the spid and timestamp a little bit – but that is ok. With that little bit of magic, I get a result set similar to the following now.

Now, I can easily see who put the database into offline mode. I can also tell you what the statement was when they executed the offline. I also know what the timestamp was when the database was put offline. This is usable data now when it comes time for a post-mortem or fact finding request.

The Wrap

Is this the most elegant method to trap this kind of data? Absolutely not, but it is usable. When it comes to something more sleek and modern, I would recommend a better tool such as Extended Events. I will show how to do this same task with greater ease and reliability through the use of XE in my next article.

Oh, if you are interested in any of my other articles about Auditing or Extended Events, I recommend you start reading here (for Auditing) or here (for XE).

T-SQL Tuesday #104: Just Can’t Cut That Cord

We all have our favorite scripts, tools or utilities. Those are the things that help make our jobs easier. Some of us may have an unhealthy relationship with some of those scripts (similar in nature to the relationship many have with their phone). Whether or not the need to cut that proverbial cord exists, today we are not discussing the health of that dependence. Suffice it to say, sometimes we simply need to upgrade our scripts. How else can we get better scripts or make our scripts better – by sharing them.

This is precisely the goal Bert Wagner (b | t) seems to have envisioned for the 104th installment of TSQL Tuesday.

If you are interested in reading the original invite, you can find that here.

“For this month’s T-SQL Tuesday, I want you to write about code you’ve written that you would hate to live without.

Maybe you built a maintenance script to free up disk space, wrote a query to gather system stats for monitoring, or coded some PowerShell to clean up string data.  Your work doesn’t need to be completely original either – maybe you’ve improved the code in some open source project to better solve the problem for your particular situation.”

There is a high probability that through the sharing of your script, somebody out there can benefit from that script. In addition, it is very likely that somebody will make a suggestion to help make your script better. Worst case (emphasis on worst case here), you have the script stored somewhere with half decent instructions on what it does and making it easily accessible for you to use again and again. Just in case you forget you have it out there – you can google for it again and find it on your own blog ;).

Personally, I have been able to find and re-use some of my older scripts. Not only do I get to re-discover them, but I also get to re-imagine a new use or improvement for the script.

Brief Intermission

A shout out is absolutely necessary for Adam Machanic (twitter) for picking the right blog meme that has been able to survive so long in the SQLFamily. This party has helped many people figure out fresh topics as well as enabled them to continue to learn.

Easy Access

While pondering the topic for today, I had the thought occur about how frequently I post a script on my blog already anyway. An easy out for this topic would have been to re-share one of those old scripts. For instance, I could easily redo a recent article about server access that has a couple scripts demonstrated in it. Or I could go back a few years to my articles about foreign keys (here or here) and space use (here or here). Even more intriguing could be to re-envision some of my articles on Extended Events. But where would the fun in that be?

Rather than take the easy road and rehash something, I have something different. This one goes hand in hand with the numerous articles and scripts I have previously provided on auditing – yet it is different.

Not every shop can afford third party software or even Enterprise edition, and so they have to come up with a different way to audit their database instances. One of the problems with a home grown solution is ensuring the data is not stored local to the server (there are lots of good reasons for that). Here is an example of what I did for one client. That client had a developer who found a back door that was giving him SA access to the SQL Server instance, and he was changing things and trying to cover his tracks – even after being warned.

First the query
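Roughly, the query looks something like the following sketch. The linked server name (AuditedServer) is illustrative, and event classes 102 through 111 are the default trace audit events covering permission grants/denies/revokes and principal changes.

SELECT rq.StartTime
     , rq.EventClass
     , rq.LoginName
     , rq.TargetLoginName
     , rq.DatabaseName
     , rq.ObjectName
     , rq.HostName
     , rq.ApplicationName
FROM OPENQUERY([AuditedServer],
    'SELECT gt.StartTime, gt.EventClass, gt.LoginName, gt.TargetLoginName,
            gt.DatabaseName, gt.ObjectName, gt.HostName, gt.ApplicationName
     FROM sys.traces AS t
     CROSS APPLY sys.fn_trace_gettable(t.path, DEFAULT) AS gt
     WHERE t.is_default = 1
       AND gt.EventClass BETWEEN 102 AND 111   -- GDR and add login/user/role member events
       AND gt.StartTime > DATEADD(DAY, -1, GETDATE())') AS rq;

On the administrative server, the output of this select is what gets inserted into the table shown in the next section.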

This query will be run from a job on a different server that is restricted in access to just a select few people. I do rely on the use of the default trace in this query. I am also reliant upon a little bit of sneaky behavior. If I run this from a separate server, prying eyes are usually unlikely to find that it is running and thus makes it easier to catch them red-handed. In addition, if they discover via some sort of trace and by a lot of luck that it is running, then they have no access to the remote server to alter anything that was captured.

The query does go out to the default trace and pull back any changes to permissions or principals on the server in question. The captured data is then stored in a database that is also restricted to a select few people. Lastly, the captured data can be routinely queried, or automated reports can be created to send email notifications of changes encountered.

The second part of the trickery here is that I am using a linked server to perform the queries (with a slight change, I could also do this via PowerShell, which will be shown in a future article). The linked server query uses the OPENQUERY format and sends the default trace query to the remote server. Since I am running this from a job on an administrative server that pulls a limited data set, I am not overly concerned with the linked server setup here.

Storing It

Once I query the data, I need to put it somewhere on my administrative server. The table setup for that is very straightforward.
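Something like the following works as a sketch. The database and table names are illustrative, and the columns simply mirror what the audit query above pulls back.

USE AdminDBA;   -- the restricted database on the administrative server (illustrative name)
GO
CREATE TABLE dbo.AuditSecurityChange
(
    AuditID         int IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_AuditSecurityChange PRIMARY KEY CLUSTERED,
    EventTime       datetime      NOT NULL,
    EventClass      int           NULL,
    LoginName       nvarchar(256) NULL,
    TargetLoginName nvarchar(256) NULL,
    DatabaseName    nvarchar(256) NULL,
    ObjectName      nvarchar(256) NULL,
    HostName        nvarchar(256) NULL,
    ApplicationName nvarchar(512) NULL,
    CaptureDate     datetime      NOT NULL
        CONSTRAINT DF_AuditSecurityChange_CaptureDate DEFAULT (GETDATE())
);
GO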

After creating this table, I am ready to store the data. All I need to do is throw the audit query into an agent job and schedule it to run on a regular schedule. For my purposes, I usually only run it once a day.

The Wrap

This has been my diatribe about service and giving back to the community. When done properly, there is a natural born effect of enhancing one’s personal life equal in some way to the amount of effort given towards the community.

Oh, and if you are interested in some of my community contributions (which according to Jens Vestargaard is an awesome contribution), read this series I have published.

Failed to Create the Audit File

Categories: News, Professional, SSC
Comments: No Comments
Published on: December 31, 2017

One day while checking things for clients, I happened across a fun little error message – “SQL Server Audit failed to create the audit file“. It just so happens that the audit had been working and then suddenly stopped and started flooding the error logs with this message.

Why would it suddenly stop working? Well, it says in the error that the disk might be full or that there may be a permissions issue. So, at least there are some possibilities provided by the message. Granted – neither of these options is very settling for a DBA.

FAILED

While reading the first occurrence of this error message, you notice that the log is bombarded with 20 more messages of the same nature. Before you can even scroll to catch up with the error messages, another 30 have appeared. It looks like the server is starting to get busier with the business users starting to run through their daily routines. You need a fix and you need it quick. You copy and paste the error to another screen for reference and then close the log to remove that distraction. Here is a copy of that error message.

SQL Server Audit failed to create the audit file ‘C:\Database\XE\DBA_Server_Audit_906B13C3-8F3F-4CFC-A391-20C5F7CAD698.sqlaudit’. Make sure that the disk is not full and that the SQL Server service account has the required permissions to create and write to the file.

Let’s try the suggestions from the error message starting with security. The SQL Server Service account needs to have permissions to the directory where the audit is being stored. Looking in that directory, you can see that the audit was obviously successful at one point because there are audit files in the directory. In addition, the audit just barely stopped working and you are certain nothing has changed.

Regardless of that, you proceed to investigate the permissions settings on the directory. Looking at the directory permissions, you are able to fully confirm that the service account does indeed have adequate permissions to the folder.

So we can rule out the permissions having changed as being a viable contender for causing this problem. As you start to proceed to investigate the next option, you start to worry that users are being prevented from doing their jobs because of the flood of errors. For sanity’s sake, you run a few quick checks to verify things look normal on the server from an activity standpoint. You also check your ticket queues and find there is nothing alarming in there.

Phew!

Still Trying

After checking the ticket queues and server activity, you bounce right back to your next check point – disk space. This is an easier check than the permissions. You have no mount points and you can verify the disk space with a quick glance in windows explorer. You look in windows explorer and can see that your C drive where the audits are being stored has 50% free space (or roughly 200GB).

Well, that is obviously not the problem either. You know the audit was working as recently as 30 minutes before you started troubleshooting and the errors did not start until almost immediately before you started checking the problem. What could it be? After all, you have 35 audit files in the audit folder. Then, suddenly, it hits you. There are 35 files. The audit was configured for 35 files with no rollover.

In this case, the easy fix is to move a bunch of files to an archive folder. As soon as that is done, a new message will appear in the error logs:

Message
Audit: Server Audit: 65536, State changed from: TARGET_CREATION_FAILED to: STARTED

Now to go change the audit process to make it a little more robust.

When dealing with SQL Audit, max_files is an important setting. Here is what msdn has to say about the setting.

MAX_FILES = integer

Specifies the maximum number of audit files that can be created. Does not rollover to the first file when the limit is reached. When the MAX_FILES limit is reached, any action that causes additional audit events to be generated will fail with an error.
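For this particular audit, one low effort way to make it more robust is to let the files roll over rather than hard-stop at a fixed count. Here is a sketch; the audit name is inferred from the file name in the error message, so treat it (and the retention numbers) as assumptions to adjust.

-- the audit must be stopped before its file options can be altered
ALTER SERVER AUDIT [DBA_Server_Audit] WITH (STATE = OFF);
GO
ALTER SERVER AUDIT [DBA_Server_Audit]
TO FILE
(
    FILEPATH = N'C:\Database\XE\',
    MAXSIZE = 100 MB,
    MAX_ROLLOVER_FILES = 35      -- rolls over instead of failing when the count is reached
);
GO
ALTER SERVER AUDIT [DBA_Server_Audit] WITH (STATE = ON);
GO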

This article is just one of several audit related articles on this blog. You can read more about some of the different ways to audit along with different perspectives from this link.

Wrap

Auditing is a necessity but it doesn’t need to be alarming or scary. Sometimes, we can become a little bit alarmed when an error occurs. We just need to keep our cool and trust our skills and abilities to troubleshoot in the event an audit fails. This article will hopefully show some of that process and help to provide a cool demeanor.

SQL Server Fixed Role Permissions

Roles and Permissions

Some of my recent articles have been focused on permissions and security. There is good reason for that – security is important and all too often it is misunderstood.

You can catch up with a couple of those articles here and here.

It is very important to understand who has what level of access within the server and databases on that server. Sometimes we see users being granted server or database access through the fixed roles available in SQL Server. How exactly do you know what permissions those individuals have via role membership? This article will help to reveal the permissions granted to the various roles and maybe a gotcha or two.

Finding Permissions

There is ample documentation on what the permissions are for each of the various fixed server and fixed database roles in SQL Server. Some of that documentation can be found here and here and here. With all of that documentation, you may be surprised to hear that it is not quite as easy to find the permissions of these roles via queries from within SQL Server – with a caveat. I am going to discuss some documented means to retrieve the permissions for the various roles and also discuss the pitfalls of these solutions.

What are the fixed roles again? Just in case you did not see them in the links from the previous paragraph, I will list the various fixed roles here.

Fixed Server Roles:

  • public
  • sysadmin
  • securityadmin
  • serveradmin
  • setupadmin
  • processadmin
  • diskadmin
  • dbcreator
  • bulkadmin

Fixed Database Roles:

  • public
  • db_owner
  • db_accessadmin
  • db_securityadmin
  • db_ddladmin
  • db_backupoperator
  • db_datareader
  • db_datawriter
  • db_denydatareader
  • db_denydatawriter

These default roles do have a unique set of permissions for each role. As noted, the permissions are documented well enough. Sometimes, it is preferable to just query the system to retrieve a list of the permissions for each role. This is especially true if one is in need of providing documentation for an auditor on who has what permission.

When trying to query for a list of permissions, one may feel as though they have fallen off their rocker just as the granny in this pic to the left.

Never fear, however, for there is a method to find the permissions of the fixed roles. Let’s take a look at what it takes to query the permissions associated to each fixed role.

System Queries

Unlike most principals, where one can query the various system catalogs to retrieve the permissions assigned to the principal, the fixed roles do not expose the assigned permissions in the same way. With fixed roles, there are two stored procedures that have been created to retrieve the permissions. These stored procedures are sp_dbfixedrolepermission and sp_srvrolepermission.

Immediately I have a “rocker moment” for each of these stored procedures. In the documentation there is a note that states the following.

This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.

When peeking into the secret sauce behind each of these stored procedures, there is nothing extraordinary to how the permissions are retrieved. In fact, both procedures reference a particular object in addition to the catalog view specific to the type of role (e.g. server_principals or database_principals). This second object is called sys.role_permissions. Wait, I said there is no direct view or table to query with the permissions, right?

Let’s try to query that table.
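The attempt is as simple as it sounds:

SELECT *
FROM sys.role_permissions;   -- outside of a DAC connection, expect this to fail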

Boom! We have just had another “rocker” moment. As it turns out, this table is the secret sauce to the fixed role permissions being accessible via query. This table can be queried from the stored procedures and can be queried direct – if it is queried from a DAC connection. Most will probably not connect to the DAC just to query the role permissions, so what can we do?

Here is a basic script showing what I have done.
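A minimal sketch of that approach looks like this (the temp table name and the scope column are mine):

IF OBJECT_ID('tempdb..#FixedRolePerms') IS NOT NULL
    DROP TABLE #FixedRolePerms;

CREATE TABLE #FixedRolePerms
(
    RoleScope  varchar(10) NULL,
    FixedRole  sysname     NOT NULL,
    Permission sysname     NOT NULL
);

-- fixed server role permissions
INSERT INTO #FixedRolePerms (FixedRole, Permission)
EXECUTE sp_srvrolepermission;

UPDATE #FixedRolePerms SET RoleScope = 'SERVER' WHERE RoleScope IS NULL;

-- fixed database role permissions
INSERT INTO #FixedRolePerms (FixedRole, Permission)
EXECUTE sp_dbfixedrolepermission;

UPDATE #FixedRolePerms SET RoleScope = 'DATABASE' WHERE RoleScope IS NULL;

SELECT RoleScope, FixedRole, Permission
FROM #FixedRolePerms
ORDER BY RoleScope, FixedRole, Permission;

From there, joining this temp table to sys.server_role_members or sys.database_role_members turns it into a per-principal permission list.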

In this script, I have taken the results from each of the stored procedures and dumped them into a temp table. Using this temp table, I can now join to this table to get a more complete list of the permissions in effect for various principals. Once that more complete list is made, then it can be handed to the auditors to satisfy them for at least a week before they ask again ;).

Now it is time for yet another “rocker” moment. Look carefully at the output from these stored procedures. Remember the notice that they will be removed (i.e. on the deprecation list)? It seems there is good reason to remove them from SQL Server – the permissions in sys.role_permissions are not being maintained. That is correct! There are permissions listed in the output of these procedures that are no longer applicable!

If the list is not entirely accurate, then what can be done to get an accurate list of permissions? As it turns out, it seems one may need to code a solution that has the permissions hard coded in the script – very similar to what these system stored procedures were doing.

Recap

Capturing fixed role permissions is possible through the use of two system stored procedures. Just like the red telephone booths, these stored procedures are soon to be a thing of the past. These stored procedures are deprecated and may be just as reliable as those old telephone booths.

Too bad there is not a better means to trap the permissions from these fixed roles. It would be really nice to be able to view them just the same as can be done with the other principals (users and logins).

Now go forth and Audit your roles.

PS

What is up with that weird granny pic? Well, it was a challenge from Grant Fritchey to use the image in a technical blog post. You can read the challenge invite over here. And yeah, I know it is some sort of Dr. Who thing.

SQL Server Permissions – Database Roles


EZ PZ Permission Squeezee

Given the critical level of importance related to permissions, one may think it is a concept that is well understood by all who are given the charge of protecting the data.

There is a great responsibility when the keys to the data kingdom are granted to a DBA. The DBA is responsible for ensuring the integrity and security of the data. To ensure the security of the data, the DBA has the responsibility of granting, revoking, or denying access, at various levels, to the data and objects within the database.

Despite this high visibility and critical nature of this concept, understanding permissions and assigning permissions does not seem to be as straight forward as it should be. Evidence of this is something I see far too frequently in the wild  as illustrated by the following image.

This screenshot is only a part of the problem that I wish to break down and discuss in this article.

SQL Server Permissions

A fundamental component of SQL Server is the security layer. A principal player in security in SQL Server comes via principals. In a previous article, I outlined the different flavors of principals while focusing primarily on the users and logins. You can brush up on that article here. While I touched lightly, in that article, on the concept of roles, I will expound on the roles a bit more here – but primarily in the scope of the effects on user permissions due to membership in various default roles.

Let’s reset back to the driving issue in the introduction. Frequently, I see what I would call a gross misunderstanding of permissions by way of how people assign permissions and role membership within SQL Server. The assignment of role membership does not stop with database roles. Rather, it is usually combined with a misconfiguration of the server role memberships as well. This misunderstanding can really be broken down into one of the following errors:

  • The belief that a login cannot access a database unless added specifically to the database.
  • The belief that a login must be added to every database role.
  • The belief that a login must be added to the sysadmin role to access resources in a database.

The experienced professional will likely note that there is a direct conflict between a few of these beliefs. That said, all too often I see all three of these misconceptions implemented in every instance for nearly every user.

Let’s start looking at these misconceptions. To investigate these problems, I will create a login. After creating the login, I will add that login as a member to the sysadmin role. Once the login is added to the sysadmin role, I will then run some simple tests within my DBA database.

Sysadmin

The creation of a server principal (login) and adding the principal to the sysadmin role is fairly simple. The next couple of screenshots are followed by a quick script that will perform the same actions.

As was promised, here is the script that will do the same thing as illustrated in the GUI above.
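In short, something like this (the password is a throwaway placeholder, and the ALTER SERVER ROLE syntax assumes SQL Server 2012 or later):

USE master;
GO
CREATE LOGIN [superuser]
    WITH PASSWORD = N'SuperDuperPass$w0rd1'   -- placeholder password
    , CHECK_POLICY = ON;
GO
ALTER SERVER ROLE [sysadmin] ADD MEMBER [superuser];
GO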

With the login now in place, let’s test. The primary test here is the claim that a server principal cannot access a database when explicit database permissions have not been granted. Here is the first batch of tests that I will run.
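A sketch of that first batch (the object and principal names match the ones used throughout this post):

USE DBA;
GO
EXECUTE AS LOGIN = 'superuser';
GO
-- is there an object or a database principal visible from this connection?
SELECT * FROM sys.objects WHERE name = 'MySuperTable';
SELECT * FROM sys.database_principals WHERE name = 'superuser';
GO
-- which permissions are in effect for the impersonated principal?
SELECT * FROM fn_my_permissions(NULL, 'SERVER');
GO
-- and what does sp_helprotect report for a database user of that name?
EXECUTE sp_helprotect @username = 'superuser';
GO
REVERT;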

The first statement is to allow me to impersonate the superuser login. From the impersonated connection, I first check to see if I can query the sys.objects system catalog. Then I test the database_principals system catalog. Next in line is to check the list of permissions that have been granted to the superuser account. Each of these queries executes successfully without error. Here is a screen grab for these first three tests.

Notice the first two queries returned an empty set. This is not a failure, rather evidence that the select statement ran successfully. In the third result set, we can see that the superuser account has all sorts of server level permissions. In the result set there was not a single database level permission.

The last query that utilized sp_helprotect returned the following error:

Msg 15330, Level 11, State 1, Procedure sys.SP_HELPROTECT, Line 302
There are no matching rows on which to report.

This is confirmation that there is no database user called superuser.

So I can query a database without my server principal being given direct access to the database (it is worth reiterating here that this principal is in the sysadmin server role), but can I do other things such as create objects? Let’s test that with the following script.
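A sketch of that test (the column definitions are just filler):

USE DBA;
GO
EXECUTE AS LOGIN = 'superuser';
GO
IF OBJECT_ID('dbo.MySuperTable', 'U') IS NOT NULL
    DROP TABLE dbo.MySuperTable;
GO
CREATE TABLE dbo.MySuperTable
(
    MyID      int IDENTITY(1,1) NOT NULL,
    SomeValue varchar(64)       NULL
);
GO
-- confirm the table was created
SELECT * FROM sys.objects WHERE name = 'MySuperTable';
GO
REVERT;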

This script is straightforward. All it does is check for a table. If that table exists, then drop it and recreate it. The last little bit will check to confirm the successful creation of the table. This script succeeds as illustrated in the following image.

That should be pretty convincing that if you add a server principal to the sysadmin server role then that user has access to the databases. These tests have illustrated that it is not necessary to add a server principal as a database principal when that server principal is in the sysadmin role (an erroneous configuration often seen). If the database principal is not necessary in this case, then what will happen if a database principal does exist?

Database Principal in Every Database Role

The next logical step in the sequence is to create a database principal for the already created superuser server principal. Once created, we will test to see what effects if any can be observed by having this database principal in every database role as well as the sysadmin role. This will help to test the first two bullet items from the list of common configurations I have seen in the wild. Let’s start with the script that will help create the principal to be used during the next iteration of tests.
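It amounts to something like this. ALTER ROLE ... ADD MEMBER assumes SQL Server 2012 or later; sp_addrolemember would do the same job on older versions.

USE DBA;
GO
CREATE USER [superuser] FOR LOGIN [superuser];
GO
ALTER ROLE [db_owner]          ADD MEMBER [superuser];
ALTER ROLE [db_accessadmin]    ADD MEMBER [superuser];
ALTER ROLE [db_securityadmin]  ADD MEMBER [superuser];
ALTER ROLE [db_ddladmin]       ADD MEMBER [superuser];
ALTER ROLE [db_backupoperator] ADD MEMBER [superuser];
ALTER ROLE [db_datareader]     ADD MEMBER [superuser];
ALTER ROLE [db_datawriter]     ADD MEMBER [superuser];
ALTER ROLE [db_denydatareader] ADD MEMBER [superuser];
ALTER ROLE [db_denydatawriter] ADD MEMBER [superuser];
GO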

The script basically creates a database principal and then adds that principal to each of the default fixed database roles available in SQL Server. Those roles are easily viewed in the first image in this article and are also listed here for ease of reference (intentionally ignoring the public role).

  • db_owner
  • db_accessadmin
  • db_securityadmin
  • db_ddladmin
  • db_backupoperator
  • db_datareader
  • db_datawriter
  • db_denydatareader
  • db_denydatawriter

The tests for this round of changes will be just like in the previous section. Here is the script to be used for the first batch of tests.

The major differences between this version of the test and the previous iteration are that the table is still in existence (I did not drop it, but that will come shortly) and that I have created a database principal, so the first two queries will show a single row each instead of an empty result set. The next significant difference is the last query that utilizes sp_helprotect. Instead of an error like the first time, this execution gives me the following results.

Next I will rerun the test to create an object with the following script:

This script will produce the same results as in the first example. The table, since it was already there, will be dropped and then recreated. After it is recreated, the validation script will find the table and return a single row.

This series of tests has just shown that a database principal tied to a login which is added to the sysadmin role and then added to all database roles will still be able to do pretty much everything a sysadmin can do. To this point, I have shown how bullet points one and three are not sound in reasoning. With that, there is still the test to prove (true or false) the final bullet point that a principal should be added to every database role.

Before testing the validity of the configuration, I want you to look closely at the list of database roles.

  • db_owner
  • db_accessadmin
  • db_securityadmin
  • db_ddladmin
  • db_backupoperator
  • db_datareader
  • db_datawriter
  • db_denydatareader
  • db_denydatawriter

Just reading the names of these roles should be a good indicator that adding a user to every database role is not a desired configuration. I am sure the question is bubbling up now in regards to some of those “but how is it working in the previous examples?”. The response to that is very easy: “The user is a sysadmin!”.

To test this last bit of the configuration, it is time to remove the server principal from the sysadmin role. I will do that via the following script.
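It is essentially a one-liner (again using the SQL Server 2012 and later syntax):

USE master;
GO
ALTER SERVER ROLE [sysadmin] DROP MEMBER [superuser];
GO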

With the user removed from the sysadmin role, it is time to repeat the tests.

The results are significantly different this time.

Msg 229, Level 14, State 5, Line 132
The SELECT permission was denied on the object ‘objects’, database ‘mssqlsystemresource’, schema ‘sys’.
Msg 229, Level 14, State 5, Line 136
The SELECT permission was denied on the object ‘database_principals’, database ‘mssqlsystemresource’, schema ‘sys’.

I just tried to query two objects and was refused due to a denial in each case. What if I try to query that table I created previously?

This will give me the following error.

Msg 229, Level 14, State 5, Line 141
The SELECT permission was denied on the object ‘MySuperTable’, database ‘DBA’, schema ‘dbo’.

Well that is not the desired effect or is it? Remember, I added the superuser principal to every role and that includes the “deny” roles.

How about if I try to drop and recreate that table?

Again, no dice!

Msg 229, Level 14, State 5, Line 157
The SELECT permission was denied on the object ‘objects’, database ‘mssqlsystemresource’, schema ‘sys’.

With this constant blocker of being denied cropping up, let’s take a look at relevant permissions related to the database roles to which I added the superuser principal. To evaluate those permissions, I am going to use the following script that relies on the results of a system stored procedure called sp_dbfixedrolepermission. (Word of advice, this procedure is deprecated and I will be writing more on that in a future article.)
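Here is a sketch of that evaluation. It dumps the procedure output into a temp table and then trims it down to just the roles that superuser is actually a member of.

IF OBJECT_ID('tempdb..#FixedDBRolePerms') IS NOT NULL
    DROP TABLE #FixedDBRolePerms;

CREATE TABLE #FixedDBRolePerms
(
    DbFixedRole sysname,
    Permission  sysname
);

INSERT INTO #FixedDBRolePerms (DbFixedRole, Permission)
EXECUTE sp_dbfixedrolepermission;   -- deprecated, but still returns role/permission pairs

SELECT fr.DbFixedRole
     , fr.Permission
FROM #FixedDBRolePerms AS fr
INNER JOIN sys.database_principals AS roles
    ON fr.DbFixedRole = roles.name
INNER JOIN sys.database_role_members AS drm
    ON roles.principal_id = drm.role_principal_id
INNER JOIN sys.database_principals AS members
    ON drm.member_principal_id = members.principal_id
WHERE members.name = N'superuser'
ORDER BY fr.DbFixedRole
     , fr.Permission;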

Looking at the results of that script, one can start to see why we are having so many problems executing a select.

In this result set, one can see that by virtue of the db_owner role and the db_datareader and db_datawriter roles, we have permissions that are being GRANTed. Immediately after that, we hit the DENY. This behavior is to be expected. A DENY permission will override a GRANT except when the principal is a member of the sysadmin server role.

Seeing this in action, the next logical step is to remove the role membership from the two “deny” roles (in this scenario where all are checked) for starters. Once done, go ahead and re-run these tests, you will see that things will work differently. While you are at it, take it a step further and rerun the tests after removing superuser from the db_owner role. Think about it, do you really need to have db_owner along with all of the “grant” related roles given that db_owner gives all of those same roles?

One last bit of homework is for you to run the tests another time from the point of when the database principal was created. Only this last batch of tests, impersonate the database principal instead of the server principal. You can do that by doing the following:
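That impersonation would look something like the following, reusing the superuser principal from the earlier examples:

EXECUTE AS USER = 'superuser';   -- impersonate the database principal rather than the login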

Instead of using the following impersonation statement:
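That is, the statement used throughout the earlier tests:

EXECUTE AS LOGIN = 'superuser';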

You may be interested by the results.

Recap

I have taken you on a journey through three common misconfigurations with principals and permissions. It really is quite unnecessary to add a principal to the sysadmin fixed server role as well as every fixed database role. Even if the principal is not a member of the sysadmin role, it is foolish to add it to every database role. There just is no good logic in setting up permissions in that manner.

Take a moment to focus and you can say EZ PZ permission squeezee.

Database Dropped

What do you do when a developer comes to you and asks, “Where did the database go? The database was there one minute, and the next it was not.” Only one thing could be worse than the feeling of losing a database on your watch, and that would be losing a production database. It’s like magic—it’s there, and then it disappears. To compound the issue, when asking people if they know what might have happened, all will typically deny, deny, deny.

What do you do when you run into that missing database situation and the inevitable denial that will ensue?  This is when an audit can save the day.  Through an audit, you can discover who dropped the database and when it happened.  Then you have hard data to take back to the team to again ask what happened.  Taking the info from a previous article of mine, we can alter the script I published there and re-use it for our needs here.
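A sketch of that altered script follows; EventClass 46 and 47 are the default trace values for Object:Created and Object:Deleted.

DECLARE @TracePath nvarchar(260);

SELECT @TracePath = t.path
FROM sys.traces AS t
WHERE t.is_default = 1;

SELECT gt.StartTime
     , gt.DatabaseName
     , gt.LoginName
     , gt.HostName
     , gt.ApplicationName
     , CASE gt.EventClass
           WHEN 46 THEN 'Database Created'
           WHEN 47 THEN 'Database Dropped'
       END AS EventName
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS gt
WHERE gt.ObjectType = 16964          -- Database
  AND gt.EventClass IN (46, 47)      -- Object:Created / Object:Deleted
ORDER BY gt.StartTime DESC;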

This script will now query the default trace to determine when a database was dropped or created.  I am limiting this result set through the use of this filter: ObjectType = 16964.  In addition to that, I have also trimmed the result-set down to just look for drop or create events.

This is the type of information that is already available within the default trace.  What if you wished to not be entirely dependent on the default trace for that information?  As luck would have it, you don’t need to be solely dependent on the default trace.  Instead you can use the robust tool called extended events.  If you would like to be able to take advantage of Extended Events to track this information, I recommend you read my follow-up article here.

This has been a republication of my original content first posted here.

Audit All Select Statements

Legislation and regulation sometimes dictate that certain activities must be tracked within a database. On occasion, it will be required that queries be audited and tracked to reach compliance with the legislation or regulation. To achieve this compliance, this article will demonstrate how to use Extended Events to audit statements being executed within the database.

Over the course of a few recent articles, I have included little hints here and there alluding to this article. In fact, now would be a good time to review one of these articles in preparation for what I will be sharing today. Take a moment to refresh your memory on this article about finding the right event – here.

Backstory

I have to be a little honest here. Prior to somebody asking how they could possibly achieve a statement audit via extended events, I had not considered it as a tool for the job. I would have relied on Audit (which is Extended Event related), or some home grown set of triggers. In this particular request, Audit was not fulfilling the need and custom triggers were not an option. Another option might have included the purchase of third party software, but there are times when budget does not allow for nice expensive shiny software.

So, with a little prodding, I hopped into the metadata and poked around a bit to see what I could come up with to achieve this low-budget audit solution.

Low-Budget Audit

Using the handy scripts I have shown previously (and that I just linked to), I was able to explore the Extended Events metadata and find just what may work properly. The requirements in this case were that it needed to be done in XEvents and that it must capture SELECT statements. To find the events that seemed plausible for this task, I plugged the term “SELECT” into my queries as follows:
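In the spirit of that linked article, the hunt looks something like this sketch. It searches event names, event descriptions, and any map values tied to an event column for the term (the map search is what surfaces degree_of_parallelism, since its statement_type column maps to a SELECT value).

DECLARE @SearchTerm nvarchar(128) = N'%select%';

SELECT xo.name AS event_name
     , xo.description
FROM sys.dm_xe_objects AS xo
WHERE xo.object_type = 'event'
  AND (xo.name LIKE @SearchTerm OR xo.description LIKE @SearchTerm)

UNION

SELECT xo.name AS event_name
     , xo.description
FROM sys.dm_xe_objects AS xo
INNER JOIN sys.dm_xe_object_columns AS oc
    ON oc.object_name = xo.name
   AND oc.object_package_guid = xo.package_guid
INNER JOIN sys.dm_xe_map_values AS mv
    ON oc.type_name = mv.name
   AND oc.type_package_guid = mv.object_package_guid
WHERE xo.object_type = 'event'
  AND mv.map_value LIKE @SearchTerm
ORDER BY event_name;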

From this query, there was really only one event that made any sense for my task – “degree_of_parallelism”. There was another event returned in the result set, but it made no sense to me as a possible candidate event for auditing select statements (it was ucs_transmitter_reclassify). From the results, I then took the keyword associated to degree_of_parallelism to see what else might be pertinent. I plugged that keyword “execution” into the following query:

The results of the previous query will be quite a bit more substantial. Some make sense to include in the audit and some do not. I opted not to include any of the additional events, to keep things as clean and simple as possible.

More Requirements

I now have the base events covered that I want to use for my audit. That said, my base requirements are just not extensive enough. When I audit something, I really want to know who did it, when it was done, where it originated and what was it that they did. The degree_of_parallelism event will capture all of the select statements but it does not meet these additional requirements.

In order to meet the additional requirements, I will attach a handful of actions to the degree_of_parallelism event. The addition of the actions will provide all of the data I want and need. Combining the event and actions together, I came up with the following session definition.
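Here is a sketch of what that session can look like. The session name, file target path, and exact action list are illustrative; the predicate pieces are the ones described below.

CREATE EVENT SESSION [AuditSelects] ON SERVER
ADD EVENT sqlserver.degree_of_parallelism
(
    ACTION
    (
        sqlserver.client_app_name
        , sqlserver.client_hostname
        , sqlserver.database_name
        , sqlserver.nt_username
        , sqlserver.server_principal_name
        , sqlserver.session_id
        , sqlserver.sql_text
    )
    WHERE
    (
        [sqlserver].[database_name] = N'AdventureWorks2014'
        AND [statement_type] = 1    -- 1 = SELECT in the statement_type map
        AND NOT [sqlserver].[like_i_sql_unicode_string]([sqlserver].[client_app_name], N'%IntelliSense%')
    )
)
ADD TARGET package0.event_file
(
    SET filename = N'C:\Database\XE\AuditSelects.xel'
)
WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, MAX_DISPATCH_LATENCY = 30 SECONDS, STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [AuditSelects] ON SERVER STATE = START;
GO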

This is a very simple session to pull together. I have added a few things in on my predicate to help limit the scope of the session to the AdventureWorks2014 database and to make sure I am not trapping events related to the code completion tools shown. Notice here also that there is a potential chance to optimize this session. Can you find it? Hint: It may be in the predicate. Second hint: re-read this article about predicates.

Now the trick to what makes this work to audit only the select statements is right there in the predicate. I have instructed the session to ignore any statement_type that is not a value of 1. As it turns out, statement_type of 1 is a select statement. To see how these values map, here is a quick query and the correlated results.
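The map lookup itself is a one-table query:

SELECT mv.map_key
     , mv.map_value
FROM sys.dm_xe_map_values AS mv
WHERE mv.name = N'statement_type'
ORDER BY mv.map_key;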


Based on these values, should you want to audit a different query type, just change the predicate to use the map_key value that corresponds to the desired statement type.

Testing

Testing this session is rather simple. Here is a quick and dirty script to help test it.
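Something as basic as this will do. It assumes the AdventureWorks2014 database from the session predicate and reads back from the file target path used above.

USE AdventureWorks2014;
GO
-- a select that the session should capture
SELECT TOP (10) p.BusinessEntityID
     , p.FirstName
     , p.LastName
FROM Person.Person AS p;
GO
-- a quick peek at what landed in the file target
SELECT CAST(xft.event_data AS xml) AS event_data
FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\AuditSelects*.xel', NULL, NULL, NULL) AS xft;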

Conclusion

Building low cost solutions is a common requirement for the data professional. When given the opportunity, try things out to see what you can build to provide the solution. In this case, I have demonstrated how Extended Events, however unlikely a candidate, can provide a working solution to help audit any select statements occurring within your database.

This has been another article in the 60 Days of XE series. If you have missed any of the articles, or just want a refresher, check out the TOC.

SQL Server Principals – Back to Basics

Categories: News, Professional, Security, SSC
Comments: 3 Comments
Published on: January 28, 2016


Prelude in SQL Minor

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s just call this first post in the challenge to be my official acceptance.

SQL Server Principals

A fundamental component of SQL Server is the security layer. A principal player in security in SQL Server comes via principals. SQL Server principals come in more than one flavor. (This is where a lot of confusion gets introduced.) The different types of principals are database and server. A database principal is also called a database user (sometimes just user). A server principal is also called a server login, or login for short.

Server and database principals are quite a bit different in function and come with different permission sets. The two are sometimes used interchangeably, but that is incorrect. Today I hope to unravel some of what each is and how to see permissions for each type of principal.

Let’s first look at definitions for each type of principal and how to query for basic information about each principal (e.g. name and creation date).
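As a quick sketch, that basic information query amounts to one catalog view per scope:

-- server principals: logins and server roles, with name and creation date
SELECT sp.name
     , sp.type_desc
     , sp.create_date
FROM sys.server_principals AS sp
ORDER BY sp.create_date DESC;

-- database principals: users, database roles, and application roles in the current database
SELECT dp.name
     , dp.type_desc
     , dp.create_date
FROM sys.database_principals AS dp
ORDER BY dp.create_date DESC;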

Principals

In general principals are entities to which permissions are granted. These entities can request access or resources. As mentioned already, these principals can be scoped to different levels. These levels as mentioned include database and server.

Server Principals

Server Principals are the kind of principals that include logins and server roles. You may be familiar with some of the server roles:

  • public
  • sysadmin
  • securityadmin
  • serveradmin
  • setupadmin
  • processadmin
  • diskadmin
  • dbcreator
  • bulkadmin

The logins can be created from a Windows login or group, or be created as a SQL Login. These principals can also include custom created server roles (in addition to the system created server roles already listed). Once a principal is created, permissions may be granted to the principal. Once those permissions are granted, the principal can request the related resource and complete the associated task.

What permissions can be granted to a principal at the server scope? A list of permissions can be created via the following query:
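One way to produce such a list is through sys.fn_builtin_permissions, scoped to the SERVER class:

SELECT bp.permission_name
     , bp.covering_permission_name
     , bp.parent_covering_permission_name
FROM sys.fn_builtin_permissions('SERVER') AS bp
ORDER BY bp.permission_name;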

And a sample of the results could look like this:


An interesting note here is in the red highlighted permissions. If you look at the documentation for server permissions you will not find those two permissions (at least not as of this writing).

From the permissions returned by the query, you will see that these are all permissions related to server administration type of tasks. Note that these permissions do not grant the ability to do the type of tasks attributed to database types of actions. For instance, the server permissions do not grant the explicit permission to create a reference, execute a procedure or create a table within a database. These are all permissions reserved for the database scope.

Database Principals

Database principals are the type of principals scoped to the database level. These principals will request resources from the database and depending on permissions granted to the principal be able to perform various tasks within the database. The types of database principals include database roles, application roles, and database users. SQL Server Logins can be mapped to a database user and thereby be granted access to the database as the database principal.

Since a database principal can include the database roles, here is a list of the potential database roles:

  • public
  • db_owner
  • db_accessadmin
  • db_securityadmin
  • db_ddladmin
  • db_backupoperator
  • db_datareader
  • db_datawriter
  • db_denydatareader
  • db_denydatawriter

These roles can significantly simplify security management within the database. One can easily assign a database principal to be a member of the db_backupoperator role and thus grant that principal the ability to backup the database without much more need to grant or deny permissions.

Considering the ease of role management, one thing that bugs me and that I see frequently is the addition of a database user to every database role in all databases. It makes little sense to add a user to the db_owner role and then also to the db_datareader and db_denydatareader roles. For one, db_owner already has the ability to read from the tables thus negating db_datareader. Consider the db_denydatareader now – it is opposite to db_datareader. Why try to grant and deny read access to the same user? It makes no sense at all.

The next thing that pains me about these roles is the public role. I have written about the public role previously, but it needs stating again. Do not grant any additional permissions to the public role. This is like enabling the guest user and opening up the database to everybody. I have seen a rash of granting “alter server state” and “view server state” to the public server level role and it is painful to see. The same advice applies to the public role whether it is at the server or database scope.

Once a database principal has been created, it is time to proceed to giving the principal the necessary access. Here are some of the possible permissions that can be granted (along with a query to find even more):
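The same sys.fn_builtin_permissions function, scoped to the DATABASE class, will produce the fuller list:

SELECT bp.permission_name
     , bp.covering_permission_name
     , bp.parent_covering_permission_name
FROM sys.fn_builtin_permissions('DATABASE') AS bp
ORDER BY bp.permission_name;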


Within these results, it is apparent that a database principal can be granted the ability to perform backups, create procedures, execute procedures and even create encryption keys. Between the server scope and the database scope, there is a decent level of granularity to control access and resources within the database instance.

Finding Permissions for Each Principal

It is not uncommon to need to know and report on who has been granted what level of permissions within the database environment. If you have been with the database from its inception, you probably have documentation on every permission that has been granted. If you inherit a database, the odds of having good documentation about the permissions are probably significantly lower. I have shared a comprehensive script previously to show all of the permissions. Sometimes a little less info is more than adequate for the current needs.

Here is a quick alternative with just a little less info to provide database permissions and server permissions based on the input of a specific list of users and databases.
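A sketch of that alternative follows. It leans on a string splitter (dbo.DelimitedSplit8K, per the note below) and a little dynamic SQL; the principal and database lists are simply example values.

DECLARE @PrincipalList varchar(8000) = 'testuser,Phantom,Gargouille,Garguoille'
      , @DatabaseList  varchar(8000) = 'DBA,AdventureWorks2014'
      , @NameList      nvarchar(max)
      , @SQL           nvarchar(max) = N'';

-- quoted, comma separated principal list for use inside the dynamic SQL below
SELECT @NameList = STUFF((
        SELECT ',' + QUOTENAME(LTRIM(RTRIM(s.Item)), '''')
        FROM dbo.DelimitedSplit8K(@PrincipalList, ',') AS s
        FOR XML PATH('')), 1, 1, '');

-- server scoped permissions for the requested principals
SELECT sp.name AS principal_name
     , sp.type_desc
     , srvperm.permission_name
     , srvperm.state_desc
FROM sys.server_permissions AS srvperm
INNER JOIN sys.server_principals AS sp
    ON srvperm.grantee_principal_id = sp.principal_id
INNER JOIN dbo.DelimitedSplit8K(@PrincipalList, ',') AS s
    ON sp.name = LTRIM(RTRIM(s.Item));

-- database scoped permissions for the requested principals in each requested database
SELECT @SQL = @SQL + N'
SELECT ''' + d.name + N''' AS database_name
     , dp.name AS principal_name
     , dp.type_desc
     , dbperm.permission_name
     , dbperm.state_desc
FROM ' + QUOTENAME(d.name) + N'.sys.database_permissions AS dbperm
INNER JOIN ' + QUOTENAME(d.name) + N'.sys.database_principals AS dp
    ON dbperm.grantee_principal_id = dp.principal_id
WHERE dp.name IN (' + @NameList + N');'
FROM sys.databases AS d
INNER JOIN dbo.DelimitedSplit8K(@DatabaseList, ',') AS dbs
    ON d.name = LTRIM(RTRIM(dbs.Item));

EXECUTE sys.sp_executesql @SQL;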

Now, this script does require the use of a function to split strings. The one I am referencing was written by Jeff Moden and can be found here. In this example, I am looking at a few test principals that I created – testuser, Phantom, Gargouille and Garguoille (which is invalid). Running the script, I would receive results such as the following:


This is a pretty quick running script to gather report worthy data on principals and permissions.

All about the Change

Comments: 1 Comment
Published on: January 12, 2016

The second Tuesday of January 2016 is now upon us and you know what that means. Well, I hope you know what that means.

It is time for TSQL Tuesday. It is now the 74th edition of this monthly blog party. This month the host is Robert Davis (blog | twitter) and he has asked us to “Be the change”. Whether the inspiration for this topic is the new year and resolutions, or Gandhi (you must be the change), or Caddyshack (be the ball), we will be discussing “Change.”

Specifically, Robert requested that we discuss data changes and anything relating to data changes. Well, I am going to take that “anything” literally and stretch the definition of changing data just a bit. It will all make sense by the end (I hope).

Ch-ch-changes

Changes happen on a constant basis within a database. Data will more than likely be changing. Yes, there are some exceptions to that, but the expectation that data is changing is not an unreal expectation.

Where that expectation becomes unwanted is when we start talking about the data that helps drive the configuration of the server. Ok, technically that is a setting or configuration option or a button, knob, whirlygig or thingamajig. Seldom do we really think about these settings as data. Think about it for a moment though. We can certainly derive some data about these changes (if these settings themselves are not actually data).

So, while you may call it settings changes, I will still be capturing data about the changes. Good? Good! Another term for this is auditing. And auditing applies to all levels including ETL processes and data changes etc. By that fortune, I just covered the topic again – tangentially.

How does one audit configuration changes? Well, there are a few different methods to do this. One could use a server side trace, SQL Audit, Extended Events or (if somebody wants to) a custom solution not involving any of those, using some sort of variation of T-SQL and error log monitoring. The point is, there are options. I have discussed a few options for the custom solution path, as well as the default trace path in a recently published article. Today I will dive into what it looks like via SQL Audit.

When creating an audit to figure out what changes are occurring within the instance, one would need to utilize the SERVER_OPERATION_GROUP action audit group. This action group provides auditing of the following types of events:

  • Administer Bulk Operations
  • Alter Settings
  • Alter Resources
  • Authenticate
  • External Access
  • Alter Server State
  • Unsafe Assembly
  • Alter Connection
  • Alter Resource Governor
  • Use Any Workload Group
  • View Server State
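As a point of reference, a server audit built on that action group can be sketched out like this (the names, file path, and options are illustrative):

CREATE SERVER AUDIT [ServerOperationAudit]
TO FILE
(
    FILEPATH = N'C:\Database\Audit\',
    MAXSIZE = 100 MB,
    MAX_ROLLOVER_FILES = 10
)
WITH (QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE);
GO
CREATE SERVER AUDIT SPECIFICATION [ServerOperationAuditSpec]
FOR SERVER AUDIT [ServerOperationAudit]
ADD (SERVER_OPERATION_GROUP)
WITH (STATE = ON);
GO
ALTER SERVER AUDIT [ServerOperationAudit] WITH (STATE = ON);
GO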

From this group of events, we can guess at the types of actions that might trigger one of these events to fire for the audit. Some of the possible actions would be:

  • Issue a bulk administration command: BULK INSERT TestDB.dbo.Test1 FROM 'c:\database\test1.txt';
  • Issue an alter connection command: KILL 66
  • Issue an alter resources command: CREATE RESOURCE POOL PrimaryServerPool WITH {}
  • Issue an alter server state command: DBCC FREEPROCCACHE
  • Issue an alter server settings command: run sp_configure followed by RECONFIGURE
  • Issue a view server state command: SELECT * FROM sys.dm_xe_session_targets
  • Issue an external access assembly command: CREATE ASSEMBLY SQLCLRTest FROM 'C:\MyDBApp\SQLCLRTest.dll' WITH PERMISSION_SET = EXTERNAL_ACCESS;
  • Issue an unsafe assembly command: CREATE ASSEMBLY SQLCLRTest FROM 'C:\MyDBApp\SQLCLRTest.dll' WITH PERMISSION_SET = UNSAFE;
  • Issue an alter resource governor command: ALTER RESOURCE GOVERNOR DISABLE
  • Authenticate: see View Server State (the vsst event type occurs for authentication events)
  • Use any workload group: see Resource Governor

This is quite a bit of interesting information. All of these events can be audited from the same audit group. The interesting ones of this bunch are the ones that indicate some sort of change has occurred. These happen to be all but the “Authenticate”, “View Server State” and “Use any workload Group” events even though these events may be stretched to say something has changed with them as well.

With all of that in mind, I find the “alter server settings” event to be the most problematic. While it does truly capture that something changed, it does not completely reveal to me what was changed – just that a reconfigure occurred. If a server configuration has changed, I can capture the spid and that reconfigure statement – sure. Once that is captured, I now have to do something more to figure out what configuration was “reconfigured”. This is highly frustrating.

Here’s an example from the audit I created:


This is only a small snippet of the results. I can see who made the configuration change, the time, the spid, the source machine etc. I just miss that nugget that tells me the exact change that was made. At least that is the case with the changes made via sp_configure. There are fixes for that – as previously mentioned.

Here is another bit of a downside. If you have the default trace still running, a lot of this information will be trapped in that trace. Furthermore, some of the events may be duplicated via the object_altered event session (e.g. the resource governor events). What does this really mean? Extra tracing and a bit of extra overhead. It is something to consider. As for the extended events related events and how to do this sort of thing via XE, I will be exploring that further in a future post.

Suffice it to say that, while not a complete solution, the use of SQL Audit can be viable to track the changes that may be occurring within your SQL Server – from a settings point of view.
