The Curiously Large CASE Statement

 

A powerful little expression in SQL Server is the CASE statement. The CASE statement has the ability to “transform” data per business rules to help better understand the data in a given query. In its simplest form, the CASE statement evaluates a list of conditions (in order) and returns the first matching result expression.

Most uses of the CASE statement are benign enough that one wouldn’t even bat an eye at them. Every now and again, along comes a nifty little query that contains over 300 conditions in the CASE statement. Something that extreme certainly deserves a second (third or even fourth) look. In this article, I am going to explore one such CASE statement in an effort to squeeze more efficiency out of it.

300!

300+ conditions may be a bit hard to imagine. I know I had to do a double-take at the list more than once. Immediately upon seeing such an awful piece of code (not pulling punches there), I really wanted to dump the conditions/evaluations into a table and join to it. Eventually, I did put it into a table, but not before catching some metrics. Let’s take a look at what this thing looked like from the execution plan properties perspective.

In the preceding image, note the query map in the scrollbar. I have the case statement circled in green. Yes – it is really that large. Next, I have “cached plan size”, “estimated number of rows”, and “estimated subtree cost” circled in red. Each of these shows something that is a little alarming to me. First, the plan size is 48MB! Next, the cost of the query is a little elevated at 51 and change. Last is what seems to be a high value for the estimated number of rows to be returned (only ~6k actually get returned).

On the plus side, the query only takes 381ms, or does it? The query actually takes 3 seconds to complete despite the timing in the properties window. Sometimes, we have to take some of these values in the properties window with a grain of salt.

What if I try to run this query on SQL Server 2017 instead of SQL Server 2014? Let’s take a look at what that might look like.

First, we have a few properties that seem to indicate that we have moved in a slightly better direction with this particular query. The cached plan size has dropped all the way down to just 37MB. The estimated cost dropped to 36 and change, and the elapsed time dropped to 353ms. Nothing terribly earth shattering there – but just by changing to SQL Server 2017 we see a bit of an improvement.

That said, take a look at the compile memory and compile time. Memory is 30,544kb and the time is 1,324ms. Aha – we see why this query takes 3 seconds on SQL 2014 and roughly 2 seconds on SQL 2017. High compile times. That memory for the compile is too high for my liking as well so let’s go ahead and take a look at why that might be a little high. It’s time to look at the plan XML and the converted tree for the query (because it is fun).

This is where we get a little geeky with the CASE statement. In the execution plan XML (and the converted tree we will see in a bit), every evaluation criteria for the case statement must have some sort of instruction on how and what should be done. In the case of the XML, we have multiple scalar operators to denote the CASE statement and then each of the evaluations for the CASE statement.

Once we enter the CASE statement, the first thing we see is that there is an “OR” operation (circled in red). Then for each of the values to be compared, we will see an “Equality” operation (circled in green) followed by the value to be evaluated. For just the three values I have in the image, notice how that immediately bloats the execution plan XML for this query.

In the preceding image, we see the converted tree for this query. We see the same pattern evolve here. There is a scalar operator for each value to be evaluated. In this image, I just have the equality comparisons. However, there is also an OR operator that is defined further up in this converted tree. Could this possibly be the reason for the bloated compile time and compile memory?

Let’s evolve that query just a bit and use a table and a join for that particular case statement to see what happens.
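As a rough sketch of that rewrite (every object and column name here is hypothetical), the 300+ WHEN clauses become rows in a lookup table and the CASE becomes a join:

CREATE TABLE dbo.CodeTranslation
(
    SourceCode   varchar(50)  NOT NULL PRIMARY KEY,
    FriendlyName varchar(100) NOT NULL
);

-- one row per former WHEN condition
INSERT INTO dbo.CodeTranslation (SourceCode, FriendlyName)
VALUES ('ABC', 'Alphabet Soup'),
       ('DEF', 'Default Widget');
-- ...and so on for the remaining 300+ value/result pairs

-- the giant CASE expression becomes a join to the lookup table
SELECT  f.SomeColumn,
        ct.FriendlyName
FROM dbo.FactTable f
LEFT JOIN dbo.CodeTranslation ct
       ON ct.SourceCode = f.SourceCode;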

Immediately, we can see that the cached plan size improves dramatically! The cached plan size is down to 1.1MB. The compile memory is down to ~4MB and the compile time is down to 174ms. These are all significant improvements but now we see an interesting problem with the memory grant. This query is requesting over 400MB for the memory grant. That should be sounding sirens everywhere. Also of note is the estimated number of rows has dropped to ~28k (a significant improvement in estimate). This query is completing in under 1 second at this point compared to the 3 seconds otherwise but that memory grant is worrying me (and hopefully you too).

For giggles, I went ahead and added a columnstore index to this table (for good reasons beyond the scope of this article). Let’s see what that did.
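As a sketch only (assuming the columnstore went onto the new lookup table), that might look like:

CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_CodeTranslation
    ON dbo.CodeTranslation (SourceCode, FriendlyName);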

Notice how the memory grant has dropped to just over 28MB? This is an acceptable improvement for this query. Also note the query execution time improved by a decent amount. The query timing is now similar with both a warm cache and a cold cache.

Remember that note about the XML and converted tree bloat? With the case statement, the query plan was in excess of 37MB and the revised version is sitting at about 1.1MB. Looking at the quantity of text in each, I see 5819 lines of XML in the version with the case statement and only 3755 lines of XML in the table version of the query (and more room for optimization). The converted tree is more dramatic in that the case statement version of the converted tree requires 3157 lines of instruction and the version that uses a table instead only has 1353 lines of instruction.

What does any of that matter? It helps to illustrate that this 300+ condition filled CASE statement is causing bloat and wasting space along with execution resources.

Wrap

If you ever find yourself writing a query that requires an excessive amount of conditions to be evaluated in a CASE statement, you should reconsider the query design. Too many conditions can lead to plan bloat and excessive memory grants. Keep an eye out for stuff like that!

Thanks for reading! This has been a pseudo deep dive into a really ugly CASE statement. If you would like to read other articles that are less of a deep dive, you can check out the Back to Basics articles here, or the Extended Events articles here.

Defaults In msdb Database

Today is a day to discuss defaults. It started with the day being TSQL Tuesday and having a topic of “Say No to Defaults.” You can read more about that from the invite – here. I already participated in the party but did also want to discuss defaults a little bit more. That said, this article is not participating in the blog party. That would seem a bit silly.

While this post is not a part of the party, the defaults to be discussed are fairly important. I have seen severe consequences due to these defaults being ignored and not changed. So today, in addition to my earlier article (you can read it here), I implore you to make some fundamental changes to your production servers with regard to various defaults.

A Trio of msdb Defaults

There aren’t really that many defaults within the msdb database that must be changed, are there? I mean, seriously, beyond the defaults that are available to every database, what could possibly be unique to this database that could have a severe consequence?

I am so glad you asked!

The defaults in the msdb database are more about what is missing than what is actually there. By default, this database is missing quite a few things that could be deemed critical to your environment.

Let’s start with an easy one – Indexes

There are a few out there that may disagree, but the proof really is in the effect on performance for backup jobs and such. I have three indexes I like to put on every instance. I have seen the implementation of these indexes aid in improved job times as well as reduce the time needed to “clean up” the database.
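As a hedged sketch of the kind of supporting indexes that help the backup history tables (the index names and key columns here are my own picks, not necessarily the exact three):

USE msdb;
GO

-- speeds up lookups of backup history by database and date
CREATE NONCLUSTERED INDEX IX_backupset_database_name_finish_date
    ON dbo.backupset (database_name, backup_finish_date);

-- helps joins from backupset to the media tables
CREATE NONCLUSTERED INDEX IX_backupset_media_set_id
    ON dbo.backupset (media_set_id);

-- helps history cleanup find media rows quickly
CREATE NONCLUSTERED INDEX IX_backupmediafamily_media_set_id
    ON dbo.backupmediafamily (media_set_id);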

Easy enough. These indexes are very straightforward and pretty small in the grand scheme of things. But if these indexes can help improve performance by a factor of 10, then I am in favor of them (and I have seen that kind of performance gain).

Now that we have some supporting indexes to help a bit with performance, we should take a look at the next item. This one can help with job performance as well as help with keeping the msdb database nice and trim.

Data Pruning

I have walked into client instances that had backup history dating all the way back to 2005 and included two to three full backups a day per database with quarter-hourly log backups. Oh, and this was for an instance containing well north of 200 databases. Can you say sluggish backups and sluggish msdb overall?

The fix is very easy! Not only do I recommend pruning the backup history, but also the job history, mail history and maintenance plan history (eew – if you use those things). Think about it – do you really need to know that Job XYZ ran successfully in 2006 and only took 15 seconds? This is 2015 and that kind of data is probably not pertinent at this point.

The pruning of this data is not enabled by default! You have to configure this for each of the servers under your purview. Luckily, this is easy to do!
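As a sketch (the 90-day retention below is just a placeholder), the cleanup leans on the msdb system procedures built for this purpose:

DECLARE @OldestDate datetime = DATEADD(DAY, -90, GETDATE());   -- 90-day retention, adjust to your policy

-- backup history
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @OldestDate;

-- agent job history
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @OldestDate;

-- database mail history
EXEC msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @OldestDate;

-- maintenance plan history (if you use maintenance plans)
EXEC msdb.dbo.sp_maintplan_delete_log @oldest_time = @OldestDate;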

If you use this code sample, be sure to adjust the number of days shown in the retention to match your specific needs.

Now we have addressed a couple of defaults in msdb that can impact your performance. We are tidying up the database and in a much happier state these days. There is one more default, though, that is really critical to your data’s well being. This one is set within the msdb database but it really is for all of your databases!

Configuring Alerts!

I’m not talking about just any alerts. There are some very specific alerts that really should be configured. These are the alerts that can help you intervene to minimize corruption.

If you haven’t faced a problem with corruption – you will. It is only a matter of time. Corruption happens. When it happens, the earlier one can intervene, usually the better the outcome. Every minute counts, so why not try to reduce that time as much as possible?

This one is not terribly difficult to implement. I happen to have a query ready to go for that as well. All that needs to be done is a minor adjustment to the alert email address:
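A minimal sketch covers the high-severity errors (19 through 25) and the corruption-related errors 823, 824, and 825; the operator name and email address below are placeholders:

USE msdb;
GO

-- an operator to receive the notifications (adjust name and address)
EXEC dbo.sp_add_operator
    @name = N'DBA Team',
    @enabled = 1,
    @email_address = N'dba.team@yourdomain.com';

-- one example alert: error 824 (logical I/O error, a corruption indicator)
EXEC dbo.sp_add_alert
    @name = N'Error 824 - Logical IO Error',
    @message_id = 824,
    @severity = 0,                          -- must be 0 when @message_id is used
    @delay_between_responses = 60,
    @include_event_description_in = 1;      -- 1 = include details in the email

EXEC dbo.sp_add_notification
    @alert_name = N'Error 824 - Logical IO Error',
    @operator_name = N'DBA Team',
    @notification_method = 1;               -- 1 = email

-- repeat the alert/notification pair for errors 823 and 825,
-- and for severities 19 through 25 (@message_id = 0, @severity = 19..25)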

Wrap

Wow! Now there are three quick defaults that must be changed on every server. These defaults will help improve performance as well as help you stay on top of things when they start to go south (corruption). With timely notifications, and better performance, your servers will be happier, healthier, and longer lasting.

Thanks for reading! This has been another article in the Back to Basics series. You can read many more here.

Easy Permissions Audit

Published on: January 21, 2019

Something I have written about more than a handful of times is the need to audit. When people think about audits, the first thing that comes to mind is most likely the IRS and taxes. But more than taxes can be audited. Despite that tendency to think taxes first when somebody says “audit”, I am not writing about taxes. I typically write about the different topics within SQL Server that can be audited. Has Johnny logged on to the server? When was the last time the permissions to the database changed? Did somebody change an object? When was the last time stats were updated? How about auditing the success and failure of your backups (or all agent jobs for that matter)? Those are the topics I typically write about. Today, I am going to share a simple method to help perform an easy permissions audit – on a manual basis.

Easy Permissions Audit

As the article title denotes, today I will be discussing a simple way to get quick permissions for various principals. If you are looking for a more comprehensive and human friendly report version, I recommend reading any of my many other articles on the topic, such as these – here or here. In the second of those links there is a clue as to what tool we will be using in this easy version of the audit. That tool is called sp_helprotect.

The stored procedure sp_helprotect is a system stored procedure from Microsoft that can help divulge permissions for various principals in a simple table result set for you. Bearing in mind that I am keeping this to a simple audit, the examples will be simplistic in nature. Without further ado, here is the easy audit for your permissions.

sp_helprotect

This stored procedure has been around for many versions of SQL Server and comes with a few parameters to help narrow the results down to a specific principal and even to any object to which that principal may have been granted permissions. Here are those parameters for quick review:

@name = Filters the request down to a specific object or to a statement that can be executed against that object (e.g. alter, create, drop).

@username = The name of the principal for which permissions are returned.

@grantorname = The name of the principal that granted permissions.

@permissionarea = The group of grant-able permissions. There are two groups: object permissions and statement permissions. The default is to return both groups.

The easiest way to use sp_helprotect is as follows:
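In its simplest form, the procedure runs with no parameters at all:

EXEC sp_helprotect;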

Do you see how easy that is? This returns the following results for me.

Note that the results include roles and users for various objects. This is due to how the procedure was executed – with no parameters. With no parameters, the default behavior is to return as much information as possible for all objects and principals within the database.

What if I only want the results for a principal named “Gargouille”? I can do that in the following way.
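That just means passing the principal name to the @username parameter:

EXEC sp_helprotect @username = 'Gargouille';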

Now, I will receive the following results:

Recap

There are many ways to produce an audit. Today, I have shown how one can produce a permissions audit when in a hurry that will produce a simple result set for database permissions. I want to underscore that this was at the database level and not the server level. While this is an adequate means for a quick peek into some of the objects and granted permissions, I do recommend using one of the other methods I have introduced in the other articles for ongoing complex audits and results that are somewhat prettier and more human friendly to read.

For more articles on audits and auditing check here and here.

Quick Permissions Audit

Whether it is for a client, an audit, or just for good housekeeping, DBAs will often need to figure out who has access to what.  In addition, they may need to know by what means people have access within their databases.

When that need arises, it is frequently adequate to just perform a quick audit and then dive deeper if the quick audit shows anything questionable. Today, I am sharing a means to perform this quick audit. I will start with the logins (server principals), peek at the users (database principals), then the server roles, and wrap it up with the database roles. What I don’t do is dive into the Windows group memberships.  So if a Windows group exists within SQL Server as a login, I do not try to figure out who is a member of that group.  That can be obtained, but it is a separate discussion.

So, in the vein of a quick permissions audit, let’s get right to the script.

As you look through this code, you will notice that I have done a few interesting things.  Here is a quick recap:

  • Build a temp table based on a delimited list of databases input into a variable
  • Build a temp table based on a delimited list of logins input into a variable
  • Used a string splitter known as DelimitedSplit8K by Jeff Moden (google it) or LMGTFY
  • Used FOR XML to build a delimited string of Server Role Permissions
  • Used FOR XML to build a delimited string of Database Role Permissions
  • Used a CURSOR to build a dynamic SQL string to execute for each database
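As a much-simplified sketch of just the server-role portion of that approach (no input variables, no string splitter, no cursor), the FOR XML PATH trick to build the delimited role list per login looks like this:

-- server principals with a comma-delimited list of their server-role memberships
SELECT  sp.name      AS LoginName,
        sp.type_desc AS LoginType,
        STUFF((SELECT ', ' + r.name
                 FROM sys.server_role_members srm
                 JOIN sys.server_principals r
                   ON srm.role_principal_id = r.principal_id
                WHERE srm.member_principal_id = sp.principal_id
                  FOR XML PATH('')), 1, 2, '') AS ServerRoles
FROM sys.server_principals sp
WHERE sp.type IN ('S', 'U', 'G')   -- SQL logins, Windows logins, Windows groups
ORDER BY sp.name;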

Holy cow! That is a lot of stuff for such a simple quick audit.  A cursor? Seriously? Well yeah! It is OK in this type of scenario to use a cursor.  This is a well-performing cursor for an administrative function. In addition, I chose to use sub-queries to build my delimited string of permissions so I could keep those things together in a single record per login/role.

If I run that query on my instance of SQL Server, I would end up with something that looks like the following.

Now, obviously, unless you had these same databases, logins, and even the same permissions assigned to these logins, your results will be different. So, make sure you go and change those inputs for logins and databases to be queried.

For more articles on audits and auditing check here and here.

ArithAbort and SQL Agent Failures

Categories: News, Professional, Security, SSC
Published on: January 7, 2019

ARITHABORT

I was coding along one day, working on rolling out some monitoring for a client—monitoring that I had used for previous clients.  The code was pretty straightforward and addressed a concern for the client.  Having used it for several previous clients, I felt pretty confident in the code.  The main purpose was simply to audit index definition changes.  It was rolled into a stored procedure and designed to be deployed to a “DBA” database.

With confidence pretty steady on this code, I deployed the stored procedure along with all of the supporting tables and structures.  Then I proceeded to deploy the SQL Agent job that controls the execution of the stored procedure.  When I deploy a job I like to test the job to ensure it will work.  As luck would have it, the job failed on the first test run.  I instantly became baffled.  Here is the error message.

Job Name DBA – Track Index Changes
Step Name exec proc
Duration 00:00:08
Sql Severity 16
Sql Message ID 1934
Operator Emailed
Operator Net sent
Operator Paged
Retries Attempted 0

Message
Executed as user: sa.

INSERT failed because the following SET options have incorrect settings: ‘ARITHABORT’.
Verify that SET options are correct for use with
indexed views
and/or indexes on computed columns
and/or filtered indexes
and/or query notifications
and/or XML data type methods
and/or spatial index operations.
[SQLSTATE 42000] (Error 1934). The step failed.

Immediately I started looking at my creation scripts for the tables.  Nope, no XML indexes, no spatial indexes, no filtered indexes, no indexes on computed columns (not even any computed columns),  and no query notifications.

Next I started checking the database settings.  Maybe the vendor for the application this client bought had set something for all of the databases regarding ARITHABORT.
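A quick check along those lines (the gist of it, at least):

SELECT  name,
        is_arithabort_on
FROM sys.databases
ORDER BY name;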

Querying sys.databases, we could easily see that ARITHABORT is not enabled at the database level (just like previous implementations).  Well, this is a minor head scratcher at this point.  The code works in other environments, the database setting is the same as other environments.  I guess I could try setting ARITHABORT within the stored procedure and then re-test.
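As a purely hypothetical skeleton (the procedure name and body are invented for illustration), the change amounts to one more SET statement near the top of the procedure:

ALTER PROCEDURE dbo.AuditIndexChanges   -- hypothetical name and body
AS
BEGIN
    SET NOCOUNT ON;
    SET ARITHABORT ON;   -- the added line

    /* ...the original index-change auditing logic, unchanged... */
END;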

 

When I add the last line, “SET ARITHABORT ON;”, to this stored procedure and then rerun the job, it runs without any error.  It is a simple fix but the story doesn’t end there.

After making that change, I decided to go another round with the stored procedure and the ARITHABORT setting.  I removed it in the next round and decided to test the stored procedure directly.  Running the stored procedure in Management Studio with or without the ARITHABORT setting produces the same result.  That result is that both work as desired without any error.  For giggles, I ran the job again and discovered that the job still fails.  In the end, it appears to be something that the SQL Agent is setting as a part of its connection back to the database in this case.

In addition to this minor nuisance, you saw that the error outlines several possible causes for failure with regards to ARITHABORT.  One that I found that can be of big concern is with filtered indexes.  Check your connection settings from your application when dealing with any filtered indexes.  Filtered Indexes have produced this error in quite a few cases I have been asked to help fix.  Just a thought for something you should monitor and check should you run into this error or if you are considering the use of filtered indexes.

Putting a Bow on it…

In conclusion, this can be a short-lived head scratcher.  Pay close attention to what has changed in the environment.  Test alternatives.  And check those connection strings.

If you are curious, there are a few ways to check connection string settings. One of my favorites is with Extended Events. To read more about Extended Events, I recommend this resource – here. In addition, I recommend checking out some of my other articles showing basics in troubleshooting which can be found here.

Encrypting a Database Never Finishes

Categories: News, Professional, Security, SSC
Published on: December 20, 2018

There is plenty of legislation and regulation in place these days that strongly suggest the encryption of data within a database.  In SQL Server, we have the ability to comply with these regulations in a couple of different ways.  We can choose to encrypt at the column level or we can choose to encrypt the entire database using TDE (transparent data encryption).

Given these opportunities and methods of encrypting a database and protecting data, some clients will want to comply with the regulation and move forward with encryption.  Every now and again, the client might run into something they can enable themselves.  TDE happens to be one of those things.  Enabling TDE is pretty easy.  Sometimes things run very smoothly and sometimes there is a problem.  Unfortunately, the problem may not always be immediately evident.

This is one of those times.  If you enable a setting in a database, you would expect it to be in effect and to be working.  Sometimes, with TDE, that change doesn’t take effect. What actually happens is the switch is thrown, the database tries to comply, but something prevents the database from completing the directive it was just given.  As luck would have it, I had the opportunity to assist recently with such a case.

Infinitely Encrypting a Database

Before diving too far down into the problem, let’s take a look at how we can gauge and measure the encryption progress.  Microsoft has provided a dynamic management view to see and track the status and progress of encryption that we have tried to enable.  This view is sys.dm_database_encryption_keys.  Within that view there are a few important and useful fields.  One field is the encryption thumbprint, another is the encryption state, and another tells us the percent that has been completed.

With a little tsql from the toolbelt, we can pull back the essential information to monitor the encryption progress.
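Something along these lines does the job; the CASE mapping follows the documented encryption_state values:

SELECT  d.name              AS DatabaseName,
        dek.percent_complete,
        CASE dek.encryption_state
            WHEN 0 THEN 'No database encryption key present, no encryption'
            WHEN 1 THEN 'Unencrypted'
            WHEN 2 THEN 'Encryption in progress'
            WHEN 3 THEN 'Encrypted'
            WHEN 4 THEN 'Key change in progress'
            WHEN 5 THEN 'Decryption in progress'
            WHEN 6 THEN 'Protection change in progress'
        END                 AS EncryptionStateDesc,
        dek.encryptor_thumbprint
FROM sys.dm_database_encryption_keys dek
JOIN sys.databases d
    ON dek.database_id = d.database_id;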

You’ll notice that I have a case statement in there to transform the encryption_state values into a human friendly form.  Sadly, the human friendly values are not stored in the database and are only documented in MSDN / BOL.

Now on to the problem at hand—the encryption that starts but never ends.  Given that I have a database named “Published”, I can enable encryption with the following script.
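As a sketch (the certificate name TDECert is a placeholder, and a database master key and server certificate are assumed to already exist in master):

USE [Published];
GO

CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO

ALTER DATABASE [Published] SET ENCRYPTION ON;
GO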

After issuing the encryption command, I can then pull up my “status” script and see how things are rolling.

In this example, and for the sake of this article, this database will remain at 0% complete for days and months if allowed to remain that way.  Notice that it still thinks it is progressing, but nothing is actually getting done.  This kind of result can also be seen after a database reaches 99% complete.  At some point, something occurred that caused grief for the encrypting process.  As soon as that happens, the report will revert back to 0% complete and just stall indefinitely.  The types of things that can cause this vary from corruption to index rebuilds occurring at the same time as the attempt at encryption.

When a stall such as this occurs, the fix could be as simple as restarting the encryption with the statement previously used.  Here that is again (bearing in mind to substitute the appropriate database name for the one I used in this demo):
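In other words, simply re-issue the same command:

ALTER DATABASE [Published] SET ENCRYPTION ON;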

In some cases, that won’t work.  And sometimes you just may want to proceed very cautiously.  In those events, you can get your database back on track to being encrypted by using the following:
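A sketch of that approach uses trace flag 5004 (commonly used to pause the TDE encryption scan) and then re-issues the encryption command:

DBCC TRACEON (5004);          -- pause/stop the stalled encryption scan
GO
DBCC TRACEOFF (5004);         -- turn the trace flag back off
GO
ALTER DATABASE [Published] SET ENCRYPTION ON;   -- restart the scan
GO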

The use of this trace flag forces the page scan, used to encrypt the database, to stop (even though it already appears to have stopped).

If I check my encryption status again, I should see the percentages growing until 100% is achieved and eventually the state that declares the database as being encrypted.  This is demonstrated in the following three screenshots.

And finally we have an encrypted database.

Conclusion

Transparent Data Encryption is useful in helping get a database into compliance with several regulations.  It is also fairly easy to implement.  Occasionally, it may only appear to have completed successfully.  When the SQL statement completes, TDE is not entirely in place.  Though it is “enabled”, it may still be waiting for the encryption phase to complete.  Once the statement to encrypt the database has been issued, it is important to follow up with a bit of monitoring to ensure the database progresses to the encrypted state.  If that doesn’t happen, you could end up in a state where you think the database is encrypted and it actually just stalled.

If you enjoy reading about security and encryption, check out these articles related to security and audits.

Synonyms in SQL Server – Good and Bad

When SQL Server 2005 was released, a nifty little feature was included called synonyms.  Despite being around since SQL Server 2005, I think this feature is often under-utilized or, more importantly, it is implemented in a very bad way.

Today I want to share a couple of examples.  We will take a look at examples of both good and bad implementations of synonyms.

First, let’s take a look at the syntax for creating a synonym.  Per BOL (and for the fun of it, here is the 2005 link).
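It boils down to this:

CREATE SYNONYM [ schema_name_1. ] synonym_name FOR <object>

<object> ::=
{
    [ server_name.[ database_name ].[ schema_name_2 ].
    | database_name.[ schema_name_2 ].
    | schema_name_2.
    ] object_name
}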

So a sample implementation of a Synonym could be something like the following.
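Here is a hypothetical sample (the synonym, schema, and database names are invented; note the environment suffix on the database name, which matters for a scenario later in this article):

CREATE SYNONYM dbo.StagedSalesOrders
FOR StagingDB_PROD.dbo.SalesOrders;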

Before we delve into that sample synonym, let’s look at an example of a really bad implementation.

The BAD

While working with a client, I received a request to look into why a linked server query was failing.  (This isn’t necessarily the bad, just hang in there for a bit.)  The message was something like this:

The OLE DB provider “SQLNCLI10” for linked server “blahblahblah” indicates that either the object has no columns or the current user does not have permissions on that object.

The error message seems at least semi-descriptive and gives a starting point.  In this case, I decided to verify the linked server was created properly, verified that the permissions were set properly, and even tested the linked server.  On the source (linked) server and the destination server (let’s call the source server ServerA and the destination server ServerB), I verified that permissions were in place for each database to be touched.  Still no dice!

Well, let’s go take a look and see if that referenced table actually exists.  It did not!  Does it exist as a view?  It did not!  Alas, the table existed as a synonym.  This is where it gets wonky.  In looking at the definition of the synonym, I found that the object it pointed to was a table across a linked server.  To top things off, that linked server pointed right back to the originating server that was coming across the link in the first place.  So yes, ServerB initiated a query against ServerA only to pull the data right back from ServerB.  The data needed (as defined by the vendor) was supposedly available on ServerA.  In reality, that data had been sitting on ServerB the whole time.

At any rate, thanks to having a synonym for each and every table on ServerA that referenced a table across a linked server on ServerB, we had mass confusion.  In the end, the query was far simpler to execute by just pulling the data directly from the originating server (ServerB).

This implementation of a synonym was not the best.  All it did was cause confusion, create documentation inaccuracies and delay the developer from accomplishing her task.  Do you really need 1000s of synonyms in your database?  Do you need all of them going across a linked server?  If you do, did you ever think about the potential for performance issues?  (The vendor in this case used those linked servers and synonyms to perform a data conversion that took 36 hrs each time for a rather small dataset – ouch!!!!).

On the other Hand

Imagine, if you will, two databases sitting on the same box.  One database will be for your user data, and the other for “staging” data for processes such as those related to ETL.  Imagine further that, for some inane reason, the naming standard of your databases must include the suffix denoting the environment of the database.

Now picture an SSIS package that must utilize tables from both databases in data sources somehow.  At least one of the tables has millions of records.  And both of the tables have nearly 100 columns.  Even just pulling in the minimum amount of data using a query from each source can cause memory issues.  Two separate data sources means you will likely have a sort transformation (for each source) as well as a join transformation.

Trying to reduce the amount of data in the larger table source could be done via TSQL.  But to reference a database in one environment versus another environment means a code change with each deployment (due to the TSQL – think three part naming).  So you have been hampered by the environment.  Or have you?

By using a synonym in this situation, the data can be joined in a tsql data source by referencing that synonym.  Let’s look back at the sample synonym posted earlier in this article.
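Here it is again (same hypothetical names as before):

CREATE SYNONYM dbo.StagedSalesOrders
FOR StagingDB_PROD.dbo.SalesOrders;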

You can see that this synonym follows the same sort of naming standards as was just laid out in the preceding scenario.  If I create a synonym with the same name in each environment, referencing the appropriately named database for that environment, I have just opened up a performance tweak for my SSIS data source.

By implementing this slight tweak, I have been able to gain a 10x improvement in package performance.  I am now requiring SSIS to ingest fewer records and thus chew up less memory.  Fewer transformations are required, and the package can get straight to the transformations that matter rather than tinkering around just to get the data into a usable state for them.

There are other benefits within SSIS to using synonyms for databases on the same server as well, especially when dealing with a naming standard that requires the databases to be named differently in each environment.

Conclusion

Used well, a synonym can be a huge asset; used poorly, it can be a significant damper on performance.  There are benefits and uses for these nifty little things.  Check them out and let us know how you have been able to put synonyms to use to benefit you.

Check out some of these other articles on synonyms here and here.

Maintenance Plan Owner – Back to Basics

We all inherit things from time to time through our profession.  Sometimes we inherit some good things, sometimes we inherit some things that are not so good.  Other times we inherit some things that are just plain annoying.  Yet other times, we inherit things that may be annoying and we probably just haven’t discovered them yet.

Dizzying, I know.

Inheritance

Have you ever taken over a server that had several maintenance plans on it?  Have you ever really checked who the owner of those plans is?  Or, maybe you had a failing job relating to one of these maintenance plans and you changed the job owner, but did you really fix the root cause?  That could be one of those things that you inherited that could be annoying but you just don’t know it yet.

Step by Step

No this is not New Kids on the Block (I think I just threw up in my mouth thinking that).

Let’s create a generic maintenance plan and see what happens.

The first thing we do is navigate to Maintenance Plans under the Management menu in Management Studio.

 

Right Click the Maintenance Plan folder and select New Maintenance Plan… from the context menu.  This will prompt us with the following dialog box.

In this box, we can type a name for this Maintenance Plan that is to be created.  I chose MaintPlanOwner, since that is the topic of this article.

After clicking OK on this dialog box, you will be presented with a blank canvas with which to design your maintenance plan.  I have chosen a simple task for the purposes of this article.

I will create a subplan named Statistics and add the Update Statistics task to the canvas.

You can see this illustrated to the left.  I chose to update the statistics on all databases and left all other options at their defaults – for the simplicity of this article.

At this point, the only thing left to do is to save this Maintenance Plan.  Once the plan is saved, then we can move on to the next step – some fun with TSQL.

 

 

Fun with TSQL

This is the stage of the article where we get to play with TSQL and investigate at a high level the Maintenance Plan we just created.

Within the msdb database, we have some system tables that store information about SSIS packages, DTS packages, and Maintenance Plans.  We will be investigating from a SQL 2008 and SQL 2005 standpoint (it changed in 2005 and then again in 2008).

In SQL 2005, we can query the sysdtspackages90 and sysdtspackagefolders90 to gain insight into who owns these Maintenance Plans.  In SQL 2008 and up, we can query sysssispackages and sysssispackagefolders to gain the same insight.  These system tables are within the msdb database.

In SQL Server, we can use the following to find that I am now the owner of that maintenance plan we just created.
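As a hedged sketch of that idea (the plan name matches the one created above), pick the right msdb table for the version via dynamic SQL and resolve the owner through sys.server_principals:

DECLARE @sql nvarchar(max);

IF CONVERT(int, PARSENAME(CONVERT(sysname, SERVERPROPERTY('ProductVersion')), 4)) >= 10
    -- SQL 2008 and up
    SET @sql = N'SELECT p.name AS PlanName, sp.name AS PlanOwner
                   FROM msdb.dbo.sysssispackages p
                   JOIN sys.server_principals sp ON p.ownersid = sp.sid
                  WHERE p.name = N''MaintPlanOwner'';';
ELSE
    -- SQL 2005
    SET @sql = N'SELECT p.name AS PlanName, sp.name AS PlanOwner
                   FROM msdb.dbo.sysdtspackages90 p
                   JOIN sys.server_principals sp ON p.ownersid = sp.sid
                  WHERE p.name = N''MaintPlanOwner'';';

EXEC sys.sp_executesql @sql;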

Notice that in this query, I delve out to the sys.server_principals catalog view.  I did this to retrieve the name of the owner of the package found in the sysdtspackages90 or sysssispackages table, respective to the version of SQL Server. I am also running a dynamic SQL query to support both tables, dependent on the version of SQL Server.  I figured this might be a tad more helpful than the previous version here. This query would yield the following result set for that new “Maintenance Plan” that was just created.

Caveat

Let’s assume that this package is scheduled via a SQL Agent job on a production server.  I then get moved to a different department and no longer have permissions on this particular production server.  The job will start failing due to the principal not having access.  One fix would be to change the owner of the job.

That will work.  However, there is a problem with that fix.  As soon as somebody opens and saves the Maintenance Plan, the owner of the job will revert back to the owner of the Maintenance Plan.  When that happens, then the job will fail again.

A permanent fix is needed.  The permanent fix is to change the owner of the Maintenance Plan.  The following will change the owner to ‘sa’ for both SQL 2005 and SQL 2008 (and up).

SQL 2005
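A sketch of that change (the plan name is the one used in this article; direct updates to msdb tables warrant care):

UPDATE msdb.dbo.sysdtspackages90
SET    ownersid = SUSER_SID('sa')
WHERE  name = 'MaintPlanOwner';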

SQL 2008
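And the equivalent sketch against the newer table:

UPDATE msdb.dbo.sysssispackages
SET    ownersid = SUSER_SID('sa')
WHERE  name = 'MaintPlanOwner';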

Now if you run the code used earlier to investigate, you will find that the owner has indeed changed.  The results of that query should be similar to the following.

There you have it.  No more hair tugging over something as benign as the owner of a Maintenance Plan.  This is one of those things that should be looked at as soon as you inherit a new server.

The Wrap

In this article I took a rather long route to a simple fix. It’s easy to try a quick fix, like changing the owner of the failing job, thinking it will help. It isn’t illogical to try it; it just doesn’t stick. In the end, getting to know where the Maintenance Plan metadata lives and what the failure is really telling you is most helpful. Sometimes, it just takes a few more steps to get to the real root of the problem.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

SHUTDOWN SQL Server

Categories: News, Professional, Security, SSC
Published on: December 3, 2018

Recently a friend by the name of Chris Bell (blog | twitter) wrote about an easy way to disrupt SQL Server.  That disruption comes in the form of the SHUTDOWN TSQL command.  You can read what Chris wrote from his article here.

Granted, you do need to have elevated permissions such as sysadmin, serveradmin or securityadmin.  I include securityadmin even though Chris did not because a securityadmin can create an account and grant that account sysadmin permissions.  And in Chris’ article he only discusses the threat to the SQL Server process. When I read the article, I wondered to myself if the threat stopped there. I also wondered to what extent could I cause disruption. Yes, the gremlin in me did start to come out.

When I say I was curious what level of disruption could be caused, I really wanted to know if I could reboot the server from within SQL Server or even if I could simply shut down the entire server. Well, you can certainly bounce the server from within a TSQL script – if you have adequate permissions (or know how to elevate your permissions).

The first step is rather simple: check to see if xp_cmdshell is enabled. If it is not, then enable it.
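The standard way to do that is via sp_configure:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;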

Now that xp_cmdshell is enabled, let’s bounce the server.
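As a sketch (the server name below is a hypothetical sandbox, and the check is there as a guard so this only fires where intended):

IF @@SERVERNAME = N'MySandboxServer'   -- hypothetical sandbox name
BEGIN
    EXEC master.dbo.xp_cmdshell 'shutdown /r /t 60 /c "SQL Server host restarting in 60 seconds"';
END;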

Now this code will not work on your server as-is. I have coded it to target my sandbox server. That said, if the server name matches, you should get a prompt informing you that SQL Server will shut down. And that shutdown will happen after 60 seconds.

If you happen to see a message reporting “Access Denied”, that is also easy to circumvent—well, at least up to SQL Server 2016. Prior to SQL Server 2016, I could utilize xp_cmdshell to add the service account (let’s say it is NT Service\MSSQLSERVER) to the local Administrators group through the use of net localgroup /add.  However, in SQL Server 2016, you will continue to get an access denied message (even if you try to use a proxy account). Here is an example of that exploit as well (prior to 2016):
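A sketch of that pre-2016 exploit (assuming the default NT Service account name):

EXEC master.dbo.xp_cmdshell 'net localgroup administrators "NT Service\MSSQLSERVER" /add';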

Let’s say you have done your duty and changed the service account off of the default local service to a domain account or a local account (not nt service), but you decided to add that account to the local Administrators group. You have actually opened yourself up to plenty of other problems. Even in SQL 2016, if the service account for SQL Server is in the Local Admins group, then the shutdown SQL Server example shared here will work and force a shutdown.

So, in the end, please be mindful of the service account in use. And be mindful of the level of permissions that service account has been granted at the OS level. If you are not careful, you could have some rogue developer or intruder shutting you down before you know it.

Now would be a good time to go and review your security. Check for users with too much access. Here are some articles that can help with that (here and here).

Refresh SQL Modules

As is true in most facets of life, things tend to get stale and old. Sometimes this staleness can be visibly represented as the wrinkles on your face. Other times, as with SQL Server, it may mean that a stored procedure or view stops working.

Unfortunately, when it comes to SQL Server, the symptoms are not as readily visible as aging lines would be. In SQL Server, the symptoms may be as random and difficult to recognize as a stored procedure that just suddenly stops working.

What makes this even more difficult is that the stored procedure (that is no longer working) may appear to be entirely unchanged. These failures can occur when other changes have occurred to the system as well as when no change has occurred. Imagine the joys you could experience while trying to troubleshoot this kind of problem.

If you were keen, you would probably implement any number of Extended Event Sessions to help troubleshoot the issue. If you are unfamiliar, you might want a refresher course on how to set up an XE session, which you can read in one of these articles, here and here.

If you are curious, there are more XE articles on my blog – here.

Metadata

There are many possible causes for the metadata to become fouled up inside the SQL modules on the database server. Invariably, according to the interested parties, nothing has changed in months! Right or wrong, there is still a problem to fix. Fortunately, the problem is easy enough to fix.

Sure, all of the modules could be re-deployed from source control. But, given that the modules don’t appear to have lost any of the code within them, re-deploying code seems to be a bit of overkill. We don’t need to go that far. We can simply run sp_refreshsqlmodule for the stored modules (procs, triggers, functions, views, etc.). This would be pretty simple with the following script:
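Something along these lines generates the needed statements (a sketch, not necessarily the exact script; schema-bound modules are skipped since they cannot be refreshed):

SELECT  'EXEC sys.sp_refreshsqlmodule N'''
        + QUOTENAME(OBJECT_SCHEMA_NAME(m.object_id)) + '.'
        + QUOTENAME(OBJECT_NAME(m.object_id)) + ''';' AS RefreshStmt
FROM sys.sql_modules m
JOIN sys.objects o
    ON m.object_id = o.object_id
WHERE o.type IN ('P', 'V', 'FN', 'IF', 'TF', 'TR')   -- procs, views, functions, triggers
  AND m.is_schema_bound = 0                          -- schema-bound modules cannot be refreshed
ORDER BY o.type, o.name;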

In this script, I am just going to refresh all modules in the database (Adventureworks2014 in this case). I could modify the script to trim it down to a specific module or set of modules. Instead, I leave that as homework for you.

That said, I do have a slightly different alternative that looks for any tables changed on a specific date. After finding those changed tables, I then update all modules related to the changed table(s). Here is how that would look:
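As a sketch of that alternative (the date is hypothetical), sys.sql_expression_dependencies finds the modules that reference the changed tables:

DECLARE @ChangeDate date = '20190115';   -- hypothetical date; adjust as needed

SELECT DISTINCT
        'EXEC sys.sp_refreshsqlmodule N'''
        + QUOTENAME(OBJECT_SCHEMA_NAME(d.referencing_id)) + '.'
        + QUOTENAME(OBJECT_NAME(d.referencing_id)) + ''';' AS RefreshStmt
FROM sys.sql_expression_dependencies d
JOIN sys.tables t
    ON d.referenced_id = t.object_id
JOIN sys.sql_modules m
    ON d.referencing_id = m.object_id
WHERE CONVERT(date, t.modify_date) = @ChangeDate
  AND m.is_schema_bound = 0;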

From here, suppose you want to check for a range of dates where tables were modified. Again, I will leave that as homework for you.

After executing these scripts, I will have an output similar to the following:

This leaves you with one more step to perform – copy the values from the RefreshStmt column to a new query window and execute the statements.

The Wrap

In this article I showed a very simple solution to a problem that plagues some environments – out of date metadata. Forcing a module refresh will often resolve these types of issues.

With such a beautifully simple solution, I could have grouped this article into my “Back to Basics” series, but I did not. That said, there are some pretty interesting articles in the series including (but not limited to): Backups, backup history and user logins. Check them out!
