The Curiously Large CASE Statement

 

A powerful little expression in SQL Server is the CASE statement. The CASE statement can “transform” data per business rules to help better understand the data in a given query. In its simplest form, the CASE statement evaluates a list of conditions (in order) and returns the result expression for the first condition that matches.

Most uses of the CASE statement are benign enough that one wouldn’t even bat an eye at them. Every now and again, along comes a nifty little query that contains over 300 conditions in the CASE statement. Something that extreme certainly deserves a second (third or even fourth) look. In this article, I am going to explore one such CASE statement in an effort to squeeze more efficiency out of it.

300!

300+ conditions may be a bit hard to imagine. I know I had to do a double take at the list more than once. Immediately upon seeing such an awful piece of code (not pulling punches there), I really wanted to dump the conditions/evaluations into a table and join to it. Eventually, I did put it into a table, but not before catching some metrics. Let’s take a look at what this thing looked like from the execution plan properties perspective.

In the preceding image, note the query map in the scrollbar. I have the case statement circled in green. Yes – it is really that large. Next, I have “cached plan size”, “estimated number of rows”, and “estimated subtree cost” circled in red. Each of these shows something that is a little alarming to me. First, the plan size is 48MB! Next, the cost of the query is a little elevated at 51 and change. Last is what seems to be a high value for the estimated number of rows to be returned (only ~6k actually get returned).

On the plus side, the query only takes 381ms, or does it? The query actually takes 3 seconds to complete despite the timing in the properties window. Sometimes, we have to take some of these values in the properties window with a grain of salt.

What if I try to run this query on SQL Server 2017 instead of SQL Server 2014? Let’s take a look at what that might look like.

First, we have a few properties that seem to indicate that we have moved in a slightly better direction with this particular query. The cached plan size has dropped all the way down to just 37MB. The estimated cost dropped to 36 and change, and the elapsed time dropped to 353ms. Nothing terribly earth shattering there, but just by changing to SQL Server 2017 we see a bit of an improvement in this case.

That said, take a look at the compile memory and compile time. Memory is 30,544KB and the time is 1,324ms. Aha – we see why this query takes 3 seconds on SQL 2014 and roughly 2 seconds on SQL 2017: high compile times. The compile memory is also higher than I would like, so let’s take a look at why that might be. It’s time to look at the plan XML and the converted tree for the query (because it is fun).

This is where we get a little geeky with the CASE statement. In the execution plan XML (and the converted tree we will see in a bit), every evaluation criterion for the CASE statement must be represented with some sort of instruction on what should be done and how. In the case of the XML, we have multiple scalar operators to denote the CASE statement and then each of the evaluations for the CASE statement.

Once we enter the CASE statement, the first thing we see is that there is an “OR” operation (circled in red). Then for each of the values to be compared, we will see an “Equality” operation (circled in green) followed by the value to be evaluated. For just the three values I have in the image, notice how that immediately bloats the execution plan XML for this query.

In the preceding image, we see the converted tree for this query. We see the same pattern evolve here. There is a scalar operator for each value to be evaluated. In this image, I just have the equality comparisons. However, there is also an OR operator that is defined further up in this converted tree. Could this possibly be the reason for the bloated compile time and compile memory?

Let’s evolve that query just a bit and use a table and a join for that particular case statement to see what happens.
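The article’s actual table and query are not reproduced here, so the following is only a minimal sketch of the rewrite using made-up names (dbo.CodeMap, dbo.SomeBigTable, SourceCode, MappedValue): the 300+ WHEN branches become rows in a lookup table that the query joins to.

-- lookup table standing in for the giant CASE statement (assumed names)
CREATE TABLE dbo.CodeMap (
    SourceCode  varchar(50)  NOT NULL PRIMARY KEY CLUSTERED,
    MappedValue varchar(100) NOT NULL
);

-- each WHEN branch of the CASE becomes a row
INSERT INTO dbo.CodeMap (SourceCode, MappedValue)
VALUES ('ABC', 'Category 1'),
       ('DEF', 'Category 1'),
       ('XYZ', 'Category 2');  -- ...and so on for the remaining conditions

-- the original pattern
--   SELECT CASE WHEN t.SourceCode IN ('ABC', 'DEF') THEN 'Category 1' ... END
-- becomes a join
SELECT t.SomeColumn
     , cm.MappedValue
FROM dbo.SomeBigTable t
INNER JOIN dbo.CodeMap cm
    ON cm.SourceCode = t.SourceCode;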

Immediately, we can see that the cached plan size improves dramatically! The cached plan size is down to 1.1MB. The compile memory is down to ~4MB and the compile time is down to 174ms. These are all significant improvements, but now we see an interesting problem with the memory grant. This query is requesting over 400MB for the memory grant. That should be setting off sirens everywhere. Also of note, the estimated number of rows has dropped to ~28k (a significant improvement in the estimate). The query now completes in under 1 second, compared to the 3 seconds otherwise, but that memory grant is worrying me (and hopefully you too).

For giggles, I went ahead and added a columnstore index to this table (for good reasons beyond the scope of this article). Let’s see what that did.
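The exact index definition is not shown in the article; a nonclustered columnstore on the hypothetical lookup table from the earlier sketch might look like the following (assuming SQL Server 2016 or later so the table stays updatable).

CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_CodeMap
    ON dbo.CodeMap (SourceCode, MappedValue);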

Notice how the memory grant has dropped to just over 28MB? This is an acceptable improvement for this query. Also note that the query execution time improved by a decent amount. The timing for this query is now similar whether the cache is warm or cold.

Remember that note about the XML and converted tree bloat? With the case statement, the query plan was in excess of 37MB and the revised version is sitting at about 1.1MB. Looking at the quantity of text in each, I see 5819 lines of XML in the version with the case statement and only 3755 lines of XML in the table version of the query (and more room for optimization). The converted tree is more dramatic in that the case statement version of the converted tree requires 3157 lines of instruction and the version that uses a table instead only has 1353 lines of instruction.

What does any of that matter? It helps to illustrate that this 300+ condition filled CASE statement is causing bloat and wasting space along with execution resources.

Wrap

If you ever find yourself writing a query that requires an excessive amount of conditions to be evaluated in a CASE statement, you should reconsider the query design. Too many conditions can lead to plan bloat and excessive memory grants. Keep an eye out for stuff like that!

Thanks for reading! This has been a pseudo deep dive into a really ugly CASE statement. If you would like to read other articles that are less of a deep dive, you can check out the Back to Basics articles here, or the Extended Events articles here.

Disk Space and SQL Server

Disk Space in SQL Server

One of the frequently required job functions of the database administrator is to track disk space consumption. Whether this requirement comes from management or from a learning opportunity after a production outage, the need exists.

As a hard-working DBA, you want to make sure you hit all of the notes to make management sing your praises. Knowing just when the databases may fill the drives, so that a production outage can be prevented, just happens to be one of those sharp notes that could result in a raise and management singing hallelujah. The problem is, how do you do it from within SQL Server? You are just a DBA after all, and the disk system is not your domain, right?

Trying to figure it out, you come across a pretty cool function within SQL Server. The name of the function is sys.dm_os_volume_stats. Bonus! This is an excellent discovery, right? Let’s see just how it might work. First, a sample query:
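The sample query itself did not survive into this copy; a query along these lines (sys.master_files cross applied to the DMF, with the byte counts converted to GB) is the typical pattern and is what the results described below assume.

SELECT DISTINCT
      vs.volume_mount_point
    , vs.logical_volume_name
    , vs.file_system_type
    , CONVERT(decimal(18, 2), vs.total_bytes / 1073741824.0)     AS TotalGB
    , CONVERT(decimal(18, 2), vs.available_bytes / 1073741824.0) AS FreeSpaceGB
FROM sys.master_files mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) vs;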

If I run that on my local system, I might end up with something that looks like the following:

Looking at the image, you may be wondering why I have highlighted a couple of things. You may also be wondering why I used the word “might” in the previous paragraph. The reasoning will become more evident as we progress. For now, you have resolved to continue testing the script, so you execute it again and end up with something that may look like the following (for the same server):

Whoa! What just happened there? Why are there two listings for the C: drive? Why does each register a different value for the FreeSpace column? In addition, with no additional usage on the C: drive (as verified through other tools), the FreeSpace value is changing between executions as well as within the same execution. This is problematic, so you continue testing:

And yet again!

This can’t be correct, can it? Just for giggles, let’s modify the script a bit to see if there are any additional clues. Hopefully the following changed script will shed some light on this:
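The modified script is not shown in this copy either; one plausible variation, sketched below, simply drops the DISTINCT and returns the raw per-file view of each volume, which is enough to expose the drift in FreeSpace between rows.

SELECT
      mf.database_id
    , vs.volume_mount_point
    , vs.logical_volume_name
    , CONVERT(decimal(18, 2), vs.available_bytes / 1073741824.0) AS FreeSpaceGB
FROM sys.master_files mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) vs
ORDER BY vs.volume_mount_point, FreeSpaceGB;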

This script yields the following potential results:

Look at the different highlighted areas! There are three different values for FreeSpace for the C: drive in this particular execution. The case of the drive letter plays no role in whether the value is recorded differently or not. This seems to be more of a bug within the dynamic management function. From execution to execution, using this particular method, one could end up with duplicate entries that carry distinct values. The sort order of the results could also differ between executions (though we could fix that).

All of these tests were run on my local machine and I really do only have one C: drive. I should never receive multiple entries back for any drive. If you use this particular DMF to track space usage, it could be somewhat problematic when the duplicate drive data pops up. How do we get around it, you ask? Here is an alternate query that, so far in my testing, has not produced this duplication:
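The original script is not reproduced here; this sketch follows the description in the next paragraph (CTEs plus ROW_NUMBER partitioned by the upper-cased drive letter) so that only one row per volume survives.

WITH vols AS (
    SELECT DISTINCT
          UPPER(vs.volume_mount_point) AS MountPoint
        , vs.logical_volume_name
        , vs.total_bytes
        , vs.available_bytes
    FROM sys.master_files mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) vs
)
, ranked AS (
    SELECT MountPoint
         , logical_volume_name
         , total_bytes
         , available_bytes
         , ROW_NUMBER() OVER (PARTITION BY MountPoint ORDER BY available_bytes DESC) AS rn
    FROM vols
)
SELECT MountPoint
     , logical_volume_name
     , CONVERT(decimal(18, 2), total_bytes / 1073741824.0)     AS TotalGB
     , CONVERT(decimal(18, 2), available_bytes / 1073741824.0) AS FreeSpaceGB
FROM ranked
WHERE rn = 1;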

This version of the script is not terribly more complex, but it will prove to be more reliable. You can see I used some CTEs to provide a little trickery and ensure that I limit my results. What if it is a mount point or a non-standard drive letter? I have not tested that. Let me know how that goes. As you can see, I am restricting the drive selection by using the row_number function against the drive letter.

If you would prefer an alternative, tried-and-tested method to find your disk space consumption, I would recommend using a WMI call to fetch the data. Samples are provided as follows:
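The original samples are not included in this copy. One minimal way to make the WMI call from within SQL Server is to shell out to wmic via xp_cmdshell, sketched below; this assumes xp_cmdshell is enabled and acceptable in your environment (PowerShell against Win32_LogicalDisk is the more common harness).

-- xp_cmdshell is disabled by default; enable it only if policy allows
EXEC master..xp_cmdshell 'wmic logicaldisk where drivetype=3 get caption,freespace,size';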

Easy peasy, right? Have at it and try tracking your disk space.

Thanks for reading! This has been another article in the Back to Basics series. You can read many more here. For other interesting articles about database space – check here or here.

Passion, Challenges, and SQL

Published on: February 12, 2019

TSQL Tuesday

The second Tuesday of the month comes to us a little early this month. That means it is time again for another group blog party called TSQLTuesday. This party that was started by Adam Machanic has now been going for long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a nice roundup of all TSQLTuesdays over here.

The Why?

Long time friend Andy Leonard (b | t) invites us this month to do a little checkup on ourselves and talk about the “why” around what we do. This could be a very easy topic for some. Equally, this could be a very difficult topic for those same people at different times in their lives. Therein lies the problem: the topic is simple in nature but requires firm reflection on yourself and what you have been doing.

The problem for me is less about the “why” behind what I do, and more about how to stretch it out into something more than a few sentences. Think! Think! Think!

Challenges

One of the biggest reasons I do what I do boils down to the challenges that I frequently get to encounter. There is a wild satisfaction in working on a very difficult and challenging task, product, tool, profession, or skill. That satisfaction often comes with reward and a sense of accomplishment.

The challenge can be anything from figuring out how to effectively communicate with a difficult person, to tracking down a tough internals problem in SQL Server that could be causing a performance issue, to taking over a project and turning it back from the edge of failure and onto a track of success. Sometimes, the challenge may be as simple as converting a pathetic cursor into a set based approach and gaining an improvement of 100x in performance.

I really do enjoy some of the puzzles (challenges) that I get to work on routinely. They give me an opportunity to improve my skillset as well as continue to learn. Being able to continually improve is a great motivation for me. The frequent challenges and continual chances to learn make it easier to evolve one’s skills and career. In a constantly changing world, being able to naturally and easily evolve your career is a bonus!

Passion

“Do what you love and you will never work a day in your life.” This is a common saying in the United States. Agree or disagree – there is some truth to it. Being able to do something one loves makes the really hard days a lot easier. Knowing I may be able to solve a complex problem makes it easier to face the day.

I really enjoy the opportunity to face difficult challenges and resolve those challenges. The passion to solve these puzzles with data doesn’t end there. I also really do enjoy the opportunity to learn which brings up two other challenges that help me learn: speaking and writing.

By putting myself out there regularly to speak and write, I am becoming a better technical person. I am becoming better equipped to solve many of the puzzles I face. Those are great benefits. That said, I don’t feel I could get out there and talk about something about which I wasn’t passionate. I have learned to become passionate about writing and speaking – though I still have plenty of room for improvement (just as I do in my quest to become a good DBA).

Wrapping it Up

I really do enjoy the challenges I get to face on a frequent basis in the world of data. This is the big “WHY” for me to continue my progress in this career.

Find something you are passionate about and strive to fill your career with as many opportunities as possible to do that thing. If that means accepting some less desirable tasks in order to do more of the thing you love, it could very well be worth it!

Interview Trick Questions

Today, I am diverging from the more technical posts that I routinely share. Instead, as the title suggests, I want to dive into something a little more fun.

Anybody who has interviewed for a job has most likely run into the trick question. Some interviewers like to throw out multiple trick questions, all in an effort to trip up the candidate and get the candidate to doubt themselves. Sure, there can be some benefit to throwing out a trick question or four. One such benefit would be to see how the candidate performs under pressure (see them squirm).

The downside to throwing out trick questions, in my opinion, would be that you can turn a serious candidate into an uninterested candidate. So, when throwing out the tricks, tread carefully.

Let’s take a look at a candidate for an interview trick question. This is a more technical question and is designed to make you think a little bit. Before reading on to see the answer, I implore you to legitimately try to answer the question for yourself.

How can you insert data into two tables using a single statement without the use of triggers, service broker or some other behind-the-scenes feature?

Are you thinking about it?

Do you have your answer yet?

Now that you have your answer, go ahead and continue reading.

Is your answer to this question something along the lines of “You can’t do that and this is just a trick question”?

Well, honestly, it is a bit of a trick question. But I assure you, you can certainly perform an insert into multiple tables from a single statement. Here is one such setup that demonstrates how you can do this:
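The demonstration code is not reproduced above; here is a hedged sketch using throwaway temp tables and made-up columns that shows a single INSERT landing rows in two tables via the OUTPUT clause.

-- hypothetical tables purely for illustration
CREATE TABLE #Orders (
    OrderID      int IDENTITY(1, 1) PRIMARY KEY,
    CustomerName varchar(50) NOT NULL,
    OrderDate    datetime NOT NULL DEFAULT GETDATE()
);

CREATE TABLE #OrderAudit (
    OrderID      int,
    CustomerName varchar(50),
    AuditDate    datetime
);

-- one INSERT statement, two tables populated
INSERT INTO #Orders (CustomerName)
OUTPUT inserted.OrderID, inserted.CustomerName, GETDATE()
    INTO #OrderAudit (OrderID, CustomerName, AuditDate)
VALUES ('SomeCustomer'), ('AnotherCustomer');

SELECT * FROM #Orders;
SELECT * FROM #OrderAudit;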

Do you see how I was able to perform that insert into multiple tables? The trick is in using the OUTPUT clause. This little feature in SQL Server can be of great use for things such as building multiple staging tables during an ETL process.

Here is that little trick again just to highlight it.

Conclusion

There are cases when an interview trick question is suitable. It is when the purported question is truly more technical than trick and is really trying to evaluate your depth and knowledge of SQL Server. The puzzle during the interview boils down to figuring out when it is a trick and when it might not be. Then from there, work your way through possible solutions. But don’t be afraid to admit when you haven’t got a clue. That will be far more impressive than trying to flim-flam the interviewer.

I invite you to share your trick questions in the comments.  Also, how did you solve this particular trick question?

 

Thanks for reading! This has been another article in the Back to Basics series. You can read many more here.

Defaults In msdb Database

Today is a day to discuss defaults. It started with the day being TSQL Tuesday and having a topic of “Say No to Defaults.” You can read more about that from the invite – here. I already participated in the party but did also want to discuss defaults a little bit more. That said, this article is not participating in the blog party. That would seem a bit silly.

While this post is not a part of the party, the defaults to be discussed are fairly important. I have seen severe consequences due to these defaults being ignored and not changed. So today, in addition to my earlier article (you can read it here), I implore you to make some fundamental changes to your production servers with regard to various defaults.

A Trio of msdb Defaults

There aren’t really that many defaults within the msdb database that must be changed, are there? I mean, seriously, beyond the defaults that are available to every database, what could possibly be unique to this database that could have a severe consequence?

I am so glad you asked!

The defaults in the msdb database are more about what is missing than what is actually there. By default, this database is missing quite a few things that could be deemed critical to your environment.

Let’s start with an easy one – Indexes

There are a few out there who may disagree, but the proof really is in the effect on performance for backup jobs and such. I have three indexes I like to put on every instance. I have seen these indexes improve job times as well as reduce the time needed to “clean up” the database.
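The index definitions did not survive into this copy, and the author’s exact trio may differ; a commonly suggested set targeting the backup history tables looks like this.

USE msdb;
GO
-- supporting indexes for backup history lookups and cleanup (assumed; adjust to taste)
CREATE NONCLUSTERED INDEX IX_backupset_media_set_id
    ON dbo.backupset (media_set_id);
CREATE NONCLUSTERED INDEX IX_backupset_backup_finish_date
    ON dbo.backupset (backup_finish_date) INCLUDE (media_set_id);
CREATE NONCLUSTERED INDEX IX_backupfile_backup_set_id
    ON dbo.backupfile (backup_set_id);
GO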

Easy enough. These indexes are very straightforward and pretty small in the grand scheme of things. But if an index can help improve performance by a factor of 10, then I am in favor of it (and I have seen that performance gain).

Now that we have some supporting indexes to help a bit with performance, we should take a look at the next item. This one can help with job performance as well as help with keeping the msdb database nice and trim.

Data Pruning

I have walked into client instances that had backup history dating all the way back to 2005 and included two to three full backups a day per database with quarter-hourly log backups. Oh, and this was for an instance containing well north of 200 databases. Can you say sluggish backups and sluggish msdb overall?

The fix is very easy! Not only do I recommend pruning the backup history, but also the job history, mail history and maintenance plan history (eew – if you use those things). Think about it – do you really need to know that Job XYZ ran successfully in 2006 and only took 15 seconds? This is 2015 and that kind of data is probably not pertinent at this point.

The pruning of this data is not enabled by default! You have to configure this for each of the servers under your purview. Luckily, this is easy to do!
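The code sample referenced below is not present in this copy; a sketch using the built-in msdb cleanup procedures would look something like this (the 90-day retention is just a placeholder).

USE msdb;
GO
DECLARE @OldestDate datetime = DATEADD(DAY, -90, GETDATE());  -- placeholder retention

EXEC dbo.sp_delete_backuphistory @oldest_date = @OldestDate;      -- backup history
EXEC dbo.sp_purge_jobhistory @oldest_date = @OldestDate;          -- agent job history
EXEC dbo.sysmail_delete_mailitems_sp @sent_before = @OldestDate;  -- database mail history
EXEC dbo.sp_maintplan_delete_log @oldest_time = @OldestDate;      -- maintenance plan history
GO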

If you use this code sample, be sure to adjust the number of days shown in the retention to match your specific needs.

Now we have addressed a couple of defaults in msdb that can impact your performance. We are tidying up the database and in a much happier state these days. There is one more default, though, that is really critical to your data’s well being. This one is set within the msdb database but it really is for all of your databases!

Configuring Alerts!

I’m not talking about just any alerts. There are some very specific alerts that really should be configured. These are the alerts that can help you intervene to minimize corruption.

If you haven’t faced a problem with corruption – you will. It is only a matter of time. Corruption happens. When it happens, the earlier one can intervene, usually the better the outcome. Every minute counts, so why not try to reduce that time as much as possible?

This one is not terribly difficult to implement. I happen to have a query ready to go for that as well. All that needs to be done is a minor adjustment to the alert email address:
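The alert script is not included here; the condensed sketch below covers severity 19 through 25 plus errors 823, 824, and 825. The operator name and email address are placeholders you would adjust for your environment.

USE msdb;
GO
-- placeholder operator; point the name and address at your real DBA distribution list
EXEC dbo.sp_add_operator @name = N'DBA Team', @enabled = 1, @email_address = N'dba@yourdomain.com';
GO
DECLARE @sev int = 19, @alertname sysname;

WHILE @sev <= 25
BEGIN
    SET @alertname = N'Severity ' + CAST(@sev AS nvarchar(2)) + N' Error';

    EXEC dbo.sp_add_alert @name = @alertname, @severity = @sev,
         @delay_between_responses = 60, @include_event_description_in = 1;
    EXEC dbo.sp_add_notification @alert_name = @alertname,
         @operator_name = N'DBA Team', @notification_method = 1;

    SET @sev += 1;
END;

-- corruption-related error numbers
EXEC dbo.sp_add_alert @name = N'Error 823', @message_id = 823, @delay_between_responses = 60, @include_event_description_in = 1;
EXEC dbo.sp_add_notification @alert_name = N'Error 823', @operator_name = N'DBA Team', @notification_method = 1;
EXEC dbo.sp_add_alert @name = N'Error 824', @message_id = 824, @delay_between_responses = 60, @include_event_description_in = 1;
EXEC dbo.sp_add_notification @alert_name = N'Error 824', @operator_name = N'DBA Team', @notification_method = 1;
EXEC dbo.sp_add_alert @name = N'Error 825', @message_id = 825, @delay_between_responses = 60, @include_event_description_in = 1;
EXEC dbo.sp_add_notification @alert_name = N'Error 825', @operator_name = N'DBA Team', @notification_method = 1;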

Wrap

Wow! Now there are three quick defaults that must be changed on every server. These defaults will help improve performance as well as help you stay on top of things when they start to go south (corruption). With timely notifications, and better performance, your servers will be happier, healthier, and longer lasting.

Thanks for reading! This has been another article in the Back to Basics series. You can read many more here.

Easy Permissions Audit

Published on: January 21, 2019

Something I have written about more than a handful of times is the need to audit. When people think about audits, the first thing that comes to mind is most likely the IRS and taxes. More than taxes can be audited, though. Despite that tendency to first think taxes when somebody says “audit”, I am not writing about taxes. I will typically write about the different topics within SQL Server that can be audited. Has Johnny performed a logon to the server? When was the last time the permissions to the database changed? Did somebody change an object? When was the last time stats were updated? How about auditing the success and failure of your backups (or all agent jobs for that matter)? Those are the topics I will typically write about. Today, I am going to share a simple method to help perform an easy permissions audit – on a manual basis.

Easy Permissions Audit

As the article title denotes, today I will be discussing a simple way to get quick permissions for various principals. If you are looking for a more comprehensive and human friendly report version, I recommend reading any of my many other articles on the topic such as the following article – here or here. In the second of those links there is a clue as to what tool we will be using in this easy version of the audit. That tool is called sp_helprotect.

sp_helprotect is a system stored procedure from Microsoft that can divulge permissions for various principals in a simple tabular result set. Bearing in mind that I am keeping this to a simple audit, the examples will be simplistic in nature. Without further ado, here is the easy audit for your permissions.

sp_helprotect

This system stored procedure has been around for many versions of SQL Server and comes with a few parameters to help narrow the results down to a specific principal and even to any object to which that principal may have been granted permissions. Here are those parameters for a quick review:

@name = The name of the object, or the statement (e.g. CREATE DATABASE, ALTER TABLE), for which permissions are reported.

@username = The name of the principal for which permissions are returned.

@grantorname = The name of the principal that granted permissions.

@permissionarea = The group of grantable permissions to return. There are two groups: object permissions and statement permissions. The default is to return both.

The easiest way to use sp_helprotect is as follows:
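The call itself did not survive into this copy; with no parameters at all it is simply:

EXEC sp_helprotect;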

Do you see how easy that is? This returns the following results for me.

Note that the results include both roles and users for various objects. This is due to how the procedure was executed – with no parameters. With no parameters, the default behavior is to return as much information as possible for all objects and principals within the database.

What if I only want the results for a principal named “Gargouille”? I can do that in the following way.
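A hedged guess at the call (the principal name comes from the article itself):

EXEC sp_helprotect @username = 'Gargouille';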

Now, I will receive the following results:

Recap

There are many ways to produce an audit. Today, I have shown how one can produce a permissions audit when in a hurry that will produce a simple result set for database permissions. I want to underscore that this was at the database level and not the server level. While this is an adequate means for a quick peek into some of the objects and granted permissions, I do recommend using one of the other methods I have introduced in the other articles for ongoing complex audits and results that are somewhat prettier and more human friendly to read.

For more articles on audits and auditing check here and here.

Audit SQL Agent Jobs

One probably seldom thinks of the SQL Agent jobs scheduled on the SQL Server instance – unless they fail. What if the job failed because something was changed in the job? Maybe you knew about the change, maybe you didn’t.

Once upon a time, I was in the position of trying to figure out why a job failed. After a bunch of digging and troubleshooting, it was discovered that the job had changed but nobody knew when or why. Because of that, I was asked to provide a low cost audit solution to try and at least provide answers to the when and who of the change.

Tracking who made a change to an agent job should be a task added to each database professional’s checklist/toolbox. Being caught off guard by a change to a system under your purview isn’t necessarily a fun conversation – nor is it pleasant to be the one to find that somebody changed your jobs without notice – two weeks after the fact! Usually, that means there is little to no information about the change and you find yourself getting frustrated.

To the Rescue

When trying to come up with a low to no-cost solution to provide an audit, Extended Events (XE) is quite often very handy. XE is not the answer to everything, but it does come in handy very often. This is one of those cases where an out of the box solution from XE is pretty handy. Let’s take a look at how a session might be constructed to help track agent job changes.
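The session definition is not reproduced in this copy; the sketch below is an approximation matching the description in the next paragraph: the two statement-completed events plus degree_of_parallelism, filtered to msdb, with the degree_of_parallelism event excluding “Select” statement types. The session name and file target are placeholders, and the statement_type predicate may be scripted differently on your build.

CREATE EVENT SESSION AuditAgentJobChanges
ON SERVER
ADD EVENT sqlserver.sp_statement_completed (
    ACTION (sqlserver.client_app_name, sqlserver.server_principal_name, sqlserver.sql_text)
    WHERE (sqlserver.database_name = N'msdb')
),
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.client_app_name, sqlserver.server_principal_name, sqlserver.sql_text)
    WHERE (sqlserver.database_name = N'msdb')
),
ADD EVENT sqlserver.degree_of_parallelism (
    ACTION (sqlserver.client_app_name, sqlserver.server_principal_name, sqlserver.sql_text)
    -- catch-all for changes not trapped above; skip plain SELECT activity
    WHERE (sqlserver.database_name = N'msdb' AND statement_type <> 'Select')
)
ADD TARGET package0.event_file (SET filename = N'AuditAgentJobChanges.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON);
GO

ALTER EVENT SESSION AuditAgentJobChanges ON SERVER STATE = START;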

With this session, I am using degree_of_parallelism as a sort of catch-all in the event that queries that cause a change are not trapped by the other two events (sql_statement_completed and sp_statement_completed). With the degree_of_parallelism event, notice I have a filter to exclude all “Select” statement types. This will trim some of the noise and help track the changes faster.

Looking at data captured by this session, I can expect to see results like the following.

And the degree_of_parallelism event will catch data such as this.

In this example, the deletion of a job was captured by the degree_of_parallelism event. In addition to catching all of the various events that fire as jobs are being changed and accessed, one will also be able to get a closer look at how SQL Agent goes about its routine.

The Wrap

Extended Events can prove helpful for many additional tasks that may not be thought of on an every day basis. With a little more thought, we can often find a cool solution via Extended Events to help us be better data professionals. In this article, we see one example of that put to use by using XE to audit Agent Job changes.

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in seeing the power of XE over Profiler? Check this one out!

For another interesting article about SQL Agent, check this one out!

Quick Permissions Audit

Whether it is for a client, an audit, or just for good housekeeping, DBAs will often need to figure out who has access to what.  In addition, they may need to know by what means people have access within their databases.

When that need arises, it is frequently adequate to just perform a quick audit and then dive deeper if the quick audit shows anything questionable. Today, I am sharing a means to perform this quick audit. I will start with the logins (server principals), peek at the users (database principals), then look at the server roles, and wrap it up with the database roles. What I don’t do is dive into the Windows group memberships.  So if a Windows group exists within SQL Server as a login, I do not try to figure out who is a member of that group.  That can be obtained, but is a separate discussion.

So, in the vein of a quick permissions audit, I will quickly get to the script to help perform this permissions audit.
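The full script has not survived into this copy; the condensed sketch below follows the same recipe described in the list that follows. It assumes Jeff Moden’s DelimitedSplit8K function already exists in the database you run it from, and the database and login lists are placeholders.

DECLARE @DBList    varchar(2000) = 'master,msdb,AdminDB';  -- placeholder database list
DECLARE @LoginList varchar(2000) = 'sa,SomeLogin';          -- placeholder login list

IF OBJECT_ID('tempdb..#DBs') IS NOT NULL DROP TABLE #DBs;
IF OBJECT_ID('tempdb..#Logins') IS NOT NULL DROP TABLE #Logins;

-- temp tables built from the delimited lists (assumes dbo.DelimitedSplit8K exists)
SELECT LTRIM(RTRIM(Item)) AS DBName    INTO #DBs    FROM dbo.DelimitedSplit8K(@DBList, ',');
SELECT LTRIM(RTRIM(Item)) AS LoginName INTO #Logins FROM dbo.DelimitedSplit8K(@LoginList, ',');

-- server roles per login, flattened into a delimited string with FOR XML PATH
SELECT sp.name AS LoginName
     , sp.type_desc
     , STUFF((SELECT ',' + r.name
              FROM sys.server_role_members srm
              INNER JOIN sys.server_principals r ON r.principal_id = srm.role_principal_id
              WHERE srm.member_principal_id = sp.principal_id
              FOR XML PATH('')), 1, 1, '') AS ServerRoles
FROM sys.server_principals sp
INNER JOIN #Logins l ON l.LoginName = sp.name;

-- database roles per user, one dynamic query per database via a cursor
DECLARE @db sysname, @sql nvarchar(max);
DECLARE dbcur CURSOR LOCAL FAST_FORWARD FOR SELECT DBName FROM #DBs;
OPEN dbcur;
FETCH NEXT FROM dbcur INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'SELECT ' + QUOTENAME(@db, '''') + N' AS DBName
         , dp.name AS UserName
         , STUFF((SELECT '','' + r.name
                  FROM ' + QUOTENAME(@db) + N'.sys.database_role_members drm
                  INNER JOIN ' + QUOTENAME(@db) + N'.sys.database_principals r
                      ON r.principal_id = drm.role_principal_id
                  WHERE drm.member_principal_id = dp.principal_id
                  FOR XML PATH('''')), 1, 1, '''') AS DatabaseRoles
         FROM ' + QUOTENAME(@db) + N'.sys.database_principals dp
         INNER JOIN #Logins l ON l.LoginName = dp.name;';

    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM dbcur INTO @db;
END;

CLOSE dbcur;
DEALLOCATE dbcur;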

As you look through this code, you will notice that I have done a few interesting things.  Here is a quick recap:

  • Built a temp table based on a delimited list of databases input into a variable
  • Built a temp table based on a delimited list of logins input into a variable
  • Used a string splitter known as DelimitedSplit8K by Jeff Moden (google it) or LMGTFY
  • Used FOR XML to build a delimited string of Server Role Permissions
  • Used FOR XML to build a delimited string of Database Role Permissions
  • Used a CURSOR to build a dynamic SQL string to execute for each database

Holy cow! That is a lot of stuff for such a simple quick audit.  A cursor? Seriously? Well yeah! It is OK in this type of scenario to use a cursor.  This is a well-performing cursor for an administrative function. In addition, I chose to use sub-queries to build my delimited string of permissions so I could keep those things together in a single record per login/role.

If I run that query on my instance of SQL Server, I would end up with something that looks like the following.

Now, obviously, unless you had these same databases, logins, and even the same permissions assigned to these logins, your results will be different. So, make sure you go and change those inputs for logins and databases to be queried.

For more articles on audits and auditing check here and here.

Automating like an Enterprise DBA

Published on: January 8, 2019

TSQL Tuesday

The second Tuesday of the month comes to us a little early this month. That means it is time again for another group blog party called TSQLTuesday. This party that was started by Adam Machanic has now been going for long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a nice roundup of all TSQLTuesdays over here.

Automation

The theme as chosen by Garry Bargsley (b | t) is about automation. Specifically, Garry has provided two requirements about automation for this month. As always, there is leeway in a post that participates in TSQL Tuesday.

One of the things that should seem very commonplace to a data professional is the effort to become a lazy DBA. A lazy DBA is not a bad thing. It just means the DBA works hard to automate the repetitive mundane tasks that may be tedious and/or time consuming. Time can always be better spent somewhere else, right?

If you are lacking in any ideas for what can be automated, here are a few TSQL Tuesday roundups from when we have talked about automation previously (yes it is a hot topic – ALWAYS!).

  1. August 2010 – Beach Time – what do you automate to earn beach time?
  2. February 2011 – Automation in SQL Server – Give your best tricks for making your life easier through automation.
  3. January 2014 – Automation – How much of it is the same?
  4. September 2015 – The Enterprise – How does one manage an enterprise of databases?
  5. September 2017 – PowerShell Automation – Find something and automate it.

In the past, I have written about automation a few times. Some of my favorites are automated restores, automation in the cloud, and my poor man’s automated audit.

I automate many processes and have automated loads of tasks over the years. You see, automation means I can spend more time doing other tasks that require more time, more thought/concentration, more effort, and frankly more interest. So what have I automated recently that may be different from what I have previously written? This time, I have something that may seem utterly trivial but in the end it is rather tedious and time consuming to manually check over and over and over.

PowerShell

When I automate a task, I generally try to use the tool that seems the most appropriate for the task: Windows scheduler, SQL Agent, TSQL, SSIS, VB, C#, and now I am trying to add PoSh to that list. I don’t believe there is a one-size-fits-all automation tool. Sometimes, one has to be flexible enough to adapt other technologies into the tool-belt.

I have been working with a client to check their servers for SQL Server version, SSMS version, PoSH version and so on. All of this to try and get the appropriate updates installed on the server. Believe it or not, many of their servers were still running PoSH v2 and didn’t have any Service Packs installed for their database servers. OUCH!

Touching every single server (even if it is only 10 servers) is far too tedious and error prone. So, I spent a little time kludging something together with my neanderthal-level PoSH skills and found a way to retrieve various pieces of information from the servers and then store those data points in a database so I could report on the entire environment easily with TSQL. In addition, I could show change history and find approximately (at worst) when an update was installed.

Of all of the things I scripted to start tracking, the one I want to share this time can also be used to audit security on each of the database servers. I use the following script to audit the local admins on each of the database servers in the enterprise. In order to trap each local admin on the server, I also recurse through domain groups to find everybody that may have access. Here is a version of the script that is similar to what I use now.

Could I improve on the efficiency of this script? Most definitely, I believe there is room for improvement. Remember, I am very novice at my PoSH skills. Scripting issues aside, it works: it fetches a list of servers from a database, iterates through each of those servers to fetch the complete list of local admins on each one, and then writes that complete list back to my database so I can generate a history of changes to the admins or report on who has admin access on each server.

For anybody that has admin access to a database server, the permission path (nested group path) is recorded in hierarchical form with levels separated by the caret character (^). Using this script, I have been able to provide a report to domain admins to help clean out access that was not intended or necessary.
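Neither the PowerShell nor the table schema is shown in this copy, so the reporting snippet below is entirely hypothetical (a table dbo.LocalAdminAudit with ServerName, AdminAccount, PermissionPath, and CaptureDate columns); it only illustrates the kind of TSQL report the collected data enables.

SELECT laa.ServerName
     , laa.AdminAccount
     , laa.PermissionPath   -- nested group path, levels separated by ^
     , laa.CaptureDate
FROM dbo.LocalAdminAudit laa
WHERE laa.CaptureDate = (SELECT MAX(CaptureDate) FROM dbo.LocalAdminAudit)
ORDER BY laa.ServerName, laa.AdminAccount;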

Wrapping it Up

Automation is an essential tool for every data professional. Wait, no, that’s not accurate. Automation is an essential tool in all facets of IT. Automation is a definitive method to work more efficiently and offload some of the mundane repetitive tasks that consume too much time.

Even if the task is not trivial but needs to be repeated and done so without error, the best tool is automation. Performing tasks over and over naturally leads to higher risk of error. The way to minimize that risk is to perform the task via some automation script or routine.

ArithAbort and SQL Agent Failures

Published on: January 7, 2019

ARITHABORT

I was coding along one day, working on rolling out some monitoring for a client—monitoring that I had used for previous clients.  The code was pretty straightforward and addressed a concern for the client.  Having used it for several previous clients, I felt pretty confident in the code.  The main purpose was simply to audit index definition changes.  It was rolled into a stored procedure and designed to be deployed to a “DBA” database.

With confidence pretty steady on this code, I deployed the stored procedure along with all of the supporting tables and structures.  Then I proceeded to deploy the SQL Agent job that controls the execution of the stored procedure.  When I deploy a job I like to test the job to ensure it will work.  As luck would have it, the job failed on the first test run.  I instantly became baffled.  Here is the error message.

Job Name DBA – Track Index Changes
Step Name exec proc
Duration 00:00:08
Sql Severity 16
Sql Message ID 1934
Operator Emailed
Operator Net sent
Operator Paged
Retries Attempted 0

Message
Executed as user: sa.

INSERT failed because the following SET options have incorrect settings: ‘ARITHABORT’.
Verify that SET options are correct for use with
indexed views
and/or indexes on computed columns
and/or filtered indexes
and/or query notifications
and/or XML data type methods
and/or spatial index operations.
[SQLSTATE 42000] (Error 1934). The step failed.

Immediately I started looking at my creation scripts for the tables.  Nope, no XML indexes, no spatial indexes, no filtered indexes, no indexes on computed columns (not even any computed columns),  and no query notifications.

Next I started checking the database settings.  Maybe the vendor for the application this client bought had set something for all of the databases regarding ARITHABORT.
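A quick check across all databases is easy here, since sys.databases exposes an is_arithabort_on column:

SELECT name
     , is_arithabort_on
FROM sys.databases
ORDER BY name;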

Querying sys.databases, we could easily see that ARITHABORT is not enabled at the database level (just like previous implementations).  Well, this is a minor head scratcher at this point.  The code works in other environments, and the database setting matches those environments.  I guess I could try setting ARITHABORT within the stored procedure and then re-test.

 

When I add that last line, “SET ARITHABORT ON;”, to this stored procedure and then rerun the job, it runs without any error.  It is a simple fix, but the story doesn’t end there.

After making that change, I decided to go another round with the stored procedure and the ARITHABORT setting.  I removed it in the next round and decided to test the stored procedure directly.  Running the stored procedure in Management Studio with or without the ARITHABORT setting produces the same result.  That result is that both work as desired without any error.  For giggles, I ran the job again and discovered that the job still fails.  In the end, it appears to be something that the SQL Agent is setting as a part of its connection back to the database in this case.

In addition to this minor nuisance, you saw that the error outlines several possible causes for failure with regards to ARITHABORT.  One that I found that can be of big concern is with filtered indexes.  Check your connection settings from your application when dealing with any filtered indexes.  Filtered Indexes have produced this error in quite a few cases I have been asked to help fix.  Just a thought for something you should monitor and check should you run into this error or if you are considering the use of filtered indexes.

Putting a Bow on it…

In conclusion, this can be a short-lived head scratcher.  Pay close attention to what has changed in the environment.  Test alternatives.  And check those connection strings.

If you are curious, there are a few ways to check connection string settings. One of my favorites is with Extended Events. To read more about Extended Events, I recommend this resource – here. In addition, I recommend checking out some of my other articles showing basics in troubleshooting which can be found here.
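As a hedged illustration of that Extended Events approach (not necessarily the exact session from the linked resource), the login and existing_connection events can be configured to collect the connection’s options text, which includes SET options such as ARITHABORT. The session name and target file are placeholders.

CREATE EVENT SESSION ConnectionSettings
ON SERVER
ADD EVENT sqlserver.login (
    SET collect_options_text = (1)   -- capture the SET options sent at login
    ACTION (sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.server_principal_name)
),
ADD EVENT sqlserver.existing_connection (
    SET collect_options_text = (1)   -- capture options for connections already open
    ACTION (sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.server_principal_name)
)
ADD TARGET package0.event_file (SET filename = N'ConnectionSettings.xel')
WITH (STARTUP_STATE = OFF);
GO

ALTER EVENT SESSION ConnectionSettings ON SERVER STATE = START;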
