Indexed Views – Performance Panacea or Plight

Indexed View(s)

An indexed view is a view where the result set of the view's query (the view definition) is materialized, in lieu of the virtual result set of a standard (non-indexed) view. Many times we see an indexed view created to help improve performance. Far too often, an indexed view is created without consideration for the costs of the indexed view. In this article, I hope to cover some of the more important costs that are frequently overlooked when considering an indexed view as a performance panacea.

The Setup

To demonstrate the overlooked costs of indexed views, I have decided to use the AdventureWorks2014 database (if you don’t have this sample database, you can get your copy here). I will create a view in that database and then I will add a few indexes to that view. Prior to sharing the entire setup, and like all good DBAs, I need to first grab a baseline. For this baseline, I am doing nothing terribly complex. I will just grab table sizes and storage information for the entire database.
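A minimal sketch of that kind of size query, using sys.dm_db_partition_stats and the standard 8KB page size, looks something like this:

-- Baseline: reserved space per user object, in MB, largest first.
SELECT s.name AS SchemaName
    , o.name AS ObjectName
    , SUM(ps.reserved_page_count) * 8 / 1024.0 AS ReservedMB
FROM sys.dm_db_partition_stats ps
    INNER JOIN sys.objects o
        ON o.object_id = ps.object_id
    INNER JOIN sys.schemas s
        ON s.schema_id = o.schema_id
WHERE o.is_ms_shipped = 0
GROUP BY s.name, o.name
ORDER BY ReservedMB DESC;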

Here are my results for the size of the objects within the AdventureWorks2014 database:

These results show that the largest object in the database happens to be the Person.Person table at just about 30MB. Not terribly large, but the database is not terribly large as a whole. Let’s see what happens when I add a materialized view, based on the Person.Person table, to the mix. Here is the definition of that view along with the indexes that I will add to help the performance of some essential production related queries.
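A rough approximation of that view (the column list mirrors Person.Person, including the two XML columns, and the view is schema bound so it can be indexed; the view and index names here are illustrative) looks something like this:

-- A sketch of the materialized view over Person.Person, XML columns included.
CREATE VIEW dbo.vPerson
WITH SCHEMABINDING
AS
SELECT p.BusinessEntityID
    , p.PersonType
    , p.NameStyle
    , p.Title
    , p.FirstName
    , p.MiddleName
    , p.LastName
    , p.Suffix
    , p.EmailPromotion
    , p.AdditionalContactInfo
    , p.Demographics
    , p.rowguid
    , p.ModifiedDate
FROM Person.Person p;
GO

-- The unique clustered index is what materializes the view; the nonclustered index mimics a supporting index.
CREATE UNIQUE CLUSTERED INDEX CIX_vPerson_BusinessEntityID
    ON dbo.vPerson (BusinessEntityID);
GO
CREATE NONCLUSTERED INDEX IX_vPerson_LastName_FirstName
    ON dbo.vPerson (LastName, FirstName);
GO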

After executing all of that code to create this new view with its indexes, I have the following size results:

The creation of this view has chewed up a bunch of storage. It has jumped right up to the number two spot on the biggest objects list within this database. You can see the differences by comparing the highlighted rows to the previous image. The vPerson view is highlighted in red in this second image to help point it out quickly.

Surely this must be a contrived example and people don’t really do this in the real world, right? The answer to that is simply: NO! It DOES happen. I see situations like this all too often. Far too often, large text fields are added to an indexed view to make retrieval faster. I have mimicked that by adding in two XML columns from the Person.Person table. This is definitely overkill because a simple join back to the table based on the BusinessEntityID would get me those two columns. All I have effectively done is duplicate data that is already stored, and I have achieved that at the low, low cost of a 25% increase in storage for this small database. If you are curious, the column count between the Person.Person table and this new view is 13 columns each.

I call that 25% increase in storage a significant cost. A 25% increase in storage for a single materialized view is not an uncommon occurrence; I have seen multi-terabyte databases where a single view accounted for 25% of the total data storage. If you are tight on storage, indexed views can turn an ordinary increase in data volume into a much larger jump in storage requirements. Take that into consideration when presented with the option to either performance tune the code or to take a short-cut and create an indexed view.

Band-aid Performance

I alluded to the use of indexed views as a cog to help improve performance. This is done by taking the query that sucks and turning it into a view. The thought is that it will run faster because an index is created specifically for that data set. Generally the performance gains from using an indexed view, to camouflage the bad query, are seen when the query is complex. Unfortunately, the query I provided for this article is not terribly complex. Due to the simplicity of the first example, let’s look first at the execution plan for another query that runs terribly slow but is also not terribly complex:

There are a few table spools and index scans. The most costly operator of the bunch appears to be a merge join with a many-to-many join operation. The indexed view in this case does give me a very significant improvement on this query (look at that estimated cost and then the estimated number of rows). What if I could tune the query a little and avoid all of that storage cost? Let’s see!

Consider the base query for this new view:
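As a purely illustrative stand-in for that base query (these are not the original tables or timings), the pattern in question looks like this: trim functions wrapped around numeric join columns.

-- Illustrative only: LTRIM/RTRIM around numeric join keys forces implicit conversions
-- and prevents efficient index use, which is the pattern called out below.
SELECT soh.SalesOrderID
    , soh.OrderDate
    , sod.ProductID
    , sod.LineTotal
FROM Sales.SalesOrderHeader soh
    INNER JOIN Sales.SalesOrderDetail sod
        ON LTRIM(RTRIM(soh.SalesOrderID)) = LTRIM(RTRIM(sod.SalesOrderID));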

Looking at the query, you may be asking yourself why so many trim functions are used in the joins. This is an extreme that can be easily fixed. None of the fields in the joins should need to be trimmed since they are all numeric in this case. Starting there and removing all of the function use in the joins should provide significant improvements in query speed, without the use of a view. Testing it out would yield similar performance results to the first view in this article. This small change would cause the query to complete in 1 second compared to about 90 seconds. That is a huge gain, and without the added cost of increased storage use.

Do I really see stuff like this in the field? Yes! Spend a little time fixing the root of the problem rather than reaching for a quick fix, and the performance can be spectacular. There are times, though, that an indexed view may be an absolute requirement, and that would be perfectly fine. There are times when it is warranted and there is no way around it. If an indexed view is an absolute must, then there are a couple more considerations to be taken into account.

Disappearing Act

I showed previously that the indexed view requires storage space and showed that the new view created quickly claimed its place as the second largest object within the database. Now, to stage the next segment, I will show once again that the indexes are present and consuming disk space. This can be shown via the following query:
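A sketch of that kind of check, listing every indexed view in the database along with the space its indexes consume:

-- Indexes that exist on views, with pages, megabytes and rows consumed.
SELECT OBJECT_NAME(i.object_id) AS ViewName
    , i.name AS IndexName
    , ps.used_page_count
    , ps.used_page_count * 8 / 1024.0 AS UsedMB
    , ps.row_count
FROM sys.indexes i
    INNER JOIN sys.dm_db_partition_stats ps
        ON ps.object_id = i.object_id
        AND ps.index_id = i.index_id
WHERE OBJECTPROPERTY(i.object_id, 'IsView') = 1;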

And the results of that query should look something like the following (quick reminder that I created a new indexed view for the second demo and these results will be based on that second view):

Now that we know the index is present and consuming space, it is time to make a change to the view and see what happens. I am proceeding under the premise that I have determined that including the two XML columns in the view is completely unnecessary and is the cause of too much space consumption. I can reap the benefits of the view at a fraction of the cost if I ALTER the view and remove those columns. So I will proceed by issuing the following ALTER statement:
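A sketch of that change against the first view from this article (schema binding kept, the two XML columns dropped from the column list):

-- Redefine the view without AdditionalContactInfo and Demographics.
ALTER VIEW dbo.vPerson
WITH SCHEMABINDING
AS
SELECT p.BusinessEntityID
    , p.PersonType
    , p.NameStyle
    , p.Title
    , p.FirstName
    , p.MiddleName
    , p.LastName
    , p.Suffix
    , p.EmailPromotion
    , p.rowguid
    , p.ModifiedDate
FROM Person.Person p;
GO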

Take note of how long this ALTER command takes to complete – nearly instantaneous.  That rocks! Right? Let’s look at the indexes to make sure they are still in working order and don’t require a defrag. Based on that, I will re-run that query to check the indexes and sizes – for giggles.

This time when the query completes, I get a nice little surprise: I no longer have any indexes on that view. Changing the view definition instantly drops any indexes that were built on that view. If this is done in a production system, imagine the implications and problems that could arise.

Conclusion

In this article, I have covered indexed views and three different considerations that are often overlooked when looking to implement a materialized view. The three considerations for indexed views, as discussed, are: storage, changes to the view, and the band-aid effect. Overlooking the cost of storage can have serious repercussions on the production environment. Implementing an indexed view without attempting to tune the code first could lead to overlooking the storage cost and could end up just being a waste of resources. The last tidbit is that any change to an indexed view will drop the indexes. That is an easily overlooked feature of indexed views. If you forget to recreate the indexes on the view after making changes, you could be looking at a production outage.

There are a couple more critical considerations for indexed views that are often overlooked as well. These additional considerations are blocking, deadlocking, and maintenance. These considerations will be discussed further in a future article.

For more reading enjoyment try the Back to Basics or Xevents series.

Connect To SQL Server – Back to Basics

The first critical task any data professional should ever learn how to do is how to connect to SQL Server. Without a connection to SQL Server, there is barely anything one could do to be productive in the data professional world (for SQL Server).

Yet, despite the commonality of this requirement and the ease of the task, I find that there is frequent need to retrain professionals on how to connect to SQL Server. This connection could be attempted from any of the current client tools, but for some reason it still perplexes many.

Let’s look at just how easy it is (should be) to connect to SQL Server (using both SQL Auth and Windows Auth).

Simplicity

First let’s take a look at the dialog box we would see when we try to connect to a server in SQL Server Management Studio (SSMS).

Circled in red we can see the two types of authentication I would like to focus on: “Windows Authentication” and “SQL Server Authentication”. These are both available from the dropdown box called Authentication. The default value here is “Windows Authentication”.

If I choose the default value for authentication or “Windows Authentication”, I only need to click on the connect button at the bottom of that same window. Upon closer inspection, it becomes apparent that the fields “User name:” and “Password:” are disabled and cannot be edited when this default value is selected. This is illustrated in the next image.

Notice that the fields circled in red are greyed out along with their corresponding text boxes. This is normal and is a GOOD thing. Simply click the connect button circled in green and then you will be connected if you have permissions to the specific server in the connection dialog.

Complicating things a touch is the “SQL Server Authentication”. This is where many people get confused. I see many people trying to enter windows credentials in this authentication type. Sure, we are authenticating to SQL Server, but the login used in this method is not a windows account. The account to be used for this type of authentication is strictly the type that is created explicitly inside of SQL Server. This is a type of login where the principal name and the password are both managed by SQL Server.

Let’s take a look at an example here.

Notice here that the text boxes are no longer greyed out and I can type a login and password into the respective boxes. Once done, I then click the “Connect” button and I will be connected (again, presuming I have been granted access and I typed the user name and password correctly).

What if I attempt to type a windows username and password into these boxes?

If I click connect on the preceding image, I will see the following error.

This is by design and to be expected. The authentication methods are different. We should never be able to authenticate to SQL Server when selecting the “SQL Server Authentication” and then providing windows credentials. Windows credentials must be used with the “Windows Authentication” option. If you must run SSMS as a different windows user, then I recommend reading this article.

The Wrap

Sometimes what may be ridiculously easy for some of us may be mind-blowing to others. Sometimes we may use what we think are common terms only to see eyes start to glaze over and roll to the backs of people's heads. This just so happens to be one of those cases where choosing the right authentication type may be entirely new to the intended audience. In that vein, it is worthwhile to take a step back and “document” how the task can be accomplished.

Connecting to SQL Server is ridiculously easy. Despite the ease, I find myself helping “Senior” level development and/or database staff with this very task.

If you feel the need to read more connection related articles, here is an article and another on the topic.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Disk Space and SQL Server

Disk Space in SQL Server

One of the frequently required job functions of the database administrator is to track disk space consumption. Whether this requirement comes from management or from a learning opportunity after a production outage, the need exists.

As a hard working DBA, you want to make sure you hit all of the notes to make management sing your praises. Knowing just when the database may fill the drives and prevent a production outage just happens to be one of those sharp notes that could result in a raise and management singing hallelujah. The problem is, how do you do it from within SQL Server? You are just a DBA after all and the disk system is not your domain, right?

Trying to figure it out, you come across a pretty cool function within SQL Server. The name of the function is sys.dm_os_volume_stats. Bonus! This is an excellent discovery, right? Let’s see just how it might work. First a sample query:
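A minimal sketch of that sample, applying sys.dm_os_volume_stats to every database file, follows. Note that even with DISTINCT, a volume can show up more than once when its free space changes between calls, which is exactly the behavior described next.

-- One row per distinct volume/free-space combination observed across all database files.
SELECT DISTINCT
      vs.volume_mount_point
    , vs.logical_volume_name
    , vs.available_bytes / 1024 / 1024 AS FreeSpaceMB
FROM sys.master_files mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) vs;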

If I run that on my local system, I might end up with something that looks like the following:

Looking at the image you may be wondering to yourself right now why I have highlighted a couple of things. You may also be wondering why I used the word “might” in the previous paragraph. The reasoning will become more evident as we progress. For now, you resolve to continue testing the script, so you execute it again and end up with something that may look like the following (for the same server):

Whoa! What just happened there? Why are there two listings for the C: drive? Why does each register a different value for the FreeSpace column? In addition, without any additional usage on the C: drive (as verified through other tools), the FreeSpace value changes between executions as well as within the same execution. This is problematic, so you continue testing:

And yet again!

This can’t be correct, can it? Just for giggles let’s modify it just a bit to see if there are any additional clues. Using the following changed script, hopefully something will shed some light on this:

This script yields the following potential results:

Look at the different highlighted areas! There are three different values for FreeSpace for the C: drive in this particular execution. The case of the C: drive plays no role in whether the value is recorded differently or not. This seems to be more of a bug within the dynamic management function. From execution to execution, using this particular method, one could end up with duplicate entries but distinct values. The sort order of the results can also change from execution to execution (though we could fix that).

All of these tests were run on my local machine and I really do only have one C: drive. I should never receive multiple entries back for any drive. If using this particular DMF to track space usage, it could be somewhat problematic if the duplicate drive data pops up. How do we get around it, you ask? Here is another example that I have that has not yet produced this duplication:
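A sketch consistent with the description that follows (CTEs plus ROW_NUMBER over the drive letter so each volume is returned exactly once):

WITH DriveInfo AS (
    -- Gather the raw volume data for every database file.
    SELECT DISTINCT
          vs.volume_mount_point
        , vs.logical_volume_name
        , vs.available_bytes / 1024 / 1024 AS FreeSpaceMB
    FROM sys.master_files mf
        CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) vs
)
, DeDuped AS (
    -- Number the rows per drive letter so duplicates can be filtered out.
    SELECT volume_mount_point
        , logical_volume_name
        , FreeSpaceMB
        , ROW_NUMBER() OVER (PARTITION BY UPPER(volume_mount_point)
                             ORDER BY FreeSpaceMB DESC) AS RowNum
    FROM DriveInfo
)
SELECT volume_mount_point
    , logical_volume_name
    , FreeSpaceMB
FROM DeDuped
WHERE RowNum = 1;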

Using this version of the script is not terribly more complex, but it will prove to be more reliable. You can see I used some CTEs to provide a little trickery and ensure that I limit my results. What if it is a mount point or a non-standard drive letter? I have not tested that. Let me know how that goes. As you can see, I am restricting the drive selection by using the row_number function against the drive letter.

For a reliable alternative method of finding your disk space consumption, I would recommend using a WMI call to fetch the data – it is tried and tested. A sample is provided as follows:
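One possible approach from within T-SQL, assuming xp_cmdshell has been enabled (which carries its own security considerations), is to shell out to the wmic utility:

-- Caption, free space and size for each logical disk, fetched via WMI (through wmic).
EXEC xp_cmdshell 'wmic logicaldisk get caption,freespace,size';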

Easy peasy, right? Have at it and try tracking your disk space.

Thanks for reading! This has been another article in the Back to Basics series. You can read many more here. For other interesting articles about database space – check here or here.

Interview Trick Questions

Today, I am diverging from the more technical posts that I routinely share. Instead, as the title suggests, I want to dive into something a little more fun.

Anybody that has interviewed for a job has most likely run into the trick question. Some interviewers like to throw out multiple trick questions, all in an effort to trip up the candidate and get the candidate to doubt themselves. Sure, there can be some benefit to throwing out a trick question or four. One such benefit would be to see how the candidate performs under pressure (see them squirm).

The downside to throwing out trick questions, in my opinion, would be that you can turn a serious candidate into an uninterested candidate. So, when throwing out the tricks, tread carefully.

Let’s take a look at a candidate interview trick question. This is a more technical question and is designed to make you think a little bit. Before reading on to see the answer, I implore you to try to answer the question for yourself legitimately.

How can you insert data into two tables using a single statement without the use of triggers, service broker or some other behind-the-scenes feature?

Are you thinking about it?

Do you have your answer yet?

Now that you have your answer, go ahead and continue reading.

Is your answer to this question something along the lines of “You can’t do that and this is just a trick question”?

Well, honestly, it is a bit of a trick question. But I assure you, you can certainly perform an insert into multiple tables from a single statement. Here is one such setup that demonstrates how you can do this:
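A hedged sketch of that kind of setup, with illustrative table names; the single INSERT statement lands rows in both tables:

-- Two illustrative tables: a primary table and an audit table to capture what was inserted.
CREATE TABLE dbo.PrimaryTable (
      ID INT IDENTITY(1, 1) PRIMARY KEY
    , SomeValue VARCHAR(50)
);
CREATE TABLE dbo.AuditTable (
      ID INT
    , SomeValue VARCHAR(50)
    , InsertedDate DATETIME DEFAULT GETDATE()
);
GO

-- One statement, two tables: the OUTPUT clause routes the inserted rows into the audit table.
INSERT INTO dbo.PrimaryTable (SomeValue)
OUTPUT inserted.ID, inserted.SomeValue
    INTO dbo.AuditTable (ID, SomeValue)
VALUES ('First row'), ('Second row');
GO

-- Verify that both tables received the rows.
SELECT * FROM dbo.PrimaryTable;
SELECT * FROM dbo.AuditTable;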

Do you see how I was able to perform that insert into multiple tables? The trick is in using the OUTPUT clause. This little feature in SQL Server can be of great use for things such as building multiple staging tables during an ETL process.

Here is that little trick again just to highlight it.
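From the sketch above, it is this part of the statement:

OUTPUT inserted.ID, inserted.SomeValue
    INTO dbo.AuditTable (ID, SomeValue)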

Conclusion

There are cases when an interview trick question is suitable. That is when the purported question is truly more technical than trick and is really trying to evaluate your depth and knowledge of SQL Server. The puzzle during the interview boils down to figuring out when it is a trick and when it might not be. Then from there, work your way through possible solutions. But don’t be afraid to admit when you haven’t got a clue. That will be far more impressive than trying to flim-flam the interviewer.

I invite you to share your trick questions in the comments.  Also, how did you solve this particular trick question?

 

Thanks for reading! This has been another article in the Back to Basics series. You can read many more here.

Defaults In msdb Database

Today is a day to discuss defaults. It started with the day being TSQL Tuesday and having a topic of “Say No to Defaults.” You can read more about that from the invite – here. I already participated in the party but did also want to discuss defaults a little bit more. That said, this article is not participating in the blog party. That would seem a bit silly.

While this post is not a part of the party, the defaults to be discussed are fairly important. I have seen severe consequences due to these defaults being ignored and not changed. So today, in addition to my earlier article (you can read it here), I implore you to make some fundamental changes to your production servers with regards to various defaults.

A Trio of msdb Defaults

There aren’t really that many defaults within the msdb database that must be changed, are there? I mean, seriously, beyond the defaults that are available to every database, what could possibly be unique to this database that could have a severe consequence?

I am so glad you asked!

The defaults in the msdb database are more about what is missing than what is actually there. By default, this database is missing quite a few things that could be deemed critical to your environment.

Let’s start with an easy one – Indexes

There are a few out there that may disagree, but the proof really is in the effect on performance for backup jobs and such. I have three indexes I like to put on every instance. I have seen the implementation of these indexes aid in improved job times as well as aid in reduced time to “clean” up the database.
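The exact trio is not listed here; a sketch of three commonly suggested msdb indexes that fit that description (they support backup queries and backup history cleanup) would be:

USE msdb;
GO
-- Indexes that support backup history queries and sp_delete_backuphistory cleanup.
CREATE NONCLUSTERED INDEX IX_backupset_media_set_id
    ON dbo.backupset (media_set_id);
CREATE NONCLUSTERED INDEX IX_backupset_backup_finish_date
    ON dbo.backupset (backup_finish_date)
    INCLUDE (media_set_id);
CREATE NONCLUSTERED INDEX IX_backupmediafamily_media_set_id
    ON dbo.backupmediafamily (media_set_id);
GO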

Easy enough. These indexes are very straightforward and pretty small in the grand scheme of things. But if these indexes can help improve performance by a factor of 10, then I am in favor of them (and I have seen that performance gain).

Now that we have some supporting indexes to help a bit with performance, we should take a look at the next item. This one can help with job performance as well as help with keeping the msdb database nice and trim.

Data Pruning

I have walked into client instances that had backup history dating all the way back to 2005 and included two to three full backups a day per database with quarter-hourly log backups. Oh, and this was for an instance containing well north of 200 databases. Can you say sluggish backups and sluggish msdb overall?

The fix is very easy! Not only do I recommend pruning the backup history, but also the job history, mail history and maintenance plan history (eew – if you use those things). Think about it – do you really need to know that Job XYZ ran successfully in 2006 and only took 15 seconds? This is 2015 and that kind of data is probably not pertinent at this point.

The pruning of this data is not enabled by default! You have to configure this for each of the servers under your purview. Luckily, this is easy to do!
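A sketch of that kind of pruning, using the cleanup procedures that ship in msdb; it would typically be wrapped in a scheduled Agent job:

USE msdb;
GO
-- Prune history older than the retention window; adjust @RetentionDays to your needs.
DECLARE @RetentionDays INT = 90
    , @OldestDate DATETIME;
SET @OldestDate = DATEADD(DAY, -1 * @RetentionDays, GETDATE());

EXEC dbo.sp_delete_backuphistory @oldest_date = @OldestDate;      -- backup history
EXEC dbo.sp_purge_jobhistory @oldest_date = @OldestDate;          -- agent job history
EXEC dbo.sysmail_delete_mailitems_sp @sent_before = @OldestDate;  -- database mail history
EXEC dbo.sp_maintplan_delete_log @oldest_time = @OldestDate;      -- maintenance plan history
GO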

If you use this code sample, be sure to adjust the number of days shown in the retention to match your specific needs.

Now we have addressed a couple of defaults in msdb that can impact your performance. We are tidying up the database and in a much happier state these days. There is one more default, though, that is really critical to your data’s well being. This one is set within the msdb database but it really is for all of your databases!

Configuring Alerts!

I’m not talking about just any alerts. There are some very specific alerts that really should be configured. These are the alerts that can help you intervene to minimize corruption.

If you haven’t faced a problem with corruption – you will. It is only a matter of time. Corruption happens. When it happens, the earlier one can intervene, usually the better the outcome. Every minute counts, so why not try to reduce that time as much as possible?

This one is not terribly difficult to implement. I happen to have a query ready to go for that as well. All that needs to be done is a minor adjustment to the alert email address:
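A sketch of that sort of alert setup (an operator plus alerts for errors 823, 824 and 825 and for severities 19 through 25); the operator name and email address are placeholders to adjust:

USE msdb;
GO
-- Placeholder operator; change the name and email address to match your environment.
EXEC dbo.sp_add_operator @name = N'DBA Team'
    , @enabled = 1
    , @email_address = N'dba.team@yourdomain.com';

-- One alert shown as the pattern: error 823 (read/write failure) notifying the operator by email.
EXEC dbo.sp_add_alert @name = N'Error 823 - Read/Write Failure'
    , @message_id = 823
    , @severity = 0
    , @enabled = 1
    , @delay_between_responses = 300
    , @include_event_description_in = 1;
EXEC dbo.sp_add_notification @alert_name = N'Error 823 - Read/Write Failure'
    , @operator_name = N'DBA Team'
    , @notification_method = 1;

-- Repeat the same pattern for errors 824 and 825, and for severities 19 through 25
-- (for a severity-based alert, set @message_id = 0 and @severity to the level).
GO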

Wrap

Wow! Now there are three quick defaults that must be changed on every server. These defaults will help improve performance as well as help you stay on top of things when they start to go south (corruption). With timely notifications, and better performance, your servers will be happier, healthier, and longer lasting.

Thanks for reading! This has been another article in the Back to Basics series. You can read many more here.

Synonyms in SQL Server – Good and Bad

When SQL Server 2005 was released, a nifty little feature called synonyms was included.  Despite being around since SQL Server 2005, I think this feature is often under-utilized or, worse, implemented in a very bad way.

Today I want to share a couple of examples.  We will take a look at examples of both good and bad implementations of synonyms.

First, let’s take a look at the syntax for creating a synonym.  Per BOL (and for the fun of it, here is the 2005 link).

So a sample implementation of a Synonym could be something like the following.
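A hedged stand-in, with illustrative names that match the environment-suffix naming standard discussed later in this article:

-- A synonym in the user database pointing at a table in an environment-suffixed staging database.
CREATE SYNONYM dbo.SalesStaging
FOR StageDB_DEV.dbo.SalesStaging;
GO

-- Once created, the synonym is referenced like any local table (assuming the target exists).
SELECT TOP (10) *
FROM dbo.SalesStaging;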

Before we delve into that sample synonym, let’s look at an example of a really bad implementation.

The BAD

While working with a client, I received a request to look into why a linked server query was failing.  (This isn’t necessarily the bad part – just hang in there for a bit.)  The message was something like this:

The OLE DB provider “SQLNCLI10” for linked server “blahblahblah” indicates that either the object has no columns or the current user does not have permissions on that object.

The error message seems at least semi-descriptive and gives a starting point.  In this case, I decided to verify the linked server was created properly, verified that the permissions were done properly and even tested the linked server.  On the source (linked) server and the destination server (let’s call the source server ServerA and the destination server ServerB), I verified that permissions were in place for each database to be touched.  Still no dice!

Well, let’s go take a look and see if that referenced table actually exists.  It did not!  Does it exist as a view?  It did not!  Alas, the table existed as a synonym.  This is where it gets wonky.  In looking at the definition of the synonym, I found that the table defined in the synonym had a linked server table as its source.  To top things off, the linked server was back on the originating server that was coming across the link in the first place.  So yes, ServerB initiated a query against ServerA to pull data back to ServerB.  But the data needed (as defined by the vendor) was available on ServerA – supposedly.  Reality had that data actually sitting on ServerB the whole time.

At any rate, thanks to having a synonym for each and every table sitting on ServerA that referenced a table across a linked server on ServerB, we had mass confusion.  In the end, the query was far simpler to execute by just pulling it from the originating query server (ServerB).

This implementation of a synonym was not the best.  All it did was cause confusion, create documentation inaccuracies and delay the developer from accomplishing her task.  Do you really need 1000s of synonyms in your database?  Do you need all of them going across a linked server?  If you do, did you ever think about the potential for performance issues?  (The vendor in this case used those linked servers and synonyms to perform a data conversion that took 36 hrs each time for a rather small dataset – ouch!!!!).

On the other Hand

Imagine, if you will, two databases sitting on the same box.  One database will be for your user data, and the other for “staging” data for processes such as those related to ETL.  Imagine further that, for some inane reason, the naming standard of your databases must include the suffix denoting the environment of the database.

Now picture an SSIS package that must utilize tables from both databases in data sources somehow.  At least one of the tables has millions of records.  And both of the tables have nearly 100 columns.  Even just pulling in the minimum amount of data using a query from each source can cause memory issues.  Two separate data sources means you will likely have a sort transformation (for each source) as well as a join transformation.

Trying to reduce the amount of data in the larger table source could be done via TSQL.  But to reference a database in one environment versus another environment means a code change with each deployment (due to the TSQL – think three part naming).  So you have been hampered by the environment.  Or have you?

By using a synonym in this situation, the data can be joined in a tsql data source by referencing that synonym.  Let’s look back at the sample synonym posted earlier in this article.

You can see that this synonym follows the same sort of naming standards as was just laid out in the preceding scenario.  If I create a synonym with the same name in each environment, referencing the appropriately named database for that environment, I have just opened up a performance tweak for my SSIS data source.

By implementing this slight tweak, I have been able to gain a 10x improvement in package performance.  I am now requiring SSIS to ingest fewer records and thus chew up less memory.  Fewer transformations are required, and the package can get straight to the transformations that matter rather than tinkering around with the ones needed just to get the data into a usable state.

There are other benefits within SSIS to using synonyms for databases on the same server as well.  Especially when dealing with this kind of naming standard that requires the databases to be named differently in each environment.

Conclusion

How you use a synonym can be a huge asset or it can be a significant dampener to performance.  There are benefits and uses for these nifty little things.  Check them out and let us know how you have been able to put synonyms to use to benefit you.

Check out some of these other articles on synonyms here and here.

Maintenance Plan Owner – Back to Basics

We all inherit things from time to time through our profession.  Sometimes we inherit some good things, sometimes we inherit some things that are not so good.  Other times we inherit some things that are just plain annoying.  Yet other times, we inherit things that may be annoying and we probably just haven’t discovered them yet.

Dizzying, I know.

Inheritance

Have you ever taken over a server that had several maintenance plans on it?  Have you ever really checked who the owner of those plans is?  Or, maybe you had a failing job relating to one of these maintenance plans and you changed the job owner, but did you really fix the root cause?  That could be one of those things that you inherited that could be annoying but you just don’t know it yet.

Step by Step

No this is not New Kids on the Block (I think I just threw up in my mouth thinking that).

Let’s create a generic maintenance plan and see what happens.

The first thing we do is navigate to Maintenance Plans under the Management menu in Management Studio.

 

Right-click the Maintenance Plans folder and select New Maintenance Plan… from the context menu.  This will prompt us with the following dialog box.

In this box, we can type a name for this Maintenance Plan that is to be created.  I chose MaintPlanOwner, since that is the topic of this article.

After clicking OK on this dialog box, you will be presented with a blank canvas with which to design your maintenance plan.  I have chosen a simple task for the purposes of this article.

I will create a subplan named Statistics and add the Update Statistics task to the canvas.

You can see this illustrated to the left.  I chose to update the statistics on all databases and left all other options at their defaults – for the simplicity of this article.

At this point, the only thing left to do is to save this Maintenance Plan.  Once the plan is saved, then we can move on to the next step – some fun with TSQL.

 

 

Fun with TSQL

This is the stage of the article where we get to play with TSQL and investigate at a high level the Maintenance Plan we just created.

Within the msdb database, we have some system tables that store information about SSIS packages, DTS packages, and Maintenance Plans.  We will be investigating from a SQL 2008 and SQL 2005 standpoint (it changed in 2005 and then again in 2008).

In SQL 2005, we can query the sysdtspackages90 and sysdtspackagefolders90 to gain insight into who owns these Maintenance Plans.  In SQL 2008 and up, we can query sysssispackages and sysssispackagefolders to gain the same insight.  These system tables are within the msdb database.

In SQL Server, we can use the following to find that I am now the owner of that maintenance plan we just created.
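A sketch along those lines, which chooses the correct catalog view by version through dynamic SQL and joins out to sys.server_principals for the owner name (the folder tables are left out of the sketch for brevity):

DECLARE @SQL NVARCHAR(MAX);

-- Pick the metadata table based on the major version of SQL Server.
IF CONVERT(INT, PARSENAME(CONVERT(NVARCHAR(32), SERVERPROPERTY('ProductVersion')), 4)) >= 10
    SET @SQL = N'SELECT p.name AS PlanName, sp.name AS PlanOwner
        FROM msdb.dbo.sysssispackages p
            INNER JOIN sys.server_principals sp ON sp.sid = p.ownersid
        WHERE p.name = N''MaintPlanOwner'';';   -- SQL 2008 and up
ELSE
    SET @SQL = N'SELECT p.name AS PlanName, sp.name AS PlanOwner
        FROM msdb.dbo.sysdtspackages90 p
            INNER JOIN sys.server_principals sp ON sp.sid = p.ownersid
        WHERE p.name = N''MaintPlanOwner'';';   -- SQL 2005

EXEC sys.sp_executesql @SQL;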

Notice that in this query, I delve out to the sys.server_principals catalog view.  I did this to retrieve the name of the owner of the package as found in the sysdtspackages90 or sysssispackages table, respective to the version of SQL Server. I am also running a dynamic SQL query to support both views, dependent on the version of SQL Server.  I figured this might be a tad more helpful than the previous version here. This query would yield the following result set for that new “Maintenance Plan” that was just created.

Caveat

Let’s assume that this package is scheduled via a SQL Agent job on a production server.  I then get moved to a different department and no longer have permissions on this particular production server.  The job will start failing due to the principal not having access.  One fix would be to change the owner of the job.

That will work.  However, there is a problem with that fix.  As soon as somebody opens and saves the Maintenance Plan, the owner of the job will revert back to the owner of the Maintenance Plan.  When that happens, then the job will fail again.

A permanent fix is needed.  The permanent fix is to change the owner of the Maintenance Plan.  The following will change the owner to ‘sa’ for both SQL 2005 and SQL 2008 (and up).

SQL 2005
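A sketch of that change for SQL 2005 (this updates msdb system tables directly, so test carefully; the plan name is the one used earlier):

UPDATE msdb.dbo.sysdtspackages90
SET ownersid = SUSER_SID('sa')   -- sid of the new owner
WHERE name = 'MaintPlanOwner';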

SQL 2008
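And the equivalent sketch for SQL 2008 and up:

UPDATE msdb.dbo.sysssispackages
SET ownersid = SUSER_SID('sa')   -- sid of the new owner
WHERE name = 'MaintPlanOwner';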

Now if you run the code used earlier to investigate, you will find that the owner has indeed changed.  The results of that query should be similar to the following.

There you have it.  No more hair tugging over something as benign as the owner of a Maintenance Plan.  This is one of those things that should be looked at as soon as you inherit a new server.


This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Cannot Use the Special Principal – Back to Basics

I recently had a client call me out of the blue because he happened to be getting an error while trying to add a user to a database role. The error he was getting was “Cannot use the special principal ‘dbo’.”

This error has probably cropped up on me more than a few times. And on more than a few occasions, I have forgotten about the previous experiences. Some of that is because the fix is rather easy and after a few times seeing it, muscle memory takes over and you just fix it without thinking about it too much.

Until you get to that muscle memory moment though, you may flounder a bit trying this and that and failing then proceeding on to a level of frustration that has you losing precious hair.

As luck would have it, this is an article about security and principals and is similar in nature to some other articles I wrote about some fundamental misconceptions about permissions here and here.

I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Meet Prince Apole and Rolle…

Adding a user to the db_datareader database fixed role is a pretty simple task. Most of us can likely do that in our sleep. Even using the GUI is usually pretty reliable for that. Every now and again though, somebody has decided to get tricky on us. Maybe a mistake was made somewhere in a setting on the server and nobody has caught it because nothing was “broken” – until it was.

In the aforementioned case, I was asked to help resolve the issue and I found that there was a problem in how the database owner was set. Not only was it a problem in the current database but in 12 other databases on the same server. The systems admin was at wits’ end. He was dealing with something that was just not in his knowledge-base yet. I remember being in the same boat – so no big deal there. We talked about some of the things he had tried and how none of it was working. I am going to recreate the same basic scenario along with some of the attempted fixes in this article.

First, we need to create a database (better to break a database designed to be broken than an existing one already used for something else).
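A sketch of that setup (the DummyDB name comes from later in the walkthrough; the windows login must exist in your environment or be swapped for one that does):

-- Throwaway database for breaking, plus the windows login used throughout this walkthrough.
CREATE DATABASE DummyDB;
GO
CREATE LOGIN [mydomain\svc_dummy] FROM WINDOWS;
GO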

That is pretty straight forward – the database will be created with the data files in the default directories on your SQL Server instance. In addition, a login called mydomain\svc_dummy will be created as a windows login.

Now let’s try to set the owner of the database and then add the svc_dummy account to the datareader role.
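A sketch of that two-part script: the first part sets the owner (which silently maps the login to dbo), and the second part is what the GUI scripts out for the role change.

USE DummyDB;
GO
-- Part one: make the windows login the database owner.
ALTER AUTHORIZATION ON DATABASE::DummyDB TO [mydomain\svc_dummy];
GO
-- Part two: the role addition as scripted by the GUI, which references dbo.
ALTER ROLE db_datareader ADD MEMBER [dbo];
GO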

There is a point of interest here. I said I was going to add svc_dummy to the datareader role – not dbo. Well, I set the database owner in the preceding step to svc_dummy so it basically became dbo. When I try to perform the role addition in the GUI and then script the change, this is the script that is produced. I will show why in a few moments.

The execution of the second part of the script results in the following:

Msg 15405, Level 16, State 1, Line 18

Cannot use the special principal ‘dbo’.

That is obviously not going to work. Let’s try fixing the script and add the svc_dummy principal instead of dbo.
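A sketch of the fixed attempt, naming the windows principal directly:

ALTER ROLE db_datareader ADD MEMBER [mydomain\svc_dummy];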

Unfortunately, this results in the following:

Msg 15151, Level 16, State 1, Line 22

Cannot add the principal ‘mydomain\svc_dummy’, because it does not exist or you do not have permission.

Well, maybe the problem is because the user doesn’t exist then? Let’s try to create the user and see what happens.
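A sketch of that attempt:

CREATE USER [mydomain\svc_dummy] FOR LOGIN [mydomain\svc_dummy];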

Now we should see this message:

Msg 15063, Level 16, State 1, Line 32

The login already has an account under a different user name.

Oy vey. We seem to be going in circles. Nothing is working. The user is there but not really there. Let’s try to drop the user and just try to clean things up and start over.
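A sketch of that cleanup attempt, which targets the user the GUI keeps scripting:

DROP USER [dbo];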

I hope you see the problem with this one. Trying to drop dbo. I dunno but we should see an error here – and we do get an error.

Msg 15150, Level 16, State 1, Line 27

Cannot drop the user ‘dbo’.

Let’s fix the user then and try to drop the svc_dummy user instead of dbo.
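A sketch of the corrected drop:

DROP USER [mydomain\svc_dummy];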

Which in turn generates yet another error.

Msg 15151, Level 16, State 1, Line 52

Cannot drop the user ‘mydomain\svc_dummy’, because it does not exist or you do not have permission.

If I can’t resolve the problem by changing the user in the database, maybe I can just blow it out of the water by dropping the server login.
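A sketch of that attempt at the server level:

DROP LOGIN [mydomain\svc_dummy];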

Yet another failure message will ensue. This time the message is:

Msg 15174, Level 16, State 1, Line 55

Login ‘mydomain\svc_dummy’ owns one or more database(s). Change the owner of the database(s) before dropping the login.

So far we have been able to skirt around the problem and generate six different error messages. The last one kind of gives us the best information on what we could do to resolve the issue. The login owns a database and therefore, we need to undo that ownership. Before we do that, let’s take a look at the database principal ‘dbo’.

We already know that svc_dummy is mapped to a user in the DummyDB database. We also know that we cannot add the svc_dummy user because of that prior mapping. We have also learned that scripting the permissions change for the svc_dummy login from the GUI produces a script that references the user ‘dbo’. Due to this, let’s look in the sys.database_principals view at the dbo user and see what it tells us.
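A sketch of that check:

USE DummyDB;
GO
-- Inspect the dbo database principal; note the principal type it reports.
SELECT name
    , type_desc
    , sid
FROM sys.database_principals
WHERE name = N'dbo';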

See how the dbo database principal says it is mapped to a windows account type? With this in mind, let’s join to the sys.server_principals and see what windows account is mapped to the dbo database user.
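A sketch of that join:

-- Match the dbo database principal to its server principal by sid to reveal the mapped windows login.
SELECT dp.name AS DatabaseUser
    , dp.type_desc AS DatabaseUserType
    , sp.name AS MappedLogin
FROM sys.database_principals dp
    INNER JOIN sys.server_principals sp
        ON sp.sid = dp.sid
WHERE dp.name = N'dbo';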

Now we see a bit more clearly. Combined with the error messages and the principal information for both the login and the user, we have a better view of the puzzle now. Changing the database owner indeed mapped the windows account to dbo for us and is now restricting what we can do when trying to manage permissions for the windows login in the database. From here, we can easily fix the issue by changing the database owner, creating a user mapped to the windows login and then adding that principal to the datareader role.
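A hedged sketch of that fix (handing ownership to sa here purely for the example):

-- Give the database a different owner, then map the login to its own user and grant the role.
ALTER AUTHORIZATION ON DATABASE::DummyDB TO [sa];
GO
USE DummyDB;
GO
CREATE USER [mydomain\svc_dummy] FOR LOGIN [mydomain\svc_dummy];
ALTER ROLE db_datareader ADD MEMBER [mydomain\svc_dummy];
GO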

And if we run that script for svc_dummy we will see a successful execution as shown here.

The Wrap

In this article I took a rather long route to a simple fix. It’s easy to try each of the steps I showed in this article thinking it will help. It isn’t illogical to try some of those steps. They just don’t work unfortunately. In the end, getting to know the settings in the database and what the errors are really trying to get at is most helpful. Sometimes, it just takes a few more steps to get to the real meaning of the error.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Quickly Change SQL Job Owners

It is not unusual to find a server where some random user created a bunch of jobs to be run by SQL Agent. Sometimes, the user creating the job(s) sets themselves as the owner of the job. There are certain cases where this behavior is hard to avoid, like when creating a maintenance plan.

And of course, there are times when the user just doesn’t know any better. There is, of course, the rare occasion when setting the job owner to be oneself makes the most sense – but that is few and far between in the grand scheme. Usually, you will want a non-expiring account such as a service account or a principal without “logon” permissions to be the owner.

The primary reason is simple – humans have an expiration date for every job they will ever have. When that expiration occurs, you may end up with any number of unwanted side effects. Unwanted side effects are exactly what we try to avoid in the jobs we run via SQL Agent.

No Expiration Date

There are two basic means to change the owner of every job on your server. You can open each job one by one and set the owner to an acceptable principal; this method is rather tedious and you will be fighting off the boredom if you have a few hundred jobs on the server. Or, as the alternative, you can change the job owners group by group (set-based theory). This second method can be far less tedious and far more efficient, and it is by far my preferred method. Let’s take a look at how to make all of these changes in groups.

There are three basic sections to this script. First I fetch what should be changed, then I make the change, and lastly I verify the change. If the change doesn’t look right, then I can rollback the change. If the change is what I expected, then I can commit the change. Those are the broad strokes.

At a more detailed glimpse, I have set up a few variables for what I want to change and what the new job owner should be, and then I fetch the sid of that new job owner. In my example, I am setting everything to ‘sa’. Why? Because it is easy for the sake of the example in the article – nothing more!

Since sometimes the owner of the job may only have access to the SQL instance via a Domain Group, I also take advantage of a couple of functions to double-check that it is the correct account. The functions I am using are SUSER_SID() and SUSER_SNAME().
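A sketch that follows the same fetch, change, verify pattern (placeholder owner names; the direct update of msdb.dbo.sysjobs should be tested before use, and sp_update_job is the supported per-job alternative):

DECLARE @NewOwner SYSNAME = N'sa'
    , @OldOwner SYSNAME = N'SomeDomain\RandomUser'   -- placeholder for the departing owner
    , @NewOwnerSid VARBINARY(85);

SELECT @NewOwnerSid = SUSER_SID(@NewOwner);

-- 1. Fetch: what would change?
SELECT j.name AS JobName
    , SUSER_SNAME(j.owner_sid) AS CurrentOwner
    , @NewOwner AS ProposedOwner
FROM msdb.dbo.sysjobs j
WHERE j.owner_sid = SUSER_SID(@OldOwner);

-- 2. Change: wrapped in a transaction so it can be rolled back if the verify step looks wrong.
BEGIN TRANSACTION;

UPDATE j
SET j.owner_sid = @NewOwnerSid
FROM msdb.dbo.sysjobs j
WHERE j.owner_sid = SUSER_SID(@OldOwner);

-- 3. Verify: re-check the owners before deciding.
SELECT j.name AS JobName
    , SUSER_SNAME(j.owner_sid) AS CurrentOwner
FROM msdb.dbo.sysjobs j;

-- Then run exactly one of the following by hand:
-- COMMIT TRANSACTION;
-- ROLLBACK TRANSACTION;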

When all is done as I am expecting, then I should see something similar to the following.

Since the change is what I expect, then at this point I would proceed with the commit transaction statement.

The Wrap

As you can see, making job ownership changes at group scale instead of one by one is pretty easy. This only takes a matter of seconds to run against hundreds of jobs. That same kind of task done one at a time could easily take more than 40 minutes. I am not sure I want to spend that much time on such an innocuous task. I hope you are now able to use what you have learned to improve your skills and become a rock-star DBA. ENJOY!

If you feel the need to read more about single-user mode, here is an article and another on the topic.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Use SSMS with a Different Windows Account – Back to Basics

One of the tasks I find myself doing on a fairly regular basis is running SSMS as a different Windows User. The two biggest use cases for this are: a) to test an account to prove that it is working (or not) and has the appropriate level of access, and b) to use SSMS to connect to a Domain SQL Server from a computer in a different domain (or not on the domain).

In addition to needing to do these tasks for myself, I find that I need to show somebody else how to do the same thing on a fairly consistent basis. Considering the finite keystrokes we all have (which I referenced here), it is time for me to “document” how to do this task.

I will cover two really easy and quick methods to perform this task. One from a command line and the other from the GUI. Both methods will involve a variation of the runas utility.

RUNAS

Let’s start with the easiest of the two methods. In this case, you will need to test a windows account (let’s call it a domain account) from a computer which is on the same domain. This requirement allows us to take advantage of the shortcuts from within the GUI to access the runas utility.

To access runas from Windows, first locate the icon for SSMS in the Start Menu, then right-click that icon as shown here.

After right clicking the icon, you will see a menu pop up on the screen. Select “Run as different user” from that menu. Once you have selected the appropriate “run as” option, a login prompt will appear as shown here.

Enter the appropriate credentials at the prompt and then SSMS will launch. In this case, I may want to test the account myidomain\domain.useracc. So, I merely need to enter the domain credentials for that account. A caveat here is that the account you are testing will need to have the necessary permissions to “logon” to the workstation in order to launch the app – unlike the second method.

CMD Line

This second method has a few advantages over the GUI method with the biggest advantage being that you can use this method from any machine on the domain or even a machine not joined to the domain (so long as you have the ability to authenticate to the domain). And of course the additional advantage that the account you are testing does not require “logon” permissions on the machine you are using.

Let’s start with the basic command.
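The general shape of the command uses the runas utility with the /netonly switch (which is what allows the machine-not-on-the-domain scenario described above); the account name matches the earlier example and the path to Ssms.exe is a placeholder for wherever SSMS is installed:

runas /netonly /user:myidomain\domain.useracc "C:\Path\To\Ssms.exe"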

I can run that from a command line, or I can throw that into a desktop shortcut (the shortcut method is much more convenient). After I hit “enter” from the command line, I am prompted for a password for the account to be used for that session. Here’s an example of how that would look.

You won’t be able to see the password being typed (don’t fat finger the password 😉 ), but after you enter it successfully and press “enter” then you will see SSMS start to launch. After a successful SSMS launch, you should see something similar to the following:

I have a few things highlighted here of interest. First, in the red box, you will note that the user shown as connected to the server is my “local” test box account instead of the domain account. However, if I verify the authenticated account, I can see that the domain account is indeed accessing the SomeServer SQL Server (as demonstrated by the green box on the right).

The Wrap

Sometimes what may be ridiculously easy for some of us may be mind-blowing to others. Sometimes we may use what we think are common terms only to see eyes start to glaze over and roll to the backs of people's heads. This just so happens to be one of those cases where launching an app as a different principal may be entirely new to the intended audience. In that vein, it is worthwhile to take a step back and “document” how the task can be accomplished.

Runas should be a very common tool in the toolbox of all IT professionals – not just Data Professionals. Learning how to test different accounts is essential to being an effective and efficient professional that can provide solid results.

If you feel the need to read more about single-user mode, here is an article and another on the topic.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.
