Disk Space and SQL Server

Disk Space in SQL Server

One of the frequently required job functions of the database administrator is to track disk space consumption. Whether this requirement comes from management or from a learning opportunity after a production outage, the need exists.

As a hard-working DBA, you want to make sure you hit all of the notes that make management sing your praises. Knowing just when the databases may fill the drives, and preventing a production outage, happens to be one of those sharp notes that could result in a raise and management singing hallelujah. The problem is, how do you do it from within SQL Server? You are just a DBA after all, and the disk system is not your domain, right?

Trying to figure it out, you come across a pretty cool dynamic management function within SQL Server: sys.dm_os_volume_stats. Bonus! This is an excellent discovery, right? Let’s see just how it might work. First, a sample query:
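Since the original sample is not shown here, the following is a minimal sketch of the kind of query involved. It joins sys.master_files to the DMF and uses DISTINCT to collapse the per-file rows down to one row per volume.

SELECT DISTINCT
       vs.volume_mount_point AS Drive,
       vs.logical_volume_name AS VolumeName,
       vs.total_bytes / 1073741824.0 AS TotalGB,
       vs.available_bytes / 1073741824.0 AS FreeSpaceGB
FROM sys.master_files mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) vs
ORDER BY Drive;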

If I run that on my local system, I might end up with something that looks like the following:

Looking at the image, you may be wondering why I have highlighted a couple of things. You may also be wondering why I used the word “might” in the previous paragraph. The reasoning will become more evident as we progress. For now, you resolve to continue testing the script, so you execute it again and end up with something that may look like the following (for the same server):

Whoa! What just happened there? Why are there two listings for the C: drive? Why does each register a different value for the FreeSpace column? In addition, without any additional usage on the C: drive (as verified through other tools), the FreeSpace is changing between executions as well as within the same execution. This is problematic, so you continue testing:

And yet again!

This can’t be correct, can it? Just for giggles let’s modify it just a bit to see if there are any additional clues. Using the following changed script, hopefully a clue will help shed some light on this:
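The exact modification from the original post is not shown here; one possible variation (a sketch) is to drop the DISTINCT and expose the per-file detail behind each volume row, which makes the duplication easier to see:

SELECT UPPER(vs.volume_mount_point) AS Drive,
       vs.logical_volume_name AS VolumeName,
       vs.available_bytes / 1073741824.0 AS FreeSpaceGB,
       DB_NAME(mf.database_id) AS DBName,
       mf.name AS FileName
FROM sys.master_files mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) vs
ORDER BY Drive, DBName, FileName;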

This script yields the following potential results:

Look at the different highlighted areas! There are three different values for FreeSpace for the C: drive in this particular execution. The case of the drive letter plays no role in whether the value is recorded differently. This seems to be more of a bug within the dynamic management function. From execution to execution, using this particular method, one could end up with duplicate entries that hold distinct values. The sort order of the results could also differ between executions (though we could fix that).

All of these tests were run on my local machine, and I really do have only one C: drive. I should never receive multiple entries back for any drive. If you use this particular DMF to track space usage, it could be somewhat problematic when the duplicate drive data pops up. How do we get around it, you ask? Here is another example that, in my testing, has not yet produced this duplication:
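The original script is not reproduced here, but a sketch along the lines described in the next paragraph (CTEs plus ROW_NUMBER over the drive letter) looks something like this:

WITH VolStats AS (
    SELECT DISTINCT
           vs.volume_mount_point AS Drive,
           vs.total_bytes / 1073741824.0 AS TotalGB,
           vs.available_bytes / 1073741824.0 AS FreeSpaceGB
    FROM sys.master_files mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) vs
), Deduped AS (
    SELECT Drive, TotalGB, FreeSpaceGB,
           ROW_NUMBER() OVER (PARTITION BY UPPER(Drive) ORDER BY FreeSpaceGB DESC) AS RowNum
    FROM VolStats
)
SELECT Drive, TotalGB, FreeSpaceGB
FROM Deduped
WHERE RowNum = 1
ORDER BY Drive;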

This version of the script is not terribly more complex, but it has proven to be more reliable. You can see I used some CTEs to provide a little trickery and ensure that I limit my results. What if it is a mount point or a non-standard drive letter? I have not tested that. Let me know how that goes. As you can see, I am restricting the drive selection by using the row_number function against the drive letter.

For an alternative, reliable method to find your disk space consumption, I would recommend something different because it is tried and tested: a WMI call to fetch the data. A sample is provided as follows:
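The original samples are not included here; a minimal PowerShell sketch of such a WMI call (run it on, or pointed at, the server in question) could look like this:

Get-WmiObject -Class Win32_Volume -ComputerName . |
    Where-Object { $_.DriveType -eq 3 } |
    Select-Object Name, Label,
        @{Name = 'CapacityGB';  Expression = { [math]::Round($_.Capacity / 1GB, 2) }},
        @{Name = 'FreeSpaceGB'; Expression = { [math]::Round($_.FreeSpace / 1GB, 2) }}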

Easy peasy, right? Have at it and try tracking your disk space.

Thanks for reading! This has been another article in the Back to Basics series. You can read many more here. For other interesting articles about database space – check here or here.

Interview Trick Questions

Today, I am diverging from the more technical posts that I routinely share. Instead, as the title suggests, I want to dive into something a little more fun.

Anybody who has interviewed for a job has most likely run into the trick question. Some interviewers like to throw out multiple trick questions, all in an effort to trip up the candidate and get the candidate to doubt themselves. Sure, there can be some benefit to throwing out a trick question or four. One such benefit would be to see how the candidate performs under pressure (see them squirm).

The downside to throwing out trick questions, in my opinion, would be that you can turn a serious candidate into an uninterested candidate. So, when throwing out the tricks, tread carefully.

Let’s take a look at a candidate for an interview trick question. This is a more technical question and is designed to make you think a little bit. Before reading on to see the answer, I implore you to try to answer the question for yourself legitimately.

How can you insert data into two tables using a single statement without the use of triggers, service broker or some other behind-the-scenes feature?

Are you thinking about it?

Do you have your answer yet?

Now that you have your answer, go ahead and continue reading.

Is your answer to this question something along the lines of “You can’t do that and this is just a trick question”?

Well, honestly, it is a bit of a trick question. But I assure you, you can certainly perform an insert into multiple tables from a single statement. Here is one such setup that demonstrates how you can do this:
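The original setup script is not shown here, so the following is a minimal sketch with made-up table names. The second table is populated by the OUTPUT clause of the single INSERT statement.

CREATE TABLE dbo.Orders (
    OrderID   int IDENTITY(1,1) PRIMARY KEY,
    OrderDate datetime NOT NULL
);

CREATE TABLE dbo.OrderAudit (
    OrderID   int NOT NULL,
    OrderDate datetime NOT NULL,
    AuditDate datetime NOT NULL DEFAULT (GETDATE())
);

-- One INSERT statement, two tables populated
INSERT INTO dbo.Orders (OrderDate)
OUTPUT INSERTED.OrderID, INSERTED.OrderDate
    INTO dbo.OrderAudit (OrderID, OrderDate)
VALUES (GETDATE());

SELECT * FROM dbo.Orders;
SELECT * FROM dbo.OrderAudit;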

Do you see how I was able to perform that insert into multiple tables? The trick is in using the OUTPUT clause. This little feature in SQL Server can be of great use for things such as building multiple staging tables during an ETL process.

Here is that little trick again just to highlight it.
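Using the hypothetical tables from the sketch above, the relevant bit is just the OUTPUT … INTO portion of the INSERT:

OUTPUT INSERTED.OrderID, INSERTED.OrderDate
    INTO dbo.OrderAudit (OrderID, OrderDate)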

Conclusion

There are cases when an interview trick question is suitable. That is when the purported trick question is truly more technical than trick and is really trying to evaluate your depth and knowledge of SQL Server. The puzzle during the interview boils down to figuring out when it is a trick and when it might not be. Then, from there, work your way through possible solutions. But don’t be afraid to admit when you haven’t got a clue. That will be far more impressive than trying to flim-flam the interviewer.

I invite you to share your trick questions in the comments.  Also, how did you solve this particular trick question?

 

Thanks for reading! This has been another article in the Back to Basics series. You can read many more here.

Defaults In msdb Database

Today is a day to discuss defaults. It started with the day being TSQL Tuesday and having a topic of “Say No to Defaults.” You can read more about that from the invite – here. I already participated in the party but did also want to discuss defaults a little bit more. That said, this article is not participating in the blog party. That would seem a bit silly.

While this post is not a part of the party, the defaults to be discussed are fairly important. I have seen severe consequences when these defaults were ignored and left unchanged. So today, in addition to my earlier article (you can read it here), I implore you to make some fundamental changes to your production servers with regards to various defaults.

A Trio of msdb Defaults

There aren’t really that many defaults within the msdb database that must be changed, are there? I mean, seriously, beyond the defaults that are available to every database, what could possibly be unique to this database that could have a severe consequence?

I am so glad you asked!

The defaults in the msdb database are more about what is missing than what is actually there. By default, this database is missing quite a few things that could be deemed critical to your environment.

Let’s start with an easy one – Indexes

There are a few people out there who may disagree, but the proof really is in the effect on performance for backup jobs and the like. There are three indexes I like to add to msdb on every instance. I have seen the implementation of these indexes improve job times as well as reduce the time needed to “clean up” the database.
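The exact indexes from the original post are not shown here; the following is a representative sketch of the sort of supporting indexes on the backup history tables (review them against your own workload before creating them):

USE msdb;
GO
CREATE NONCLUSTERED INDEX IX_backupset_media_set_id
    ON dbo.backupset (media_set_id);
GO
CREATE NONCLUSTERED INDEX IX_backupset_backup_finish_date
    ON dbo.backupset (backup_finish_date)
    INCLUDE (media_set_id, database_name);
GO
CREATE NONCLUSTERED INDEX IX_backupfile_backup_set_id
    ON dbo.backupfile (backup_set_id);
GO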

Easy enough. These indexes are very straightforward and pretty small in the grand scheme of things. But if an index can help improve performance by a factor of 10, then I am in favor of it (and I have seen that performance gain).

Now that we have some supporting indexes to help a bit with performance, we should take a look at the next item. This one can help with job performance as well as help with keeping the msdb database nice and trim.

Data Pruning

I have walked into client instances that had backup history dating all the way back to 2005, including two to three full backups a day per database with quarter-hourly log backups. Oh, and this was for an instance containing well north of 200 databases. Can you say sluggish backups and a sluggish msdb overall?

The fix is very easy! Not only do I recommend pruning the backup history, but also the job history, mail history and maintenance plan history (eew – if you use those things). Think about it – do you really need to know that Job XYZ ran successfully in 2006 and only took 15 seconds? This is 2015 and that kind of data is probably not pertinent at this point.

The pruning of this data is not enabled by default! You have to configure this for each of the servers under your purview. Luckily, this is easy to do!
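A sketch of the pruning, using the documented msdb cleanup procedures and an assumed 90-day retention:

USE msdb;
GO
DECLARE @OldestDate datetime = DATEADD(DAY, -90, GETDATE());  -- adjust retention as needed

EXEC dbo.sp_delete_backuphistory @oldest_date = @OldestDate;      -- backup and restore history
EXEC dbo.sp_purge_jobhistory @oldest_date = @OldestDate;          -- agent job history
EXEC dbo.sysmail_delete_mailitems_sp @sent_before = @OldestDate;  -- database mail history
EXEC dbo.sp_maintplan_delete_log @oldest_time = @OldestDate;      -- maintenance plan history (eew)
GO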

If you use this code sample, be sure to adjust the number of days shown in the retention to match your specific needs.

Now we have addressed a couple of defaults in msdb that can impact your performance. We are tidying up the database and in a much happier state these days. There is one more default, though, that is really critical to your data’s well being. This one is set within the msdb database but it really is for all of your databases!

Configuring Alerts!

I’m not talking about just any alerts. There are some very specific alerts that really should be configured. These are the alerts that can help you intervene early to minimize the impact of corruption.

If you haven’t faced a problem with corruption – you will. It is only a matter of time. Corruption happens. When it happens, the earlier one can intervene, usually the better the outcome. Every minute counts, so why not try to reduce that time as much as possible?

This one is not terribly difficult to implement. I happen to have a query ready to go for that as well. All that needs to be done is a minor adjustment to the alert email address:
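The original script is not reproduced here; a sketch of the idea follows. It creates an operator (the name and email address are placeholders) and a couple of the alerts – the full set would cover severities 19 through 25 plus errors 823, 824, and 825.

USE msdb;
GO
EXEC dbo.sp_add_operator
     @name = N'DBA Team',                          -- placeholder operator name
     @enabled = 1,
     @email_address = N'dba.team@yourdomain.com';  -- adjust the email address

EXEC dbo.sp_add_alert @name = N'Severity 023 - Integrity Suspect',
     @severity = 23, @include_event_description_in = 1;
EXEC dbo.sp_add_notification @alert_name = N'Severity 023 - Integrity Suspect',
     @operator_name = N'DBA Team', @notification_method = 1;

EXEC dbo.sp_add_alert @name = N'Error 824 - Logical Consistency IO Error',
     @message_id = 824, @include_event_description_in = 1;
EXEC dbo.sp_add_notification @alert_name = N'Error 824 - Logical Consistency IO Error',
     @operator_name = N'DBA Team', @notification_method = 1;
GO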

Wrap

Wow! Now there are three quick defaults that must be changed on every server. These defaults will help improve performance as well as help you stay on top of things when they start to go south (corruption). With timely notifications, and better performance, your servers will be happier, healthier, and longer lasting.

Thanks for reading! This has been another article in the Back to Basics series. You can read many more here.

Synonyms in SQL Server – Good and Bad

When SQL Server 2005 was released, a nifty little feature was included called synonyms.  Despite being around since then, I think this feature is often under-utilized or, more importantly, implemented in a very bad way.

Today I want to share a couple of examples.  We will take a look at examples of both good and bad implementations of synonyms.

First, let’s take a look at the syntax for creating a synonym.  Per BOL (and for the fun of it, here is the 2005 link).
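Roughly, the documented syntax looks like this:

CREATE SYNONYM [ schema_name_1. ] synonym_name FOR <object>

<object> ::=
{
    [ server_name.[ database_name ].[ schema_name_2 ].
    | database_name.[ schema_name_2 ].
    | schema_name_2. ] object_name
}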

So a sample implementation of a Synonym could be something like the following.
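The original sample is not shown here; a sketch with hypothetical names (note the environment suffix on the database name, which matters for the scenario later in this article):

CREATE SYNONYM dbo.SynSalesStage
    FOR StagingDB_DEV.dbo.SalesStage;
GO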

Before we delve into that sample synonym, let’s look at an example of a really bad implementation.

The BAD

While working with a client, I received a request to look into why a linked server query was failing.  (This isn’t necessarily the bad part; just hang in there for a bit.)  The message was something like this:

The OLE DB provider “SQLNCLI10” for linked server “blahblahblah” indicates that either the object has no columns or the current user does not have permissions on that object.

The error message seems at least semi-descriptive and gives a starting point.  In this case, I decided to verify the linked server was created properly, verified that the permissions were set up properly, and even tested the linked server.  On both the source (linked) server and the destination server (let’s call the source server ServerA and the destination server ServerB), I verified that permissions were in place for each database to be touched.  Still no dice!

Well, let’s go take a look and see if that referenced table actually exists.  It did not!  Does it exist as a view?  It did not!  Alas, the table existed as a synonym.  This is where it gets wonky.  In looking at the definition of the synonym, I found that the table defined in the synonym had a linked server table as its source.  To top things off, the linked server pointed back to the originating server that was coming across the link in the first place.  So yes, ServerB initiated a query against ServerA only to pull data back from ServerB.  The data needed (as defined by the vendor) was supposedly available on ServerA.  In reality, that data had been sitting on ServerB the whole time.

At any rate, thanks to having a synonym for each and every table sitting on ServerA that referenced a table across a linked server on ServerB, we had mass confusion.  In the end, the query was far simpler to execute by just pulling the data directly from the originating server (ServerB).

This implementation of a synonym was not the best.  All it did was cause confusion, create documentation inaccuracies and delay the developer from accomplishing her task.  Do you really need 1000s of synonyms in your database?  Do you need all of them going across a linked server?  If you do, did you ever think about the potential for performance issues?  (The vendor in this case used those linked servers and synonyms to perform a data conversion that took 36 hrs each time for a rather small dataset – ouch!!!!).

On the other Hand

Imagine, if you will, two databases sitting on the same box.  One database will be for your user data, and the other for “staging” data for processes such as those related to ETL.  Imagine further that, for some inane reason, the naming standard of your databases must include the suffix denoting the environment of the database.

Now picture an SSIS package that must utilize tables from both databases as data sources somehow.  At least one of the tables has millions of records, and both of the tables have nearly 100 columns.  Even just pulling in the minimum amount of data using a query from each source can cause memory issues.  Two separate data sources also mean you will likely need a sort transformation (for each source) as well as a join transformation.

Trying to reduce the amount of data in the larger table source could be done via TSQL.  But to reference a database in one environment versus another environment means a code change with each deployment (due to the TSQL – think three part naming).  So you have been hampered by the environment.  Or have you?

By using a synonym in this situation, the data can be joined in a single T-SQL data source by referencing that synonym.  Let’s look back at the sample synonym posted earlier in this article.

You can see that this synonym follows the same sort of naming standard as was just laid out in the preceding scenario.  If I create a synonym with the same name in each environment, referencing the appropriately named database for that environment, I have just opened up a performance tweak for my SSIS data source.

By implementing this slight tweak, I have been able to gain a 10x improvement in package performance.  I am now requiring SSIS to ingest fewer records and thus chew up less memory.  Fewer transformations are required, and the package can get straight to the required transformations rather than tinkering with extra work just to get the data into a usable state.

There are other benefits within SSIS to using synonyms for databases on the same server as well.  Especially when dealing with this kind of naming standard that requires the databases to be named differently in each environment.

Conclusion

How you use a synonym can make it a huge asset or a significant drag on performance.  There are benefits and uses for these nifty little things.  Check them out and let us know how you have been able to put synonyms to use to benefit you.

Check out some of these other articles on synonyms here and here.

Maintenance Plan Owner – Back to Basics

We all inherit things from time to time through our profession.  Sometimes we inherit some good things, sometimes we inherit some things that are not so good.  Other times we inherit some things that are just plain annoying.  Yet other times, we inherit things that may be annoying and we probably just haven’t discovered them yet.

Dizzying, I know.

Inheritance

Have you ever taken over a server that had several maintenance plans on it?  Have you ever really checked who the owner of those plans is?  Or, maybe you had a failing job relating to one of these maintenance plans and you changed the job owner, but did you really fix the root cause?  That could be one of those things that you inherited that could be annoying but you just don’t know it yet.

Step by Step

No this is not New Kids on the Block (I think I just threw up in my mouth thinking that).

Let’s create a generic maintenance plan and see what happens.

The first thing we do is navigate to Maintenance Plans under the Management node in Object Explorer within Management Studio.

 

Right-click the Maintenance Plans folder and select New Maintenance Plan… from the context menu.  This will prompt us with the following dialog box.

In this box, we can type a name for this Maintenance Plan that is to be created.  I chose MaintPlanOwner, since that is the topic of this article.

After clicking OK on this dialog box, you will be presented with a blank canvas with which to design your maintenance plan.  I have chosen a simple task for the purposes of this article.

I will create a subplan named Statistics and add the Update Statistics task to the canvas.

You can see this illustrated to the left.  I chose to update the statistics on all databases and left all other options as the default option – for simplicity of this article.

At this point, the only thing left to do is to save this Maintenance Plan.  Once the plan is saved, then we can move on to the next step – some fun with TSQL.

 

 

Fun with TSQL

This is the stage of the article where we get to play with TSQL and investigate at a high level the Maintenance Plan we just created.

Within the msdb database, we have some system tables that store information about SSIS packages, DTS packages, and Maintenance Plans.  We will be investigating from a SQL 2008 and SQL 2005 standpoint (it changed in 2005 and then again in 2008).

In SQL 2005, we can query the sysdtspackages90 and sysdtspackagefolders90 to gain insight into who owns these Maintenance Plans.  In SQL 2008 and up, we can query sysssispackages and sysssispackagefolders to gain the same insight.  These system tables are within the msdb database.

In SQL Server, we can use the following to find that I am now the owner of that maintenance plan we just created.
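The original query is not included here, so the following is a sketch of the approach the next paragraph describes: dynamic SQL that targets sysssispackages on SQL 2008 and up, or sysdtspackages90 on SQL 2005, joined to sys.server_principals for the owner name.

DECLARE @sql   nvarchar(max),
        @major int;

-- ProductVersion looks like '14.0.3045.24'; PARSENAME(..., 4) returns the major version number
SET @major = CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS nvarchar(128)), 4) AS int);

IF @major >= 10  -- SQL 2008 and up
    SET @sql = N'SELECT p.name AS PlanName, f.foldername, sp.name AS PlanOwner
                 FROM msdb.dbo.sysssispackages p
                 INNER JOIN msdb.dbo.sysssispackagefolders f ON p.folderid = f.folderid
                 INNER JOIN sys.server_principals sp ON p.ownersid = sp.sid
                 WHERE f.foldername = N''Maintenance Plans'';';
ELSE             -- SQL 2005
    SET @sql = N'SELECT p.name AS PlanName, f.foldername, sp.name AS PlanOwner
                 FROM msdb.dbo.sysdtspackages90 p
                 INNER JOIN msdb.dbo.sysdtspackagefolders90 f ON p.folderid = f.folderid
                 INNER JOIN sys.server_principals sp ON p.ownersid = sp.sid
                 WHERE f.foldername = N''Maintenance Plans'';';

EXEC sys.sp_executesql @sql;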

Notice that in this query, I reach out to the sys.server_principals catalog view.  I did this to retrieve the name of the owner of the package found in the sysdtspackages90 or sysssispackages table, respective to the version of SQL Server. I am also running a dynamic SQL query to support both tables, depending on the version of SQL Server.  I figured this might be a tad more helpful than the previous version here. This query would yield the following result set for that new “Maintenance Plan” that was just created.

Caveat

Let’s assume that this package is scheduled via a SQL Agent job on a production server.  I then get moved to a different department and no longer have permissions on this particular production server.  The job will start failing due to the principal not having access.  One fix would be to change the owner of the job.

That will work.  However, there is a problem with that fix.  As soon as somebody opens and saves the Maintenance Plan, the owner of the job will revert back to the owner of the Maintenance Plan.  When that happens, then the job will fail again.

A permanent fix is needed.  The permanent fix is to change the owner of the Maintenance Plan.  The following will change the owner to ‘sa’ for both SQL 2005 and SQL 2008 (and up).

SQL 2005
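A sketch for SQL 2005 (this updates an msdb system table directly, so back up msdb first and substitute your own plan name):

UPDATE msdb.dbo.sysdtspackages90
SET ownersid = SUSER_SID('sa')
WHERE name = 'MaintPlanOwner';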

SQL 2008
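And the equivalent sketch for SQL 2008 and up (the same caveats apply):

UPDATE msdb.dbo.sysssispackages
SET ownersid = SUSER_SID('sa')
WHERE name = 'MaintPlanOwner';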

Now if you run the code used earlier to investigate, you will find that the owner has indeed changed.  The results of that query should be similar to the following.

There you have it.  No more hair tugging over something as benign as the owner of a Maintenance Plan.  This is one of those things that should be looked at as soon as you inherit a new server.

The Wrap

In this article I took a rather long route to a simple fix. Changing the owner of the SQL Agent job is only a temporary fix; as soon as the Maintenance Plan is opened and saved again, the job owner reverts to the owner of the plan. The permanent fix is to change the owner of the Maintenance Plan itself, and that is one of those settings worth checking as soon as you inherit a new server.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Cannot Use the Special Principal – Back to Basics

I recently had a client call me out of the blue because he happened to be getting an error while trying to add a user to a database role. The error he was getting was “Cannot use the special principal ‘dbo’.”

This error has probably cropped up on me more than a few times. And on more than a few occasions, I have forgotten about the previous experiences. Some of that is because the fix is rather easy and after a few times seeing it, muscle memory takes over and you just fix it without thinking about it too much.

Until you get to that muscle-memory moment, though, you may flounder a bit, trying this and that, failing, and then proceeding on to a level of frustration that has you losing precious hair.

As luck would have it, this is an article about security and principals and is similar in nature to some other articles I wrote about some fundamental misconceptions about permissions here and here.

I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Meet Prince Apole and Rolle…

Adding a user to the db_datareader fixed database role is a pretty simple task. Most of us can likely do that in our sleep. Even the GUI is usually pretty reliable for doing that. Every now and again, though, somebody has decided to get tricky on us. Maybe a mistake was made somewhere in a setting on the server and nobody has caught it because nothing was “broken” – until it was.

In the aforementioned case, I was asked to help resolve the issue, and I found that there was a problem in how the database owner was set. Not only was it a problem in the current database, but also in 12 other databases on the same server. The systems admin was at wits’ end. He was dealing with something that was just not in his knowledge base yet. I remember being in the same boat – so no big deal there. We talked about some of the things he had tried and how none of it was working. I am going to recreate the same basic scenario, along with some of the attempted fixes, in this article.

First, we need to create a database (best to break a database designed to be broken instead of an existing one used for something else already).
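The original setup script is not shown here; a minimal sketch that matches the description in the next paragraph (the domain login must actually exist for the CREATE LOGIN to succeed):

CREATE DATABASE DummyDB;
GO
CREATE LOGIN [mydomain\svc_dummy] FROM WINDOWS;
GO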

That is pretty straightforward – the database will be created with the data files in the default directories on your SQL Server instance. In addition, a login called mydomain\svc_dummy will be created as a Windows login.

Now let’s try to set the owner of the database and then add the svc_dummy account to the datareader role.

There is a point of interest here. I said I was going to add svc_dummy to the datareader role – not dbo. Well, I set the database owner in the preceding step to svc_dummy so it basically became dbo. When I try to perform the role addition in the GUI and then script the change, this is the script that is produced. I will show why in a few moments.
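Roughly, the scripted change looks like the following sketch (note that the second part references dbo rather than the login name):

USE master;
GO
ALTER AUTHORIZATION ON DATABASE::[DummyDB] TO [mydomain\svc_dummy];
GO
USE DummyDB;
GO
ALTER ROLE [db_datareader] ADD MEMBER [dbo];
GO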

The execution of the second part of the script results in the following:

Msg 15405, Level 16, State 1, Line 18

Cannot use the special principal ‘dbo’.

That is obviously not going to work. Let’s try fixing the script and add the svc_dummy principal instead of dbo.
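Something along these lines (a sketch using the names from the setup above):

USE DummyDB;
GO
ALTER ROLE [db_datareader] ADD MEMBER [mydomain\svc_dummy];
GO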

Unfortunately, this results in the following:

Msg 15151, Level 16, State 1, Line 22

Cannot add the principal ‘mydomain\svc_dummy’, because it does not exist or you do not have permission.

Well, maybe the problem is because the user doesn’t exist then? Let’s try to create the user and see what happens.
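Something like this:

USE DummyDB;
GO
CREATE USER [mydomain\svc_dummy] FOR LOGIN [mydomain\svc_dummy];
GO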

Now we should see this message:

Msg 15063, Level 16, State 1, Line 32

The login already has an account under a different user name.

Oy vey. We seem to be going in circles. Nothing is working. The user is there but not really there. Let’s try to drop the user and just try to clean things up and start over.
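For example:

USE DummyDB;
GO
DROP USER [dbo];
GO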

I hope you see the problem with this one: trying to drop dbo. I don’t know about you, but I would expect an error here – and we do get an error.

Msg 15150, Level 16, State 1, Line 27

Cannot drop the user ‘dbo’.

Let’s fix the user then and try to drop the svc_dummy user instead of dbo.
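That attempt might look like this:

USE DummyDB;
GO
DROP USER [mydomain\svc_dummy];
GO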

Which in turn generates yet another error.

Msg 15151, Level 16, State 1, Line 52

Cannot drop the user ‘mydomain\svc_dummy’, because it does not exist or you do not have permission.

If I can’t resolve the problem by changing the user in the database, maybe I can just blow it out of the water by dropping the server login.
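That would be something along these lines:

USE master;
GO
DROP LOGIN [mydomain\svc_dummy];
GO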

Yet another failure message will ensue. This time the message is:

Msg 15174, Level 16, State 1, Line 55

Login ‘mydomain\svc_dummy’ owns one or more database(s). Change the owner of the database(s) before dropping the login.

So far we have been able to skirt around the problem and generate six different error messages. The last one kind of gives us the best information on what we could do to resolve the issue. The login owns a database and therefore, we need to undo that ownership. Before we do that, let’s take a look at the database principal ‘dbo’.

We already know that svc_dummy is mapped to a user in the DummyDB database. We also know that we cannot add the svc_dummy user because of that prior mapping. We have also learned that when scripting the permissions change for the svc_dummy login from the GUI, the generated script references the user ‘dbo’. With that in mind, let’s look at the dbo user in the sys.database_principals view and see what it tells us.
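A sketch of that lookup:

USE DummyDB;
GO
SELECT dp.name, dp.type_desc, dp.sid
FROM sys.database_principals dp
WHERE dp.name = N'dbo';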

See how the dbo database principal says it is mapped to a Windows account type? With this in mind, let’s join to sys.server_principals and see which Windows account is mapped to the dbo database user.
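For example, joining on the sid column:

USE DummyDB;
GO
SELECT dp.name AS DatabaseUser, dp.type_desc AS UserType,
       sp.name AS MappedLogin, sp.type_desc AS LoginType
FROM sys.database_principals dp
INNER JOIN sys.server_principals sp ON dp.sid = sp.sid
WHERE dp.name = N'dbo';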

Now we see a bit more clearly. Combined with the error messages and the principal information for both the login and the user, we have a better view of the puzzle. Changing the database owner indeed mapped the Windows account to dbo for us and is now restricting us to certain activities when trying to manage permissions for the Windows login in the database. From here, we can easily fix the issue by changing the database owner, creating a user mapped to the Windows login, and then adding that principal to the datareader role.
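A sketch of that fix (on instances older than SQL 2012, use sp_addrolemember instead of ALTER ROLE):

USE master;
GO
ALTER AUTHORIZATION ON DATABASE::[DummyDB] TO [sa];
GO
USE DummyDB;
GO
CREATE USER [mydomain\svc_dummy] FOR LOGIN [mydomain\svc_dummy];
ALTER ROLE [db_datareader] ADD MEMBER [mydomain\svc_dummy];
GO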

And if we run that script for svc_dummy we will see a successful execution as shown here.

The Wrap

In this article I took a rather long route to a simple fix. It’s easy to try each of the steps I showed in this article thinking it will help. It isn’t illogical to try some of those steps. They just don’t work unfortunately. In the end, getting to know the settings in the database and what the errors are really trying to get at is most helpful. Sometimes, it just takes a few more steps to get to the real meaning of the error.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Quickly Change SQL Job Owners

It is not unusual to find a server where some random user created a bunch of jobs to be run by SQL Agent. Sometimes, the user creating the job(s) sets themselves as the owner of the job. There are certain cases where this behavior is hard to avoid, such as when creating a maintenance plan.

And of course, there are times when the user just doesn’t know any better. There is, of course, the rare occasion when setting the job owner to be oneself makes the most sense – but that is few and far between in the grand scheme of things. Usually, you will want a non-expiring account, such as a service account or a principal without “logon” permissions, to be the owner.

The primary reason is simple – humans have an expiration date for every job they will ever hold. When that expiration occurs, you may end up with any number of unwanted side effects. Unwanted side effects are exactly what we try to avoid in our jobs run via SQL Agent.

No Expiration Date

There are two basic means of changing the owner of every job on your server. You can open each job one by one and set the owner to an acceptable principal; this method is rather tedious, and you will be fighting off boredom if you have a few hundred jobs on the server. Or, alternatively, you can change the job owners group by group (set-based theory). This second method can be far less tedious and far more efficient, and it is by far my preferred method. Let’s take a look at how to make all of these changes in groups.

There are three basic sections to this script. First I fetch what should be changed, then I make the change, and lastly I verify the change. If the change doesn’t look right, then I can rollback the change. If the change is what I expected, then I can commit the change. Those are the broad strokes.

At a more detailed glimpse, I have set up a few variables to hold what I want to change and what the new job owner should be, and then I fetch the sid of that new job owner. In my example, I am setting everything to ‘sa’. Why? Because it is easy for the sake of the example in this article – nothing more!

Since sometimes the owner of the job may only have access to the SQL instance via a Domain Group, I also take advantage of a couple of functions to double check that it is the correct account. These functions I am using are SUSER_SID() and SUSER_SNAME().
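The original script is not reproduced here, so the following is a sketch of the approach (the @CurrentOwner value is an assumption – set it to the owner you want to replace). It updates msdb.dbo.sysjobs directly inside a transaction; the supported alternative is msdb.dbo.sp_update_job with the @owner_login_name parameter, one job at a time.

USE msdb;
GO
DECLARE @CurrentOwner sysname = N'olddomain\departed.dba',  -- owner to replace (assumption)
        @NewOwner     sysname = N'sa',
        @NewOwnerSid  varbinary(85);

SELECT @NewOwnerSid = SUSER_SID(@NewOwner);

BEGIN TRANSACTION;

-- 1. Fetch what should change
SELECT j.name AS JobName,
       SUSER_SNAME(j.owner_sid) AS CurrentOwner,
       SUSER_SNAME(@NewOwnerSid) AS NewOwner
FROM dbo.sysjobs j
WHERE j.owner_sid = SUSER_SID(@CurrentOwner);

-- 2. Make the change, set-based
UPDATE j
SET owner_sid = @NewOwnerSid
FROM dbo.sysjobs j
WHERE j.owner_sid = SUSER_SID(@CurrentOwner);

-- 3. Verify the change
SELECT j.name AS JobName, SUSER_SNAME(j.owner_sid) AS OwnerAfterChange
FROM dbo.sysjobs j;

-- COMMIT TRANSACTION;    -- when the results look right
-- ROLLBACK TRANSACTION;  -- when they do not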

When all is done as I am expecting, then I should see something similar to the following.

Since the change is what I expect, then at this point I would proceed with the commit transaction statement.

The Wrap

As you can see, making job ownership changes at group scale instead of one by one is pretty easy. This only takes a matter of seconds to run against hundreds of jobs. That same kind of task done one at a time could easily take more than 40 minutes. I am not sure I want to spend that much time on such an innocuous task. I hope you are now able to use what you have learned to improve your skills and become a rock-star DBA. ENJOY!


This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Use SSMS with a Different Windows Account – Back to Basics

One of the tasks I find myself doing on a fairly regular basis is running SSMS as a different Windows User. The two biggest use cases for this are: a) to test an account to prove that it is working (or not) and has the appropriate level of access, and b) to use SSMS to connect to a Domain SQL Server from a computer in a different domain (or not on the domain).

In addition to needing to do these tasks for myself, I find that I need to show somebody else how to do the same thing on a fairly consistent basis. Considering the finite keystrokes we all have (which I referenced here), it is time for me to “document” how to do this task.

I will cover two really easy and quick methods to perform this task. One from a command line and the other from the GUI. Both methods will involve a variation of the runas utility.

RUNAS

Let’s start with the easier of the two methods. In this case, you will need to test a Windows account (let’s call it a domain account) from a computer which is on the same domain. This requirement allows us to take advantage of the shortcuts from within the GUI to access the runas utility.

To access runas from Windows, first locate the icon for SSMS in the Start Menu, then right-click that icon as shown here.

After right clicking the icon, you will see a menu pop up on the screen. Select “Run as different user” from that menu. Once you have selected the appropriate “run as” option, a login prompt will appear as shown here.

Enter the appropriate credentials at the prompt and then SSMS will launch. In this case, I may want to test the account mydomain\domain.useracc, so I merely need to enter the domain credentials for that account. A caveat here is that the account you are testing will need to have the necessary permissions to “logon” to the workstation in order to launch the app – unlike the second method.

CMD Line

This second method has a few advantages over the GUI method with the biggest advantage being that you can use this method from any machine on the domain or even a machine not joined to the domain (so long as you have the ability to authenticate to the domain). And of course the additional advantage that the account you are testing does not require “logon” permissions on the machine you are using.

Let’s start with the basic command.
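Something along these lines (the SSMS path varies by version and edition, so adjust it to your install; the /netonly switch is what removes the local “logon” requirement):

runas /netonly /user:mydomain\domain.useracc "C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE\Ssms.exe"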

I can run that from a command line, or I can throw that into a desktop shortcut (the shortcut method is much more convenient). After I hit “enter” from the command line, I am prompted for a password for the account to be used for that session. Here’s an example of how that would look.

You won’t be able to see the password being typed (don’t fat finger the password 😉 ), but after you enter it successfully and press “enter” then you will see SSMS start to launch. After a successful SSMS launch, you should see something similar to the following:

I have a few things highlighted here of interest. First, in the red box, you will note that the user shown as connected to the server is my “local” test box account instead of the domain account. However, if I verify the authenticated account, I can see that the domain account is indeed accessing the SomeServer SQL Server (as demonstrated by the green box on the right).

The Wrap

Sometimes what may be ridiculously easy for some of us may be mind-blowing to others. Sometimes we may use what we think are common terms only to see eyes start to glaze over and roll to the backs of people’s heads. This just so happens to be one of those cases where launching an app as a different principal may be entirely new to the intended audience. In that vein, it is worthwhile to take a step back and “document” how the task can be accomplished.

Runas should be a very common tool in the toolbox of all IT professionals – not just Data Professionals. Learning how to test different accounts is essential to being an effective and efficient professional that can provide solid results.


This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Single User Mode – Back to Basics

In a recent article, I took you on a trip through how to hack (ethically) a SQL Server to regain sysadmin access. In that article, I made quick mention of restarting SQL Server into single-user mode. It only makes sense to show quickly how to get into single-user mode.

Before getting into that, I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Single-User

So, what exactly is this single-user mode thing? Single-user mode is basically the official back-door into SQL Server for various reasons such as:

  • Somebody deleted all of the logins that were in the sysadmin role.
  • The sa account is disabled or the password has been forgotten.
  • Somebody deleted any Windows groups that were members of the sysadmin role.
  • All members of the sysadmin role are no longer with the company.
  • You need to restore the master database
  • You want to keep SQL Server all to yourself because you are greedy!

These are some pretty solid reasons to need to be able to use the back door. But how exactly do we get to the back door?

Two Paths

As luck would have it, there are two ways to enable single-user mode. You can either get there by making some changes to the SQL Server service in Configuration Manager, or you can utilize a command prompt. I won’t cover the GUI path beyond the gentle reminder that you must remember to undo your change when using that method.

My preferred method is through the command line. Using my SQL Server 2017 instance as the experiment, I would navigate to the Binn directory for that instance; in this case, the path is shown in the next image.

Before getting too far ahead of myself, I am going to stop my SQL Server.

Notice, I also queried to find all of my services related to SQL before stopping the MSSQLServer service via the net stop mssqlserver command. We will come back to some net start and net stop commands later.

With the service successfully stopped, I can now restart the service in single-user mode.
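From the Binn directory, that looks something like this (the path shown is a typical SQL Server 2017 default instance and may differ on your server):

cd "C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Binn"
sqlservr.exe -m"SQLCMD"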

And then the validation that we are indeed starting in single-user mode…

But wait, did you notice that bit of trickery on the startup command?

This is a pro-tip for when you must use single-user mode. Inevitably, somebody will steal the single-user connection and you will be locked out of the session. By using an app name after the single-user switch, you are telling SQL Server to only accept connections for that specific application. Since most apps will not be using sqlcmd, you will have far less contention to gain that connection and you will be able to complete your task much easier.

You could also pass something like this instead…
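For example:

sqlservr.exe -m"Microsoft SQL Server Management Studio - Query"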

In this case, I would be limiting the connections to a query from SSMS (and not object explorer).

Now that I have a single-user connection, I can add a sysadmin or restore the master database or just sit on it and play devious. It all depends on what your objective for the single-user session happens to be.

More Command Line

Remember that reference to the NET commands? Well, it turns out we can also start SQL Server in single-user via net start. Let’s check it out.

The command is pretty simple:
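For a default instance, something like this:

net start MSSQLSERVER /m"SQLCMD"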

The effect here is the same as navigating to the Binn directory and starting SQL Server with the sqlservr.exe executable. The big difference is considerably less typing and less verbose output of the service startup.

When using the net start method, you do need to know the service name of the SQL Server instance. To get that, I do recommend the following powershell script.
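A minimal sketch of such a script:

Get-Service |
    Where-Object { $_.DisplayName -like '*SQL*' } |
    Select-Object Name, DisplayName, Status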

This will produce results similar to the following.

From the results, I can pick the SQL Server service and then pass that to the net start command fairly easily.

The Wrap

Starting SQL Server in single-user mode should be a tool every data professional holds in the bag. This is an essential tool that can be used in multiple scenarios and ensure you are able to fully maintain and control your server. I have shown how to get to single-user mode via two command line methods and mentioned a GUI method. The nice thing about the command line methods is that you don’t have to remember to undo the startup switch like you do with the GUI method.

If you feel the need to read more about single-user mode, here is an article and another on the topic.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Lost that SQL Server Access?

As a data professional can you recall the last time you needed to support a SQL Server instance for which you had no access? What if you used to have access and then that access magically disappeared?

I know I run into this dilemma more than I would probably like to. It is rather annoying to be under a crunch to rapidly provide support only to discover you are stuck and have to wait on somebody else who hopefully has access.

It’s one thing to not have access in the first place; that is usually an easy fix. The really unpleasant access issue is when you have confirmed prior access to the instance and then find yourself completely locked out. More succinctly, you have lost that SQL access!

Whoa is Me!

All hope is now lost, right? OK, that isn’t entirely true. Or is it? What if everybody else on the team is also locked out and there is no known sysadmin account? In essence, everybody is locked out from managing the instance and now you have a real crisis, right? Well, not so fast. You can still get back into the instance with sysadmin access. It should be no real secret that you could always restart the SQL instance in single-user mode. Then again, that probably means bigger problems if the server is a production server and is still servicing application requests just fine.

What to do? What to do?

Restart Prohibited

If you really cannot cause a service disruption to bounce the server into single-user mode, my friend Argenis Fernandez (b | t) has this pretty nifty trick that could help you. Truth be told, I have tested that method (even on SQLExpress) several times and it is a real gem. Is this the only alternative?

Let’s back it up just a step or two first. Not having access to SQL Server is in no way the same thing as not having access to the server. Many sysadmins have access to the Windows server. Many DBAs also have access to the Windows server, or can at least work with the sysadmins to get access in cases like this. If you have admin access to Windows, then not much is really going to stop you from gaining access to SQL on that same box. It is a matter of how you approach the issue. Even to restart SQL Server in single-user mode, you need access to the Windows server. So, please keep that in mind as you read the article by Argenis as well as the following.

Beyond the requirement of having local access to the server, one of the things that may cause heartburn for some is the method of editing the registry as suggested by Argenis. Modifying the registry (in this case) is not actually terribly complex, but it is another one of those changes that must be put back the way it was. What if there was another way?

As luck would have it, there is an alternative (else there wouldn’t be this article). It just so happens, this alternative is slightly less involved (in my opinion). Let’s start with a server where I don’t have SQL access (beyond public) but I do have Windows access.

We can see on this SQLExpress instance on the TF server that my “Jason” login does not exist. Since I don’t have access, I can’t add my own account either. Time to fix that. In order to fix it, I am going to create a Scheduled Task in Windows that will run a SQLCMD script from my C:\Database folder. The folder can be anywhere, but I generally have one with scripts and such somewhere on each server that I can quickly access.

From here, you will want to click on the “Change User or Group” button to change it to an account that does have access to SQL Server. The account that I use is not a “user” account but rather it is a “system” account called “NT AUTHORITY\SYSTEM” that is present all the way through SQL Server 2017.

To locate the “NT AUTHORITY\SYSTEM” account, just type “SYSTEM” into the new window and click “Check Names”. The account will resolve and then you can click OK out of the “Select User or Group” window.

With the account selected that will run this task, we can now focus our attention on the guts of the task. We will now go to the “Actions” tab.

Click the new button, and here we will configure what will be done.

I do recommend putting the full path to SQLCMD into the “Program/Script” box. Once entered, you will add the following to the parameter box.
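Roughly like this, using the example server and folder from this article (substitute your own):

-S TF\SQLExpress -i "C:\Database\myscript.sql"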

If you do not have an instance, then just the server name will suffice after the -S parameter. The -i parameter specifies the path to the SQL script file that will be created and placed in the C:\database directory (or whichever directory you have chosen).

That is it for the setup of the task. Now let’s look at the guts of the script file.
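A sketch of what goes in the script file – the login name here is a placeholder for your own account (on instances older than SQL 2012, use sp_addsrvrolemember instead of ALTER SERVER ROLE):

USE master;
GO
CREATE LOGIN [mydomain\Jason] FROM WINDOWS;
GO
ALTER SERVER ROLE [sysadmin] ADD MEMBER [mydomain\Jason];
GO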

Save that into a script document named myscript.sql in the aforementioned directory and then execute the Windows task. After executing the task, it is time to verify whether it worked.

Boom! From no access to a sysadmin in a matter of seconds. Here is that quick verify script – generalized.
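Something along these lines (substitute the login you just added):

SELECT sp.name, sp.type_desc,
       IS_SRVROLEMEMBER('sysadmin', sp.name) AS IsSysAdmin
FROM sys.server_principals sp
WHERE sp.name = N'mydomain\Jason';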

The Wrap

Losing access to a SQL instance is never a desirable situation – for the DBA. When the people who are supposed to have access lose that access, all hope is not lost. There are plenty of methods available to regain the requisite access to manage the server. Today, I shared one such method that I view as being extremely easy. If you lose access, I would recommend taking the steps shown in this article to regain that access.

While not in the back to basics series, I do recommend checking out my other posts in that series. Some topics in the series include (but are not limited to): Backups, backup history and user logins. I would also recommend reading this audit article. If you are able to elevate your permissions, then obviously anybody with server access can elevate their permissions too. For that reason, you should regularly audit the permissions and principals in SQL Server.
