Synonyms in SQL Server – Good and Bad

When SQL Server 2005 was released, a nifty little feature called synonyms was included.  Despite being around since SQL Server 2005, I think this feature is often under-utilized or, worse, implemented in a very bad way.

Today I want to share a couple of examples.  We will take a look at examples of both good and bad implementations of synonyms.

First, let’s take a look at the syntax for creating a synonym.  Per BOL (and for the fun of it, here is the 2005 link).
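The documented form boils down to this (abbreviated from BOL):

CREATE SYNONYM [ schema_name_1. ] synonym_name FOR <object>

<object> ::= { [ server_name.[database_name].[schema_name_2]. | database_name.[schema_name_2]. | schema_name_2. ] object_name }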

So a sample implementation of a Synonym could be something like the following.
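Here is a minimal sketch (the object names are hypothetical, chosen to match the environment-suffix naming standard we will get to later in this article):

CREATE SYNONYM [dbo].[StageTable]
FOR [Staging_PROD].[dbo].[StageTable];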

Before we delve into that sample synonym, let's look at an example of a really bad implementation.

The BAD

While working with a client, I received a request to look into why a linked server query was failing.  (This isn’t necessarily the bad, just hang in there for a bit.)  The message was something like this:

The OLE DB provider “SQLNCLI10” for linked server “blahblahblah” indicates that either the object has no columns or the current user does not have permissions on that object.

The error message seems at least semi-descriptive and gives a starting point.  In this case, I decided to verify the linked server was created properly, verified that the permissions were configured properly and even tested the linked server.  On both the source server (let's call it ServerA) and the destination server (ServerB), I verified that permissions were in place for each database to be touched.  Still no dice!

Well, let's go take a look and see if that referenced table actually exists.  It did not!  Does it exist as a view?  It did not!  Alas, the table existed as a synonym.  This is where it gets wonky.  In looking at the definition of the synonym, I found that the table defined in the synonym had a linked server table as its source.  To top things off, that linked server pointed back to the originating server – the one issuing the query across the link in the first place.  So yes: ServerB initiated a query against ServerA, which in turn reached back across a linked server to ServerB to pull the data.  The data needed was supposedly available on ServerA (as defined by the vendor), but in reality it had been sitting on ServerB the whole time.

At any rate, thanks to having a synonym for each and every table sitting on ServerA that referenced a table across a linked server on ServerB, we had mass confusion.  In the end, the query was far simpler to execute by just pulling it from the originating query server (ServerB).

This implementation of a synonym was not the best.  All it did was cause confusion, create documentation inaccuracies and delay the developer from accomplishing her task.  Do you really need 1000s of synonyms in your database?  Do you need all of them going across a linked server?  If you do, did you ever think about the potential for performance issues?  (The vendor in this case used those linked servers and synonyms to perform a data conversion that took 36 hrs each time for a rather small dataset – ouch!!!!).

On the Other Hand

Imagine, if you will, two databases sitting on the same box.  One database will be for your user data, and the other for “staging” data for processes such as those related to ETL.  Imagine further that, for some inane reason, the naming standard of your databases must include the suffix denoting the environment of the database.

Now picture an SSIS package that must utilize tables from both databases in data sources somehow.  At least one of the tables has millions of records.  And both of the tables have nearly 100 columns.  Even just pulling in the minimum amount of data using a query from each source can cause memory issues.  Two separate data sources means you will likely have a sort transformation (for each source) as well as a join transformation.

Trying to reduce the amount of data in the larger table source could be done via TSQL.  But to reference a database in one environment versus another environment means a code change with each deployment (due to the TSQL – think three part naming).  So you have been hampered by the environment.  Or have you?

By using a synonym in this situation, the data can be joined in a T-SQL data source by referencing that synonym.  Let's look back at the sample synonym posted earlier in this article.

You can see that this synonym follows the same sort of naming standards as was just laid out in the preceding scenario.  If I create a synonym in each environment by the same name, and referencing the appropriate environment named database, I have just opened up a performance tweak for my SSIS datasource.
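To sketch that out (hypothetical names again), the synonym keeps the same name in every environment while the target database changes:

-- In the DEV environment:
CREATE SYNONYM [dbo].[StageTable] FOR [Staging_DEV].[dbo].[StageTable];

-- In the PROD environment:
CREATE SYNONYM [dbo].[StageTable] FOR [Staging_PROD].[dbo].[StageTable];

The SSIS data source can now simply query dbo.StageTable and join it to the user data, without a single three part name changing between deployments.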

By implementing this slight tweak, I have been able to gain a 10x improvement in package performance.  I am now requiring SSIS to ingest fewer records and thus chew up less memory.  Fewer transformations are required, and the package can move straight into the transformations that matter rather than tinkering around just to get the data into a usable state.

There are other benefits within SSIS to using synonyms for databases on the same server as well.  Especially when dealing with this kind of naming standard that requires the databases to be named differently in each environment.

Conclusion

How you use a synonym can make it a huge asset or a significant drag on performance.  There are benefits and uses for these nifty little things.  Check them out and let us know how you have been able to put synonyms to use to benefit you.

Check out some of these other articles on synonyms here and here.

Maintenance Plan Owner – Back to Basics

We all inherit things from time to time through our profession.  Sometimes we inherit some good things, sometimes we inherit some things that are not so good.  Other times we inherit some things that are just plain annoying.  Yet other times, we inherit things that may be annoying and we probably just haven’t discovered them yet.

Dizzying, I know.

Inheritance

Have you ever taken over a server that had several maintenance plans on it?  Have you ever really checked who the owner of those plans is?  Or, maybe you had a failing job relating to one of these maintenance plans and you changed the job owner, but did you really fix the root cause?  That could be one of those things that you inherited that could be annoying but you just don’t know it yet.

Step by Step

No this is not New Kids on the Block (I think I just threw up in my mouth thinking that).

Let’s create a generic maintenance plan and see what happens.

The first thing we do is navigate to Maintenance Plans under the Management menu in Management Studio.

Right Click the Maintenance Plan folder and select New Maintenance Plan… from the context menu.  This will prompt us with the following dialog box.

In this box, we can type a name for this Maintenance Plan that is to be created.  I chose MaintPlanOwner, since that is the topic of this article.

After clicking OK on this dialog box, you will be presented with a blank canvas with which to design your maintenance plan.  I have chosen a simple task for the purposes of this article.

I will create a subplan named Statistics and add the Update Statistics task to the canvas.

You can see this illustrated to the left.  I chose to update the statistics on all databases and left all other options as the default option – for simplicity of this article.

At this point, the only thing left to do is to save this Maintenance Plan.  Once the plan is saved, then we can move on to the next step – some fun with TSQL.

Fun with TSQL

This is the stage of the article where we get to play with TSQL and investigate at a high level the Maintenance Plan we just created.

Within the msdb database, we have some system tables that store information about SSIS packages, DTS packages, and Maintenance Plans.  We will be investigating from a SQL 2008 and SQL 2005 standpoint (it changed in 2005 and then again in 2008).

In SQL 2005, we can query the sysdtspackages90 and sysdtspackagefolders90 to gain insight into who owns these Maintenance Plans.  In SQL 2008 and up, we can query sysssispackages and sysssispackagefolders to gain the same insight.  These system tables are within the msdb database.

In SQL Server, we can use the following to find that I am now the owner of that maintenance plan we just created.
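A sketch of that query could look like the following (the MaintPlanOwner name comes from this article; the version check shown here is simply one way to branch between the two tables):

USE msdb;
GO
DECLARE @sql NVARCHAR(MAX)
      , @ver INT = CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS sysname), 4) AS INT);

IF @ver >= 10  -- SQL 2008 and up
    SET @sql = N'SELECT p.name AS PlanName, sp.name AS PlanOwner
    FROM msdb.dbo.sysssispackages p
    INNER JOIN sys.server_principals sp ON p.ownersid = sp.sid
    WHERE p.name = N''MaintPlanOwner'';';
ELSE           -- SQL 2005
    SET @sql = N'SELECT p.name AS PlanName, sp.name AS PlanOwner
    FROM msdb.dbo.sysdtspackages90 p
    INNER JOIN sys.server_principals sp ON p.ownersid = sp.sid
    WHERE p.name = N''MaintPlanOwner'';';

EXEC sys.sp_executesql @sql;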

Notice that in this query, I join out to the sys.server_principals catalog view.  I did this to retrieve the name of the owner of the package that was found in the sysdtspackages90 or sysssispackages table, respective to the version of SQL Server.  I am also running a dynamic SQL query to support both views depending on the version of SQL Server.  I figured this might be a tad more helpful than the previous version here.  This query would yield the following result set for that new “Maintenance Plan” that was just created.

Caveat

Let’s assume that this package is scheduled via a SQL Agent job on a production server.  I then get moved to a different department and no longer have permissions on this particular production server.  The job will start failing due to the principal not having access.  One fix would be to change the owner of the job.

That will work.  However, there is a problem with that fix.  As soon as somebody opens and saves the Maintenance Plan, the owner of the job will revert back to the owner of the Maintenance Plan.  When that happens, then the job will fail again.

A permanent fix is needed.  The permanent fix is to change the owner of the Maintenance Plan.  The following will change the owner to ‘sa’ for both SQL 2005 and SQL 2008 (and up).

SQL 2005
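A sketch of the change (this updates a system table in msdb directly, so proceed with appropriate care):

UPDATE msdb.dbo.sysdtspackages90
SET ownersid = SUSER_SID('sa')
WHERE name = 'MaintPlanOwner';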

SQL 2008
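And the same idea against the newer system table:

UPDATE msdb.dbo.sysssispackages
SET ownersid = SUSER_SID('sa')
WHERE name = 'MaintPlanOwner';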

Now if you run the code used earlier to investigate, you will find that the owner has indeed changed.  The results of that query should be similar to the following.

There you have it.  No more hair tugging over something as benign as the owner of a Maintenance Plan.  This is one of those things that should be looked at as soon as you inherit a new server.

The Wrap

In this article I showed how the owner of a Maintenance Plan is stored in the msdb system tables and why that owner matters.  A failing job is easy to patch by changing the job owner, but that fix evaporates the next time the plan is saved.  Getting to the root cause – the owner of the plan itself – is the fix that sticks.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Ghosts in your Database

Yes Virginia, there are ghosts in your database.  More specifically, there are ghosts in your SQL Server database.  They are not there to haunt you.  They are not there just for this holiday season (speaking of Halloween Month).

How can there be ghosts in the database?

Why would there be ghosts in the database?

Do they happen because somebody issued a KILL statement?

Let’s address each of those in turn.  A database ghost record is (in a very basic form) one that has just been deleted from an index on a table.  Delete operations don’t actually physically remove records from pages – they only mark them as having been deleted (ghosted).  Now why is it done this way?  The answer here is largely performance based.  This is a performance optimization that allows delete operations to complete more quickly.  Additionally, it allows the rollback of delete operations to process more quickly.  The rollback processes faster because all that needs to happen is to “flip the flag” marking the records as deleted/ghosted, instead of having to reinsert the deleted records.  That may be a bit over-generalized, but I hope you get the gist.  In short, records are marked as “ghosted” when a delete operation is performed; and to rollback, you simply undo that mark.

Now, what about this KILL statement thing?  The KILL statement is pure Halloween fun and does not create ghost records.

Ghost Hunting

Now that we have established the purpose of Ghosts in the database, how do you verify the existence of Ghosts?  In other words, what can we do to prove there really are spectral things in the database?  This is where the fun really begins.  First, we need to get out the equipment and tools (as any good ghost hunter would do) so we can capture these phantasms.  Let’s call the first tool the “trap”.  Here is what you will need for it.
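What follows is a sketch of that trap, reconstructed from the description below – the Halloween.Ghosts table, the million rows, the TheBlob column and the 666 mis-dated records all come from this article, while the database name and the remaining column definitions are assumptions:

CREATE DATABASE GhostTrap;  -- hypothetical name
GO
USE GhostTrap;
GO
CREATE SCHEMA Halloween;
GO
CREATE TABLE Halloween.Ghosts
(
      GhostID    INT IDENTITY(1, 1) NOT NULL
    , GhostName  VARCHAR(64) NOT NULL
    , SlimerDate DATE NOT NULL
    , TheBlob    VARCHAR(8000) NOT NULL  -- pure padding to fatten the trap; remove it to save space
    , CONSTRAINT PK_Ghosts PRIMARY KEY CLUSTERED (GhostID)
);
GO
CREATE NONCLUSTERED INDEX IX_Ghosts_SlimerDate ON Halloween.Ghosts (SlimerDate);
GO
-- load one million rows; the first 666 receive the clerical-error date of Oct. 30
WITH tally AS
(
    SELECT TOP (1000000) rn = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
)
INSERT INTO Halloween.Ghosts (GhostName, SlimerDate, TheBlob)
SELECT 'Ghost' + CAST(rn AS VARCHAR(10))
     , CASE WHEN rn <= 666 THEN '20181030' ELSE '20181031' END
     , REPLICATE('x', 8000)
FROM tally;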

This trap, err database, can be a bit large.  As currently configured, we will need about 16GB of disk space to support it.  If that is too much, I recommend removing the last column – “TheBlob”.  As you can see, we are setting a rather large trap.  The table we create (Halloween.Ghosts) will receive One Million records.  This is most probably overkill to catch these ghosts, so you can also cut back on the number of records to be affected.

Now, to make sure we have some data and that we can use the table, let’s just run a little test query.
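Something simple such as this will do:

SELECT TOP (100) g.GhostID, g.GhostName, g.SlimerDate
FROM Halloween.Ghosts g
ORDER BY g.GhostID;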

Excellent, we have a good sample of data.

[Image: database ghost records]

At this point, it is important to note that we have done nothing that will cause database ghost records.  All that has been done is to set the framework so we can see the ghosts.  With the framework in place, let’s try to catch some ghosts.  To do so, we need to try to delete something.  Since we just happen to have had a clerical error in our database, we have 666 prime candidates to try and fix.  We happen to have several records that were supposed to be given a Slimer date of Halloween.  The clerk, being absent minded, thought that Halloween was supposed to be on Oct. 30.  Our business model dictates that the invalid records must be deleted first and then we can try to enter the replacement records.  So, let’s go ahead and try to remove those records.

Before we remove the records though, we need to discuss one important requirement for us to be able to see the ghosts.  Let’s call it spectral vision goggles.  In the database realm, we call it a trace flag.  In order to see the ghosts on the pages, we need to enable TF 661.  We can do that with the following statement.  There is a serious side effect to this method too – it alters the behavior of the Ecto Containment Unit, or automatic ghost cleanup process.  If you enable this, you will need to disable it later and/or manually run a ghost cleanup.
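Enabling the goggles globally looks like this:

DBCC TRACEON (661, -1);  -- -1 applies the flag globally; remember to turn it back off later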

Now that we have the last piece of equipment in place, let’s go ahead and try to delete some records.
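The cleanup of the clerical error might look like this:

DELETE FROM Halloween.Ghosts
WHERE SlimerDate = '20181030';  -- the 666 mistakenly dated records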

With all of those records deleted (all 666 of them), let’s see what we might have captured.  First, let’s take a look at some index stats.
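One way to view those stats is sys.dm_db_index_physical_stats (the DETAILED mode is needed to populate the ghost columns):

SELECT OBJECT_NAME(ips.object_id) AS TableName
     , ips.index_id
     , ips.ghost_record_count
     , ips.version_ghost_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('Halloween.Ghosts'), NULL, NULL, 'DETAILED') ips
WHERE ips.index_level = 0;  -- leaf level, where the ghosts live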

If we look at the output of this query, we will see that we did indeed attempt to delete 666 records.  Those records will now display in the ghost_record_count column.  We will also see that, since we had two indexes on the table, there are 666 ghost records marked on each index.

[Image: index physical stats output showing the ghost_record_count column]

Very cool!  We are definitely on the track to capturing those ghosts.  We have a trail that they exist in the database.  Let’s keep going and see where we can see them.  You should note that there is an additional column in our result set that looks like it might be related to ghost records.  We are going to leave the discovery of version_ghost_record_count as a homework experiment for you to perform.  It is beyond the current scope of this article.
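Next we can chase the ghosts into the transaction log.  Here is a sketch of the kind of fn_dblog query implied here (fn_dblog is undocumented, and the hex-to-integer conversion is just one possible approach):

SELECT dl.[Current LSN]
     , dl.Operation
     , dl.Context
     , dl.AllocUnitName
     , dl.[Page ID]
     , CONVERT(INT, CONVERT(VARBINARY(4)
         , '0x' + SUBSTRING(dl.[Page ID], CHARINDEX(':', dl.[Page ID]) + 1, 8), 1)) AS PageID
FROM fn_dblog(NULL, NULL) dl
WHERE dl.Operation = 'LOP_DELETE_ROWS'
  AND dl.Context = 'LCX_MARK_AS_GHOST';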

Now this is getting exciting.  We have stronger evidence in the log showing that these ghosts are hanging around in the database.  Not only are they hanging around in the database, we can see which pages in the database on which they are trying to hide.

[Image: fn_dblog output]

This is really solid information!  fn_dblog is giving us just about everything we need in order to get those ghosts.  It took a little bit of work since the log reports the page number in hex.  Converting that to an integer page number is essential for us to look at the page (besides, integer values are easier to interpret for most people).  Now I can take any PageID reported by fn_dblog and pass it into yet another undocumented command known as DBCC PAGE.

When looking to use DBCC PAGE, we can either look at the PFS page and see more pages that have ghost record counts, or we can take the results seen from the fn_dblog output and then look at the contents of the page to catch those ghosts.  We will take a quick look at the PFS page first and at an index page after that.  In this database that we have created, the PFS page will show several other pages that have ghost records on them.  Due to the size (over 2 million pages), we only see index pages with ghost records in that result.  If our database were smaller, we would quite possibly see data pages in the first PFS page of the database.  Let’s see a sample from the first PFS in this database.
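A sketch of those commands (the database name is the hypothetical one from our trap; TF 3604 simply routes DBCC output to the client):

DBCC TRACEON (3604);
DBCC PAGE ('GhostTrap', 1, 1, 3);    -- the first PFS page of file 1
DBCC PAGE ('GhostTrap', 1, 126, 3);  -- the index page discussed next
-- substitute any PageID from the fn_dblog output to inspect a data page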

[Image: ghost records displayed on the PFS page due to the trace flag]

We can follow that link from this point to page 126.  Page 126 happens to be an index page similar to the following.  There are a couple of indicators that this is an index page.  The first is that when we run DBCC PAGE with a format of 3, we will see two result sets, the second of which shows statistics and index information.  The second indicator is in the image attached after the query.  We will leave it as an exercise to you to see other ways to demonstrate that this is an index page.

[Image: ghost records on an index page]

That is great, but we have more ghosts to find.  Let’s look at a ghost on a data page.  Randomly picking a PageID from that list that was output from fn_dblog, let’s see what DBCC Page will provide to us.

[Image: ghost count on a data page]

Conclusion

Well, isn’t that just cool!  We have trapped a bunch of ghosts and were even able to see them.  This has been a fantastic deep dive into the crypts of the database.  This is merely a scratch on the surface though.  We hope this will encourage you to explore a bit and at least try one of the homework assignments we left behind in this article.

With all of that, we have a bit of cleanup to do.  The cleanup comes in one of two methods.  Method one involves manual labor.  Method two involves our friendly little trace flag we already used.  Since most DBAs prefer the automated mechanisms over manual, let’s just discuss method two for now.  It is extremely effortless.
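Method two amounts to taking the goggles off so the automatic ghost cleanup can resume its normal duties:

DBCC TRACEOFF (661, -1);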

That will put the system back to the way it was when we started (and of course we trust that nobody did this on their prod box).

This has been one of a few articles about ghosts in the database. You can check out some of the others here and here.

Refresh SQL Modules

As is true in most facets of life, things tend to get stale and old. Sometimes this staleness can be visibly represented as the wrinkles on your face. Other times, as with SQL Server, it may mean that a stored procedure or view stops working.

Unfortunately, when it comes to SQL Server, the symptoms are not as readily visible as aging lines would be. In SQL Server, the symptoms may be as random and difficult to recognize as a stored procedure that just suddenly stops working.

What makes this even more difficult is that the stored procedure (that is no longer working) may appear to be entirely unchanged. These failures can occur when other changes have occurred to the system as well as when no change has occurred. Imagine the joys you could experience while trying to troubleshoot this kind of problem.

If you were keen, you would probably implement any number of Extended Event sessions to help troubleshoot the issue. If you are unfamiliar, you might want a refresher course on how to set up an XE session, which you could read from one of these articles, here and here.

If you are curious, there are more XE articles on my blog – here.

Metadata

There are many possible causes for the metadata to become fouled up inside the SQL modules on the database server. Invariably, according to the interested parties, nothing has changed in months! Right or wrong, there is still a problem to fix. Fortunately, the problem is easy enough to fix.

Sure, all of the modules could be re-deployed from source control. But, given that the modules don't appear to have lost any of the code within them, re-deploying code seems to be a bit of overkill. We don't need to go that far. We can simply run sp_refreshsqlmodule for the stored modules (procs, triggers, functions, views, etc.). This would be pretty simple with the following script:
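A sketch of such a script – it simply generates one EXEC statement per module into the RefreshStmt column referenced below (note that schema-bound modules cannot be refreshed this way):

USE AdventureWorks2014;
GO
SELECT 'EXEC sys.sp_refreshsqlmodule ''' + QUOTENAME(SCHEMA_NAME(o.schema_id))
     + '.' + QUOTENAME(o.name) + ''';' AS RefreshStmt
FROM sys.objects o
WHERE o.type IN ('P', 'V', 'FN', 'IF', 'TF', 'TR');  -- procs, views, functions, triggers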

In this script, I am just going to refresh all modules in the database (AdventureWorks2014 in this case). I could modify the script to trim it down to a specific module or set of modules. Instead, I leave that as homework for you.

That said, I do have a slightly different alternative that looks for any tables changed on a specific date. After finding those changed tables, then I update all modules related to the changed table(s). Here is how that would look:
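Here is one possible take on it (the date value is a placeholder, and sys.dm_sql_referencing_entities is just one way to find the dependent modules):

DECLARE @ChangeDate DATE = '20181201';  -- hypothetical date the tables were modified

SELECT DISTINCT
       'EXEC sys.sp_refreshsqlmodule ''' + QUOTENAME(re.referencing_schema_name)
     + '.' + QUOTENAME(re.referencing_entity_name) + ''';' AS RefreshStmt
FROM sys.tables t
CROSS APPLY sys.dm_sql_referencing_entities(SCHEMA_NAME(t.schema_id) + '.' + t.name, 'OBJECT') re
WHERE CONVERT(DATE, t.modify_date) = @ChangeDate;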

From here, suppose you want to check for a range of dates where tables were modified. Again, I will leave that as homework for you.

After executing these scripts, I will have an output similar to the following:

This leaves you with one more step to perform – copy the values from the RefreshStmt column to a new query window and execute the statements.

The Wrap

In this article I showed a very simple solution to a problem that plagues some environments – out of date metadata. Forcing a module refresh will often resolve these types of issues.

With such a beautifully simple solution, I could have grouped this article into my “Back to Basics” series, but I did not. That said, there are some pretty interesting articles in the series including (but not limited to): Backups, backup history and user logins. Check them out!

Cannot Use the Special Principal – Back to Basics

I recently had a client call me out of the blue because he happened to be getting an error while trying to add a user to a database role. The error he was getting was “Cannot use the special principal ‘dbo’.”

This error has probably cropped up on me more than a few times. And on more than a few occasions, I have forgotten about the previous experiences. Some of that is because the fix is rather easy and after a few times seeing it, muscle memory takes over and you just fix it without thinking about it too much.

Until you get to that muscle memory moment though, you may flounder a bit trying this and that and failing then proceeding on to a level of frustration that has you losing precious hair.

As luck would have it, this is an article about security and principals and is similar in nature to some other articles I wrote about some fundamental misconceptions about permissions here and here.

I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Meet Prince Apole and Rolle…

Adding a user to the db_datareader database fixed role is a pretty simple task. Most of us can likely do that in our sleep. Even using the GUI is usually pretty reliable for that. Every now and again though, somebody has decided to get tricky on us. Maybe a mistake was made somewhere in a setting on the server and nobody has caught it because nothing was “broken” – until it was.

In the aforementioned case, I was asked to help resolve the issue and I found that there was a problem in how the database owner was set. Not only was it a problem in the current database but in 12 other databases on the same server. The systems admin was at wits’ end. He was dealing with something that was just not in his knowledge-base yet. I remember being in the same boat – so no big deal there. We talked about some of the things he had tried and how none of it was working. I am going to recreate the same basic scenario along with some of the attempted fixes in this article.

First, we need to create a database (best to break a database designed to be broken instead of an existing one used for something else already).
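A sketch of that setup (DummyDB and mydomain\svc_dummy are the names used throughout this article):

USE master;
GO
CREATE DATABASE DummyDB;
GO
CREATE LOGIN [mydomain\svc_dummy] FROM WINDOWS;
GO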

That is pretty straightforward – the database will be created with the data files in the default directories on your SQL Server instance. In addition, a login called mydomain\svc_dummy will be created as a Windows login.

Now let’s try to set the owner of the database and then add the svc_dummy account to the datareader role.
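First the owner change, then the role addition exactly as the GUI scripts it (ALTER ROLE is the modern syntax; older versions of SSMS script sp_addrolemember instead):

ALTER AUTHORIZATION ON DATABASE::DummyDB TO [mydomain\svc_dummy];
GO
USE DummyDB;
GO
ALTER ROLE [db_datareader] ADD MEMBER [dbo];  -- note: the GUI scripted dbo, not svc_dummy
GO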

There is a point of interest here. I said I was going to add svc_dummy to the datareader role – not dbo. Well, I set the database owner in the preceding step to svc_dummy so it basically became dbo. When I try to perform the role addition in the GUI and then script the change, this is the script that is produced. I will show why in a few moments.

The execution of the second part of the script results in the following:

Msg 15405, Level 16, State 1, Line 18

Cannot use the special principal ‘dbo’.

That is obviously not going to work. Let’s try fixing the script and add the svc_dummy principal instead of dbo.
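Something like this:

ALTER ROLE [db_datareader] ADD MEMBER [mydomain\svc_dummy];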

Unfortunately, this results in the following:

Msg 15151, Level 16, State 1, Line 22

Cannot add the principal ‘mydomain\svc_dummy’, because it does not exist or you do not have permission.

Well, maybe the problem is because the user doesn’t exist then? Let’s try to create the user and see what happens.
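The attempted fix would be along these lines:

CREATE USER [mydomain\svc_dummy] FOR LOGIN [mydomain\svc_dummy];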

Now we should see this message:

Msg 15063, Level 16, State 1, Line 32

The login already has an account under a different user name.

Oy vey. We seem to be going in circles. Nothing is working. The user is there but not really there. Let’s try to drop the user and just try to clean things up and start over.
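The (doomed) cleanup attempt:

DROP USER [dbo];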

I hope you see the problem with this one. Trying to drop dbo. I dunno but we should see an error here – and we do get an error.

Msg 15150, Level 16, State 1, Line 27

Cannot drop the user ‘dbo’.

Let’s fix the user then and try to drop the svc_dummy user instead of dbo.
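This time:

DROP USER [mydomain\svc_dummy];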

Which in turn generates yet another error.

Msg 15151, Level 16, State 1, Line 52

Cannot drop the user ‘mydomain\svc_dummy’, because it does not exist or you do not have permission.

If I can’t resolve the problem by changing the user in the database, maybe I can just blow it out of the water by dropping the server login.
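The nuclear option would look like this:

DROP LOGIN [mydomain\svc_dummy];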

Yet another failure message will ensue. This time the message is:

Msg 15174, Level 16, State 1, Line 55

Login ‘mydomain\svc_dummy’ owns one or more database(s). Change the owner of the database(s) before dropping the login.

So far we have been able to skirt around the problem and generate six different error messages. The last one kind of gives us the best information on what we could do to resolve the issue. The login owns a database and therefore, we need to undo that ownership. Before we do that, let’s take a look at the database principal ‘dbo’.

We already know that svc_dummy is mapped to a user in the DummyDB database. We also know that we cannot add the svc_dummy user because of that prior mapping. We have also learned that when we script the permissions change for the svc_dummy login from the GUI, it scripts out the user ‘dbo’. Due to this, let’s look in the sys.database_principals view at the dbo user and see what it tells us.
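A sketch of that peek:

SELECT dp.name, dp.type_desc, dp.sid
FROM sys.database_principals dp
WHERE dp.name = 'dbo';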

See how the dbo database principal says it is mapped to a windows account type? With this in mind, let’s join to the sys.server_principals and see what windows account is mapped to the dbo database user.
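Joining on the sid gets us there:

SELECT dp.name AS DatabaseUser
     , dp.type_desc AS UserType
     , sp.name AS MappedLogin
FROM sys.database_principals dp
INNER JOIN sys.server_principals sp ON dp.sid = sp.sid
WHERE dp.name = 'dbo';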

Now we see a bit more clearly. Combined with the error messages and the principal information for both the login and the user, we have a better view of the puzzle. Changing the database owner mapped the Windows account to dbo for us and is now restricting us to certain activities when trying to manage permissions for the Windows login in the database. From here, we can easily fix the issue by changing the database owner, creating a user mapped to the Windows login and then adding that principal to the datareader role.
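Putting that fix into a script (sa is used here for simplicity; any appropriate owner will do):

USE master;
GO
ALTER AUTHORIZATION ON DATABASE::DummyDB TO sa;
GO
USE DummyDB;
GO
CREATE USER [mydomain\svc_dummy] FOR LOGIN [mydomain\svc_dummy];
GO
ALTER ROLE [db_datareader] ADD MEMBER [mydomain\svc_dummy];
GO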

And if we run that script for svc_dummy we will see a successful execution as shown here.

The Wrap

In this article I took a rather long route to a simple fix. It’s easy to try each of the steps I showed in this article thinking it will help. It isn’t illogical to try some of those steps. They just don’t work unfortunately. In the end, getting to know the settings in the database and what the errors are really trying to get at is most helpful. Sometimes, it just takes a few more steps to get to the real meaning of the error.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Quickly Change SQL Job Owners

It is not unusual to find a server where some random user created a bunch of jobs to be run by SQL Agent. Sometimes, the user creating the job(s) sets themselves as the owner of the job. There are certain cases where this behavior is hard to avoid, like when creating a maintenance plan.

And of course, there are times when the user just doesn’t know any better. There is, of course, the rare occasion when setting the job owner to be one’s self makes the most sense – but that is few and far between in the grand scheme. Usually, you will want a non-expiring account such as a service account or a principal without “logon” permissions to be the owner.

The primary reason is simple – humans have an expiration date for every job they will ever hold. When that expiration occurs, you may end up with any number of unwanted side effects. Unwanted side effects are exactly what we try to avoid in our jobs run via SQL Agent.

No Expiration Date

There are two basic means to change the owner of every job on your server. Either you open each job one by one and set the owner to an acceptable principal – a method that is rather tedious and will have you fighting off boredom if you have a few hundred jobs on the server – or you change the job owners in groups (set-based theory). This second method can be far less tedious and far more efficient, and it is by far my preferred method. Let’s take a look at how to make all of these changes in groups.
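A sketch of that process might look like the following (the old-owner value is a placeholder, and updating msdb.dbo.sysjobs directly is just one way to do this in a set-based fashion):

USE msdb;
GO
DECLARE @OldOwner sysname = N'SomeDomain\RandomUser'  -- hypothetical departing owner
      , @NewOwner sysname = N'sa'
      , @NewOwnerSid VARBINARY(85);

SELECT @NewOwnerSid = SUSER_SID(@NewOwner);

BEGIN TRANSACTION;

-- fetch what should be changed
SELECT j.name AS JobName, SUSER_SNAME(j.owner_sid) AS CurrentOwner
FROM msdb.dbo.sysjobs j
WHERE j.owner_sid = SUSER_SID(@OldOwner);

-- make the change
UPDATE j
SET j.owner_sid = @NewOwnerSid
FROM msdb.dbo.sysjobs j
WHERE j.owner_sid = SUSER_SID(@OldOwner);

-- verify the change
SELECT j.name AS JobName, SUSER_SNAME(j.owner_sid) AS NewOwner
FROM msdb.dbo.sysjobs j;

-- COMMIT TRANSACTION;   -- run this if the verification looks right
-- ROLLBACK TRANSACTION; -- run this if it does not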

There are three basic sections to this script. First I fetch what should be changed, then I make the change, and lastly I verify the change. If the change doesn’t look right, then I can rollback the change. If the change is what I expected, then I can commit the change. Those are the broad strokes.

At a more detailed glimpse, I have set up a few variables for what I want to change and what the new job owner should be, and then I fetch the sid of that new job owner. In my example, I am setting everything to ‘sa’. Why? Because it is easy for the sake of the example in the article – nothing more!

Since sometimes the owner of the job may only have access to the SQL instance via a Domain Group, I also take advantage of a couple of functions to double check that it is the correct account. These functions I am using are SUSER_SID() and SUSER_SNAME().

When all is done as I am expecting, then I should see something similar to the following.

Since the change is what I expect, then at this point I would proceed with the commit transaction statement.

The Wrap

As you can see, making job ownership changes at group scale instead of one by one is pretty easy. This only takes a matter of seconds to run against hundreds of jobs. That same kind of task done one at a time could easily take more than 40 minutes. I am not sure I want to spend that much time on such an innocuous task. I hope you are now able to use what you have learned to improve your skills and become a rock-star DBA. ENJOY!


This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Use SSMS with a Different Windows Account – Back to Basics

One of the tasks I find myself doing on a fairly regular basis is running SSMS as a different Windows User. The two biggest use cases for this are: a) to test an account to prove that it is working (or not) and has the appropriate level of access, and b) to use SSMS to connect to a Domain SQL Server from a computer in a different domain (or not on the domain).

In addition to needing to do these tasks for myself, I find that I need to show somebody else how to do the same thing on a fairly consistent basis. Considering the finite keystrokes we all have (which I referenced here), it is time for me to “document” how to do this task.

I will cover two really easy and quick methods to perform this task. One from a command line and the other from the GUI. Both methods will involve a variation of the runas utility.

RUNAS

Let’s start with the easiest of the two methods. In this case, you will need to test a Windows account (let’s call it a domain account) from a computer which is on the same domain. This requirement allows us to take advantage of the shortcuts from within the GUI to access the runas utility.

To access runas from Windows, first locate the icon for SSMS in the Start Menu, then right click that icon as shown here.

After right clicking the icon, you will see a menu pop up on the screen. Select “Run as different user” from that menu. Once you have selected the appropriate “run as” option, a login prompt will appear as shown here.

Enter the appropriate credentials at the prompt and then SSMS will launch. In this case, I may want to test the account myidomain\domain.useracc. So, I merely need to enter the domain credentials for that account. A caveat here is that the account you are testing will need to have the necessary permissions to “logon” to the workstation in order to launch the app – unlike the second method.

CMD Line

This second method has a few advantages over the GUI method with the biggest advantage being that you can use this method from any machine on the domain or even a machine not joined to the domain (so long as you have the ability to authenticate to the domain). And of course the additional advantage that the account you are testing does not require “logon” permissions on the machine you are using.

Let’s start with the basic command.
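Something along these lines (the SSMS path shown is from an SSMS 17 install and will vary; the /netonly switch is what lets the account skip the local “logon” requirement):

runas /netonly /user:mydomain\domain.useracc "C:\Program Files (x86)\Microsoft SQL Server\140\Tools\Binn\ManagementStudio\Ssms.exe"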

I can run that from a command line, or I can throw that into a desktop shortcut (the shortcut method is much more convenient). After I hit “enter” from the command line, I am prompted for a password for the account to be used for that session. Here’s an example of how that would look.

You won’t be able to see the password being typed (don’t fat finger the password 😉 ), but after you enter it successfully and press “enter” then you will see SSMS start to launch. After a successful SSMS launch, you should see something similar to the following:

I have a few things highlighted here of interest. First, in the red box, you will note that the user shown as connected to the server is my “local” test box account instead of the domain account. However, if I verify the authenticated account, I can see that the domain account is indeed accessing the SomeServer SQL Server (as demonstrated by the green box on the right).

The Wrap

Sometimes what may be ridiculously easy for some of us may be mind-blowing to others. Sometimes we may use what we think are common terms only to see eyes start to glaze over and roll to the backs of people’s heads. This just so happens to be one of those cases where launching an app as a different principal may be entirely new to the intended audience. In that vein, it is worthwhile to take a step back and “document” how the task can be accomplished.

Runas should be a very common tool in the toolbox of all IT professionals – not just Data Professionals. Learning how to test different accounts is essential to being an effective and efficient professional that can provide solid results.


This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Single User Mode – Back to Basics

In a recent article, I took you on a trip through how to hack (ethically) a SQL Server to regain sysadmin access. In that article, I made quick mention of restarting SQL Server into single-user mode. It only makes sense to show quickly how to get into single-user mode.

Before getting into that, I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Single-User

So, what exactly is this single-user mode thing? Single-user mode is basically the official back-door into SQL Server for various reasons such as:

  • Somebody deleted all of the logins that were in the sysadmin role.
  • The sa account is disabled or the password has been forgotten.
  • Somebody deleted any Windows groups that were members of the sysadmin role.
  • All members of the sysadmin role are no longer with the company.
  • You need to restore the master database.
  • You want to keep SQL Server all to yourself because you are greedy!

These are some pretty solid reasons to need to be able to use the back door. But how exactly do we get to the back door?

Two Paths

As luck would have it, there are two ways to enable single-user mode. You can either get there by making some changes for the SQL Server service in Configuration Manager, or you can utilize a command prompt. I won’t cover the GUI path beyond the gentle reminder that you must remember to undo your change when using that method.

My preferred method is through the command line. Using my SQL Server 2017 instance as the experiment, I would navigate to the Binn directory for that instance – in this case, as shown in the next image.

Before getting too far ahead of myself, I am going to stop my SQL Server.

Notice, I also queried to find all of my services related to SQL before stopping the MSSQLServer service via the net stop mssqlserver command. We will come back to some net start and net stop commands later.

With the service successfully stopped, I can now restart the service in single-user mode.
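From the Binn directory, the startup command would look something like this (the app name trickery is explained in a moment):

sqlservr.exe -m"SQLCMD"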

And then the validation that we are indeed starting in single-user mode…

But wait, did you notice that bit of trickery on the startup command?

This is a pro-tip for when you must use single-user mode. Inevitably, somebody will steal the single-user connection and you will be locked out of the session. By using an app name after the single-user switch, you are telling SQL Server to only accept connections from that specific application. Since most apps will not be using sqlcmd, you will have far less contention for that connection and you will be able to complete your task much more easily.

You could also pass something like this instead…
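sqlservr.exe -m"Microsoft SQL Server Management Studio - Query"

That app name is what an SSMS query window reports when it connects.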

In this case, I would be limiting the connections to a query from SSMS (and not object explorer).

Now that I have a single-user connection, I can add a sysadmin or restore the master database or just sit on it and play devious. It all depends on what your objective for the single-user session happens to be.

More Command Line

Remember that reference to the NET commands? Well, it turns out we can also start SQL Server in single-user via net start. Let’s check it out.

The command is pretty simple:
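For a default instance it would be something like:

net start MSSQLSERVER /m"SQLCMD"

(For a named instance, the service name takes the MSSQL$InstanceName form instead.)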

The effect here is the same as navigating to the Binn directory and starting SQL Server with the sqlservr.exe executable. The big difference is considerably less typing and less verbose output of the service startup.

When using the net start method, you do need to know the service name of the SQL Server instance. To get that, I do recommend the following powershell script.
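A minimal sketch of such a script:

Get-Service -Name "MSSQL*" |
    Select-Object Name, DisplayName, Status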

This will produce results similar to the following.

From the results, I can pick the SQL Server service and then pass that to the net start command fairly easily.

The Wrap

Starting SQL Server in single-user mode should be a tool every data professional holds in the bag. This is an essential tool that can be used in multiple scenarios and ensure you are able to fully maintain and control your server. I have shown how to get to single-user mode via two command line methods and mentioned a GUI method. The nice thing about the command line methods is that you don’t have to remember to undo the startup switch like you do with the GUI method.

If you feel the need to read more about single-user mode, here is an article and another on the topic.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Lost that SQL Server Access?

As a data professional can you recall the last time you needed to support a SQL Server instance for which you had no access? What if you used to have access and then that access magically disappeared?

I know I run into this dilemma more than I would probably like to. It is rather annoying to be under a crunch to rapidly provide support only to discover you are stuck and have to wait on somebody else who hopefully has access.

It’s one thing to not have access in the first place. This is usually an easy fix in most cases. The really unpleasant access issue is the one when you have confirmed prior access to the instance and then to be completely locked out. More succinctly, you have lost that SQL access!

Whoa is Me!

All hope is now lost right? OK, that isn’t entirely true. Or is it? What if everybody else from the team is also locked out and there is no known sysadmin account. In essence everybody is locked out from managing the instance and now you have a real crisis, right? Well, not so fast. You can still get back in to the instance with sysadmin access. It should be no real secret that you could always restart the SQL instance in single-user mode. Then again, that probably means bigger problems if the server is a production server and is still servicing application requests just fine.

What to do? What to do?

Restart Prohibited

If you really cannot cause a service disruption to bounce the server into single-user mode, my friend Argenis Fernandez (b | t) has this pretty nifty trick that could help you. Truth be told, I have tested that method (even on SQLExpress) several times and it is a real gem. Is this the only alternative?

Let’s back it up just a step or two first. Not having access to SQL Server is in no way the same thing as not having access to the server. Many sysadmins have access to the Windows server. Many DBAs also have access to the Windows server or can at least work with the sysadmins to get access in cases like this. If you have admin access to Windows, then not much is really going to stop you from gaining access to SQL on that same box. It is a matter of how you approach the issue. Even to restart SQL Server in single-user mode, you need to have access to the Windows server. So, please keep that in mind as you read the article by Argenis as well as the following.

Beyond the requirement of having local access to the server, one of the things that may cause heartburn for some is the method of editing the registry as suggested by Argenis. Modifying the registry (in this case) is not actually terribly complex, but it is another one of those changes that must be put back the way it was. What if there was another way?

As luck would have it, there is an alternative (else there wouldn’t be this article). It just so happens, this alternative is slightly less involved (in my opinion). Let’s start with a server where I don’t have SQL access (beyond public) but I do have Windows access.

We can see on this SQLExpress instance on the TF server that my “Jason” account does not exist. Since I don’t have access, I can’t add my own account either. Time to fix that. In order to fix it, I am going to create a Scheduled Task in Windows that will run a SQLCMD script from my C:\Database folder. The folder can be anywhere, but I generally have one with scripts and such somewhere on each server that I can quickly access.

From here, you will want to click on the “Change User or Group” button to change it to an account that does have access to SQL Server. The account that I use is not a “user” account but rather it is a “system” account called “NT AUTHORITY\SYSTEM” that is present all the way through SQL Server 2017.

To locate the “NT AUTHORITY\SYSTEM” account, just type “SYSTEM” into the new window and click “Check Names”. The account will resolve and then you can click OK out of the “Select User or Group” window.

With the account selected that will run this task, we can now focus our attention on the guts of the task. We will now go to the “Actions” tab.

Click the new button, and here we will configure what will be done.

I do recommend putting the full path to SQLCMD into “Program/Script” box. Once entered, you will add the following to the parameter box.
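Based on the TF\SQLExpress instance and the folder used in this article, the parameters would be along these lines:

-S TF\SQLExpress -i "C:\Database\myscript.sql"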

If you do not have an instance, then just the server name will suffice after the -S parameter. The -i parameter specifies the path to the SQL script file that will be created and placed in the C:\database directory (or whichever directory you have chosen).

That is it for the setup of the task. Now let’s look at the guts of the script file.
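A sketch of what myscript.sql might contain (the login name is an assumption based on the “Jason” account mentioned earlier; on instances older than SQL 2012, use sp_addsrvrolemember instead of ALTER SERVER ROLE):

USE master;
GO
CREATE LOGIN [TF\Jason] FROM WINDOWS;
GO
ALTER SERVER ROLE sysadmin ADD MEMBER [TF\Jason];
GO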

Save that into a script document named myscript.sql in the aforementioned directory and then execute the Windows task. After executing the task, it is time to verify whether it worked.

Boom! From no access to a sysadmin in a matter of seconds. Here is that quick verify script – generalized.
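Something like this:

SELECT SUSER_SNAME() AS WhoAmI
     , IS_SRVROLEMEMBER('sysadmin') AS IsSysAdmin;  -- 1 means the current login is a sysadmin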

The Wrap

Losing access to a SQL instance is never a desirable situation – for the DBA. When the people that are supposed to have access, lose that access, all hope is not lost. There are plenty of methods available to regain the requisite access to manage the server. Today, I shared one such method that I view as being extremely easy. If you lose access, I would recommend taking the steps shown in this article to regain that access.

While not in the back to basics series, I do recommend checking out my other posts in that series. Some topics in the series include (but are not limited to): Backups, backup history and user logins. I would also recommend reading this audit article. If you are able to elevate your permissions, then obviously anybody with server access can elevate their permissions too. For that reason, you should regularly audit the permissions and principals in SQL Server.

Change SQL Server Collation – Back to Basics

One of my most favorite things in the world is the opportunity to deal with extremely varying database and environment requirements. Many vendors and databases seem to have a wide swath of different requirements. Some of the reasons for these requirements are absurd and some are not. That is a discussion for a different day.

When dealing with vendors, sometimes you get good documentation and requirements for the app. If you happen across one of these opportunities, you should consider buying a lottery ticket. Most of the time, the requirements and documentation are poorly assembled and then suffer from linguistic shortcomings.

What do you do when you run into poor documentation from a vendor? The typical answer would be to either call them or make a best guess (even if you call them, you are likely stuck with a best guess anyway). Then what do you do when you find that your best guess was completely wrong? Now it is time to back pedal and fix it, right?

When that mistake involves the server collation setting, the solution is simple – right? All you need to do is uninstall and reinstall SQL Server. That is the common solution and is frankly a horrific waste of time. This article will show some basics around fixing that problem quickly without a full reinstall.

I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Reinstall Prohibited

I am not a huge fan of wasting time doing something, especially if there is a more efficient way of achieving the same end result. I am not talking about cutting corners. If you cut corners, you likely just end up with having more work to do to fix the problems your sloppiness will have caused. That to me is not the same end result.

Having run into a bit of a problem with a vendor recently (with lacking requirements), I found myself with a server that was installed with a different collation than the vendor wanted (never mind that they said nothing of it until a month after the server was set up and ready for them to use). The vendor needed the collation fixed immediately (basically, it needed to be fixed yesterday). I really did not want to do a reinstall of the server, and the sysadmins were just about to click through the uninstall and redo the install.

Oy Vey! Everybody hold up just a second here! First things first – verify with certainty there is good reason to need to change the server collation. It is perfectly legit to give the vendor the third degree here. Make sure they understand why they need the change. If they can answer the questions satisfactorily, then proceed with the change.

Next, just because the vendor says you have to uninstall/reinstall (or reboot) the server to make a certain change, does not mean they know what they are talking about. I have run into too many cases where the vendor thinks you must reboot the server to change the max memory setting in SQL Server (not true for sure).

Sure, common myth would say that you must reinstall SQL Server in order to change the default server collation. That is not entirely accurate. Reinstall is just one option that exists.

In the case of this vendor, they required that the SQL_Latin1_General_CP850_CS_AS collation be used. The server was set for SQL_Latin1_General_CP1_CI_AS. So, let’s see how we can change the collation without a reinstall.

The first thing to do is to confirm the collation we have set.
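A simple check does the trick:

SELECT SERVERPROPERTY('Collation') AS ServerCollation;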

We can see from these results that indeed the collation is wrong and we need to change it in order to comply with the request from the vendor. Next we will need to stop the SQL Server services.

I think it is pretty clear what to do there.  As a reminder, the preferred method to stop and start SQL Server services is via the SQL Server Configuration Manager.  We won’t do every start/stop from there for this article, for good reason.

Once the services are stopped, then we need to open an administrative command prompt and navigate to the SQL Server binn directory as shown here.

This is for a default instance on SQL Server 2017. If you have a named instance or a different version of SQL Server, you will need to navigate the instance folder structure for your instance.

Next is where the magic happens. We enter a command similar to this:
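For the default instance and the collation requested by the vendor, that command would be along these lines:

sqlservr -m -T4022 -T3659 -q"SQL_Latin1_General_CP850_CS_AS"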

Here is a quick summary of those flags in this command:

[-m] single user admin mode
[-T] trace flag turned on at startup
[-q] new collation to be applied

There are more such as -s available in books online for your perusal.

If you are curious what is up with those Trace Flags, pretty simple. TF4022 is to bypass startup procs. TF3659 on the other hand is supposed to write errors to the error log (at least in theory).

When the script starts, you will see something like the next two screens:

In the first, you can see that it says it is attempting to change the collation. In the second, just before the completion message, it states that the default collation was successfully changed. Let’s close this command prompt window and then go start SQL Server and validate the change.

And that is a successful change. See how easy that is? This effort takes all of maybe 5 minutes to complete (validation, starting, stopping and so on). I would take this over a reinstall on most days.

Now that we have changed the collation, all I need to do is repeat the process to set the collation back to what it was originally (in my test lab) and make sure to bookmark the process so I can easily look it up the next time.

There is a bit of a caveat to this.  On each change of the collation, I ran into permissions issues with my default logging directory (where the SQL error logs are written) and SQL Server would not start until I reapplied the permissions.  That said, the permissions issue was not seen on the box related to the change for the vendor.  So just be mindful of the permissions, just in case.

The Wrap

Every now and again we have to deal with a sudden requirements change.  When that happens, we sometimes just need to take a step back and evaluate the entire situation to ensure we are proceeding down the right path.  It is better to be pensive about the course of action than to knee-jerk into it.  Do you want to spend 5 minutes on the solution or 30-40 minutes doing the same thing?  Changing collation can be an easy change, or it can be one met with a bit of pain doing a reinstall (more painful if more user databases are present).  Keep that in mind and keep cool when you need to make a sudden change.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.
