Quickly Change SQL Job Owners

It is not unusual to find a server where some random user created a bunch of jobs to be run by SQL Agent. Sometimes, the user creating the job(s) sets themselves as the owner of the job. There are certain cases where this behavior is hard to avoid, such as when creating a maintenance plan.

And of course, there are times when the user just doesn’t know any better. There is, of course, the rare occasion when setting the job owner to be oneself makes the most sense – but that is few and far between in the grand scheme. Usually, you will want a non-expiring account such as a service account or a principal without “logon” permissions to be the owner.

The primary reason is simple – humans have an expiration date for every job they will ever have. When that expiration occurs, you may end up with any number of unwanted side effects. Unwanted side effects are exactly what we try to avoid in our jobs run via SQL Agent.

No Expiration Date

There are two basic means to change the owner of every job on your server. Either you open each job one by one and set the owner to an acceptable principal, which is rather tedious and will have you fighting off boredom if you have a few hundred jobs on the server, or you change the job owners group by group (set-based theory). This second method can be far less tedious and far more efficient, and it is by far my preferred method. Let’s take a look at how to make all of these changes in groups.

There are three basic sections to this script. First I fetch what should be changed, then I make the change, and lastly I verify the change. If the change doesn’t look right, then I can roll back the change. If the change is what I expected, then I can commit the change. Those are the broad strokes.

At a more detailed glimpse, I have set up a few variables to hold what I want to change and what the new job owner should be, and then I fetch the SID of that new job owner. In my example, I am setting everything to ‘sa’. Why? Because it is easy for the sake of the example in the article – nothing more!

Since the owner of the job may sometimes only have access to the SQL instance via a Domain Group, I also take advantage of a couple of functions to double check that it is the correct account. The functions I am using are SUSER_SID() and SUSER_SNAME().
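Here is a minimal sketch of that process. The old owner account name is a placeholder for whichever principal you are cleaning up, and the direct UPDATE of msdb.dbo.sysjobs shown here is just one way to do it (msdb.dbo.sp_update_job is the fully supported alternative if you prefer to loop through the jobs).

-- placeholders: swap in the principal you are replacing
DECLARE @OldOwner sysname = N'MyDomain\Some.User'
      , @NewOwner sysname = N'sa';

BEGIN TRANSACTION;

-- fetch what should be changed
SELECT j.name, SUSER_SNAME(j.owner_sid) AS CurrentOwner
FROM msdb.dbo.sysjobs j
WHERE j.owner_sid = SUSER_SID(@OldOwner);

-- make the change for the whole group at once
UPDATE j
SET j.owner_sid = SUSER_SID(@NewOwner)
FROM msdb.dbo.sysjobs j
WHERE j.owner_sid = SUSER_SID(@OldOwner);

-- verify the change
SELECT j.name, SUSER_SNAME(j.owner_sid) AS JobOwner
FROM msdb.dbo.sysjobs j;

-- COMMIT TRANSACTION;   -- if the results look right
-- ROLLBACK TRANSACTION; -- if they do not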

When all is done as I am expecting, then I should see something similar to the following.

Since the change is what I expect, then at this point I would proceed with the commit transaction statement.

The Wrap

As you can see, making job ownership changes at group scale instead of one by one is pretty easy. This only takes a matter of seconds to run against hundreds of jobs. That same kind of task done one at a time could easily take more than 40 minutes. I am not sure I want to spend that much time on such an innocuous task. I hope you are now able to use what you have learned to improve your skills and become a rock-star DBA. ENJOY!

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Use SSMS with a Different Windows Account – Back to Basics

One of the tasks I find myself doing on a fairly regular basis is running SSMS as a different Windows User. The two biggest use cases for this are: a) to test an account to prove that it is working (or not) and has the appropriate level of access, and b) to use SSMS to connect to a Domain SQL Server from a computer in a different domain (or not on the domain).

In addition to needing to do these tasks for myself, I find that I need to show somebody else how to do the same thing on a fairly consistent basis. Considering the finite keystrokes we all have (which I referenced here), it is time for me to “document” how to do this task.

I will cover two really easy and quick methods to perform this task. One from a command line and the other from the GUI. Both methods will involve a variation of the runas utility.

RUNAS

Let’s start with the easier of the two methods. In this case, you will be testing a Windows account (let’s call it a domain account) from a computer which is on the same domain. This requirement allows us to take advantage of the shortcuts from within the GUI to access the runas utility.

To access runas from Windows, first locate the icon for SSMS in the Start Menu, then right click that icon as shown here.

After right clicking the icon, you will see a menu pop up on the screen. Select “Run as different user” from that menu. Once you have selected the appropriate “run as” option, a login prompt will appear as shown here.

Enter the appropriate credentials at the prompt and then SSMS will launch. In this case, I may want to test the account myidomain\domain.useracc. So, I merely need to enter the domain credentials for that account. A caveat here is that the account you are testing will need to have the necessary permissions to “logon” to the workstation in order to launch the app – unlike the second method.

CMD Line

This second method has a few advantages over the GUI method, with the biggest advantage being that you can use this method from any machine on the domain or even a machine not joined to the domain (so long as you have the ability to authenticate to the domain). And of course there is the additional advantage that the account you are testing does not require “logon” permissions on the machine you are using.

Let’s start with the basic command.
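Something along these lines will do the trick. The domain, account name, and SSMS install path are placeholders here – the path in particular will vary by SSMS version (the one shown is the default for SSMS 17.x).

runas /netonly /user:mydomain\domain.useracc "C:\Program Files (x86)\Microsoft SQL Server\140\Tools\Binn\ManagementStudio\Ssms.exe"

The /netonly switch tells Windows to use the supplied credentials only for network connections, which is why this method works even from a machine that is not joined to the domain.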

I can run that from a command line, or I can throw that into a desktop shortcut (the shortcut method is much more convenient). After I hit “enter” from the command line, I am prompted for a password for the account to be used for that session. Here’s an example of how that would look.

You won’t be able to see the password being typed (don’t fat finger the password 😉), but after you enter it successfully and press “enter” you will see SSMS start to launch. After a successful SSMS launch, you should see something similar to the following:

I have a few things highlighted here of interest. First, in the red box, you will note that the user shown as connected to the server is my “local” test box account instead of the domain account. However, if I verify the authenticated account, I can see that the domain account is indeed accessing the SomeServer SQL Server (as demonstrated by the green box on the right).

The Wrap

Sometimes what may be ridiculously easy for some of us may be mind-blowing to others. Sometimes we may use what we think are common terms only to see eyes start to glaze over and roll to the backs of people’s heads. This just so happens to be one of those cases where launching an app as a different principal may be entirely new to the intended audience. In that vein, it is worthwhile to take a step back and “document” how the task can be accomplished.

Runas should be a very common tool in the toolbox of all IT professionals – not just Data Professionals. Learning how to test different accounts is essential to being an effective and efficient professional that can provide solid results.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Single User Mode – Back to Basics

In a recent article, I took you on a trip through how to hack (ethically) a SQL Server to regain sysadmin access. In that article, I made quick mention of restarting SQL Server into single-user mode. It only makes sense to show quickly how to get into single-user mode.

Before getting into that, I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Single-User

So, what exactly is this single-user mode thing? Single-user mode is basically the official back-door into SQL Server for various reasons such as:

  • Somebody deleted all of the logins that were in the sysadmin role.
  • The sa account is disabled or the password has been forgotten.
  • Somebody deleted any Windows groups that were members of the sysadmin role.
  • All members of the sysadmin role are no longer with the company.
  • You need to restore the master database.
  • You want to keep SQL Server all to yourself because you are greedy!

These are some pretty solid reasons to need to be able to use the back door. But how exactly do we get to the back door?

Two Paths

As luck would have it, there are two ways to enable single-user mode. You can either get there by making a change to the SQL Server service in Configuration Manager, or you can utilize a command prompt. I won’t cover the GUI path beyond the gentle reminder that you must remember to undo your change when using that method.

My preferred method is through the command line. Using my SQL Server 2017 instance as the experiment, I would navigate to the Binn directory for that instance, as shown in the next image.

Before getting too far ahead of myself, I am going to stop my SQL Server.

Notice, I also queried to find all of my services related to SQL before stopping the MSSQLServer service via the net stop mssqlserver command. We will come back to some net start and net stop commands later.
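A rough sketch of that from an administrative command prompt (the findstr filter is just one quick way to spot the SQL related services):

rem list the running services related to SQL
net start | findstr /i "SQL"

rem stop the default instance
net stop mssqlserver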

With the service successfully stopped, I can now restart the service in single-user mode.
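That startup looks something like this, assuming a default SQL Server 2017 instance installed to the default path:

cd "C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Binn"
sqlservr.exe -m"sqlcmd"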

And then the validation that we are indeed starting in single-user mode…

But wait, did you notice that bit of trickery on the startup command?

This is a pro-tip for when you must use single-user mode. Inevitably, somebody will steal the single-user connection and you will be locked out of the session. By using an app name after the single-user switch, you are telling SQL Server to only accept connections from that specific application. Since most apps will not be using sqlcmd, you will have far less contention to gain that connection and you will be able to complete your task much more easily.

You could also pass something like this instead…
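For example, to limit connections to an SSMS query window (the application name shown is what an SSMS query window reports as its program name):

sqlservr.exe -m"Microsoft SQL Server Management Studio - Query"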

In this case, I would be limiting the connections to a query from SSMS (and not object explorer).

Now that I have a single-user connection, I can add a sysadmin or restore the master database or just sit on it and play devious. It all depends on what your objective for the single-user session happens to be.

More Command Line

Remember that reference to the NET commands? Well, it turns out we can also start SQL Server in single-user via net start. Let’s check it out.

The command is pretty simple:
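For the default instance it looks something like this (again limiting the single-user connection to sqlcmd):

net start mssqlserver /m"sqlcmd"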

The effect here is the same as navigating to the Binn directory and starting SQL Server with the sqlservr.exe executable. The big difference is considerably less typing and less verbose output of the service startup.

When using the net start method, you do need to know the service name of the SQL Server instance. To get that, I do recommend the following PowerShell script.
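A simple sketch of such a script, filtering on the engine service names, might look like this:

# list the SQL Server engine services along with their service names
Get-Service -Name "MSSQL*" | Select-Object Name, DisplayName, Status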

This will produce results similar to the following.

From the results, I can pick the SQL Server service and then pass that to the net start command fairly easily.

The Wrap

Starting SQL Server in single-user mode should be a tool every data professional holds in the bag. This is an essential tool that can be used in multiple scenarios and ensure you are able to fully maintain and control your server. I have shown how to get to single-user mode via two command line methods and mentioned a GUI method. The nice thing about the command line methods is that you don’t have to remember to undo the startup switch like you do with the GUI method.

If you feel the need to read more about single-user mode, here is an article and another on the topic.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Lost that SQL Server Access?

As a data professional can you recall the last time you needed to support a SQL Server instance for which you had no access? What if you used to have access and then that access magically disappeared?

I know I run into this dilemma more than I would probably like to. It is rather annoying to be under a crunch to rapidly provide support only to discover you are stuck and have to wait on somebody else who hopefully has access.

It’s one thing to not have access in the first place. That is usually an easy fix. The really unpleasant access issue is the one where you had confirmed prior access to the instance and then find yourself completely locked out. More succinctly, you have lost that SQL access!

Woe is Me!

All hope is now lost, right? OK, that isn’t entirely true. Or is it? What if everybody else on the team is also locked out and there is no known sysadmin account? In essence, everybody is locked out from managing the instance and now you have a real crisis, right? Well, not so fast. You can still get back into the instance with sysadmin access. It should be no real secret that you could always restart the SQL instance in single-user mode. Then again, that probably means bigger problems if the server is a production server and is still servicing application requests just fine.

What to do? What to do?

Restart Prohibited

If you really cannot cause a service disruption to bounce the server into single-user mode, my friend Argenis Fernandez (b | t) has this pretty nifty trick that could help you. Truth be told, I have tested that method (even on SQLExpress) several times and it is a real gem. Is this the only alternative?

Let’s back it up just a step or two first. Not having access to SQL Server is in no way the same thing as not having access to the server. Many sysadmins have access to the Windows server. Many DBAs also have access to the Windows server, or can at least work with the sysadmins to get access to it in cases like this. If you have admin access to Windows, then not much is really going to stop you from gaining access to SQL on that same box. It is a matter of how you approach the issue. Even to restart SQL Server in single-user mode, you need to have access to the Windows server. So, please keep that in mind as you read the article by Argenis as well as the following.

Beyond the requirement of having local access to the server, one of the things that may cause heartburn for some is the method of editing the registry as suggested by Argenis. Modifying the registry (in this case) is not actually terribly complex, but it is another one of those changes that must be put back the way it was. What if there was another way?

As luck would have it, there is an alternative (else there wouldn’t be this article). It just so happens, this alternative is slightly less involved (in my opinion). Let’s start with a server where I don’t have SQL access (beyond public) but I do have Windows access.

We can see on this SQLExpress instance on the TF server that my “Jason” login does not exist. Since I don’t have access, I can’t add my own account either. Time to fix that. In order to fix it, I am going to create a Scheduled Task in Windows that will run a SQLCMD script from my C:\Database folder. The folder can be anywhere, but I generally have one with scripts and such somewhere on each server that I can quickly access.

From here, you will want to click on the “Change User or Group” button to change it to an account that does have access to SQL Server. The account that I use is not a “user” account but rather it is a “system” account called “NT AUTHORITY\SYSTEM” that is present all the way through SQL Server 2017.

To locate the “NT AUTHORITY\SYSTEM” account, just type “SYSTEM” into the new window and click “Check Names”. The account will resolve and then you can click OK out of the “Select User or Group” window.

With the account selected that will run this task, we can now focus our attention on the guts of the task. We will now go to the “Actions” tab.

Click the new button, and here we will configure what will be done.

I do recommend putting the full path to SQLCMD into the “Program/Script” box. Once that is entered, you will add the following to the parameter box.
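For this example – a SQLExpress named instance on a server called TF, with the script sitting in C:\Database – the parameters look like this:

-S TF\SQLExpress -i "C:\Database\myscript.sql"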

If you do not have a named instance, then just the server name will suffice after the -S parameter. The -i parameter specifies the path to the SQL script file that will be created and placed in the C:\Database directory (or whichever directory you have chosen).

That is it for the setup of the task. Now let’s look at the guts of the script file.
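A minimal sketch of that script, with the account name as a placeholder for whichever login needs to be rescued:

-- create the login if it does not already exist, then grant it sysadmin
IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'TF\Jason')
BEGIN
    CREATE LOGIN [TF\Jason] FROM WINDOWS;
END
GO

ALTER SERVER ROLE sysadmin ADD MEMBER [TF\Jason];
GO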

Save that into a script document named myscript.sql in the aforementioned directory and then execute the Windows task. After executing the Windows task, it is time to verify whether it worked or not.

Boom! From no access to a sysadmin in a matter of seconds. Here is that quick verify script – generalized.
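The account name is again just a placeholder:

-- did the login get created, and is it a sysadmin?
SELECT p.name
     , p.type_desc
     , IS_SRVROLEMEMBER('sysadmin', p.name) AS IsSysAdmin
FROM sys.server_principals p
WHERE p.name = N'TF\Jason';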

The Wrap

Losing access to a SQL instance is never a desirable situation – for the DBA. When the people that are supposed to have access lose that access, all hope is not lost. There are plenty of methods available to regain the requisite access to manage the server. Today, I shared one such method that I view as being extremely easy. If you lose access, I would recommend taking the steps shown in this article to regain that access.

While not in the back to basics series, I do recommend checking out my other posts in that series. Some topics in the series include (but are not limited to): Backups, backup history and user logins. I would also recommend reading this audit article. If you are able to elevate your permissions, then obviously anybody with server access can elevate their permissions too. For that reason, you should regularly audit the permissions and principals in SQL Server.

Change SQL Server Collation – Back to Basics

One of my favorite things in the world is the opportunity to deal with extremely varied database and environment requirements. Many vendors and databases seem to have a wide swath of different requirements. Some of the reasons for these requirements are absurd and some are not. That is a discussion for a different day.

When dealing with vendors, sometimes you get good documentation and requirements for the app. If you happen across one of these opportunities, you should consider buying a lottery ticket. Most of the time, the requirements and documentation are poorly assembled and then suffer from linguistic shortcomings.

What do you do when you run into poor documentation from a vendor? The typical answer would be to either call them or make a best guess (even if you call them, you are likely stuck with a best guess anyway). Then what do you do when you find that your best guess was completely wrong? Now it is time to back pedal and fix it, right?

When that mistake involves the server collation setting, the solution is simple – right? All you need to do is uninstall and reinstall SQL Server. That is the common solution and is frankly a horrific waste of time. This article will show some basics around fixing that problem quickly without a full reinstall.

I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Reinstall Prohibited

I am not a huge fan of wasting time doing something, especially if there is a more efficient way of achieving the same end result. I am not talking about cutting corners. If you cut corners, you likely just end up with having more work to do to fix the problems your sloppiness will have caused. That to me is not the same end result.

Having run into a bit of a problem with a vendor recently (with lacking requirements), I found myself with a server that was installed with the wrong collation instead of what the vendor wanted (never mind that they said nothing of it until a month after the server was set up and ready for them to use). The vendor needed the collation fixed immediately (basically, it needed to be fixed yesterday). I really did not want to do a reinstall of the server, and the sysadmins were just about to click through the uninstall and redo the install.

Oy Vey! Everybody hold up just a second here! First things first – verify with certainty there is good reason to need to change the server collation. It is perfectly legit to give the vendor the third degree here. Make sure they understand why they need the change. If they can answer the questions satisfactorily, then proceed with the change.

Next, just because the vendor says you have to uninstall/reinstall (or reboot) the server to make a certain change, does not mean they know what they are talking about. I have run into too many cases where the vendor thinks you must reboot the server to change the max memory setting in SQL Server (not true for sure).

Sure, common myth would say that you must reinstall SQL Server in order to change the default server collation. That is not entirely accurate. Reinstall is just one option that exists.

In the case of this vendor, they required that the SQL_Latin1_General_CP850_CS_AS collation be used. The server was set for SQL_Latin1_General_CP1_CI_AS. So, let’s see how we can change the collation without a reinstall.

The first thing to do is to confirm the collation we have set.
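That is as simple as a quick query like this:

SELECT SERVERPROPERTY('Collation') AS ServerCollation;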

We can see from these results that indeed the collation is wrong and we need to change it in order to comply with the request from the vendor. Next we will need to stop the SQL Server services.
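For a default instance, from an administrative command prompt, that can be as simple as stopping the Agent and then the engine:

net stop sqlserveragent
net stop mssqlserver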

I think it is pretty clear what to do there. As a reminder, the preferred method to stop and start SQL Server services is via the SQL Server Configuration Manager. We won’t do every start/stop from there for this article, for good reason.

Once the services are stopped, then we need to open an administrative command prompt and navigate to the SQL Server Binn directory as shown here.

This is for a default instance on SQL Server 2017. If you have a named instance or a different version of SQL Server, you will need to navigate the instance folder structure for your instance.

Next is where the magic happens. We enter a command similar to this:
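For the collation this vendor required, the command looked similar to this:

sqlservr -m -T4022 -T3659 -q"SQL_Latin1_General_CP850_CS_AS"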

Here is a quick summary of those flags in this command:

  • [-m] single-user admin mode
  • [-T] trace flag turned on at startup
  • [-q] new collation to be applied

There are more, such as -s, available in Books Online for your perusal.

If you are curious what is up with those trace flags, it is pretty simple. TF4022 bypasses startup procedures, while TF3659 is supposed to write all errors to the error log (at least in theory).

When the script starts, you will see something like the next two screens:

In the first, you can see that it says it is attempting to change the collation. In the second, just before the completion message, it states that the default collation was successfully changed. Let’s close this command prompt window and then go start SQL Server and validate the change.

And that is a successful change. See how easy that is? This effort takes all of maybe 5 minutes to complete (validation, starting, stopping and so on). I would take this over a reinstall on most days.

Now that we have changed the collation, all I need to do is repeat the process to set the collation back to what it was originally (in my test lab) and make sure to bookmark the process so I can easily look it up the next time.

There is a bit of a caveat to this. On each change of the collation, I ran into permissions issues with my default logging directory (where the SQL error logs are written), which kept SQL Server from starting. I just needed to reapply the permissions and it was fine after that. That said, the permissions issue was not seen on the box related to the change for the vendor. So just be mindful of the permissions, just in case.

The Wrap

Every now and again we have to deal with a sudden requirements change. When that happens, we sometimes just need to take a step back and evaluate the entire situation to ensure we are proceeding down the right path. It is better to be pensive about the course of action rather than to knee-jerk into it. Do you want to spend 5 minutes on the solution or 30-40 minutes doing the same thing? Changing collation can be an easy change, or it can be one met with a bit of pain doing a reinstall (more painful if more user databases are present). Keep that in mind and keep cool when you need to make a sudden change.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

SQL Server Extended Availability Groups

Published on: April 1, 2018

It may come as no surprise to many that Microsoft has hastened the SQL Server development cycle. Furthermore, it may be no surprise to many that Microsoft has also hastened the patch cycle for SQL Server.

If you were unaware of this, consider this your notice that Microsoft has indeed hastened the patch cycle. Not only has the patch cycle become more rapid, the idea of Service Packs is more or less a notion of history at this point. Cumulative Updates (or CUs) are the new norm. This is a pretty good thing, due in large part to the rapid improvements that can be made to the product because of Azure.

With all of this considered now, there is some really awesome news. A hint to this awesome news is in the title of this post. In a recent CU for SQL Server 2017, Availability Groups and Extended Events both have seen massive upgrades. The upgrades are so big, in fact, that it is mind blowing. These upgrades were no small feat by any means, and it took some major investment and cooperation from the likes of some well-known competitors.

Upgrade the first: Availability Groups have now been extended to be able to include nodes from MySQL, Postgres and MariaDB. Frankly, I don’t understand the MariaDB move there but it’s all good. I am 100% on board with the MySQL addition and may have to work really hard to find a use case to include Postgres.

Imagine the realm of possibility this change brings!! First we got SQL Server on Linux and now we can include a predominantly Linux flavored DBMS in a SQL Server High Availability solution. LAMP engineers have got to be losing their gourds right about now over this. Microsoft is taking away every anti-MS premise that has been used in recent years and turning the world on its ears to become more global and reachable in the architecture and DBMS world.

Upgrade the second: In order to help support and troubleshoot AGs on these other platforms, we need some tools. The tools of choice happen to be in the form of Extended Events. While there is nothing quite yet in place on these other platforms to properly monitor an AG, XE is able to capture some MySQL, Postgres and MariaDB information as transmitted across the wire when these platforms are added to an AG. How COOL is that?

If you are really chomping at the bit, I recommend procuring the latest CU that was recently released. You can find that CU from this Microsoft site here.

The Wrap

I am in full support of this new direction from Microsoft. Partnering with other large platforms to provide a supremely improved overall product is very next level type of stuff and frankly quite unheard of in this ultra competitive world. It is so unheard of in fact that this was a nicely crafted April Fools joke. Happy April Fools Day!

Having mentioned Extended Events, if you are interested, I do recommend a serious read from any number of articles posted in the 60 day series.

Extended Events File Initialization Failure

It should come as no surprise that I write a lot of articles about Extended Events (XE). This happens to be another article on Extended Events. Truth be told, this article is hopefully something that is more of an edge case scenario. Well, I sure hope that is the case and that it is not a common problem.

One of the recommended methods to trap payload data in an XE session is via the use of the event_file target. Sending data to a file has numerous benefits such as being able to take the trace and evaluate the trace file from a different machine (locally to that machine).

Every once in a blue moon you just may run into various issues with the event_file such as explained here or here. Though slightly different, the net effect is quite similar and should be treated with roughly the same kind of troubleshooting steps.

Configuration Error

As luck would have it, I ran into one of these rare opportunities to troubleshoot an error occurring on a client server. Truth be told, I was unfamiliar with the actual error at first. Here is that error.

Error: 25602, Severity: 17, State: 22.

The preceding error was scraped out of the SQL Server error log. Obviously a little more detail was needed because this error is far from useful without that detail. Looking a little deeper, I found some errors like this.

Msg 25602, Level 17, State 22, Line 43
The target, “5B2DA06D-898A-43C8-9309-39BBBE93EBBD.package0.event_file”,
encountered a configuration error during initialization. Object cannot be added to the event session.

Some very good clues are actually contained in that particular message. Some of these clues include the following: a) the term “target”, b) the term “event_file”, and c) the phrase “event session.” OK, I get it at this point. One of the Extended Events sessions I had put on the server and used previously was broken. But, since it had been working and I know I had fetched data from it, I found myself puzzled as to why it might be busted.

The next logical thing to do at this point was to test the various sessions that were stopped, try to figure out which one was causing the problem, and see if the error could be reproduced. Finally, upon finding the session that was failing, I ran into the complete message.

Msg 25602, Level 17, State 22, Line 43
The target, “5B2DA06D-898A-43C8-9309-39BBBE93EBBD.package0.event_file”,
encountered a configuration error during initialization. Object cannot be added to the event session.
The operating system returned error 3: ‘The system cannot find the path specified.’ while creating the file ‘C:\Database\XEDROPME\SVRLoginAudit_0_131650006658030000.xel’.

The additional info that I needed is in that last line of the message. So, for some reason, there is a problem with the path for the XE trace file output. Going out to the file system to check it out, I found that the client in this case decided to delete the entire folder. How does that happen? Well, it does! When it happens, the XE traces will start to fail and you will no longer capture the intended trace data. Let’s take a look at a simulated reproduction of this issue.

First, I will create a session and then start the session, then validate the session exists and then stop the session.
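A rough sketch of that setup, using the same session name and output folder that show up in the error above (the event and actions chosen are just simple examples for a login audit):

-- create a session that writes to the folder that will later disappear
CREATE EVENT SESSION SVRLoginAudit ON SERVER
    ADD EVENT sqlserver.login
        (ACTION (sqlserver.client_hostname, sqlserver.nt_username))
    ADD TARGET package0.event_file
        (SET filename = N'C:\Database\XEDROPME\SVRLoginAudit.xel');
GO

-- start it
ALTER EVENT SESSION SVRLoginAudit ON SERVER STATE = START;
GO

-- validate it is running
SELECT s.name, s.create_time
FROM sys.dm_xe_sessions s
WHERE s.name = N'SVRLoginAudit';
GO

-- and stop it again
ALTER EVENT SESSION SVRLoginAudit ON SERVER STATE = STOP;
GO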

And here is what I see on my test server when I validate the session exists.

Perfect so far. Now, let’s make that trace output directory disappear. For this demo, you might note that I had created the directory as “C:\Database\XEDROPME”. My intent in the name was obviously to notify the world that the folder was to be dropped.
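Dropping the folder can be done with a quick shell command; from T-SQL that might look like this (xp_cmdshell needs to be enabled for it to work):

EXEC xp_cmdshell 'rmdir /s /q C:\Database\XEDROPME';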

That statement is easy enough and is performed from my test environment for those getting wary of the use of xp_cmdshell. Now, let’s try to start that session that we knew was previously running.
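That is just the same ALTER statement as before:

ALTER EVENT SESSION SVRLoginAudit ON SERVER STATE = START;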

I re-formatted the output of the error for ease of readability. Otherwise, the output in the preceding image is what will happen when the output directory is dropped. The fix is rather simple at this point – put the folder structure back into place. To read an introductory post about checking if a session exists or not on your server, check this out (a more advanced post is coming soon).

Conclusion

From time to time we will run into various problems supporting Extended Events. This is bound to happen more frequently as we support more varied environments with more hands in the kitchen (so to speak). We need to learn that even small changes can have a ripple effect to other things that may be running on the server. It is worthwhile to perform a little due diligence and clean things up as we make changes – or at minimum to observe the system for a time to ensure no unintended consequences have occurred.

Extended Events is a powerful tool to help in troubleshooting and tuning your environment. I recommend investing a little time in reading the 60 day series about Extended Events. This is not a short series but is designed to provide an array of topics to help learn the tool over time.

Profiler for Extended Events: Quick Settings

Not long ago, I wrote a rather long article about a new-ish feature within SQL Server Management Studio (SSMS) that impacted Extended Events. You can read that book – here! The XEvents Profiler feature is one of those things that you may or may not use. If you consider using the feature, I do believe it is important that you research it a bit and try to learn the pros and cons first.

With that said, there is a little more about the feature that the aforementioned book did not cover. In fact, this information has pretty much gone ignored and mostly stays hidden under the covers.

Settings

As of SSMS 17.4 we have been given the ability to control XEvents Profiler just a tiny bit more. For what it is worth, we as Database Professionals love to be able to control our database environment. So this teeny tiny bit of new control ability is potentially a huge win, right?

If you are the controlling type, or maybe just the curious type, you will be pleased to know that under “Options” from the Tools menu in SSMS, Microsoft has tucked some new control options to help you configure XEvents Profiler – to a degree. If you open options, you will see this new node.

If you expand the “XEvent Profiler” node (circled in red), you will discover the “options” node. If you click on this “options” node and do a quick comparison (in SSMS 17.4 and SSMS 17.5), you will also find that you don’t need to expand the “XEvent Profiler” node at all because the options are listed in the right-hand pane for both nodes and they are exactly the same. So, choose one or the other and you will end up at the same place.

The options that you currently have are:

  • Stop Session on Viewer Closed
  • Toolbar commands stop and restart

You can either set these options to True or False. I recommend you play with them a bit to discover which you really prefer. That said, I do prefer to have the “Stop Session on Viewer Closed” option set to true. There is “profiler” in the name of the feature after all. And if you have read the “book” I wrote about this feature, you would know that the filtering offered by the default sessions of this feature basically turns on the fire hose effect and can have a negative impact on your server. Are you sure you want a profiler style fire hose running on your production server?

Conclusion

There surely will continue to be more development around this idea of an XE style profiler. More development generally means that the product will mature and get better over time. This article shows how there is more being added to the feature to try and give you better control over the tool. We love control so the addition of these options is actually a good thing. Is it enough to sway me away from using the already established, more mature, and high performing tools that have been there for several generations? Nope! I will continue to use TSQL and the GUI tools available for XE that predated the XEvent Profiler.

Some say this is a way of bridging the gap. In my opinion, that gap was already bridged with the GUI that has been available for several years. Some say that maybe this tool needs to integrate a way to shred XML faster. To that, I say there are methods already available for that, such as PowerShell, the live data viewer, the Target Data viewer, or even the tools I have provided in the 60 day series.

I would challenge those that are still unfamiliar with the XE GUI (out for nearly 6 years now) to go and read some of my articles or articles by Jonathan Kehayias about the power that is in XE as well as some of the power in the GUI.

SQL Server User Already Exists – Back to Basics

One of my all-time favorite things in SQL Server is security. No matter what, it always seems that there is a new way to abuse permissions. When people abuse their access level or abuse the way permissions should be set in a SQL Server environment, we get the pleasure of both fixing it and then trying to educate them on why what they did was wrong and how to do it the right way.

In similar fashion, I previously wrote about some fundamental misconceptions about permissions here and here. I have to bring those specific articles up because this latest experience involves the basics discussed in those articles along with a different twist.

I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Gimme Gimme Gimme…

It is not uncommon to need to create a login and grant that login access to a database (or associate that login with a database user). In fact, that is probably a fairly routine process. It is so routine, that I have a demo script for it right here.
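A rough approximation of that demo script – the database, login, and password are made up purely for the example:

USE master;
GO
-- create a SQL login for the demo
CREATE LOGIN GimmeSA WITH PASSWORD = N'SuperSecret123!@#';
GO
-- create a database for the demo
CREATE DATABASE GimmeSA;
GO
-- associate the login with a database user and grant some access
USE GimmeSA;
GO
CREATE USER GimmeSA FOR LOGIN GimmeSA;
ALTER ROLE db_datareader ADD MEMBER GimmeSA;
GO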

I even went as far as to include some of the very routine mistakes I see happening on a frequent basis (as referenced by a prior post here).

To this point, we only have a mild abuse of how to set permissions for a principal. Now it is time for that twist I mentioned. This user account needs to be created on a secondary server that is participating in either a mirror or an Availability Group. Most people will take that user account that was just created on the first server and then use the same script to add the account to the secondary server. Let’s see how that might look.

For this example, I will not go to the extent of creating the mirror or AG. Rather, I will pretend I am just moving the database to a new server. So I have taken a backup and then I will restore the database to the new server.
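Something along these lines, with placeholder paths:

-- on the first server
BACKUP DATABASE GimmeSA TO DISK = N'C:\Database\Backup\GimmeSA.bak' WITH INIT;

-- on the new server
RESTORE DATABASE GimmeSA FROM DISK = N'C:\Database\Backup\GimmeSA.bak' WITH RECOVERY;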

Next, let’s go ahead and recreate the login we created on the previous server.
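That could look like this:

USE master;
GO
IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'GimmeSA')
BEGIN
    CREATE LOGIN GimmeSA WITH PASSWORD = N'SuperSecret123!@#';
END
GO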

You see here that I am only going to create the login if it does not exist already. Running the script produces the following for me.

Now, let’s deviate a bit and grant permissions for the login just like so many administrators will do.

It seems pretty apparent that my login that I just created does not have access to the GimmeSA database, right? Let’s go ahead and add permissions to the GimmeSA database and see what happens.
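Attempting to tie the login to a database user the usual way is where the wheels come off, producing the classic orphaned user error (message text approximate):

USE GimmeSA;
GO
CREATE USER GimmeSA FOR LOGIN GimmeSA;
GO
/*
Msg 15023, Level 16, State 1, Line 1
User, group, or role 'GimmeSA' already exists in the current database.
*/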

Well, that did not work according to plan, right? Enter twist the second.

What I am seeing more and more is that, at this point, people will just grant that newly created login sysadmin rights. You can pick up your jaw now. Indeed! People are just granting the user SA permissions and calling it good. This practice will certainly work – or appear to work. The fact is, the problem is not fixed. This practice has only camouflaged the problem and it will come back at some future date. That date may be when somebody like me comes along and starts working on stripping non-essential sysadmins from the system.

There are two legitimate fixes for this particular problem (and no granting sysadmin is definitely not one of them). First you can run an orphan fix with a script such as this one by Ted Krueger. That will map the user that already exists in the database to the login principal (thus the reason for the error we saw). Or, you can prep your environment better by using the SID syntax with the create login as follows.
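A sketch of the SID approach; the SID value shown is a made-up placeholder, and you would fetch the real one from the old server first:

-- on the old server, look up the SID for the login
SELECT name, sid
FROM sys.server_principals
WHERE name = N'GimmeSA';

-- on the new server, create the login with that same SID
CREATE LOGIN GimmeSA
    WITH PASSWORD = N'SuperSecret123!@#'
       , SID = 0x7D95D812BCBE3049B6A9A7B1B1E9D9D5;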

The trick here is to go and look up the SID for the login on the old server first and then use that SID to create the login on the new server. This will preserve the user to login mappings and prevent the orphaned user issue we just saw. It will also prevent the band-aid need of adding the login to the sysadmin server role.

The Wrap

In this article I have introduced you to some basics with regard to creating and synchronizing principals across different servers. Sometimes we try to shortcut the basics and apply band-aids that make absolutely no sense from either a practical point of view or a security point of view. Adhering to better practices will ease your administration burden along with improving your overall security posture.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Database Corruption and IO Errors

A quick way to have your day turned upside down and rip your gut out with nerves and anxiety is to come in one day to find that users are panicked, applications are not working and the HelpDesk team is curled up in the fetal position in the corner. Why? The sky is falling and everybody thinks the database has blown up.

Calmly, you settle in and check the server and eventually find your way to the error logs to see the following:

Msg 823, Level 24, State 2, Line 1

The operating system returned error 1(Incorrect function.) to SQL Server during a read at offset 0x0000104c05e000 in file ‘E:\Database\myproddb.mdf’. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

Suddenly you understand and feel the collective fear and paranoia. What do you do now that the world has seemingly come to an end for your database?

Corruption

What exactly does this error message mean? Well, typically, an 823 error is a very strong indicator that there is some sort of problem with the storage system, hardware or driver that is in the path of the I/O request.

Great! That fear is getting a little heavier knowing what the error represents. This doesn’t bode well for the database. Let’s go ahead and crack out the list of what we can do or check when a problem like this hits:

  1. Check msdb.dbo.suspect_pages
  2. Run a consistency check for all databases on the same volume
  3. Check Logs (SQL, Windows, Storage system) to see if there may be additional info (via different errors/warnings) in close proximity to the 823 error.
  4. Check your drivers
  5. Restore the database

This is where your experience, training, and preparedness come in handy. An experienced data professional will be prepared with database backups (including log backups). So you are not concerned here because all of your backups are reporting successful. As you prep to pull the backups (for the past couple of days, just in case) you notice that there are no available backups in your repository. Looking closer at your backup jobs, you discover that the backups completed in mere seconds, whereas they normally take hours for this database.

Now that your heart is racing, forehead is beading up with sweat, gut is sinking and the fear is very palpable – what do you do? Time to step through the rest of the steps and pull out your lucky charms, right?

Querying against suspect_pages, you find the table to be completely empty. You know that checkdb runs regularly but maybe it didn’t run last night. That is easy enough to check with a little query from here. Since a consistency check does not seem to have run (as confirmed by the script) and is the second item on the checklist, let’s go ahead and run it now.
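The check itself is simply the following (the database name is taken from the error message above):

DBCC CHECKDB (myproddb) WITH NO_INFOMSGS, ALL_ERRORMSGS;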

Msg 0, Level 11, State 0, Line 0

A severe error occurred on the current command.  The results, if any, should be discarded.

Msg 0, Level 20, State 0, Line 0

A severe error occurred on the current command.  The results, if any, should be discarded.

Crud. Blood pressure and nerves are getting a little more frazzled now. Maybe we can cycle through the database and find which table is causing the problem. Let’s try a checktable of every table in the database. Before doing the checktable, one more check against suspect_pages still shows no rows to be found.
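One quick-and-dirty way to cycle through every table is with the undocumented sp_MSforeachtable procedure – a sketch of that approach:

USE myproddb;
GO
EXEC sp_MSforeachtable 'DBCC CHECKTABLE (''?'') WITH NO_INFOMSGS, ALL_ERRORMSGS;';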

Running the checktable, every table is coming up clean except one. That one table produces the same sort of error as the checkdb and just so happens to be the largest and most critical table to the database. Blood pressure is closing in on critical now. We have a corruption issue that is severe enough that checktable cannot complete, we know we have 823 errors and some sort of IO issue and do not have a backup.

Wait…backup. Let’s try to force a backup and see what happens. We can tell the backup to continue after encountering errors. Maybe that will allow you to move the database to a different server or different spindles to try and just recover the data.
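Something along these lines (the path matches the error messages that follow):

BACKUP DATABASE myproddb
    TO DISK = N'E:\SQLBackups\myproddb.bak'
    WITH INIT, CONTINUE_AFTER_ERROR;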

Msg 3202, Level 16, State 2, Line 1

Write on “E:\SQLBackups\myproddb.bak” failed: 1(Incorrect function.)

Msg 3013, Level 16, State 1, Line 1

BACKUP DATABASE is terminating abnormally.

The situation just does not want to get any better at this point. Time for drastic measures – shut down the SQL Server services and try to xcopy the data and log files to a different server and try to re-attach from there. Anything is worth a shot, right?

Error 1: Incorrect Function

Glad to know it is not just SQL Server throwing the errors – sorta. The corruption is ever present and there is nothing more that can be done, right? All hope is lost. Time to fill out the resume and move on to something else, right? Give it one more shot. A spark of insanity hits and you wonder if a mere query against the clustered index will work; if not, then maybe something against any of the other indexes to try and spare any data at all.

You rub your lucky horseshoe and query the table (yes, the table that checktable aborts on because of corruption). Lo and behold, you get results from this very simple query. How is that possible? On a whim, you drop all the non-clustered indexes and try a fresh backup.

Hallelujah! The backup completes without error. Time to take this backup and restore the database to a completely different server. Then, on the new server, run a consistency check to determine if it is all clear. To your relief, there is absolutely no corruption at this point, so the non-clustered indexes can be recreated (easy to do because you have a script with the index definitions handy).

Wrap

This journey from fire and brimstone and the world ending to the epiphany and then the sweet euphoric feelings of success is not a normal resolution for these types of errors. This sort of thing happened for a client that called trying to get around the 823 errors. We had absolutely no indication whatsoever of where the corruption was beyond knowing we had failing disks at the time. We got lucky in that the non-clustered indexes in this case ended up being stored on the bad sectors and dropping those indexes allowed us to recover the database and make it usable.

When everything is breaking, the unconventional thought (especially without having a checkdb complete to tell you which index or which pages are corrupt) of dropping indexes may just save your bacon. It saved my client’s bacon!

As a follow-up item, it is important to regularly check the dbccLastKnownGood for each database. That can be done by following the script in this article.
