Quickly Change SQL Job Owners

It is not unusual to find a server where some random user created a bunch of jobs to be run by SQL Agent. Sometimes, the user creating the job(s) sets themselves as the owner of the job. There are certain cases where this behavior is hard to avoid, such as when creating a maintenance plan.

And of course, there are times when the user just doesn’t know any better. There is, of course, the rare occasion when setting the job owner to be oneself makes the most sense – but that is few and far between in the grand scheme. Usually, you will want a non-expiring account, such as a service account or a principal without “logon” permissions, to be the owner.

The primary reason is simple – humans have an expiration date for every job they will ever have. When that expiration occurs, you may end up with any number of unwanted side effects. Unwanted side effects are exactly what we try to avoid in our jobs run via SQL Agent.

No Expiration Date

There are two basic means to change the owner of every job on your server. You can open each job one by one and set the owner to an acceptable principal; this method is rather tedious, and you will be fighting off boredom if you have a few hundred jobs on the server. Alternatively, you can change the job owners group by group (set-based theory). This second method can be far less tedious and far more efficient, and it is by far my preferred method. Let’s take a look at how to make all of these changes in groups.

There are three basic sections to this script. First I fetch what should be changed, then I make the change, and lastly I verify the change. If the change doesn’t look right, then I can roll back the change. If the change is what I expected, then I can commit it. Those are the broad strokes.

At a more detailed glimpse, I have set up a few variables to define what I want to change and who the new job owner should be, and then I fetch the sid of that new job owner. In my example, I am setting everything to ‘sa’. Why? Because it is easy for the sake of the example in the article – nothing more!

Since the owner of the job may sometimes only have access to the SQL instance via a Domain Group, I also take advantage of a couple of functions to double-check that it is the correct account. The functions I am using are SUSER_SID() and SUSER_SNAME().
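Pieced together, the broad strokes above might look something like the following sketch. The ‘sa’ target is purely for the example, and note that updating msdb.dbo.sysjobs directly is unsupported – the supported route is msdb.dbo.sp_update_job – but the direct update keeps the set-based pattern easy to see:

```sql
-- Sketch only: change the owner of a group of jobs inside a transaction.
DECLARE @NewOwner sysname = N'sa',
        @NewOwnerSid varbinary(85);

-- Fetch the sid of the new owner and double-check it resolves to the right account
SELECT @NewOwnerSid = SUSER_SID(@NewOwner);
SELECT SUSER_SNAME(@NewOwnerSid) AS VerifiedOwner;

BEGIN TRANSACTION;

-- First: fetch what should be changed
SELECT j.name, SUSER_SNAME(j.owner_sid) AS CurrentOwner
FROM msdb.dbo.sysjobs AS j
WHERE j.owner_sid <> @NewOwnerSid;

-- Then: make the change as a group (set-based)
UPDATE j
SET j.owner_sid = @NewOwnerSid
FROM msdb.dbo.sysjobs AS j
WHERE j.owner_sid <> @NewOwnerSid;

-- Lastly: verify the change
SELECT j.name, SUSER_SNAME(j.owner_sid) AS NewOwner
FROM msdb.dbo.sysjobs AS j;

-- COMMIT TRANSACTION;   -- run this if the results look right
-- ROLLBACK TRANSACTION; -- run this if they do not
```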

When all is done as I am expecting, then I should see something similar to the following.

Since the change is what I expect, then at this point I would proceed with the commit transaction statement.

The Wrap

As you can see, making job ownership changes at group scale instead of one by one is pretty easy. This only takes a matter of seconds to run against hundreds of jobs. That same kind of task done one at a time could easily take more than 40 minutes. I am not sure I want to spend that much time on such an innocuous task. I hope you are now able to use what you have learned to improve your skills and become a rock-star DBA. ENJOY!

If you feel the need to read more about single-user mode, here is an article and another on the topic.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

T-SQL Tuesday #104: Just Can’t Cut That Cord

We all have our favorite scripts, tools or utilities. Those are the things that help make our jobs easier. Some of us may have an unhealthy relationship with some of those scripts (similar in nature to the relationship many have with their phone). Whether or not the need to cut that proverbial cord exists, today we are not discussing the health of that dependence. Suffice it to say, sometimes we simply need to upgrade our scripts. And how else can we get better scripts, or make our scripts better, than by sharing them?

This is precisely the goal Bert Wagner (b | t) seems to have envisioned for the 104th installment of TSQL Tuesday.

If you are interested in reading the original invite, you can find that here.

“For this month’s T-SQL Tuesday, I want you to write about code you’ve written that you would hate to live without.

Maybe you built a maintenance script to free up disk space, wrote a query to gather system stats for monitoring, or coded some PowerShell to clean up string data. Your work doesn’t need to be completely original either – maybe you’ve improved the code in some open source project to better solve the problem for your particular situation.”

There is a high probability that through the sharing of your script, somebody out there can benefit from it. In addition, it is very likely that somebody will make a suggestion to help make your script better. Worst case (emphasis on worst case here), you have the script stored somewhere with half-decent instructions on what it does, easily accessible for you to use again and again. Just in case you forget you have it out there – you can google for it and find it on your own blog ;).

Personally, I have been able to find and re-use some of my older scripts. Not only do I get to re-discover them, but I also get to re-imagine a new use or improvement for the script.

Brief Intermission

A shout out is absolutely necessary for Adam Machanic (twitter) for picking the right blog meme that has been able to survive so long in the SQLFamily. This party has helped many people figure out fresh topics as well as enabled them to continue to learn.

Easy Access

While pondering the topic for today, it occurred to me how frequently I already post scripts on my blog. An easy out for this topic would have been to re-share one of those old scripts. For instance, I could easily redo a recent article about server access that has a couple of scripts demonstrated in it. Or I could go back a few years to my articles about foreign keys (here or here) and space use (here or here). Even more intriguing could be to re-envision some of my articles on Extended Events. But where would the fun in that be?

Rather than take the easy road and rehash something, I have something different. This one goes hand in hand with the numerous articles and scripts I have previously provided on auditing – yet it is different.

Not every shop can afford third-party software or even Enterprise edition, so they have to come up with a different way to audit their database instances. One of the problems with a home-grown solution is ensuring the data is not stored local to the server (there are lots of good reasons for that). Here is an example of what I did for one client. That client had a developer who found a back door giving him SA access to the SQL Server instance, and he was changing things and trying to cover his tracks – even after being warned.

First the query

This query will be run from a job on a different server that is restricted in access to just a select few people. I do rely on the use of the default trace in this query. I am also reliant upon a little bit of sneaky behavior. If I run this from a separate server, prying eyes are unlikely to find that it is running, which makes it easier to catch them red-handed. In addition, if by a lot of luck they discover via some sort of trace that it is running, they have no access to the remote server to alter anything that was captured.

The query does go out to the default trace and pull back any changes to permissions or principals on the server in question. The captured data is then stored in a database that is also restricted to a select few people. Lastly, the captured data can be routinely queried, or automated reports can be created to send email notifications of changes encountered.

The second part of the trickery here is that I am using a linked server to perform the queries (with a slight change I could also do this via PowerShell, which will be shown in a future article). The linked server query uses the OPENQUERY format and sends the default trace query to the remote server. Since I am running this from a job on an administrative server that pulls a limited data set, I am not overly concerned with the linked server setup here.
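A sketch of that audit query follows. The linked server name and the destination table are assumptions for the example, and the event class list is a sampling of the security-related classes (login/principal/permission changes) captured by the default trace:

```sql
-- Sketch only: pull security changes from the remote default trace and store them.
INSERT INTO AuditDB.dbo.SecurityAudit
    (ServerName, EventTime, EventClass, LoginName, TargetLoginName,
     ObjectName, DatabaseName, HostName, ApplicationName)
SELECT 'MonitoredServer', oq.StartTime, oq.EventClass, oq.LoginName,
       oq.TargetLoginName, oq.ObjectName, oq.DatabaseName, oq.HostName,
       oq.ApplicationName
FROM OPENQUERY([MonitoredServer], '
    DECLARE @path nvarchar(260);
    SELECT @path = path FROM sys.traces WHERE is_default = 1;

    SELECT t.StartTime, t.EventClass, t.LoginName, t.TargetLoginName,
           t.ObjectName, t.DatabaseName, t.HostName, t.ApplicationName
    FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
    WHERE t.EventClass IN (102, 103, 104, 105, 106, 108, 109, 110, 111);
') AS oq;
```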

Storing It

Once I query the data, I need to put it somewhere on my administrative server. The table setup for that is very straightforward.
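Something along these lines would do (the database and table names are assumptions for the example):

```sql
-- Sketch: destination table on the administrative server for the captured events.
CREATE TABLE dbo.SecurityAudit
(
    AuditID         int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ServerName      sysname        NOT NULL,
    EventTime       datetime       NOT NULL,
    EventClass      int            NOT NULL,
    LoginName       sysname        NULL,
    TargetLoginName sysname        NULL,
    ObjectName      sysname        NULL,
    DatabaseName    sysname        NULL,
    HostName        sysname        NULL,
    ApplicationName nvarchar(256)  NULL,
    CapturedOn      datetime       NOT NULL DEFAULT (GETDATE())
);
```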

After creating this table, I am ready to store the data. All I need to do is throw the audit query into an agent job and schedule it to run on a regular schedule. For my purposes, I usually only run it once a day.

The Wrap

This has been my diatribe about service and giving back to the community. When done properly, there is a natural born effect of enhancing one’s personal life equal in some way to the amount of effort given towards the community.

Oh, and if you are interested in some of my community contributions (which according to Jens Vestergaard is an awesome contribution), read this series I have published.

Summiting that Technical Challenge Part II

Published on: July 6, 2018

Conquering Challenges

Months ago, I posted an article about some of the challenges encountered while migrating from one service level to another for my blog.

As it turns out, I had some long lingering effects that I was delaying fixing because I didn’t want to have an apparent flood of republished posts.

What could possibly be wrong after fixing everything I listed in the aforementioned article? Good thing you asked. Every occurrence of a percent symbol was changed to a new string – “{529e71a51265b45c1f7f96357a70e3116ccf61cf0135f67b2aa293699de35170}”.

Things that make you go huh?

As you can see, this string is hardly useful. I started noticing it in several articles, and it took a while to figure out what the heck was going on. It wasn’t until I was reviewing an old post that contained a SQL script with a “LIKE” keyword that it clicked. An example of one such article is this one about SQL Trace. At that point, the lightbulb went off and I decided to give it a try. Lo and behold, that string truly did represent a percent symbol. I made the appropriate changes and voila, the post looked perfect all over again.

That was one measly little post. I have ~100 articles affected by this problem. Obviously that becomes tedious to change over and over and over. In addition, if I change them manually one by one, I end up with ~100 articles triggering social media alerts, making it look like I am re-publishing a bunch of stuff when really it is just editing and fixing the articles. What to do? What to do?

Thank goodness I have half an idea how to update data in a database and my blog is stored in a database. I can run a mass update to replace that string properly wherever it occurs. Let’s try it.

First, just one article to validate…
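A sketch of that single-article test is below. The wp_posts table and post_content column follow the standard WordPress schema, and the post ID is a made-up example:

```sql
-- MySQL sketch: fix one post first, in the staging database.
UPDATE wp_posts
SET post_content = REPLACE(post_content,
    '{529e71a51265b45c1f7f96357a70e3116ccf61cf0135f67b2aa293699de35170}',
    '%')
WHERE ID = 1234;  -- made-up example ID

-- Eyeball the result before going any further
SELECT post_content FROM wp_posts WHERE ID = 1234;
```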

And if I look at the post after the update, I can confirm that the update succeeded (in my staging database first and foremost). Next, I roll that change to my production database and test again.

After I can confirm that it worked there (you can also see that it worked via this link), then I can work on the mass update.
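A sketch of that mass update (again, wp_posts is the standard WordPress posts table):

```sql
-- MySQL sketch: apply the fix to every affected post at once.
UPDATE wp_posts
SET post_content = REPLACE(post_content,
    '{529e71a51265b45c1f7f96357a70e3116ccf61cf0135f67b2aa293699de35170}',
    '%')
WHERE post_content LIKE
    '%{529e71a51265b45c1f7f96357a70e3116ccf61cf0135f67b2aa293699de35170}%';
```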

Now, when I check my blog for that string, I find that there is no more residue left and all seems to be back to being correct.

To fix the string problem, I used the REPLACE function. Since this is a MySQL database, I had to use that version of it. Notice it isn’t terribly different from the T-SQL REPLACE function.

The Wrap

I did not enjoy this journey much at all. Much of the experience was due to outside forces. I can’t do much to control them, but I can do something to fix the net effect of what they caused. I am sure that with all of the problems encountered in that migration, I will find something further that needs to be fixed.


Use SSMS with a Different Windows Account – Back to Basics

One of the tasks I find myself doing on a fairly regular basis is running SSMS as a different Windows User. The two biggest use cases for this are: a) to test an account to prove that it is working (or not) and has the appropriate level of access, and b) to use SSMS to connect to a Domain SQL Server from a computer in a different domain (or not on the domain).

In addition to needing to do these tasks for myself, I find that I need to show somebody else how to do the same thing on a fairly consistent basis. Considering the finite keystrokes we all have (which I referenced here), it is time for me to “document” how to do this task.

I will cover two really easy and quick methods to perform this task. One from a command line and the other from the GUI. Both methods will involve a variation of the runas utility.

RUNAS

Let’s start with the easiest of the two methods. In this case, you will need to test a Windows account (let’s call it a domain account) from a computer which is on the same domain. This requirement allows us to take advantage of the shortcuts from within the GUI to access the runas utility.

To access runas from Windows, first locate the icon for SSMS in the Start Menu, then right-click that icon as shown here.

After right-clicking the icon, you will see a menu pop up on the screen. Select “Run as different user” from that menu. Once you have selected the appropriate “run as” option, a login prompt will appear as shown here.

Enter the appropriate credentials at the prompt and then SSMS will launch. In this case, I may want to test the account myidomain\domain.useracc. So, I merely need to enter the domain credentials for that account. A caveat here is that the account you are testing will need to have the necessary permissions to “logon” to the workstation in order to launch the app – unlike the second method.

CMD Line

This second method has a few advantages over the GUI method with the biggest advantage being that you can use this method from any machine on the domain or even a machine not joined to the domain (so long as you have the ability to authenticate to the domain). And of course the additional advantage that the account you are testing does not require “logon” permissions on the machine you are using.

Let’s start with the basic command.
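A minimal form of that command might look like the following (the domain, account name, and SSMS path are assumptions; adjust them for your environment and SSMS version). The /netonly switch is what lets the domain credentials be used for network access without requiring logon rights on the local machine:

```cmd
runas /netonly /user:yourdomain\domain.useracc "C:\Program Files (x86)\Microsoft SQL Server\140\Tools\Binn\ManagementStudio\Ssms.exe"
```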

I can run that from a command line, or I can throw that into a desktop shortcut (the shortcut method is much more convenient). After I hit “enter” from the command line, I am prompted for a password for the account to be used for that session. Here’s an example of how that would look.

You won’t be able to see the password being typed (don’t fat finger the password 😉 ), but after you enter it successfully and press “enter” then you will see SSMS start to launch. After a successful SSMS launch, you should see something similar to the following:

I have a few things highlighted here of interest. First, in the red box, you will note that the user shown as connected to the server is my “local” test box account instead of the domain account. However, if I verify the authenticated account, I can see that the domain account is indeed accessing the SomeServer SQL Server (as demonstrated by the green box on the right).

The Wrap

Sometimes what may be ridiculously easy for some of us may be mind-blowing to others. Sometimes we may use what we think are common terms only to see eyes start to glaze over and roll to the backs of people’s heads. This just so happens to be one of those cases where launching an app as a different principal may be entirely new to the intended audience. In that vein, it is worthwhile to take a step back and “document” how the task can be accomplished.

Runas should be a very common tool in the toolbox of all IT professionals – not just Data Professionals. Learning how to test different accounts is essential to being an effective and efficient professional that can provide solid results.

If you feel the need to read more about single-user mode, here is an article and another on the topic.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Single User Mode – Back to Basics

In a recent article, I took you on a trip through how to hack (ethically) a SQL Server to regain sysadmin access. In that article, I made quick mention of restarting SQL Server into single-user mode. It only makes sense to show quickly how to get into single-user mode.

Before getting into that, I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Single-User

So, what exactly is this single-user mode thing? Single-user mode is basically the official back door into SQL Server, needed for various reasons such as:

  • Somebody deleted all of the logins that were in the sysadmin role.
  • The sa account is disabled or the password has been forgotten.
  • Somebody deleted any Windows groups that were members of the sysadmin role.
  • All members of the sysadmin role are no longer with the company.
  • You need to restore the master database.
  • You want to keep SQL Server all to yourself because you are greedy!

These are some pretty solid reasons to need to be able to use the back door. But how exactly do we get to the back door?

Two Paths

As luck would have it, there are two ways to enable single-user mode. You can either get there by making some changes to the SQL Server service in Configuration Manager, or you can utilize a command prompt. I won’t cover the GUI path beyond the gentle reminder that you must remember to undo your change when using that method.

My preferred method is through the command line. Using my SQL Server 2017 as the experiment, I would navigate to the Binn directory for that instance. In this case, as shown in the next image.

Before getting too far ahead of myself, I am going to stop my SQL Server.

Notice, I also queried to find all of my services related to SQL before stopping the MSSQLServer service via the net stop mssqlserver command. We will come back to some net start and net stop commands later.

With the service successfully stopped, I can now restart the service in single-user mode.
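Run from the Binn directory, that startup might look something like this (a sketch; a default instance is assumed):

```cmd
REM -c starts sqlservr outside of service control; -m"SQLCMD" starts it in
REM single-user mode, limited to connections from the sqlcmd application.
sqlservr.exe -c -m"SQLCMD"
```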

And then the validation that we are indeed starting in single-user mode…

But wait, did you notice that bit of trickery on the startup command?

This is a pro-tip for when you must use single-user mode. Inevitably, somebody will steal the single-user connection and you will be locked out of the session. By specifying an app name after the single-user switch, you are telling SQL Server to only accept connections from that specific application. Since most apps will not be using sqlcmd, you will have far less contention for that connection and you will be able to complete your task much more easily.

You could also pass something like this instead…
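For example (a sketch; the application name below is what SSMS reports for its query windows):

```cmd
sqlservr.exe -c -m"Microsoft SQL Server Management Studio - Query"
```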

In this case, I would be limiting the connections to a query from SSMS (and not object explorer).

Now that I have a single-user connection, I can add a sysadmin or restore the master database or just sit on it and play devious. It all depends on what your objective for the single-user session happens to be.

More Command Line

Remember that reference to the NET commands? Well, it turns out we can also start SQL Server in single-user via net start. Let’s check it out.

The command is pretty simple:
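Assuming a default instance (service name MSSQLSERVER), it might look like this:

```cmd
net start MSSQLSERVER /m"SQLCMD"
```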

The effect here is the same as navigating to the Binn directory and starting SQL Server with the sqlservr.exe executable. The big difference is considerably less typing and less verbose output of the service startup.

When using the net start method, you do need to know the service name of the SQL Server instance. To get that, I recommend the following PowerShell script.
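Something along these lines (a sketch; any service query that surfaces the Name property would do):

```powershell
# List services whose display name mentions SQL, with their exact service names
Get-Service |
    Where-Object { $_.DisplayName -like '*SQL*' } |
    Select-Object -Property Name, DisplayName, Status
```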

This will produce results similar to the following.

From the results, I can pick the SQL Server service and then pass that to the net start command fairly easily.

The Wrap

Starting SQL Server in single-user mode should be a tool every data professional holds in the bag. This is an essential tool that can be used in multiple scenarios and ensure you are able to fully maintain and control your server. I have shown how to get to single-user mode via two command line methods and mentioned a GUI method. The nice thing about the command line methods is that you don’t have to remember to undo the startup switch like you do with the GUI method.

If you feel the need to read more about single-user mode, here is an article and another on the topic.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Lost that SQL Server Access?

As a data professional, can you recall the last time you needed to support a SQL Server instance for which you had no access? What if you used to have access and then that access magically disappeared?

I know I run into this dilemma more than I would probably like to. It is rather annoying to be under a crunch to rapidly provide support only to discover you are stuck and have to wait on somebody else who hopefully has access.

It’s one thing to not have access in the first place; that is usually an easy fix in most cases. The really unpleasant access issue is when you have confirmed prior access to the instance and then find yourself completely locked out. More succinctly, you have lost that SQL access!

Whoa is Me!

All hope is now lost, right? OK, that isn’t entirely true. Or is it? What if everybody else from the team is also locked out and there is no known sysadmin account? In essence, everybody is locked out from managing the instance and now you have a real crisis, right? Well, not so fast. You can still get back into the instance with sysadmin access. It should be no real secret that you could always restart the SQL instance in single-user mode. Then again, that probably means bigger problems if the server is a production server and is still servicing application requests just fine.

What to do? What to do?

Restart Prohibited

If you really cannot cause a service disruption to bounce the server into single-user mode, my friend Argenis Fernandez (b | t) has this pretty nifty trick that could help you. Truth be told, I have tested that method (even on SQLExpress) several times and it is a real gem. Is this the only alternative?

Let’s back it up just a step or two first. Not having access to SQL Server is in no way the same thing as not having access to the server. Many sysadmins have access to the Windows server. Many DBAs also have access to the Windows server, or can at least work with the sysadmins to get access in cases like this. If you have admin access to Windows, then not much is really going to stop you from gaining access to SQL on that same box. It is a matter of how you approach the issue. Even to restart SQL Server in single-user mode, you need access to the Windows server. So, please keep that in mind as you read the article by Argenis as well as the following.

Beyond the requirement of having local access to the server, one of the things that may cause heartburn for some is the method of editing the registry as suggested by Argenis. Modifying the registry (in this case) is not actually terribly complex, but it is another one of those changes that must be put back the way it was. What if there was another way?

As luck would have it, there is an alternative (else there wouldn’t be this article). It just so happens, this alternative is slightly less involved (in my opinion). Let’s start with a server where I don’t have SQL access (beyond public) but I do have Windows access.

We can see on this SQLExpress instance on the TF server that my “Jason” login does not exist. Since I don’t have access, I can’t add my own account either. Time to fix that. In order to fix it, I am going to create a scheduled task in Windows that will run a SQLCMD script from my C:\Database folder. The folder can be anywhere, but I generally have one with scripts and such somewhere on each server that I can quickly access.

From here, you will want to click on the “Change User or Group” button to change it to an account that does have access to SQL Server. The account that I use is not a “user” account but rather it is a “system” account called “NT AUTHORITY\SYSTEM” that is present all the way through SQL Server 2017.

To locate the “NT AUTHORITY\SYSTEM” account, just type “SYSTEM” into the new window and click “Check Names”. The account will resolve and then you can click OK out of the “Select User or Group” window.

With the account selected that will run this task, we can now focus our attention on the guts of the task. We will now go to the “Actions” tab.

Click the “New” button, and here we will configure what will be done.

I do recommend putting the full path to SQLCMD into the “Program/Script” box. Once entered, you will add the following to the parameter box.
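Something along these lines (the server and instance come from the example in this article; adjust the script path if you used a different folder):

```cmd
-S TF\SQLExpress -i "C:\Database\myscript.sql"
```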

If you do not have an instance, then just the server name will suffice after the -S parameter. The -i parameter specifies the path to the SQL script file that will be created and placed in the C:\database directory (or whichever directory you have chosen).

That is it for the setup of the task. Now let’s look at the guts of the script file.
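A sketch of the script is below. The login name is the example account from this article; substitute the account you need to add:

```sql
-- myscript.sql: run as NT AUTHORITY\SYSTEM by the scheduled task.
USE [master];
GO
-- Create the Windows login that currently has no access
CREATE LOGIN [TF\Jason] FROM WINDOWS;
GO
-- Grant it sysadmin membership
ALTER SERVER ROLE [sysadmin] ADD MEMBER [TF\Jason];
GO
```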

Save that into a script document named myscript.sql in the aforementioned directory and then execute the windows task. After executing the Windows task, it is time to verify if it worked or not.

Boom! From no access to a sysadmin in a matter of seconds. Here is that quick verify script – generalized.
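Sketched generically, the verification can be as simple as this (substitute the login you just created):

```sql
-- Does the login exist, and is it a member of the sysadmin role?
SELECT sp.name,
       sp.type_desc,
       IS_SRVROLEMEMBER('sysadmin', sp.name) AS IsSysAdmin
FROM sys.server_principals AS sp
WHERE sp.name = N'TF\Jason';  -- assumed example login
```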

The Wrap

Losing access to a SQL instance is never a desirable situation – for the DBA. When the people who are supposed to have access lose that access, all hope is not lost. There are plenty of methods available to regain the requisite access to manage the server. Today, I shared one such method that I view as being extremely easy. If you lose access, I recommend taking the steps shown in this article to regain that access.

While not in the back to basics series, I do recommend checking out my other posts in that series. Some topics in the series include (but are not limited to): Backups, backup history and user logins. I would also recommend reading this audit article. If you are able to elevate your permissions, then obviously anybody with server access can elevate their permissions too. For that reason, you should regularly audit the permissions and principals in SQL Server.

T-SQL Tuesday #102: Giving Back

Published on: May 8, 2018

Last month we had the opportunity to discuss some of the most important tools for a data professional. I took that opportunity to discuss how it is important to blog. As it turns out, that article correlates fairly strongly to today’s article.

These are some of the questions that Riley Major (b | t) would like us to examine about our own deep dark secrets and psychological makeup:

  • Why do we give back?
  • How do we help give back?
  • What do we plan to do to give back?

in this, the 102nd, installment of TSQL Tuesday.

If you are interested in reading the original invite, you can find that here.

“Now I will give you an opportunity to give back. Everyone reading this has benefited from their fellow data professionals. And that benefit puts you in a position to share alike. You’ve learned something, so you can teach. You’ve been supported, so you can help. You’ve been led, so you can lead. But you don’t have to do it alone. We’re all going to do it together.

So here is my call. Pick some way you can help our community.”

Brief Intermission

A shout out is absolutely necessary for Adam Machanic (twitter) for picking the right blog meme that has been able to survive so long in the SQLFamily. This party has helped many people figure out fresh topics as well as enabled them to continue to learn.

Reality Check

Very much related to my blog post about blogging, I have to echo the sentiment about how “Blogging helps you become a better technical person.” A lot of what I do for my blog is there to help the community, but it has a self-serving purpose. It helps me become a better technical person. It also helps me to improve my communications and writing skills.

There are some side effects of blogging as well. Each of us has a finite number of keystrokes in our lifetime. With that in mind, it makes sense to write certain technical things down in a blog post rather than retyping the same information over and over in various email or forum responses. Make sense? If nothing else, it just seems more efficient to write a long technical explanation once rather than 12 times.

So there we have a couple of self-serving reasons to blog. Those same self-serving reasons also frequently apply to being involved in the community. For example, the more you exert yourself to help answer forum questions, the more you learn. You become a more experienced technical person. In addition, you learn how to communicate better and write better (hopefully). You are practicing your craft in a public forum where people can easily shred you (and they often do) when you are wrong – even minutely wrong. This potential for being blasted in the forums typically makes one work harder at getting everything just about perfect.

If you opt to speak in front of technical people, guess what? You are doing the same things I just wrote about with regard to forum responses and blogging. The big difference is that you are now doing it in person, live, on stage, and verbally! You have really put yourself out there in a big way by speaking in front of people. You will likely double down even more on ensuring your material is very nearly perfect and bulletproof. In addition, you will probably practice a few (hundred) times to make sure you don’t fumble with your words. What does this mean? You are becoming a more solid technical person and honing your communication skills. Again, very self-serving!

Or is it? The one final aspect of being a community visible person is the drive behind what you do. I like to share what I learn. I also like to share my time. I believe in serving others with a charitable demeanor. Giving of yourself will always enhance your life more than you can imagine – when you do it with the attitude of putting others first. There is no selfish intent to those that really want to help the community.

It doesn’t matter if you are helping the sqlfamily, your local Scouting organization, boys and girls clubs, sports teams, or volunteering at the local schools; if you are doing it with the intent to serve and do good, you will enhance your life in some way. If you are doing it for some accolade or a truly self-serving reason, you may get the accolade but you will find yourself stunted in your growth potential.

Giving of oneself freely is such an awesome characteristic. There are many in the SQL community who truly give of themselves freely – like SQLSoldier. When it is a part of your identity, it comes naturally and there isn’t a lot that needs to be done to plan for it. Sometimes, maybe it would be nice to have more time to do more – sure. And that is the beauty of this characteristic. If you are giving of yourself freely, you often find that you want to give more. That is great! Do what you can, when you can. Sometimes it will be more; sometimes it will be less. It is all good as long as the heart is in the right place.

The Wrap

This has been my diatribe about service and giving back to the community. When done properly, there is a natural born effect of enhancing one’s personal life equal in some way to the amount of effort given towards the community.

Oh, and if you are interested in some of my community contributions (which according to Jens Vestergaard is an awesome contribution), read this series I have published.

Change SQL Server Collation – Back to Basics

One of my favorite things in the world is the opportunity to deal with extremely varied database and environment requirements. Many vendors and databases seem to have a wide swath of different requirements. Some of the reasons for these requirements are absurd and some are not. That is a discussion for a different day.

When dealing with vendors, sometimes you get good documentation and requirements for the app. If you happen across one of these opportunities, you should consider buying a lottery ticket. Most of the time, the requirements and documentation are poorly assembled and then suffer from linguistic shortcomings.

What do you do when you run into poor documentation from a vendor? The typical answer would be to either call them or make a best guess (even if you call them, you are likely stuck with a best guess anyway). Then what do you do when you find that your best guess was completely wrong? Now it is time to back pedal and fix it, right?

When that mistake involves the server collation setting, the solution is simple – right? All you need to do is uninstall and reinstall SQL Server. That is the common solution and is frankly a horrific waste of time. This article will show some basics around fixing that problem quickly without a full reinstall.

I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Reinstall Prohibited

I am not a huge fan of wasting time doing something, especially if there is a more efficient way of achieving the same end result. I am not talking about cutting corners. If you cut corners, you likely just end up with having more work to do to fix the problems your sloppiness will have caused. That to me is not the same end result.

Having run into a bit of a problem with a vendor recently (with lacking requirements), I found myself with a server that was installed with a different collation than the vendor wanted (never mind that they said nothing of it until a month after the server was set up and ready for them to use). The vendor needed the collation fixed immediately (basically, it needed to be fixed yesterday). I really did not want to reinstall the server, and the sysadmins were just about to click through the uninstall and redo the install.

Oy Vey! Everybody hold up just a second here! First things first – verify with certainty there is good reason to need to change the server collation. It is perfectly legit to give the vendor the third degree here. Make sure they understand why they need the change. If they can answer the questions satisfactorily, then proceed with the change.

Next, just because the vendor says you have to uninstall/reinstall (or reboot) the server to make a certain change, does not mean they know what they are talking about. I have run into too many cases where the vendor thinks you must reboot the server to change the max memory setting in SQL Server (not true for sure).

Sure, common myth would say that you must reinstall SQL Server in order to change the default server collation. That is not entirely accurate. Reinstall is just one option that exists.

In the case of this vendor, they required that the SQL_Latin1_General_CP850_CS_AS collation be used. The server was set for SQL_Latin1_General_CP1_CI_AS. So, let’s see how we can change the collation without a reinstall.

The first thing to do is to confirm the collation we have set.
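For example, a quick check might look like this (using Invoke-Sqlcmd from the SqlServer/SQLPS module; a plain query window in SSMS works just as well):

```powershell
# Returns the server-level collation, e.g. SQL_Latin1_General_CP1_CI_AS.
# "." assumes a default local instance.
Invoke-Sqlcmd -ServerInstance "." -Query "SELECT SERVERPROPERTY('Collation') AS ServerCollation;"
```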

We can see from these results that indeed the collation is wrong and we need to change it in order to comply with the request from the vendor. Next we will need to stop the SQL Server services.

I think it is pretty clear what to do there. As a reminder, the preferred method to stop and start SQL Server services is via the SQL Server Configuration Manager. We won’t do every start/stop from there for this article, for good reason.
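If you prefer to script the stop, a sketch for a default instance looks something like the following (named instances use service names of the form MSSQL$InstanceName and SQLAgent$InstanceName, so adjust accordingly):

```powershell
# Stop the Agent first, then the engine; -Force stops dependent services.
Stop-Service -Name SQLSERVERAGENT -ErrorAction SilentlyContinue
Stop-Service -Name MSSQLSERVER -Force
```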

Once the services are stopped, then we need to open an administrative command prompt and navigate to the SQL Server binn directory as shown here.

This is for a default instance on SQL Server 2017. If you have a named instance or a different version of SQL Server, you will need to navigate the instance folder structure for your instance.
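From an elevated prompt, that navigation looks something like this (the MSSQL14.MSSQLSERVER folder is the default 2017 instance; substitute the folder for your version/instance):

```powershell
# Default instance path for SQL Server 2017; adjust for your install.
Set-Location "C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Binn"
```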

Next is where the magic happens. We enter a command similar to this:
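Something along these lines (run from the Binn directory in that elevated prompt; the collation shown is the one this vendor required):

```powershell
# -m: single user admin mode; -T: trace flags at startup; -q: new collation.
.\sqlservr.exe -m -T4022 -T3659 -q"SQL_Latin1_General_CP850_CS_AS"
```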

Here is a quick summary of those flags in this command:

[-m] single user admin mode
[-T] trace flag turned on at startup
[-q] new collation to be applied

There are more, such as -s, available in Books Online for your perusal.

If you are curious what those trace flags do, it is pretty simple. TF4022 bypasses startup procs. TF3659, on the other hand, is supposed to write startup errors to the error log (at least in theory).

When the script starts, you will see something like the next two screens:

In the first, you can see that it says it is attempting to change the collation. In the second, just before the completion message, it states that the default collation was successfully changed. Let’s close this command prompt window and then go start SQL Server and validate the change.
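The restart and validation can be scripted as well; a minimal sketch for a default instance:

```powershell
# Restart the default instance, then re-check the server collation.
Start-Service -Name MSSQLSERVER
Invoke-Sqlcmd -ServerInstance "." -Query "SELECT SERVERPROPERTY('Collation') AS ServerCollation;"
```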

And that is a successful change. See how easy that is? This effort takes all of maybe 5 minutes to complete (validation, starting, stopping and so on). I would take this over a reinstall on most days.

Now that we have changed the collation, all I need to do is repeat the process to set the collation back to what it was originally (in my test lab) and make sure to bookmark the process so I can easily look it up the next time.

There is a bit of a caveat to this. On each change of the collation, I ran into permissions issues with my default logging directory (where the SQL error logs are written), and SQL Server would not start until I reapplied the permissions; after that it was fine. That said, the permissions issue was not seen on the box related to the change for the vendor. So just be mindful of the permissions, just in case.

The Wrap

Every now and again we have to deal with a sudden requirements change. When that happens, we sometimes just need to take a step back and evaluate the entire situation to ensure we are proceeding down the right path. It is better to be pensive about the course of action rather than to knee jerk into the course of action. Do you want to spend 5 minutes on the solution or 30-40 minutes doing the same thing? Changing collation can be an easy change, or it can be one met with a bit of pain doing a reinstall (more painful if more user databases are present). Keep that in mind and keep cool when you need to make a sudden change.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

T-SQL Tuesday #101: Essential Tools

Published on: April 10, 2018

What are the tools you love to use? What are the tools that maybe need a little sharpening? What tools do you have that maybe you wish were the same caliber as somebody else’s tools? And lastly, which tools do you not possess that you wish you possessed?

These are some of the questions that Jens Vestergaard (b | t) would like us to examine as we take an introspective look into our tool belts this month for the 101st installment of TSQL Tuesday.

If you are interested in reading the original invite, you can find that here.

“Besides SQL Server Management Studio and Visual Studio Data Tools we all have our own set of tools that we use for everyday chores and tasks. But how do we get to know which tools are out there, if not for other professionals telling us about them? Does it have to be a fully fledged with certification and all? Certainly not! If there’s some github project out there, that is helping you be double as productive, let us know about it. You can even boast about something you’ve built yourself – if you think others will benefit from using it.”

That is the invite. I am not going to adhere very closely to it. You see, I take a rather open-ended definition of the meaning of “tools” this time around. There are many general tools that just about everybody will use. But what about the tools that help to refine your abilities as a DBA?

Brief Intermission

A shout out is absolutely necessary for Adam Machanic (twitter) for picking the right blog meme that has been able to survive so long in the SQLFamily. This party has helped many people figure out fresh topics as well as enabled them to continue to learn.

Refinery

There are two tools I would like to throw out there as essential to the DBA tool belt. These are not your traditional tools by any means but they are easily some of the most essential tools you could employ in your trade craft. These tools are Google-FU and blogging.

Yes, I am taking a liberal definition of the term “tools”, and I warned you of that already. These are seriously some of the most important tools anybody could acquire. These are the tools that allow you to sharpen your skills and become an overall better professional. Let’s start with google-fu.

Google-fu is the ability to employ internet searches to find answers to your current questions as well as your current problems. Fifteen years ago, this skill was much more difficult to acquire and frankly quite a bit more important. In the present world, algorithms run in the background, profiling you to help you find the answer you are looking for a little more easily. The more searches you do, the better Google learns you, and you get better, more accurate results over time. This is a good thing. Every data professional should be able to employ a good Google search to find the appropriate answers to their current problems.

Blogging on the other hand also gets easier over time but for different reasons. Where Google has evolved to help you improve your google-fu, the only way for you to improve your blogging ability is through more and more practice.

Why is blogging so important? Blogging is not there just to help you become a better writer. That is a nice benefit because every data professional needs to be able to write to some extent depending on business needs, requirements, documentation etc. Blogging helps you become a better technical person.

What is often overlooked about blogging is that it requires the writer (if they truly care) to research, practice, and test what it is they happen to be writing about. Why do people do this when blogging? Well, the truth is simple. You are putting a piece of yourself out there for public consumption and people will nitpick it. You will want to be as accurate as possible with whatever you put out there for the world to see. This also becomes a bit of your resume, and future employers may see it. You will want them to see your value and not something littered with mistakes.

Over time, your writing will also tend to serve as a personal knowledge base. How cool is that? You will forget the fixes for things over time. You will forget some of the cool solutions over time. That is natural. If you have it written somewhere, you will be able to find it and use it again and again.


The Wrap

These are a couple of the tools that I highly recommend for all data professionals. Sure, they are non-traditional tools, but that does not diminish their importance. I recommend you try to polish these particular tools as frequently as possible.

Oh, and if you are interested in some other SQL Server specific tools, read this series I have published.

Syspolicy Phantom Health Records

SQL Server comes with a default SQL agent job installed (for most installations) to help manage the collection of system health data. I would dare say this job is ignored by most people and few probably even know it exists.

This topic is not new to me. You may recall a previous article I wrote entailing one type of failure and how to resolve that failure. That article can be found here.

I recently ran into a variant of the problem outlined in that previous article that requires just a bit of a different approach. The errors turn out to be similar on the surface but really are different upon closer inspection.

Phantom Health Records

If you are unfamiliar with the topic, I recommend reading the previous article. Then after reading that article, maybe brush up a little bit on the SQL Agent. The failures we will be looking at are within the SQL Agent and come from the job called: syspolicy_purge_history.

For this latest bout of failure, I will start basically where I left off in the last article. The job fails on a server and I have access to other servers of the same SQL version where the job works.

Before diving any further into this problem, let’s look at what the error is:

A job step received an error at line 1 in a PowerShell script.
The corresponding line is ‘set-executionpolicy RemoteSigned -scope process -Force’.
Correct the script and reschedule the job. The error information returned by PowerShell is:
‘Security error. ‘. Process Exit Code -1. The step failed.

Having the error in hand, and knowing that the job works elsewhere, my next logical step (again based on experience from the last article with this job) is to script the job from another server and use it to replace the job on the server where it fails. In principle this is an AWESOME idea.


Sadly, that idea was met with initial failure. As it turns out, the error remained exactly the same. This is good and unfortunate at the same time. Good in that I was able to confirm that the job was correctly configured with the following script in the job:
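For reference, the default step that ships with the job looks something like the following (the $(ESCAPE_SQUOTE(...)) and $(ESCAPE_NONE(...)) bits are SQL Agent tokens, replaced at runtime with the instance and server names):

```powershell
# Default PowerShell step for syspolicy_purge_history: walks the SQLPolicy
# provider path for this instance and erases phantom health records.
if ('$(ESCAPE_SQUOTE(INST))' -eq 'MSSQLSERVER') {$a = '\DEFAULT'} ELSE {$a = ''};
(Get-Item SQLSERVER:\SQLPolicy\$(ESCAPE_NONE(SRVR))$a).EraseSystemHealthPhantomRecords()
```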

Since the step fails from SQL Server, let’s see what else we can do to make it run. For giggles, let’s cram that script into the PowerShell ISE and see what blows up!

Now isn’t that a charming result! This result should be somewhat expected since the code I just threw into the ISE is not entirely PowerShell. If you look closer at the code, you will notice that it is using sqlcmd-like token conventions to execute a parameterized PowerShell script. That makes perfect sense, right? So let’s clean it up to look like a standard PoSH script. We need to replace some parameters and then try again.
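A cleaned-up version might look like this (the Agent tokens replaced with literal values; “MyServer” is a hypothetical server name – substitute your own):

```powershell
# Default instance, so $a resolves to '\DEFAULT'; 'MyServer' is hypothetical.
if ('MSSQLSERVER' -eq 'MSSQLSERVER') {$a = '\DEFAULT'} else {$a = ''};
(Get-Item SQLSERVER:\SQLPolicy\MyServer$a).EraseSystemHealthPhantomRecords()
```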

This will result in the following (resume reading after you scratch your head for a moment):

The key to this failure happens to be in “SQLSERVER:”. PoSH thinks we are trying to pass a drive letter when we are just trying to access the SQLServer provider. Depending on your version of Windows Server, SQL Server, and PoSH, you may need to do one of a couple of different things. For this particular client/issue, this is what I had to do to get the script to work.
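On this system, the fix amounted to loading the older SQLPS module before touching the SQLSERVER: provider; a sketch (other environments may instead want the newer SqlServer module, as the next paragraph notes):

```powershell
# Load the legacy SQLPS module so the SQLSERVER: provider path resolves;
# -DisableNameChecking suppresses the unapproved-verb warnings it generates.
Import-Module SQLPS -DisableNameChecking
```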

If you read the previous article, you may notice this command looks very much like the command that was causing the problems detailed in that previous article. Yes, we have just come full circle back to where we started a few years back. Suffice it to say, this is expected to be a temporary fix until we are able to update the system to PoSH 5 and can install the updated SqlServer module.

As is, this script is not quite enough to make the job succeed now. To finish that off, I created a ps1 file to house this script. Then from a new step (defined as a sqlcmd step type) in the syspolicy purge job, I execute that powershell script as follows:
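The invocation from the new job step might look something like this (the file path and name are hypothetical; Bypass scope sidesteps the execution policy error from the original step):

```powershell
# Call the ps1 that houses the purge script; path is a placeholder.
powershell.exe -ExecutionPolicy Bypass -File "C:\Scripts\syspolicy_purge.ps1"
```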

Tada, the nuisance job failure alert is resolved and the system is functioning again.

Conclusion

I dare say the quickest resolution to this job is to probably just disable it. I have seen numerous servers with this job disabled entirely for the simple reason that it fails frequently and just creates noise alerts when it fails. Too many fixes abound for this particular job and too few resolve the failures permanently.

I would generally err on the side of fixing the job. Worst case, you learn 1000 ways of what not to do to fix it. 😉

Given this job is tightly related to the system_health black box sessions (sp_server_diagnostics and system_health xe session), I recommend fixing the job. In addition, I also recommend reading the following series about XE and some of those black box recorder sessions – here.
