Lost that SQL Server Access?

As a data professional, can you recall the last time you needed to support a SQL Server instance for which you had no access? What if you used to have access and then that access magically disappeared?

I run into this dilemma more often than I would like. It is rather annoying to be under a crunch to provide support quickly, only to discover you are stuck waiting on somebody else who hopefully has access.

It’s one thing to not have access in the first place; in most cases that is an easy fix. The really unpleasant access issue is the one where you had confirmed prior access to the instance and then find yourself completely locked out. More succinctly, you have lost that SQL access!

Woe is Me!

All hope is now lost, right? OK, that isn’t entirely true. Or is it? What if everybody else on the team is also locked out and there is no known sysadmin account? In essence, everybody is locked out from managing the instance and now you have a real crisis, right? Well, not so fast. You can still get back into the instance with sysadmin access. It should be no real secret that you could always restart the SQL instance in single-user mode. Then again, that probably means bigger problems if the server is a production server and is still servicing application requests just fine.

What to do? What to do?

Restart Prohibited

If you really cannot cause a service disruption to bounce the server into single-user mode, my friend Argenis Fernandez (b | t) has this pretty nifty trick that could help you. Truth be told, I have tested that method (even on SQLExpress) several times and it is a real gem. Is this the only alternative?

Let’s back it up just a step or two first. Not having access to SQL Server is in no way the same thing as not having access to the server. Many sysadmins have access to the Windows server. Many DBAs also have access to the Windows server, or can at least work with the sysadmins to get access to it in cases like this. If you have admin access to Windows, then not much is really going to stop you from gaining access to SQL on that same box. It is a matter of how you approach the issue. Even to restart SQL Server in single-user mode, you need to have access to the Windows server. So, please keep that in mind as you read the article by Argenis as well as the following.

Beyond the requirement of having local access to the server, one of the things that may cause heartburn for some is the method of editing the registry as suggested by Argenis. Modifying the registry (in this case) is not terribly complex, but it is another one of those changes that must be put back the way it was. What if there were another way?

As luck would have it, there is an alternative (else there wouldn’t be this article). It just so happens, this alternative is slightly less involved (in my opinion). Let’s start with a server where I don’t have SQL access (beyond public) but I do have Windows access.

We can see on this SQLExpress instance on the TF server that my “Jason” login does not exist. Since I don’t have access, I can’t add my own account either. Time to fix that. In order to fix it, I am going to create a Scheduled Task in Windows that will run a SQLCMD script from my C:\Database folder. The folder can be anywhere, but I generally have one with scripts and such somewhere on each server that I can quickly access.

From here, you will want to click on the “Change User or Group” button to change it to an account that does have access to SQL Server. The account that I use is not a “user” account but rather it is a “system” account called “NT AUTHORITY\SYSTEM” that is present all the way through SQL Server 2017.

To locate the “NT AUTHORITY\SYSTEM” account, just type “SYSTEM” into the new window and click “Check Names”. The account will resolve and then you can click OK out of the “Select User or Group” window.

With the account selected that will run this task, we can now focus our attention on the guts of the task. We will now go to the “Actions” tab.

Click the New button, and here we will configure what will be done.

I do recommend putting the full path to SQLCMD into the “Program/Script” box. Once entered, you will add the following to the parameter box.
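Something along these lines works in the parameter box (this assumes the TF server and SQLEXPRESS instance from this demo, plus a script file named myscript.sql sitting in the C:\Database folder):

-S TF\SQLEXPRESS -i "C:\Database\myscript.sql"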

If you do not have an instance, then just the server name will suffice after the -S parameter. The -i parameter specifies the path to the SQL script file that will be created and placed in the C:\database directory (or whichever directory you have chosen).

That is it for the setup of the task. Now let’s look at the guts of the script file.
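The script only needs to do two things: create a login for the account that was locked out and then add that login to the sysadmin role. Here is a minimal sketch (the TF\Jason Windows account is just a placeholder for whichever account you need to let back in):

USE master;
GO
/* create the Windows login if it is not already present */
IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'TF\Jason')
BEGIN
    CREATE LOGIN [TF\Jason] FROM WINDOWS;
END
GO
/* grant the login membership in the sysadmin fixed server role */
ALTER SERVER ROLE sysadmin ADD MEMBER [TF\Jason];
GO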

Save that into a script file named myscript.sql in the aforementioned directory and then execute the Windows task. After executing the task, it is time to verify whether it worked or not.

Boom! From no access to a sysadmin in a matter of seconds. Here is that quick verify script – generalized.
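Something like the following will do it – it simply lists every member of the sysadmin fixed server role so you can confirm your login made the cut:

/* list all members of the sysadmin fixed server role */
SELECT sp.name AS LoginName
     , sp.type_desc AS LoginType
     , sp.is_disabled
FROM sys.server_role_members srm
    INNER JOIN sys.server_principals sp
        ON srm.member_principal_id = sp.principal_id
    INNER JOIN sys.server_principals rp
        ON srm.role_principal_id = rp.principal_id
WHERE rp.name = N'sysadmin';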

The Wrap

Losing access to a SQL instance is never a desirable situation – for the DBA. When the people who are supposed to have access lose that access, all hope is not lost. There are plenty of methods available to regain the requisite access to manage the server. Today, I shared one such method that I view as being extremely easy. If you lose access, I would recommend taking the steps shown in this article to regain that access.

While not in the back to basics series, I do recommend checking out my other posts in that series. Some topics in the series include (but are not limited to): Backups, backup history and user logins. I would also recommend reading this audit article. If you are able to elevate your permissions, then obviously anybody with server access can elevate their permissions too. For that reason, you should regularly audit the permissions and principals in SQL Server.

T-SQL Tuesday #102: Giving Back

Published on: May 8, 2018

Last month we had the opportunity to discuss some of the most important tools for a data professional. I took that opportunity to discuss how important it is to blog. As it turns out, that article correlates fairly strongly with today’s article.

These are maybe some of the questions that Riley Major (b | t) would like for us to examine about our own deep dark secrets and psychological makeup:

  • Why do we give back?
  • How do we help give back?
  • What do we plan to do to give back?

in this, the 102nd installment of TSQL Tuesday.

If you are interested in reading the original invite, you can find that here.

“Now I will give you an opportunity to give back. Everyone reading this has benefited from their fellow data professionals. And that benefit puts you in a position to share alike. You’ve learned something, so you can teach. You’ve been supported, so you can help. You’ve been led, so you can lead. But you don’t have to do it alone. We’re all going to do it together.

So here is my call. Pick some way you can help our community. “

Brief Intermission

A shout out is absolutely necessary for Adam Machanic (twitter) for picking the right blog meme that has been able to survive so long in the SQLFamily. This party has helped many people figure out fresh topics as well as enabled them to continue to learn.

Reality Check

Very much related to my blog post about blogging, I have to echo the sentiment about how “Blogging helps you become a better technical person.” A lot of what I do for my blog is there to help the community, but it has a self-serving purpose. It helps me become a better technical person. It also helps me to improve my communications and writing skills.

There are some side effects of blogging as well. Each of us has a finite number of keystrokes in our lifetime. That said, it makes sense to write certain technical things down in a blog post rather than retyping the same information over and over for various different email or forum responses. Make sense? If nothing else, it just seems more efficient to write a long technical explanation once rather than 12 times.

So there we have a couple of self-serving reasons to blog. Those same self-serving reasons also frequently apply to being involved in the community. For example, the more you exert yourself to help answer forum questions, the more you learn. You become a more experienced technical person. In addition, you learn how to communicate better and write better (hopefully). You are practicing your craft in a public forum where people can easily shred you (and they often do), when you are wrong – even minutely wrong. This potential for being blasted in the forums typically makes one work harder at getting everything just about perfect.

If you opt to speak in front of technical people, guess what? You are doing the same things I just wrote about with regard to forum responses and blogging. The big difference is that you are now doing it in person, live, on stage, and verbally! You have really put yourself out there in a big way by speaking in front of people. You will likely double down even more on ensuring your material is very near perfect and bulletproof. In addition, you will probably practice a few (hundred) times to make sure you don’t fumble with your words. What does this mean? You are becoming a more solid technical person and honing your communication skills. Again, very self-serving!

Or is it? The one final aspect of being a community-visible person is the drive behind what you do. I like to share what I learn. I also like to share my time. I believe in serving others with a charitable demeanor. Giving of yourself will always enhance your life more than you can imagine – when you do it with the attitude of putting others first. There is no selfish intent in those who really want to help the community.

It doesn’t matter if you are helping the sqlfamily, your local Scouting organization, boys and girls clubs, sports teams, or volunteering at the local schools; if you are doing it with the intent to serve and do good, you will enhance your life in some way. If you are doing it for some accolade or a truly self-serving reason, you may get the accolade but you will find your growth potential stunted.

Giving of yourself freely is such an awesome characteristic. There are many in the SQL community who truly give of themselves freely – like SQLSoldier. When it is a part of your identity, it comes naturally and there isn’t a lot that needs to be done to plan for it. Sure, sometimes it would be nice to have more time to do more. And that is the beauty of this characteristic. If you are giving of yourself freely, you often find that you want to give more. That is great! Do what you can, when you can. Sometimes, it will be more. Sometimes, it will be less. It is all good as long as the heart is in the right place.

The Wrap

This has been my diatribe about service and giving back to the community. When done properly, serving the community has a natural effect of enhancing one’s personal life in some proportion to the effort given.

Oh, and if you are interested in some of my community contributions (which, according to Jens Vestergaard, are an awesome contribution), read this series I have published.

Change SQL Server Collation – Back to Basics

One of my favorite things in the world is the opportunity to deal with widely varying database and environment requirements. Many vendors and databases seem to have a wide swath of different requirements. Some of the reasons for these requirements are absurd and some are not. That is a discussion for a different day.

When dealing with vendors, sometimes you get good documentation and requirements for the app. If you happen across one of these opportunities, you should consider buying a lottery ticket. Most of the time, the requirements and documentation are poorly assembled and then suffer from linguistic shortcomings.

What do you do when you run into poor documentation from a vendor? The typical answer would be to either call them or make a best guess (even if you call them, you are likely stuck with a best guess anyway). Then what do you do when you find that your best guess was completely wrong? Now it is time to back pedal and fix it, right?

When that mistake involves the server collation setting, the solution is simple – right? All you need to do is uninstall and reinstall SQL Server. That is the common solution and is frankly a horrific waste of time. This article will show some basics around fixing that problem quickly without a full reinstall.

I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Reinstall Prohibited

I am not a huge fan of wasting time doing something, especially if there is a more efficient way of achieving the same end result. I am not talking about cutting corners. If you cut corners, you likely just end up with more work to do to fix the problems your sloppiness will have caused. That to me is not the same end result.

Having recently run into a bit of a problem with a vendor (and their lacking requirements), I found myself with a server installed with the wrong collation for what the vendor wanted (never mind that they said nothing of it until a month after the server was set up and ready for them to use). The vendor needed the collation fixed immediately (basically, it needed to be fixed yesterday). I really did not want to do a reinstall of the server, and the sysadmins were just about to click through the uninstall and redo the install.

Oy Vey! Everybody hold up just a second here! First things first – verify with certainty there is good reason to need to change the server collation. It is perfectly legit to give the vendor the third degree here. Make sure they understand why they need the change. If they can answer the questions satisfactorily, then proceed with the change.

Next, just because the vendor says you have to uninstall/reinstall (or reboot) the server to make a certain change, does not mean they know what they are talking about. I have run into too many cases where the vendor thinks you must reboot the server to change the max memory setting in SQL Server (not true for sure).

Sure, common myth would say that you must reinstall SQL Server in order to change the default server collation. That is not entirely accurate. Reinstall is just one option that exists.

In the case of this vendor, they required that the SQL_Latin1_General_CP850_CS_AS collation be used. The server was set for SQL_Latin1_General_CP1_CI_AS. So, let’s see how we can change the collation without a reinstall.

The first thing to do is to confirm the collation we have set.
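A one-line query does the trick (SERVERPROPERTY is the documented way to read the instance-level collation):

/* confirm the current server-level collation */
SELECT SERVERPROPERTY('Collation') AS ServerCollation;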

We can see from these results that indeed the collation is wrong and we need to change it in order to comply with the request from the vendor. Next we will need to stop the SQL Server services.

I think it is pretty clear what to do there. As a reminder, the preferred method to stop and start SQL Server services is via the SQL Server Configuration Manager. That said, we won’t do every start/stop from there for this article, for good reason.

Once the services are stopped, we need to open an administrative command prompt and navigate to the SQL Server Binn directory as shown here.
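For example (this path assumes the default installation location; adjust it to match your own install):

cd "C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Binn"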

This is for a default instance on SQL Server 2017. If you have a named instance or a different version of SQL Server, you will need to navigate the instance folder structure for your instance.

Next is where the magic happens. We enter a command similar to this:
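Something like the following (the collation at the end is the one the vendor requested; substitute whichever collation you need):

sqlservr -m -T4022 -T3659 -q "SQL_Latin1_General_CP850_CS_AS"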

Here is a quick summary of those flags in this command:

[-m] single user admin mode
[-T] trace flag turned on at startup
[-q] new collation to be applied

There are more, such as -s, available in Books Online for your perusal.

If you are curious what is up with those trace flags, it is pretty simple. TF4022 bypasses startup procedures. TF3659, on the other hand, is supposed to write errors to the error log (at least in theory).

When the script starts, you will see something like the next two screens:

In the first, you can see that it says it is attempting to change the collation. In the second, just before the completion message, it states that the default collation was successfully changed. Let’s close this command prompt window and then go start SQL Server and validate the change.

And that is a successful change. See how easy that is? This effort takes all of maybe 5 minutes to complete (validation, starting, stopping and so on). I would take this over a reinstall on most days.

Now that we have changed the collation, all I need to do is repeat the process to set the collation back to what it was originally (in my test lab) and make sure to bookmark the process so I can easily look it up the next time.

There is a bit of a caveat to this. On each change of the collation, I ran into permissions issues with my default logging directory (where the SQL error logs are written), and SQL Server would not start until I reapplied the permissions; after that it was fine. That said, the permissions issue was not seen on the box related to the change for the vendor. So just be mindful of the permissions, just in case.

The Wrap

Every now and again we have to deal with a sudden requirements change. When that happens, we sometimes just need to take a step back and evaluate the entire situation to ensure we are proceeding down the right path. It is better to be thoughtful about the course of action than to knee-jerk into it. Do you want to spend 5 minutes on the solution or 30-40 minutes doing the same thing? Changing collation can be an easy change, or it can be one met with a bit of pain doing a reinstall (more painful if more user databases are present). Keep that in mind and keep cool when you need to make a sudden change.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

T-SQL Tuesday #101: Essential Tools

Published on: April 10, 2018

What are the tools you love to use? What are the tools that maybe need a little sharpening? What tools do you have that maybe you wish were the same caliber as somebody else’s tools? And lastly, which tools do you not possess that you wish you possessed?

These are maybe some of the questions that Jens Vestergaard (b | t) would like for us to examine as we take an introspective look into our tool belts this month for the 101st installment of TSQL Tuesday.

If you are interested in reading the original invite, you can find that here.

“Besides SQL Server Management Studio and Visual Studio Data Tools we all have our own set of tools that we use for everyday chores and tasks. But how do we get to know which tools are out there, if not for other professionals telling us about them? Does it have to be fully fledged with certification and all? Certainly not! If there’s some github project out there, that is helping you be double as productive, let us know about it. You can even boast about something you’ve built yourself – if you think others will benefit from using it.”

That is the invite. I am not going to adhere very closely to it. You see, I take a rather open-ended definition of the meaning of “tools” this time around. There are many general tools that just about everybody will use, but what about the tools that help to refine your abilities as a DBA?

Brief Intermission

A shout out is absolutely necessary for Adam Machanic (twitter) for picking the right blog meme that has been able to survive so long in the SQLFamily. This party has helped many people figure out fresh topics as well as enabled them to continue to learn.

Refinery

There are two tools I would like to throw out there as essential to the DBA tool belt. These are not your traditional tools by any means but they are easily some of the most essential tools you could employ in your trade craft. These tools are Google-FU and blogging.

Yes, I am taking a liberal definition to the term “tools” and I warned you of that already. These are seriously some of the most important tools anybody could acquire. These are the tools that allow you to sharpen your skills and become an overall better professional. Let’s start with google-fu.

Google-fu is the ability to employ internet searches to find answers to your current questions and problems. Fifteen years ago, this skill was much more difficult to acquire and frankly quite a bit more important. In the present world, algorithms run in the background, profiling you to help you find the answer you are looking for a little more easily. The more searches you do, the better Google learns you, and the more accurate your results become over time. This is a good thing. Every data professional should be able to employ a good Google search to find the appropriate answers to their current problems.

Blogging on the other hand also gets easier over time but for different reasons. Where Google has evolved to help you improve your google-fu, the only way for you to improve your blogging ability is through more and more practice.

Why is blogging so important? Blogging is not there just to help you become a better writer. That is a nice benefit because every data professional needs to be able to write to some extent depending on business needs, requirements, documentation etc. Blogging helps you become a better technical person.

What is often overlooked about blogging is that it requires the writer (if they truly care) to research, practice, and test what it is they happen to be writing about. Why do people do this when blogging? Well, the truth is simple. You are putting a piece of yourself out there for public consumption and people will nitpick it. You will want to be as accurate as possible with whatever you put out there for the world to see. This also becomes a bit of your resume and future employers may see it. You will want them to see your value and not something littered with mistakes.

Over time, your writing will also tend to serve as a personal knowledge base. How cool is that? You will forget the fixes for things over time. You will forget some of the cool solutions over time. That is natural. If you have it written somewhere, you will be able to find it and use it again and again.


The Wrap

These are a couple of the tools that I highly recommend for all data professionals. Sure, they are non-traditional tools, but that does not diminish their importance. I recommend you try to polish these particular tools as frequently as possible.

Oh, and if you are interested in some other SQL Server specific tools, read this series I have published.

Syspolicy Phantom Health Records

SQL Server comes with a default SQL agent job installed (for most installations) to help manage the collection of system health data. I would dare say this job is ignored by most people and few probably even know it exists.

This topic is not new to me. You may recall a previous article I wrote entailing one type of failure and how to resolve that failure. That article can be found here.

I recently ran into a variant of the problem outlined in that previous article, one that requires just a bit of a different approach. The errors turn out to be similar on the surface but really are different upon closer inspection.

Phantom Health Records

If you are unfamiliar with the topic, I recommend reading the previous article. Then after reading that article, maybe brush up a little bit on the SQL Agent. The failures we will be looking at are within the SQL Agent and come from the job called: syspolicy_purge_history.

For this latest bout of failure, I will start basically where I left off in the last article. The job fails on a server and I have access to other servers of the same SQL version where the job works.

Before diving any further into this problem, let’s look at what the error is:

A job step received an error at line 1 in a PowerShell script.
The corresponding line is ‘set-executionpolicy RemoteSigned -scope process -Force’.
Correct the script and reschedule the job. The error information returned by PowerShell is:
‘Security error. ‘. Process Exit Code -1. The step failed.

Having the error in hand, and knowing that the job works elsewhere, my next logical step (again based on experience from the last article with this job) is to script the job from another server and use it to replace the job on the server where it fails. In principle this is an AWESOME idea.


Sadly, that idea was met with initial failure. As it turns out, the error remained exactly the same. This is good and unfortunate at the same time. Good in that I was able to confirm that the job was correctly configured with the following script in the job:
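For reference, the PowerShell step of syspolicy_purge_history on the working servers contains something along these lines (note the SQL Agent token syntax baked into it):

if ('$(ESCAPE_SQUOTE(INST))' -eq 'MSSQLSERVER') {$a = '\DEFAULT'} ELSE {$a = ''};
(Get-Item SQLSERVER:\SQLPolicy\$(ESCAPE_NONE(SRVR))$a).EraseSystemHealthPhantomRecords()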

Since the step fails from SQL Server, let’s see what else we can do to make it run. Let’s take that code and try it from the PowerShell ISE. So, for giggles, let’s cram that script into PowerShell and see what blows up!

Now isn’t that a charming result! This result should be somewhat expected since the code I just threw into the ISE is not entirely PowerShell. If you look closer at the code, you will notice that it is using sqlcmd-like conventions to execute a parameterized PowerShell script. Now, that makes perfect sense, right? So let’s clean it up to look like a standard PoSH script. We need to replace some parameters and then try again.
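With the tokens swapped out for real values (a default instance on a server I will simply call MyServer here, purely as a placeholder), the script reduces to something like this:

# MyServer is a placeholder server name; MSSQLSERVER denotes a default instance
if ('MSSQLSERVER' -eq 'MSSQLSERVER') {$a = '\DEFAULT'} ELSE {$a = ''};
(Get-Item SQLSERVER:\SQLPolicy\MyServer$a).EraseSystemHealthPhantomRecords()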

This will result in the following (resume reading after you scratch your head for a moment):

The key to this failure happens to be the SQLSERVER:\ path. PoSH thinks we are trying to pass a drive letter when we are just trying to access the SQL Server provider. Depending on your version of Windows Server, SQL Server, and PoSH, you may need to do one of a couple of different things. For this particular client/issue, this is what I had to try to get the script to work.
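In my case, the adjustment amounted to explicitly loading the SQL Server PowerShell module before touching the provider path. A sketch of that adjustment (assuming the legacy SQLPS module is the one available on the box; MyServer is still a placeholder):

# load the legacy SQLPS module so the SQLSERVER: provider gets registered
Import-Module SQLPS -DisableNameChecking
if ('MSSQLSERVER' -eq 'MSSQLSERVER') {$a = '\DEFAULT'} ELSE {$a = ''};
(Get-Item SQLSERVER:\SQLPolicy\MyServer$a).EraseSystemHealthPhantomRecords()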

If you read the previous article, you may notice this command looks very much like the command that was causing the problems detailed in that previous article. Yes, we have just completed a 180 and returned to where we started a few years back. Suffice it to say, this is expected to be a temporary fix until we are able to update the system to PoSH 5 and install the updated sqlserver module.

As is, this script is not quite enough to make the job succeed. To finish that off, I created a ps1 file to house this script. Then, from a new step (defined as a CmdExec step type) in the syspolicy purge job, I execute that PowerShell script as follows:
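The command in that step looks something like this (the path and file name are simply the ones I chose for this illustration):

powershell.exe -ExecutionPolicy Bypass -NoProfile -File "C:\Database\syspolicy_fix.ps1"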

Ta-da, the nuisance job failure alert is resolved and the system is functioning again.

Conclusion

I dare say the quickest resolution to this job is to probably just disable it. I have seen numerous servers with this job disabled entirely for the simple reason that it fails frequently and just creates noise alerts when it fails. Too many fixes abound for this particular job and too few resolve the failures permanently.

I would generally err on the side of fixing the job. Worst case, you learn 1000 ways of what not to do to fix it. 😉

Given this job is tightly related to the system_health black box sessions (sp_server_diagnostics and system_health xe session), I recommend fixing the job. In addition, I also recommend reading the following series about XE and some of those black box recorder sessions – here.
