Coping with Self Doubt

TSQL Tuesday

Behold, the second Tuesday of January 2020 is upon us, and that means it is time for another group blog party called TSQLTuesday. This party, started by Adam Machanic, has been going long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a roundup of all TSQLTuesdays over here.

This month, the blog party is almost like a masquerade ball. John Shaulis (b | t) has implored us to take hold of our inner impostor. Okay okay, he hasn’t asked us to be impostors, but rather he has requested we talk about Impostor Syndrome.

At first glance, and seeing the word “impostor”, my thoughts turned outward. I thought John wanted us to out all of those Sr. DBA candidates out there who are soooo confident but don’t know the difference between a PK and FK or even the difference between a delete and truncate (as examples).

In reality, John is asking about something entirely different. Looking up “Impostor Syndrome” I see several definitions similar to the one that John posted. Here are a couple of examples.

Impostor syndrome can be defined as a collection of feelings of inadequacy that persist despite evident success. ‘Impostors’ suffer from chronic self-doubt and a sense of intellectual fraudulence that override any feelings of success or external proof of their competence. – Harvard Business Review

And…

Impostor syndrome is a psychological pattern in which one doubts one’s accomplishments and has a persistent internalized fear of being exposed as a “fraud”. – Wikipedia

These definitions reveal something very pertinent to the topic. The syndrome consists of persistent, nagging, chronic self-doubt. Even in the face of success, the feelings of inadequacy can't be escaped. Well, this certainly sheds light on the topic and made me ponder even further. You see, I think the syndrome is a much larger problem than the occasional feeling of being inadequate. Persistent, chronic feelings of inadequacy as described by these definitions would make me concerned about the presence of depression as well. This can be a very serious issue and shouldn't be taken lightly. Seek help!

Well, What if…

What about the fact that so many of us do have feelings of inadequacy? The occasional feeling of not being good enough is not entirely a bad thing. (Again, the trick is figuring out where chronic, persistent, nagging feelings end and the occasional, acceptable feelings begin.)

There have been many times when I have personally had the feeling of inadequacy as a Data Professional. These thoughts often creep up whenever doing something new or different or even something uncommon (but practiced). I have even had these thoughts occasionally right before speaking. I have a close friend who has these thoughts just about every time before he speaks.

When these thoughts occasionally crop up, what is done next is the important thing. How does one cope? Does the coping mechanism work? Do you practice your speaking in front of a mirror for hundreds of hours? Maybe you have a few people proofread an article you are writing. Maybe you build a proof of concept and then ask a couple of friends to test it and try to break it to make sure it works.

Coping mechanisms are great methods to assist in the removal of self-doubt. Another good coping mechanism is to get some physical activity. Physical activity can help with chronic self-doubt and is also something therapists may recommend for depression (important because the symptoms of Impostor Syndrome and depression are so similar).

Another component of Impostor Syndrome is the fear of being discovered as a fraud. Living in fear isn't healthy. If there is a persistent, unshakable feeling of fear about being discovered as a fraud, then it is time to find help and also start troubleshooting your personal health. Here is a good starting point to help troubleshoot your health in regard to this syndrome.

One of the best coping mechanisms for Impostor Syndrome is the use of learned behaviors. Some of the learned behaviors that are great for everybody are as follows:

  1. Learn to take your mistakes in stride. Mistakes are natural. Own the mistake. Mistakes can make us better at what we do. The sooner we own a mistake, the sooner we can learn how to not make that same mistake again.
  2. Learn to accept internal validation. Not everybody needs a participation trophy. Internal validation is just as good as external validation. Also be accepting of constructive criticism.
  3. Set smaller attainable goals directed at specific skills or behaviors you wish to improve upon or learn.
  4. Learn to rely on others. Be able to reach out and ask for help.
  5. Instead of being an information hoarder, learn to disseminate knowledge. In addition, it is perfectly acceptable to admit you don’t know something and trust that you can learn it quickly or find somebody that already knows it.

Wrapping it Up

It is not abnormal to have occasional feelings of inadequacy. In IT, and particularly among those who are successful, thoughts of being a fraud can occur. The important thing is to recognize the thoughts and their frequency. When the thoughts start to creep in, learn to replace them with a coping mechanism or an alternate behavior. This article provided some coping mechanisms and some behaviors to try and learn.

In the case of experiencing chronic, inescapable self-doubt and living in constant fear of being found out as a fraud, that is a serious issue and professional help may be required.

Feel free to explore some of the other TSQL Tuesday posts I have written.

If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For some “back to basics” related articles, feel free to read here.

Creative Database Naming

Every now and again, we as DBAs need to get away from the daily grind of humdrum tasks. We need to find a way to have a little fun (the kind that is harmless and lacking in mal-intent).

Sometimes, this fun can be had through learning opportunities. Sometimes, we can have a little fun by diving deep into SQL Server in some way. At least I know that proves to be enlightening and enjoyable for me. Sometimes, it is just fun for the sake of a good laugh. What if we could find an opportunity to have a little fun that incorporates learning and a laugh or two? BINGO!

Enter the opportunity to play with emojis, collations, and database (or object) names.

Fun with Characters

Let’s figure we have a requirement to create a database with sensitive data. Due to the sensitivity of the data, it is classified confidential (for your eyes only, don’t talk about it and plug your ears if somebody starts talking about it). This is so sensitive that an apt name for the database could be anything like 🙈 or 🙉 or 🙊. Being smart, you know there are two more databases coming down the line so you only want to pick one of those for the name and not all three (though all three could make sense for a single database name).

Being on top of your game, you prep for all three databases in advance. Here is the script to create the three new databases.
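
A minimal sketch of such a script (the database names really are the emoji characters):

-- Create the three "confidential" databases.
CREATE DATABASE [🙈];  -- see no evil
GO
CREATE DATABASE [🙉];  -- hear no evil
GO
CREATE DATABASE [🙊];  -- speak no evil
GO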

In executing the script, you receive the following results:

Msg 1801, Level 16, State 3, Line 15
Database ‘🙈’ already exists. Choose a different database name.
Msg 1801, Level 16, State 3, Line 17
Database ‘🙉’ already exists. Choose a different database name.
Msg 1801, Level 16, State 3, Line 19
Database ‘🙊’ already exists. Choose a different database name.

You look up and down through your SSMS window and even query sys.databases to ensure those databases are factually not present. You are able to confirm that none are currently present on the server, yet the script continues to fail. How could this be? Clearly, the images are each very different.

From the database review, you know there is only one other database with an emoji-style name – 👺. What could it be? Well, as it turns out, it is the collation that was chosen for the SQL Server install. The chosen collation doesn't support a large enough range of code points, so most emojis end up getting mapped to the same thing when stored in SQL Server (a topic for a much larger article, covered very completely by Solomon Rutzky on his blog – here). The short of the issue is the use of supplementary characters and the splitting of the code point into two surrogate code points. For most emojis, this surrogate system results in the "high" surrogate being the same. Let's take a look at it.
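
A rough sketch of that look, using UNICODE() to expose the code point SQL Server sees for each character:

SELECT UNICODE(N'🙈' COLLATE SQL_Latin1_General_CP1_CI_AS) AS SeeNoEvil,
       UNICODE(N'🙉' COLLATE SQL_Latin1_General_CP1_CI_AS) AS HearNoEvil,
       UNICODE(N'🙊' COLLATE SQL_Latin1_General_CP1_CI_AS) AS SpeakNoEvil,
       UNICODE(N'👺' COLLATE SQL_Latin1_General_CP1_CI_AS) AS DevilMask;
-- Every column returns 55357 (0xD83D) – the shared "high" surrogate.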

Notice how each of the monkeys plus the devil mask all translate to the same code point when using SQL_Latin1_General_CP1_CI_AS (the default collation)? This is the root of the problem. If we change to a different server with a better collation (e.g. Latin1_General_100_CI_AS_SC), we will see vastly different results.

First, let's validate that the collation on this other instance truly is displaying the correct code points (and it is) instead of just the high surrogate.
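
A sketch of that validation, this time with the supplementary character-aware collation:

SELECT UNICODE(N'🙈' COLLATE Latin1_General_100_CI_AS_SC) AS SeeNoEvil,   -- 128584
       UNICODE(N'🙉' COLLATE Latin1_General_100_CI_AS_SC) AS HearNoEvil,  -- 128585
       UNICODE(N'🙊' COLLATE Latin1_General_100_CI_AS_SC) AS SpeakNoEvil, -- 128586
       UNICODE(N'👺' COLLATE Latin1_General_100_CI_AS_SC) AS DevilMask;   -- 128122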

Then let’s go ahead and create each of those three high security databases (seenoevil, speaknoevil, and hearnoevil). Et voila! We now have each of those new databases created and they are indeed unique as we desired (as you can see in the preceding image).

If you want to learn all of the hardcore nitty-gritty behind this, I recommend reading the blog by Solomon Rutzky – here. Solomon knows his 💩 about collations and can answer, or figure out how to answer, just about anything you could throw at him about the topic. His blog is a WEALTH of in-depth knowledge with all sorts of tests and proofs. Check it out!

Put a bow on it

Working as a DBA and creating databases doesn’t have to be the same old hum drum day in and day out. This article showed how to have a little fun while creating databases and database objects. At the same time, I showed how the collation of your server can impact some of your naming schemes.

Interested in learning about some deep technical information instead? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the ninth article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

 

*Post mortem from Solomon, the collation genius

And, that brings us to the next fun fact: being able to use more than 1 emoji for database names has nothing to do with whether or not SQL Server sees a surrogate pair as being a single Supplementary Character or merely two surrogate code points. Using SCA collations only affects the behavior of the built-in functions, even though the official documentation says that they help in sorting and comparison (the doc is wrong and I haven't had time to submit a correction). Being able to name more than 1 database with one or more emoji characters is entirely based on how the characters compare to each other, and that is a function of sort weights, which are managed separately. Sort weights do map to code points, but they can change in value depending on the locale/culture being used, as well as the sensitivities selected (i.e. case, accent, etc). The problem you run into with at least most of the version 80 collations (i.e. all Windows collations without a version number in their names, and all SQL Server collations) is simply that they never defined any sort weights for either supplementary code points, or even the surrogate code points used to create surrogate pairs. And with no sort weights defined, their sort value is always 0, and since they are all "0", they all equate to not only each other, but to anything else with a sort weight of "0", even an empty string.
Starting with the version 90 collations (even without the “_SC” flag in the name), sort weights were added to the surrogate code points (not to supplementary characters) so that they could at least be distinguished from each other in sorting and comparison operations. And this is why:
  1. you were not able to create the 3 monkey DBs using SQL_Latin1_General_CP1_CI_AS

  2. you were able to create the 3 monkey DBs using Latin1_General_100_CI_AS_SC

  3. you can create those 3 monkey DBs using SQL_Latin1_General_CP850_BIN2, even with that being a SQL Server collation, because binary collations don’t use sort weights, but instead just go by each byte or code unit (depending on BIN or BIN2, respectively)

Cannot Open Backup Device

Your success as a DBA is directly related to your ability to ensure proper database backups are occurring. There is a whole slew of considerations that come into play when discussing what a proper database backup actually is. To drill it down to its most basic definition, a proper database backup is one that fulfills business requirements for recovery and retention, and has been routinely tested to ensure it can be restored.

Remember, a backup that cannot be restored is factually not a backup. Why? Well, going back to the definition of a backup, it is a fail-safe mechanism to be implemented in the event of a failure. If it cannot be implemented (in this case restored), then it provides no value because it cannot be used.

What about when you run into a problem even getting to the point of getting a successful backup operation? For instance, this error can be a bit troubling and may just cause you to lose plenty of time troubleshooting it.

Msg 3201, Level 16, State 1, Line 3
Cannot open backup device ‘\\SomeServer\C$\MSSQL14\MSSQL\DATA\Full_Backup2.bak’. Operating system error 1326(The user name or password is incorrect.).
Msg 3013, Level 16, State 1, Line 3
BACKUP DATABASE is terminating abnormally.

Sure, the error seems to report plenty of adequate information, but you may be surprised at the pain this particular error can cause.

Backup Device Permissions

If you search long enough you will come across numerous articles and forum posts that all pretty much point in the same direction. They all point to the problem being a permissions issue. There are a few concerns with the permissions diagnosis, however. In a recent issue for a client, we could verify that permissions were indeed not at fault. To ensure the permissions were not at fault, we granted local admin in the OS as well as sysadmin within SQL Server for the service account. In addition, we also tried a proxy account with full-blown access in Windows and in SQL.

To take it a step further, we ensured that the special permissions for service accounts were also in effect. Those permissions are:

  1. Permission to bypass traverse checking (SeChangeNotifyPrivilege)
  2. Permission to replace a process-level token (SeAssignPrimaryTokenPrivilege)
  3. Permission to adjust memory quotas for a process (SeIncreaseQuotaPrivilege)
  4. Permission to access this computer from the network (SeNetworkLogonRight)

None of this seemed to have any effect when trying to perform a database backup in SQL Server. In our case, we also ensured that the service account had full-blown access to the folders as well.

Following the traditional troubleshooting measures, I created a virtual backup device as well, only to get the same error. I tried to use the UNC path from a remote server and it worked perfectly fine for all accounts in question. What could be going on? Then I figured, why not try the UNC path from the server itself? Finally, a clue! Suddenly I was getting prompted for my credentials, but my credentials would not work no matter what. Just for giggles, I verified my account was not locked out, and indeed my account was working just fine.

As it turns out, sometimes the problem is not really a permissions issue but a veiled security issue. You see, if I used the actual server name instead of the CNAME in all cases above, suddenly everything worked. The issue turns out to be more along the lines of a Windows issue and is documented here (still valid on Windows Server 2016).

Here is the basic SQL backup test script that led us down the path to the CNAME being the issue.
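
A sketch of that test script, using the CNAME-based UNC path from the error above (swap in the actual server name and the backup succeeds):

-- Fails with OS error 1326 when the UNC path uses the CNAME.
BACKUP DATABASE [master]
    TO DISK = N'\\SomeServer\C$\MSSQL14\MSSQL\DATA\Full_Backup2.bak'
    WITH INIT, STATS = 25;
GO

-- The dump device test (testing exercise only) behaved the same way.
EXEC sp_addumpdevice 'disk', 'CNameTest',
    N'\\SomeServer\C$\MSSQL14\MSSQL\DATA\Full_Backup2.bak';
GO
BACKUP DATABASE [master] TO CNameTest;
GO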

Put a bow on it

When running into error 1326, it makes plenty of sense to create a backup dump device (only as a testing exercise) as well as test the connectivity to the UNC path from the local server instead of a remote server. In my case, the problem was the local server blocking access to a CNAME version of the server name.

Interested in learning about some deep technical information instead? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the eighth article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Negative Port Numbers

As a DBA, one of the more critical tasks is to confirm proper setup of a SQL Server. In order to confirm proper setup, I like to validate tcp/ip and port settings. I do this not just through the SQL Server Configuration Manager, but also via query through Management Studio.

Occasionally, when validating the port configurations through this method, you will encounter a fun little puzzle. If you are paying attention to the results, you just may see something that could cause you to scratch your noggin.

Of course, it all depends on the query method you choose to use to perform your validations. There are two easily accessible methods to query for the tcp/ip and port settings. Each may produce slightly different results. Let's explore these methods.

What’s in a Port?

First let’s go with the easiest of the queries. We will query sys.dm_exec_connections with something that looks like the following.
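
Something along these lines (a sketch, scoped to the current session):

SELECT ec.session_id,
       ec.net_transport,
       ec.protocol_type,
       ec.local_net_address,
       ec.local_tcp_port
FROM sys.dm_exec_connections ec
WHERE ec.session_id = @@SPID;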

Executing this query via sqlcmd on my server in question, I might see something like the following in the results.

As you can surmise from the image, the instance I am setting up and validating happens to have multiple IP addresses and a more complex setup because I have it listening on different ports depending on the IP address source. We will focus on the non-default IP address for the purposes of this article. In the local_tcp_port field, you can see that the ports are being reported as expected. And for the sake of simplicity, these ports are correct.

Let’s now divert our attention to the alternative option – CONNECTIONPROPERTY().
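
A comparable sketch using the function:

SELECT CONNECTIONPROPERTY('net_transport')     AS net_transport,
       CONNECTIONPROPERTY('local_net_address') AS local_net_address,
       CONNECTIONPROPERTY('local_tcp_port')    AS local_tcp_port;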

In the results window, I see something like this:

Take quick note of the port number I have circled in red. This doesn’t match the original query at all. In fact, it doesn’t come anywhere close to the actual port number. In addition, the port number shown here is a negative value. Obviously a negative port is not correct as TCP/IP ports only range from 0-65535. So what is happening here? Let’s change this query just a little bit and combine the two sources.
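
A sketch of the combined query:

SELECT ec.local_net_address,
       ec.local_tcp_port,
       CONNECTIONPROPERTY('local_tcp_port') AS connprop_tcp_port
FROM sys.dm_exec_connections ec
WHERE ec.session_id = @@SPID;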

The results of this query give me the following.

OK, cool. So we can see that some sort of masking has been applied to conceal the port number when it is a "dynamic" port. What is the significance of 65536 though? Well, it just so happens that 16 bits yields 65536 possible values. This gives us port values of 0-65535, with port 0 being reserved and unusable. Thus, we simply do a little math to figure out the actual port by adding 65536 to the negative port value. That is shown in the following code snip.
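
Roughly like this (note the explicit conversion, for reasons covered next):

SELECT ec.local_tcp_port AS dmv_port,
       CONVERT(int, CONNECTIONPROPERTY('local_tcp_port')) AS connprop_port,
       CASE
           WHEN CONVERT(int, CONNECTIONPROPERTY('local_tcp_port')) < 0
               THEN CONVERT(int, CONNECTIONPROPERTY('local_tcp_port')) + 65536
           ELSE CONVERT(int, CONNECTIONPROPERTY('local_tcp_port'))
       END AS calculated_port
FROM sys.dm_exec_connections ec
WHERE ec.session_id = @@SPID;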

When comparing the values between sys.dm_exec_connections and CONNECTIONPROPERTY(), ensure you perform an explicit conversion on the value from CONNECTIONPROPERTY(). Despite documentation showing the port as an integer value, it is not. The data type of the port value from CONNECTIONPROPERTY() is actually sql_variant, and an implicit conversion won't work there.

Put a bow on it

Validating your server setup is an integral component of your duties as a SQL Server DBA. When performing those validations, it is possible to run into an intriguing difference in reported port values. That difference is easily rectified once you understand that the CONNECTIONPROPERTY() function is doing a bit of port masking by subtracting 65536 from the actual port number value.

Interested in learning about some deep technical information instead? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the seventh article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Get a List of Files with Data

Suppose you have a directory on your server that houses over 300,000 csv files. These files are created by an automated process related to your SQL Server operations, and that process runs on a schedule every 15 minutes.

Now suppose that you have been tasked with figuring out which of the files have data of value in them and which can be ignored. The files that can be ignored will all be either 0k or 1k in size. However, some of the 1k files can also be ignored because they only contain a header. Every csv file should have a header. So, we must figure out a way to filter out those files that are 1k or less and contain only a header row.

Filtering out the files matching these criteria will allow us to find files that have useful data in them. Now why do you need to figure out which ones have useful data? Let's just say that maybe some of the files were created with some bad data in them, and you need to figure out which files may have bad data so those particular files can be regenerated. As luck would have it, the automated process does not have any sort of logging and does the bare minimum to create the files in the first place. (Sometimes you just inherit a flawed process where inadequate thought was given.)

Finding the Files

The hard part has been done at this point. At least you know some of the attributes that will help distinguish the wanted files from the unwanted. Sorting through these files by hand would be rather bothersome and cumbersome. The trick here is to find a way to search all of these files quickly and easily filter the bad from the good. Sounds like an opportunity for another automation or script. I have just the PowerShell script for that.
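
A minimal sketch of such a script (the folder path and day window are assumptions to adjust for your environment):

# Find csv files from the last N days that hold more than just a header row.
$Path     = 'C:\DataDumps'   # hypothetical folder housing the csv files
$DaysBack = 2                # how many days' worth of files to search

Get-ChildItem -Path $Path -Filter '*.csv' |
    Where-Object { $_.LastWriteTime -ge (Get-Date).AddDays(-$DaysBack) } |
    Where-Object { @(Get-Content -LiteralPath $_.FullName -TotalCount 2).Count -gt 1 } |
    Select-Object FullName, Length, LastWriteTime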

With this script, I have a couple of things to help me find files of interest quickly. First, I have a filter in place to allow me to search a specific number of days' worth of files. Second, I have a filter in place to ensure it only returns file names where there is more than just a header row present.

Now, instead of spending hours perusing files trying to find something of value, I have reduced my time spent to just a mere fraction of that. Next steps after this would be to go and add some additional logging and better robustness to the initial process to reduce the chance of bad data being put into the files in the first place.

Put a bow on it

Automated processes are fantastic. We as DBAs strive to have more tasks automated than not. That said, automation without planning is just creating a time sink later on in the job. At some point, you may need to employ a PowerShell script similar to this in order to find where your automated process has gone wrong (if you did not plan that process well enough from the start).

Interested in learning about some deep technical information instead? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the sixth article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

What is the Merit of the Job?

For the past several months I have been pondering the topic of meritocracy. Meritocracy seems to be a hot-button topic that swings between hot and cold cycles in various circles and climates. There seem to be quite a few political fires that can easily be stoked with this topic. For me, though, I haven't been thinking of it from a political standpoint. I have been pondering how the rat race of meritocracy has affected me unwittingly.

What is meritocracy? Well, there are a great many ways to describe this idea, but let’s just stick to the base definition.

Meritocracy: a form of social system in which power goes to those with superior intellects or ability.

I have no doubt that there is an absolute need for meritocracy in certain situations. For example, as a sports coach, you want your best athletes on the field in order to compete. The best athletes are determined through a series of assessments and monitored efforts and performances. They have earned their way into that position based on merit. Let’s be honest here, as spectators we also want to watch the best athletes and not the scrubs else we wouldn’t pay money to attend. It is a disservice in certain situations to not award those with the greatest ability the merit of more playing time in an athletic competition.

Serious Flaw

When we start applying a merit system to our personal lives or to our professional goals, things can get a little dicey. One of the major attributes of a merit-based system is extreme competition. I view competition as a good thing. It keeps me on my toes and progressing personally. That said, there is a hidden undesirable impact that comes from extreme competition – and that impact affects interpersonal relationships.

As IT professionals (and the running joke even in IT is that DBAs are far worse than everybody else) we are far too often introverted. We are internally driven to achieve certain levels of success. Along with that internal drive, we far too often focus on the merits of the job that affect our career status and seldom pay adequate attention to those merits that affect our personal relationships and feelings as human beings.

You see, meritocracy is a rat race. In a meritocracy we are driven for results and in business the results frequently come at the cost of human interactions and relationships. How well do you know the people around you? Have you taken much effort to try and establish a rapport with your co-workers or clients – beyond the ones that sign the checks?

This is the serious flaw in a merit-based system. One tries to climb the ladder of success, but at the cost of being an approachable human being. Nice people don't need to finish last. Maybe your merit review should have a line item about building relationships of trust that determines how well you have succeeded at that requirement (did you exceed expectations?)!

Put a bow on it

Merit-based systems certainly have their place in business and in life. How well we govern our personal interaction with the merit-based system and how we conduct our personal growth should matter more than most other achievements in life. Interaction with people is crucial to our growth as professionals and as human beings.

Interested in learning about some deep technical information instead? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the fifth article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

The Gift of the SPN

TSQL Tuesday

The second Tuesday of the month is upon us once again. That means it is time for another group blog party called TSQLTuesday. This party, started by Adam Machanic, has been going long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a roundup of all TSQLTuesdays over here.

This month, we are looking at the last opportunity for the year to contribute to this long-standing tradition. This month being right in the middle of the holiday season, we have an opportunity to explore how we have been blessed over the past year or so.

I personally like the topic. I have one comment concerning the topic before diving in too far: I feel it wise to reflect upon how you may have been blessed in various ways on a far more regular basis. Doing so just once a year is far too infrequent. Thanks to Mala (b | t) for bringing this topic up to remind us of how important it is.

In Mala's words:

This is a time for material gift giving, for many of us. It might also be a time to consider the many gifts we have received through the year, and perhaps use this opportunity to appreciate people or situations that we were blessed with.

SPN

Service Principal Names (SPNs) are used by Kerberos to help associate a service instance with a service logon account. SPNs are useful and powerful little things. That said, sometimes they can cause wonderful little issues with connectivity to SQL Server. One of the more common issues is an error message such as this:

Cannot Generate SSPI Context

As circumstances would have it, the cause of this issue is stated in a KB article as follows.

A Service Principal Name (SPN) for the SQL Server Database Engine may either be missing, misplaced, or is a duplicate to other SPNs configured in the Active Directory of the domain.

I have known many people to run out of hair dealing with SPN issues and the dreaded SSPI context error. Thankfully, there are ways to resolve some of these issues and make life just a touch easier. One of the easier ways is a lesser-known tool that you can download from Microsoft – here.

The tool is fairly simple to install and use. The one sticking point for using it is that you need to know where to find it so you can use it after it is installed. Here is the path to the executable.

%SystemDrive%\Program Files\Microsoft\Kerberos Configuration Manager for SQL Server

The link mentioned provides the path and the install instructions, along with more info on how to use the tool from the command line. Personally, I use the tool to discover the SPNs on the server in question and then to see if I might be missing any.

Upon opening the tool, you are greeted with this functional screen.

From the welcome screen, click on the “Connect” menu option. From there a new screen will open that prompts for the server name, service account and password. However, if you are just connecting to the local server then none of that is required and you can just proceed. After a few minutes, and with a bit of magic, the SPNs are evaluated and a report is generated. Along with this report, there is even an option to “Fix” the missing SPNs if they exist.

This tool really takes some of the room for error out of the equation and makes the job fairly easy. I recommend it. I have shared this with some sysadmins and they found it to be very helpful and time saving. Check it out!

Wrapping it Up

Every once in a while there is an extremely valuable tool that comes along. While the footprint and use frequency of this tool may not be that big, the tool is essential to making the DBA job easier to do. I recommend getting this tool – especially if you have ever run into SSPI issues.

In this article I showed the Kerberos Configuration Manager Tool for SQL Server to help troubleshoot SSPI Context errors and wayward SPNs. Another exceptionally useful tool is Extended Events. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

Why oh Why? Effects of Antivirus on a DB Server

TSQL Tuesday

*shrug* my post is late due to technical issues.

The second Tuesday of the month is upon us once again. That means it is time for another group blog party called TSQLTuesday. This party, started by Adam Machanic, has been going long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a roundup of all TSQLTuesdays over here.

This month, we are looking at the 10 year anniversary of this long-standing blog party / tradition. For the 10 year party, everybody has the opportunity to elaborate on something questionable they have seen being done as it relates to SQL Server.

This is a topic that I know has sparked a lot of commentary by the host, Wayne Sheffield (b | t). In Wayne's invite, he says "Not too long ago, I ran across a situation where I was scratching my head, wondering why something had been implemented the way it had been (you can read about it here). And that gave me the idea for this T-SQL Tuesday topic."

Take a moment to read about his "aha, this is odd" moment and then you can totally see the tone that was intended for this month's topic.

Dubious Backups

My topic (be it ever so late) is about some rather large frustrations with phantom backup processes that were interfering with normal backup operations and production performance, and that were running at random intervals throughout the day – sometimes as many as 12 times during normal business hours.

Every single time these backups kicked off, we would get I/O frozen alerts and failed logins until the I/O was unfrozen. Sometimes it would last as little as seven seconds and sometimes it would last for an uncomfortable minute or so.

In working with the sysadmins, they kept insisting that it must have been something configured within SQL Server. Each time they insisted, we would have to rehash the scheduled backups in SQL Server as well as validate any Windows scheduled tasks. Here is what some of those backups looked like from within SQL Server.
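
The queries were variations on the msdb backup history tables – something akin to this sketch (database name is the example DBA database):

-- Recent FULL backups of the DBA database, with who/where details.
SELECT bs.database_name,
       bs.backup_start_date,
       bs.backup_finish_date,
       bs.user_name,
       bs.is_snapshot,
       bmf.device_type,           -- 7 = virtual device
       bmf.physical_device_name   -- a GUID rather than a file path for virtual devices
FROM msdb.dbo.backupset bs
    INNER JOIN msdb.dbo.backupmediafamily bmf
        ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = N'DBA'
    AND bs.type = 'D'             -- D = full database backup
ORDER BY bs.backup_start_date DESC;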

Each of those queries produces results similar to the following.

The preceding image clearly shows that there have been multiple full backups of the DBA database each day and that there is a variable quantity of these backups each day.

In the preceding image, we see more details about these full backups. We know that the backups are not coming from the SQL Server service account. We also know that these are not SQL backups; rather, they are coming from an outside entity that is causing the backups to be sent to a virtual device. We know a virtual device is the target due to the GUID for the BackupPath instead of an actual file path.

If I take this a step further and try to trace it via an Extended Events session, I can further confirm that the backups are from an outside entity and that it appears to be something using the Windows shadow copy service – or something like that. Let's look closer at that.

League of Shadows

Here we see the properties of the Volume Shadow Copy Service for all volumes (including those involved with SQL Server). It is intriguing to see that despite the service being disabled for every volume, something still appears to be consuming storage toward the VSS quota. This supports the idea that the backups are being performed via VSS somehow, but not by the actual Windows service, because that service is disabled for each volume.

Let’s investigate further. Time to break out some command line queries.
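
For starters, a dump of every scheduled task with its details (the /v switch includes last and next run times):

schtasks /query /fo LIST /v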

Running that command, I can compare all scheduled tasks and their run times against the actual backups that were occurring and determine if any might be related to the backups. As it turned out, none of the tasks in Task Scheduler ran at any time remotely close to these backups.

Next, let's look at the VSS configuration more closely. I can query the providers, shadows, and shadowstorage.
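
The relevant vssadmin queries (run from an elevated command prompt):

vssadmin list providers
vssadmin list shadows
vssadmin list shadowstorage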

Looking at the providers first to determine if there is something there, we might be introduced to something like this output.

Maybe there is an aha for you at this moment. Two services are certainly worth investigating: “CommVault VSS Hardware Provider Service” and “Galaxy VSS Provider Service” (both are related to CommVault). In this particular case, we exhausted those avenues and were able to determine that neither was related to this issue.

Let’s try the shadows.

A couple of things of note here. The timestamps from these VSS shadows are consistent with the FULL database backup timestamps in SQL Server. Next, we see that these are designed to serve as an "ApplicationRollback" and differential backup. So, something is taking an application-consistent differential backup every so often, for some reason yet unknown. Despite being a VSS differential backup, it is not a differential backup inside SQL Server – it is a FULL backup. This configuration can be detrimental to your recovery plan if you are unaware of it (some ramifications here).

So, what is causing this? Well, one more clue actually helped us figure it out. The issues started on August 20th. On that day, a new antivirus software package was rolled out to the servers. This software, unbeknownst to the sysadmins, was taking VSS snaps to try and protect against ransomware – supposedly. I have to say, that is a pretty dumb thing to do to a database server. A legitimate backup / recovery plan would suffice in the case of a hijacked OS, without the added burden of the snapshots from the AV software or the added stress of having something screw with the backup/recovery plan.

This is a prime example of using a wrecking ball to hammer in a screw. All in the guise of security and protection. Please stop the insanity.

Making matters more difficult is the fact that the GUIDs provided by the VSS storage and what we see listed as the virtual device in SQL Server do not correlate directly to each other. You have to base your assertions on timestamps.

If your sysadmins insist on having a tool (whether it be Antivirus or a backup solution) that causes VSS snaps to occur which could foul up the database recovery plan, then I recommend you insist on having access to that tool.

Wrapping it Up

I am not a fan of tools that interfere with the database recovery plans that may be requisite to properly meet RTO and RPO. At best, these phantom backups cause undue headache in troubleshooting. At worst, they make it impossible to recover in the event of a database-related disaster. Sadly, the database is regarded as the red-headed stepchild, and these tools ignore best practices for the database world. Forcing a database server to fit the same mold as a file or print server is flat out absurd, and these tools make no effort to educate the sysadmins about the piss-poor practices they are forcing down the throat of your database server.

In this article I showed various tools in the command line as well as within SQL Server to troubleshoot these problematic backups. One of the tools was an XEvent Session. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

Event Files on Linux

TSQL Tuesday

The second Tuesday of the month is upon us once again. That means it is time for another group blog party called TSQLTuesday. This party, started by Adam Machanic, has been going long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a roundup of all TSQLTuesdays over here.

This month, just about every SQL Server Data professional is being asked to get out of their comfort zone and talk about Linux (there are some out there that are very comfortable with Linux, but the vast majority are not 😉 ).

This is a topic that is near and dear to the heart of our organizer, Tracy Boggiano (b | t). In Tracy's invite, she says "While I know it takes a while to adopt new technologies, I was wondering what it would take for people to adopt SQL on Linux. Alternatively, I'm offering up for you to blog about what everyone should know when working with SQL on Linux, or anything else related to SQL running on Linux."

That pretty much leaves the door wide open, right?

Event Files

For the most part, things work the way you might expect them to work in Windows – except it is on Linux. Sure, some things are different, but SQL Server itself is largely the same. That similarity, for the most part, boils all the way down into Extended Events (XEvents) as well. There is one significant divergence, however, and that is when it comes to specifying your event_file target for a new session. Let's dive into that just a bit.

Let’s take a common setup for an XEvent session.
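
A sketch of such a common, Windows-style setup (the session name and event are assumptions; note the Windows-style path that shows up in the errors below):

-- Windows-style setup: create the folder, then point the event_file there.
EXECUTE master.dbo.xp_create_subdir 'C:\Database\XE\';
GO

CREATE EVENT SESSION [PREEMPTIVE_OS_PIPEOPS] ON SERVER
    ADD EVENT sqlos.wait_info
    ADD TARGET package0.event_file
        (SET filename = N'C:\Database\XE\PREEMPTIVE_OS_PIPEOPS.xel')
    WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, STARTUP_STATE = ON);
GO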

This will fail before the query really even gets out of the gate. Why? The proc xp_create_subdir cannot create the directory because it requires elevated permissions. The fix for that is easy enough – grant permissions to write to the Database directory after creating it while in sudo mode. I will get to that in just a bit. Let’s see what the errors would look like for now.

Msg 22048, Level 16, State 1, Line 15
xp_create_subdir() returned error 5, ‘Access is denied.’
Msg 25602, Level 17, State 23, Line 36
The target, “5B2DA06D-898A-43C8-9309-39BBBE93EBBD.package0.event_file”, encountered a configuration error during initialization. Object cannot be added to the event session. The operating system returned error 5: ‘Access is denied.
‘ while creating the file ‘C:\Database\XE\PREEMPTIVE_OS_PIPEOPS_0_132072025269680000.xel’.

Let’s resolve the folder issue. I will create the Database folder (from a console session on the server), and then I will take ownership of that folder.
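
From the console, that might look like this (a sketch, assuming the default mapping of C:\ to /var/opt/mssql):

# Create the folder and hand ownership to the mssql user.
sudo mkdir -p /var/opt/mssql/Database/XE
sudo chown mssql /var/opt/mssql/Database/XE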

From there, it is also advisable to grant permissions on this folder to the SQL group via the chgrp command. Once done, re-running the entire session creation will magically work – including that Windows-based create subdir proc.

Alternative #2

Maybe we just want to do things via the default method. If so, we can do this and it just works.
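
A sketch – giving the target a bare file name drops it in the default log directory (/var/opt/mssql/log on Linux):

CREATE EVENT SESSION [PipeOpsDefault] ON SERVER   -- hypothetical session name
    ADD EVENT sqlos.wait_info
    ADD TARGET package0.event_file (SET filename = N'PipeOpsDefault.xel');
GO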

Alternative #3

And of course, there is always this option. Unfortunately, this means keeping multiple session scripts in source control in order to maintain the differences between Windows and Linux as illustrated in these two examples.
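
The Linux-flavored version of the same session (a sketch, using the folder created earlier):

CREATE EVENT SESSION [PipeOpsLinux] ON SERVER     -- hypothetical session name
    ADD EVENT sqlos.wait_info
    ADD TARGET package0.event_file
        (SET filename = N'/var/opt/mssql/Database/XE/PipeOpsLinux.xel');
GO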

With all three sessions now on the server, I can query my target paths to confirm the correct paths have been used.
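
A sketch of that confirmation query against the session metadata:

SELECT es.name   AS session_name,
       esf.value AS event_file_path
FROM sys.server_event_sessions es
    INNER JOIN sys.server_event_session_fields esf
        ON es.event_session_id = esf.event_session_id
WHERE esf.name = 'filename';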

This produces the following results for me.

Perfect, everything looks to be working and properly configured. Well, except for that session that is using the default directory – gag.

Wrapping it Up

Extended Events is a powerful tool with plenty of ease of use and flexibility. This flexibility allows the DBA to capably monitor the server for any issue, be it small or large. This article demonstrated how to create an event session on SQL Server running on Linux. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

Database File Changes

Data professionals around the globe frequently find themselves occupied with figuring out why and when a file (data or log) for a database has changed in size – whether that change was a growth or a shrink, and whether the change was expected or not.

I have written previously about the need to monitor these changes and how to do it more efficiently (there is even a fail-safe that occasionally works). As SQL Server improves, so does our ability to capture these types of events.

Given that SQL Server has undergone many enhancements, let’s take a look at the enhancements related to capturing the database file size changes.

Database File Related Events

If you read the articles mentioned previously, you will have encountered a previous article that includes an Extended Events session that can be used for tracking file changes. That session is really good enough in most cases. One major change that I would suggest off the top concerns the asynchronous file target. While that target will work on all versions of SQL Server since 2008, the name of the target was changed in SQL Server 2012 to event_file. No big deal there. The XEM file is also no longer necessary, so that piece can just be wiped out.

That said, what else has changed? Let’s cover some deprecated events that may get you frustrated if you encounter them. The following events should be avoided because they will do nothing (a couple of them have been covered in this previous article).

  • sqlserver.databases_log_growth – Databases log growth
  • sqlserver.databases_log_file_size_changed – Databases log file size changed
  • sqlserver.databases_data_file_size_changed – Databases data file size change
  • sqlserver.auto_shrink_log – Auto shrink log ended.

Each of the preceding events has been replaced by a single event called database_file_size_change.

And then there is this one that I can't seem to get to generate any events, but it might possibly still be valid. In short, don't be surprised one way or the other if it does something.

  • sqlserver.auto_shrink_log_caught_exception – Auto shrink log caught an exception.

Great, we have some events we can avoid. Now let’s look at some events we should consider.

  • sqlserver.database_file_size_change – Occurs when any of the data or log files for a database changes size. Use this event to monitor changes to the size of database files.
  • sqlserver.databases_shrink_data_movement – Databases shrink data movement
  • sqlserver.databases_log_file_used_size_changed – Databases log file used size changed (this one gets noisy – use judiciously).
  • sqlserver.databases_log_shrink – Databases log shrink

Now that we have some events that are viable as well as some events to avoid, let’s put a session together.
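
A sketch of such a session built from the viable events above (the session name and target file are assumptions; the noisy log file used size event is left out):

CREATE EVENT SESSION [FileSizeChanges] ON SERVER
    ADD EVENT sqlserver.database_file_size_change
        (ACTION (sqlserver.database_name, sqlserver.sql_text)),
    ADD EVENT sqlserver.databases_shrink_data_movement,
    ADD EVENT sqlserver.databases_log_shrink
    ADD TARGET package0.event_file (SET filename = N'FileSizeChanges.xel')
    WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [FileSizeChanges] ON SERVER STATE = START;
GO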

All we need to do at this juncture is test the session.
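
A quick, hypothetical test – force a growth and then a shrink in a scratch database:

USE [DBA];   -- hypothetical scratch database with autogrow enabled
GO
-- Load enough rows to force a data file growth.
SELECT TOP (100000) ac1.*
    INTO dbo.GrowthTest
    FROM sys.all_columns ac1
        CROSS JOIN sys.all_columns ac2;
GO
-- Force a shrink (shows up with a negative size_change_kb).
DBCC SHRINKFILE (1, 1);
GO
DROP TABLE dbo.GrowthTest;
GO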

Let’s drill into the database_file_size_change event and take a closer look.

There are a few things going on with this event. As previously mentioned, this event captures multiple different types of size-related events. If you now look at the sections that I have circled in red, you will note that there is a flag that tracks whether the event was an automatic size change. It doesn't matter if it was a growth or a shrink; both can be automatic. The way we can tell an automatic size change was a shrink is the negative value in the size_change_kb field.

Next up, we have the green highlighted section. I have three distinct timestamps circled and separated by a dotted red line. Note the time difference between each of the groups. Each group of events is separated by 30 minutes. As it turns out, if you have autoshrink enabled on your database, the timer is a 30-minute interval. Yes! Autoshrink just so happened to be enabled on this database – for testing purposes.

Wrapping it Up

Extended Events is a powerful tool with plenty of ease of use and flexibility. This flexibility allows the DBA to capably monitor the server for any issue, be it small or large. This article demonstrated how to use Extended Events to monitor for file size changes, and the same principles can be applied to any of the events you may need to investigate. If you are in need of a little tune-up for your XE skills, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

In addition, through the power of XE, we were able to capture a quick glimpse into some internals related to the database engine. In this case, we were able to see autoshrink in action and discover the frequency that it runs on when enabled.
