Connect To SQL Server – Back to Basics

One of the first critical tasks any data professional must learn is how to connect to SQL Server. Without a connection to SQL Server, there is very little one can do to be productive in the data professional world (for SQL Server).

Yet, despite how common the requirement is and how easy the task is, I find there is a frequent need to retrain professionals on how to connect to SQL Server. The connection could be attempted from any of the current client tools, but for some reason it still perplexes many.

Let’s look at just how easy it is (should be) to connect to SQL Server (using both SQL Auth and Windows Auth).

Simplicity

First let’s take a look at the dialog box we would see when we try to connect to a server in SQL Server Management Studio (SSMS).

Circled in red we can see the two types of authentication I would like to focus on: “Windows Authentication” and “SQL Server Authentication”. These are both available from the dropdown box called Authentication. The default value here is “Windows Authentication”.

If I choose the default value, “Windows Authentication”, I only need to click the Connect button at the bottom of that same window. Upon closer inspection, it becomes apparent that the “User name:” and “Password:” fields are disabled and cannot be edited when this default value is selected. This is illustrated in the next image.

Notice that the fields circled in red are greyed out along with their corresponding text boxes. This is normal and is a GOOD thing. Simply click the connect button circled in green and then you will be connected if you have permissions to the specific server in the connection dialog.

Complicating things a touch is “SQL Server Authentication”. This is where many people get confused. I see many people trying to enter Windows credentials with this authentication type. Sure, we are authenticating to SQL Server, but the login used in this method is not a Windows account. The account used for this type of authentication is strictly the type that is created explicitly inside of SQL Server: a login where both the principal name and the password are managed by SQL Server.
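For context, such a login is created on the server with T-SQL along these lines (the login name, password, and database below are placeholders of my own choosing):

USE [master];
GO
-- A SQL Server authenticated login: both the name and the password are stored and managed by SQL Server.
CREATE LOGIN [SQLAuthDemo]
    WITH PASSWORD = N'SuperSecretP@ssw0rd!',
         CHECK_POLICY = ON;
GO

-- Map the login to a database user so it can actually reach a database.
USE [SomeUserDatabase];
GO
CREATE USER [SQLAuthDemo] FOR LOGIN [SQLAuthDemo];
GO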

Let’s take a look at an example here.

Notice here that the text boxes are no longer greyed out and I can type a login and password into the respective boxes. Once done, I then click the “Connect” button and I will be connected (again, presuming I have been granted access and I typed the user name and password correctly).

What if I attempt to type a Windows username and password into these boxes?

If I click connect on the preceding image, I will see the following error.

This is by design and to be expected. The authentication methods are different. We should never be able to authenticate to SQL Server when selecting “SQL Server Authentication” and then providing Windows credentials. Windows credentials must be used with the “Windows Authentication” option. If you must run SSMS as a different Windows user, then I recommend reading this article.
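Once connected (by either method), a quick way to confirm which authentication scheme the current session actually used is a query such as the following (a simple sketch of my own against a couple of DMVs):

SELECT  s.session_id,
        s.login_name,
        c.auth_scheme   -- SQL = SQL Server Authentication; NTLM or KERBEROS = Windows Authentication
FROM    sys.dm_exec_sessions AS s
        INNER JOIN sys.dm_exec_connections AS c
            ON c.session_id = s.session_id
WHERE   s.session_id = @@SPID;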

The Wrap

Sometimes what may be ridiculously easy for some of us may be mind-blowing to others. Sometimes we may use what we think are common terms only to see eyes start to glaze over and roll to the backs of people’s heads. This just so happens to be one of those cases where launching an app as a different principal may be entirely new to the intended audience. In that vein, it is worthwhile to take a step back and “document” how the task can be accomplished.

Connecting to SQL Server is ridiculously easy. Despite that ease, I still find myself helping “Senior” level development and database staff work through it.

If you feel the need to read more connection-related articles, here is an article and another on the topic.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

An Experiment with Deadlocks

Everything can be fixed with a query hint (*cough* directive), right? If a certain process is consistently causing deadlocks, a simple ROWLOCK hint can be added to prevent it, right?

Well, for whatever reason, there seems to be a myth out there that when deadlocks come a-knocking, you can just throw this little directive at them and all will be well. Today, we get to test that myth and show what will actually continue to happen.

First, let’s look at what ROWLOCK actually means:

Specifies that row locks are taken when page or table locks are ordinarily taken.

This seems like a fabulous idea if the deadlocks are occurring against a table involving a massive update. Let’s take a look at a small update scenario involving just a handful of records. For the setup, we will use the same setup used in a previous article by Wayne.
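That setup is not reproduced here, but the general shape of it is a pair of small tables updated by two sessions in opposite order. The following is a rough sketch of my own (the table and column names are stand-ins, not Wayne’s exact code); the tables are left as heaps so the row locks show up as RID locks:

-- Run once: two small heap tables with five rows each.
CREATE TABLE dbo.Table1 (id INT NOT NULL, col1 INT NULL, col2 INT NULL);
CREATE TABLE dbo.Table2 (id INT NOT NULL, col1 INT NULL, col2 INT NULL);

INSERT INTO dbo.Table1 (id, col1, col2) VALUES (1,1,1),(2,2,2),(3,3,3),(4,4,4),(5,5,5);
INSERT INTO dbo.Table2 (id, col1, col2) VALUES (1,1,1),(2,2,2),(3,3,3),(4,4,4),(5,5,5);

-- Session A: update Table1 first, pause, then update Table2.
BEGIN TRANSACTION;
UPDATE dbo.Table1 SET col1 = 10, col2 = 10 WHERE id = 3;
-- (run Session B's first update now, then continue here)
UPDATE dbo.Table2 SET col1 = 10, col2 = 10 WHERE id = 3;
COMMIT TRANSACTION;

-- Session B: update Table2 first, then Table1 (the opposite order).
BEGIN TRANSACTION;
UPDATE dbo.Table2 SET col1 = 20, col2 = 20 WHERE id = 3;
UPDATE dbo.Table1 SET col1 = 20, col2 = 20 WHERE id = 3;
COMMIT TRANSACTION;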

Looking at the code, we can see there are only five rows in each of the tables. In addition, an update will be performed to both col1 and col2 in each table for one specific row. So we are keeping this to a singleton type of update, and we are able to force a deadlock with this setup. Not only does a deadlock occur consistently, we also see the following in the sys.dm_tran_locks DMV as well as in the deadlock graphs.
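To watch those locks for yourself, a quick query along these lines (again, a simple sketch of my own) against sys.dm_tran_locks will show them:

SELECT  tl.request_session_id,
        tl.resource_type,
        tl.resource_description,
        tl.request_mode,
        tl.request_status
FROM    sys.dm_tran_locks AS tl
WHERE   tl.request_session_id IN (51, 54)               -- the two repro sessions in my environment
        AND tl.resource_type IN ('OBJECT', 'PAGE', 'RID', 'KEY')
ORDER BY tl.request_session_id, tl.resource_type;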

In my environment, I used sessions 51 and 54 consistently for the deadlock repro. In each of the tests, each spid obtained page locks as well as RID locks (row locks) that were waiting on each other in order to perform an update. This is what we saw when I ran the setup without the ROWLOCK directive. What if I used the ROWLOCK directive? (Are you guessing already that there would be no change, because the deadlock occurs on the lock held for the update that is waiting on the other update to complete?) Let’s take a look at that too!

The only change to this setup is that the ROWLOCK directive has been added to the update statements. Examining the sys.dm_tran_locks DMV reveals the same locks being held as was seen without the directive. This shouldn’t be too big of a surprise since the updates are against a single row.
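In terms of the sketch shown earlier, the change amounts to nothing more than this:

UPDATE dbo.Table1 WITH (ROWLOCK)
SET    col1 = 10, col2 = 10
WHERE  id = 3;

UPDATE dbo.Table2 WITH (ROWLOCK)
SET    col1 = 10, col2 = 10
WHERE  id = 3;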

In addition to the same locks being held, we continue to experience the same deadlock problem. Using an Extended Events session to trap deadlock information (similar to the article previously discussed), we can pull out some pretty good info. Let’s examine some of the deadlock data trapped by an XE session.

The results from this query will show us the deadlock graph and the event data, as well as several other pieces of data already parsed from the session data for you. Looking at the session data, one can see that the sql_text from each of the queries demonstrates both the ROWLOCK and the directive-free versions of the query. In this query you can also see that I did a little black magic to match up the two event types from the event session (lock_deadlock and xml_deadlock_report). I was then able to join the two together to produce one row per deadlock event and to see the sql_text with the deadlock graph on the same row. Otherwise, the sql_text is not produced with the xml_deadlock_report event. I leave the rest of the query to the reader to discover and question.
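If you need a starting point for the event session itself, a minimal definition that captures both events looks something like the following (my own sketch, with a hypothetical session and file name; it is not necessarily the exact session used for the captures above):

CREATE EVENT SESSION [DeadlockDemo] ON SERVER
ADD EVENT sqlserver.lock_deadlock
(
    ACTION (sqlserver.session_id, sqlserver.sql_text)
),
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file
(
    SET filename = N'DeadlockDemo'
)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [DeadlockDemo] ON SERVER STATE = START;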

From the EventDeadlockGraph column, we could click the cell and take a closer look at the XML generated for the deadlock event. Further, if I choose to save the XML as an XDL file and then reopen it in SSMS, I can see the graphical deadlock report shown in the following image.

We see that row locks are still in effect for the update coming from both sides. This further supports that the directive really is just a waste of time in trying to combat this type of deadlock. This is one of those cases where the best option would be to optimize the code and work things out without trying to take a shortcut.

Wrapping Up

Look to optimize the code instead of trying to take a shortcut. In addition, take a look at the deadlocks, the locks held, and the code to get a better understanding of what is truly happening.

This article demonstrates briefly the power of Extended Events while diving into deadlocks. For more on using Extended Events, start reading here! This article may also be of interest.

The Curiously Large CASE Statement

A powerful little expression in SQL Server is CASE, commonly referred to as the CASE statement. The CASE statement has the ability to “transform” data per business rules to help better understand the data in a given query. In its simplest form, the CASE statement evaluates a list of conditions (in order) and returns the result expression of the first matching condition.
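To ground that, here is a trivial example of my own (the table and columns come from the AdventureWorks sample database):

SELECT  p.Name,
        p.ListPrice,
        CASE
            WHEN p.ListPrice = 0   THEN 'Not for sale'
            WHEN p.ListPrice < 50  THEN 'Budget'
            WHEN p.ListPrice < 500 THEN 'Mid-range'
            ELSE 'Premium'
        END AS PriceCategory
FROM    Production.Product AS p;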

Most uses of the CASE statement are benign enough that one wouldn’t even bat an eye at them. Every now and again, along comes a nifty little query that contains over 300 conditions in the CASE statement. Something that extreme certainly deserves a second (third or even fourth) look. In this article, I am going to explore one such CASE statement in an effort to squeeze more efficiency out of it.

300!

300+ conditions may be a bit hard to imagine. I know I had to do a double take on the list more than once. Immediately upon seeing such an awful piece of code (not pulling punches there), I really wanted to dump the conditions/evaluations into a table and join to it. Eventually, I did put it into a table, but not before catching some metrics. Let’s take a look at what this thing looked like from the execution plan properties perspective.

In the preceding image, note the query map in the scrollbar. I have the CASE statement circled in green. Yes – it really is that large. Next, I have “cached plan size”, “estimated number of rows”, and “estimated subtree cost” circled in red. Each of these shows something that is a little alarming to me. First, the plan size is 48MB! Next, the cost of the query is a little elevated at 51 and change. Last is what seems to be a high value for the estimated number of rows to be returned (only ~6k rows are actually returned).

On the plus side, the query only takes 381ms, or does it? The query actually takes 3 seconds to complete despite the timing in the properties window. Sometimes, we have to take some of these values in the properties window with a grain of salt.

What if I try to run this query on SQL Server 2017 instead of SQL Server 2014? Let’s take a look at what that might look like.

First, we have a few properties that seem to indicate we have moved in a slightly better direction with this particular query. The cached plan size has dropped all the way down to just 37MB. The estimated cost dropped to 36 and change, and the elapsed time dropped to 353ms. Nothing terribly earth-shattering there, but just by changing to SQL Server 2017 we see a bit of an improvement in this case.

That said, take a look at the compile memory and compile time. The memory is 30,544 KB and the time is 1,324ms. Aha – now we see why this query takes 3 seconds on SQL Server 2014 and roughly 2 seconds on SQL Server 2017: high compile times. The memory used for the compile is also too high for my liking, so let’s take a look at why that might be. It’s time to look at the plan XML and the converted tree for the query (because it is fun).
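As a side note, if you want to see a converted tree for yourself, it can be dumped with a couple of undocumented trace flags. This is strictly a curiosity for a test system, and the throwaway query below is my own (it has nothing to do with the problem query):

SELECT  o.name,
        CASE WHEN o.type = 'U' THEN 'User table' ELSE 'Other' END AS ObjectKind
FROM    sys.objects AS o
OPTION (RECOMPILE, QUERYTRACEON 3604, QUERYTRACEON 8605);
-- TF 3604 routes the trace output to the Messages tab; TF 8605 dumps the converted tree.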

This is where we get a little geeky with the CASE statement. In the execution plan XML (and the converted tree we will see in a bit), every evaluation criterion for the CASE statement must carry some sort of instruction on how and what should be done. In the case of the XML, we have multiple scalar operators to denote the CASE statement and then each of its evaluations.

Once we enter the CASE statement, the first thing we see is that there is an “OR” operation (circled in red). Then for each of the values to be compared, we will see an “Equality” operation (circled in green) followed by the value to be evaluated. For just the three values I have in the image, notice how that immediately bloats the execution plan XML for this query.

In the preceding image, we see the converted tree for this query. We see the same pattern evolve here. There is a scalar operator for each value to be evaluated. In this image, I just have the equality comparisons. However, there is also an OR operator that is defined further up in this converted tree. Could this possibly be the reason for the bloated compile time and compile memory?

Let’s evolve that query just a bit and use a table and a join for that particular case statement to see what happens.
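The production query itself can’t be shared here, but the shape of the change is roughly the following (every object and column name below is a hypothetical stand-in for the real ones):

-- Instead of: CASE WHEN t.SourceCode = 'A001' THEN 'Alpha' WHEN t.SourceCode = 'A002' THEN 'Beta' ... (300+ more)
-- the evaluations are loaded into a lookup table once and then joined to.
CREATE TABLE dbo.CodeMap
(
    SourceCode  VARCHAR(10) NOT NULL PRIMARY KEY,
    MappedValue VARCHAR(50) NOT NULL
);

INSERT INTO dbo.CodeMap (SourceCode, MappedValue)
VALUES ('A001', 'Alpha'), ('A002', 'Beta'), ('A003', 'Gamma');  -- ...and so on for the 300+ pairs

SELECT  t.SomeKey,
        cm.MappedValue
FROM    dbo.SomeBigTable AS t
        LEFT JOIN dbo.CodeMap AS cm
            ON cm.SourceCode = t.SourceCode;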

Immediately, we can see that the cached plan size improves dramatically! The cached plan size is down to 1.1MB. The compile memory is down to ~4MB and the compile time is down to 174ms. These are all significant improvements, but now we see an interesting problem with the memory grant: this query is requesting over 400MB for the memory grant. That should be sounding sirens everywhere. Also of note, the estimated number of rows has dropped to ~28k (a significant improvement in the estimate). This query now completes in under 1 second, compared to the 3 seconds otherwise, but that memory grant is worrying me (and hopefully you too).

For giggles, I went ahead and added a columnstore index to this table (for good reasons beyond the scope of this article). Let’s see what that did.
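A columnstore index on the lookup table from my sketch would look something like this (which table and columns the real index covers is a detail of the actual system):

CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_CodeMap
    ON dbo.CodeMap (SourceCode, MappedValue);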

Notice how the memory grant has dropped to just over 28MB? That is an acceptable improvement for this query. Also note that the query execution time improved by a decent amount. The timing for this query is now similar with both a warm cache and a cold cache.

Remember that note about the XML and converted tree bloat? With the CASE statement, the query plan was in excess of 37MB, and the revised version is sitting at about 1.1MB. Looking at the quantity of text in each, I see 5,819 lines of XML in the version with the CASE statement and only 3,755 lines of XML in the table version of the query (and more room for optimization). The converted tree is more dramatic: the CASE statement version of the converted tree requires 3,157 lines of instruction, while the version that uses a table instead only has 1,353 lines of instruction.

What does any of that matter? It helps to illustrate that this CASE statement, filled with 300+ conditions, is causing plan bloat and wasting space along with execution resources.

Wrap

If you ever find yourself writing a query that requires an excessive amount of conditions to be evaluated in a CASE statement, you should reconsider the query design. Too many conditions can lead to plan bloat and excessive memory grants. Keep an eye out for stuff like that!

Thanks for reading! This has been a pseudo deep dive into a really ugly CASE statement. If you would like to read other articles that are less of a deep dive, you can check out the Back to Basics articles here, or the Extended Events articles here.

Disk Space and SQL Server

Disk Space in SQL Server

One of the frequently required job functions of the database administrator is to track disk space consumption. Whether this requirement comes from management or from a learning opportunity after a production outage, the need exists.

As a hard-working DBA, you want to make sure you hit all of the right notes to make management sing your praises. Knowing just when the databases may fill the drives, and preventing a production outage, happens to be one of those sharp notes that could result in a raise and in management singing hallelujah. The problem is, how do you do it from within SQL Server? You are just a DBA after all, and the disk system is not your domain, right?

Trying to figure it out, you come across a pretty cool function within SQL Server: sys.dm_os_volume_stats. Bonus! This is an excellent discovery, right? Let’s see just how it might work, starting with a sample query.
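A query along these lines is enough to get going (this is a sketch of the idea; the exact script behind my screenshots may differ slightly):

SELECT DISTINCT
        vs.volume_mount_point AS Drive,
        vs.logical_volume_name AS VolumeName,
        vs.total_bytes / 1024 / 1024 AS CapacityMB,
        vs.available_bytes / 1024 / 1024 AS FreeSpaceMB
FROM    sys.master_files AS mf
        CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;
-- One row per volume hosting a database file; DISTINCT collapses the per-file rows (mostly, as we are about to see).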

If I run that on my local system, I might end up with something that looks like the following:

Looking at the image, you may be wondering why I have highlighted a couple of things. You may also be wondering why I used the word “might” in the previous paragraph. The reasoning will become more evident as we progress. For now, you resolve to continue testing the script, so you execute it again and end up with something that may look like the following (for the same server):

Whoa! What just happened there? Why are there two listings for the C: drive? Why does each register a different value for the FreeSpace column? In addition, without any additional usage on the C: drive (as verified through other tools), the FreeSpace value changes between executions as well as within the same execution. This is problematic, so you continue testing:

And yet again!

This can’t be correct, can it? Just for giggles, let’s modify the script a bit to see if there are any additional clues that might help shed some light on this.
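A sketch of that kind of change (my own; the exact script behind the screenshots may differ) simply exposes the database and file behind each row:

SELECT  DB_NAME(mf.database_id) AS DatabaseName,
        mf.name AS FileName,
        vs.volume_mount_point AS Drive,
        vs.available_bytes / 1024 / 1024 AS FreeSpaceMB
FROM    sys.master_files AS mf
        CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs
ORDER BY vs.volume_mount_point, FreeSpaceMB;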

This script yields the following potential results:

Look at the different highlighted areas! There are three different values of FreeSpace for the C: drive in this particular execution. The case of the drive letter plays no role in whether the value is recorded differently or not. This seems to be more of a bug within the dynamic management function. From execution to execution, using this particular method, one could end up with duplicate entries that carry distinct values. The sort order of the results can also vary between executions (though we could fix that).

All of these tests were run on my local machine, and I really do only have one C: drive, so I should never receive multiple entries back for any drive. If you are using this particular DMF to track space usage, it could be somewhat problematic when the duplicate drive data pops up. How do we get around it, you ask? Here is another version of the query, one that has not yet produced this duplication for me.
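In sketch form, the approach looks like this (the original script lives in the screenshot, so the column names and formatting here are my own):

WITH VolumeStats AS
(
    SELECT  vs.volume_mount_point AS Drive,
            vs.logical_volume_name AS VolumeName,
            vs.total_bytes / 1024 / 1024 AS CapacityMB,
            vs.available_bytes / 1024 / 1024 AS FreeSpaceMB
    FROM    sys.master_files AS mf
            CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs
),
Deduped AS
(
    SELECT  Drive,
            VolumeName,
            CapacityMB,
            FreeSpaceMB,
            ROW_NUMBER() OVER (PARTITION BY Drive ORDER BY FreeSpaceMB DESC) AS RowNum
    FROM    VolumeStats
)
SELECT  Drive,
        VolumeName,
        CapacityMB,
        FreeSpaceMB
FROM    Deduped
WHERE   RowNum = 1
ORDER BY Drive;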

This version of the script is not terribly more complex, but it has proven to be more reliable. You can see I used some CTEs to provide a little trickery and ensure that I limit the results: I restrict the drive selection by using the ROW_NUMBER function against the drive letter. What if it is a mount point or a non-standard drive letter? I have not tested that, so let me know how it goes.

For an alternative, reliable method of finding your disk space consumption, I would recommend something different because it is tried and tested: using a WMI call to fetch the data. Samples are provided as follows:

Easy peasy, right? Have at it and try tracking your disk space.

Thanks for reading! This has been another article in the Back to Basics series. You can read many more here. For other interesting articles about database space – check here or here.
