Making Databases Personal

T-SQL Tuesday

The second Tuesday of the month comes to us a little early this time around. That means it is time again for another group blog party called TSQLTuesday. This party, started by Adam Machanic, has now been going long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a nice roundup of all TSQLTuesdays over here.

The Why?

This month, Todd Kleinhans (b | t) invites us to share something personal about ourselves. Well sort of. The invitation to each of us is to share a little about databases that we use in our personal lives or that we have created in our personal lives to help us in some fashion or another.

I think this is a really cool idea. Data and databases are deeply integrated into my life, so much so that it would be difficult to say where my personal life ends and my professional life begins when talking about data and databases.

Take a moment and reflect on the importance of data and databases within your life. At a minimum, you have chosen data as your profession, and hopefully it is a career for you and not just a job.

If you have taken it down the career path, then you probably study your craft in your personal time to try to make yourself better at your trade. Oftentimes, if you are like me, these studies lead to trial and error, experiments, and possibly rabbit holes. If data is just a J-O-B, it is quite likely none of this applies to you. Those are the types that fall into a category Steve Jones wrote about recently, and I would urge you to change that sooner rather than later. (Side note: I read that article as if it applied to me directly, because I really do need to learn and practice more about my craft too!)

Personal Databases

Over the years, I have had sooo many personal databases that I have created for one reason or another. Some still need to be finished. Here are some examples of what I use a database for in my personal life.

Finances: A simple little database to track credits and debits (sort of like a checkbook, yes). More importantly, it tracked bills and due dates with dollar amounts for each bill. This is useful both as a reminder of what is due and as a way to spot noteworthy trends in spending or increased costs. Now, many banking (and credit card) sites offer the same types of services – so long as the spending is done with their card.

Exercise: Back in the day, I tracked all of my activities in a database. I would track the duration, activity type, effort level, and various biometric measures after each activity. Once again, this is something that is now tracked by so many different apps and databases that it seems counterproductive to maintain my own system instead of something that works more easily with a wearable device.

I have also built databases to inventory books and movies, to track personal studies and link thoughts about those studies from day to day, and even to track routine chores (calendar reminders don’t offer much for tracking details). The short of it is, a database can be created to manage just about anything in everyday personal life.

 

Wrapping it Up

If you can use a database to create automation routines that help sustain your work life, you can certainly do the same to sustain your personal life. Databases are an essential piece of both my personal and professional life. Not only do I use them to create automation (such as this or this), but I also stand databases up to test everything from corruption to recovery to performance tuning, all to improve my personal abilities and craft in the realm of data.

 

Database In Recovery Redux

Published on: April 8, 2019

One of the more gut-wrenching experiences for a DBA is to have a database get stuck in recovery. This typically happens when the server was rebooted unexpectedly, and it often coincides with a rather large transaction.

The reboot and sudden stop of the transaction require it to be replayed or undone, which could take hours or even days in worst-case scenarios. Sitting there and waiting for it to complete just makes you sweat.

This happened to a client recently, so I dusted off an old script (which you can find if you click this link) and started investigating and monitoring the server. While waiting and investigating, I decided to take on the challenge (quoted hereafter) that I issued the first time I posted this query (here).

Unfortunately, this query does not demonstrate the time remaining for the rollback nor the percent complete without needing to query the error log.  Those would be awesome additions if you know how to do it (and let me know), other than via the error log.

I did not remove the dependency on the error log; rather, I leveraged that dependency, pulling more information from the DMVs and linking it to the error log messages to give a bigger, more complete picture of the recovery process from a simple query.
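For reference, here is a minimal sketch of that general idea (an assumption of the approach, not the enhanced script itself): read the recovery progress messages from the error log with xp_readerrorlog and look at the DB STARTUP request in sys.dm_exec_requests alongside them.

DECLARE @ErrorLog TABLE (
    LogDate     DATETIME,
    ProcessInfo NVARCHAR(64),
    LogText     NVARCHAR(MAX)
);

-- Parameters: current log (0), SQL error log (1), first search string, sort descending.
INSERT INTO @ErrorLog (LogDate, ProcessInfo, LogText)
EXEC sys.xp_readerrorlog 0, 1, N'Recovery of database', NULL, NULL, NULL, N'DESC';

-- These messages include the percent complete and approximate seconds remaining.
SELECT TOP (10) LogDate, LogText
FROM @ErrorLog
ORDER BY LogDate DESC;

-- The database running recovery shows up as a DB STARTUP request.
SELECT r.session_id,
       DB_NAME(r.database_id) AS database_name,
       r.command,
       r.status,
       r.wait_type,
       r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.command = N'DB STARTUP';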

This script is now enhanced from the 2014 version and works through SQL Server 2017.

Wrap

SQL Server recovery is a safeguard to protect the data in the event of an unexpected failure. The recovery process is necessary and has several phases to roll through in order to bring the database back online. It requires patience, and this script can help make that wait a little more bearable.

Indexed Views – Performance Panacea or Plight

Indexed View(s)

An indexed view is a view where the result set of the query (the view definition) is materialized, in lieu of the virtual result set of a standard (non-indexed) view. Many times, an indexed view is created to help improve performance. Far too often, though, it is created without consideration for its costs. In this article, I hope to cover some of the more important costs that are frequently overlooked when considering an indexed view as a performance panacea.

The Setup

To demonstrate the overlooked costs of indexed views, I have decided to use the AdventureWorks2014 database (if you don’t have this sample database, you can get your copy here). I will create a view in that database and then I will add a few indexes to that view. Prior to sharing the entire setup, and like all good DBAs, I need to first grab a baseline. For this baseline, I am doing nothing terribly complex. I will just grab table sizes and storage information for the entire database.
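The baseline query itself is nothing fancy; here is a sketch of one way to grab those numbers (not necessarily the exact query behind the screenshots that follow), using the partition stats to get reserved space and row counts per object.

-- Reserved space and row counts per user object (a sketch of the baseline query).
SELECT s.name + N'.' + o.name AS ObjectName,
       o.type_desc,
       SUM(ps.reserved_page_count) * 8 / 1024.0 AS ReservedMB,
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS RowCounts
FROM sys.dm_db_partition_stats AS ps
JOIN sys.objects AS o
    ON o.object_id = ps.object_id
JOIN sys.schemas AS s
    ON s.schema_id = o.schema_id
WHERE o.is_ms_shipped = 0
GROUP BY s.name, o.name, o.type_desc
ORDER BY ReservedMB DESC;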

Here are my results for the size of the objects within the AdventureWorks2014 database:

These results show that the largest object in the database happens to be the Person.Person table at just about 30MB. Not terribly large, but the database is not terribly large as a whole. Let’s see what happens when I add a materialized view, based on the Person.Person table, to the mix. Here is the definition of that view along with the indexes that I will add to help the performance of some essential production related queries.
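The original post presented the full view and index definitions as a code block; the following is a plausible reconstruction based on the description above (all thirteen Person.Person columns, including the two XML columns), so treat the object names and the extra nonclustered index as illustrative rather than exact.

-- Illustrative reconstruction of the demo view; column list mirrors Person.Person.
CREATE VIEW dbo.vPerson
WITH SCHEMABINDING
AS
SELECT p.BusinessEntityID,
       p.PersonType,
       p.NameStyle,
       p.Title,
       p.FirstName,
       p.MiddleName,
       p.LastName,
       p.Suffix,
       p.EmailPromotion,
       p.AdditionalContactInfo,   -- XML column duplicated from the base table
       p.Demographics,            -- XML column duplicated from the base table
       p.rowguid,
       p.ModifiedDate
FROM Person.Person AS p;
GO

-- The unique clustered index is what materializes the view.
CREATE UNIQUE CLUSTERED INDEX CIX_vPerson_BusinessEntityID
    ON dbo.vPerson (BusinessEntityID);
GO

-- An additional nonclustered index to support the production-related queries.
CREATE NONCLUSTERED INDEX IX_vPerson_LastName_FirstName
    ON dbo.vPerson (LastName, FirstName);
GO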

After executing all of that code to create this new view with its indexes, I have the following size results:

The creation of this view has chewed up a bunch of storage. It has jumped right up to the number two spot on the list of biggest objects within this database. You can see the differences by comparing the highlighted rows to the previous image. The vPerson view is highlighted in red in this second image to help point it out quickly.

Surely this must be a contrived example and people don’t really do this in the real world, right? The answer to that is simply: NO! It DOES happen. I see situations like this all too often. Far too often, large text fields are added to an indexed view to make retrieval faster. I have mimicked that by adding in two XML columns from the Person.Person table. This is definitely overkill, because a simple join back to the table on BusinessEntityID would get me those two columns. All I have effectively done is duplicate data that is already stored, and I have achieved that at the low low cost of a 25% increase in storage for this small database. If you are curious, the column count for the Person.Person table and this new view is 13 columns each.

I call that 25% increase in storage a significant cost, yet it is not an uncommon occurrence for a single materialized view. I have seen multi-terabyte databases where 25% of the total data storage was consumed by a single view. If you are tight on storage, the mere presence of indexed views means any unexpected increase in data volume can cause non-linear growth in storage requirements. Take that into consideration when presented with the option to either performance tune the code or take a shortcut and create an indexed view.

Band-aid Performance

I alluded to the use of indexed views as a cog to help improve performance. This is done by taking the query that sucks and turning it into a view. The thought is that it will run faster because an index is created specifically for that data set. Generally, the performance gains from using an indexed view to camouflage a bad query are seen when the query is complex. Unfortunately, the query I provided for this article is not terribly complex. Due to the simplicity of the first example, let’s look first at the execution plan for another query that runs terribly slowly but is also not terribly complex:

There are a few table spools and index scans. The most costly operator of the bunch appears to be a merge join with a many-to-many join operation. The indexed view in this case does give me a very significant improvement on this query (look at that estimated cost and then the estimated number of rows). What if I could tune the query a little and avoid all of that storage cost? Let’s see!

Consider the base query for this new view:

Looking at the query, you may be asking yourself why so many trim functions are used in the joins. This is an extreme that can be easily fixed. None of the fields in the joins should need to be trimmed, since they are all numeric in this case. Starting there and removing all of the function use in the joins should provide a significant improvement in query speed, without needing a view at all. Testing it out yields performance similar to the first view in this article: this small change causes the query to complete in 1 second compared to about 90 seconds. That is a huge gain, and without the added cost of increased storage use.
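Since the base query appeared as a code block in the original post, the snippet below is purely illustrative of the pattern being described (trim functions wrapped around numeric join columns) and of the fix; it uses two stand-in AdventureWorks tables rather than the actual demo query.

-- The pattern to avoid: wrapping join columns in functions makes the predicates non-sargable.
SELECT soh.SalesOrderID, sod.ProductID
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
    ON LTRIM(RTRIM(soh.SalesOrderID)) = LTRIM(RTRIM(sod.SalesOrderID));

-- The fix: the columns are already numeric, so trim nothing and join directly.
SELECT soh.SalesOrderID, sod.ProductID
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
    ON soh.SalesOrderID = sod.SalesOrderID;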

Do I really see stuff like this in the field? Yes! Spend a little time fixing the root of the problem rather than looking for a quick fix, and performance can be spectacular. There are times, though, when an indexed view is an absolute requirement and there is no way around it. That is perfectly fine. If an indexed view is an absolute must, then there are a couple more considerations to take into account.

Disappearing Act

I showed previously that the indexed view requires storage space and showed that the new view created quickly claimed its place as the second largest object within the database. Now, to stage the next segment, I will show once again that the indexes are present and consuming disk space. This can be shown via the following query:
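That query was shown as a code block in the original post; the following is a sketch of the sort of query that produces those results, listing the index names and reserved space for the indexed views in the database.

-- A sketch, not necessarily the exact query from the post: index names and sizes for indexed views.
SELECT o.name AS ViewName,
       i.name AS IndexName,
       i.type_desc,
       SUM(ps.reserved_page_count) * 8 / 1024.0 AS ReservedMB,
       SUM(ps.row_count) AS RowCounts
FROM sys.indexes AS i
JOIN sys.objects AS o
    ON o.object_id = i.object_id
JOIN sys.dm_db_partition_stats AS ps
    ON ps.object_id = i.object_id
   AND ps.index_id = i.index_id
WHERE o.type = 'V'   -- views only
GROUP BY o.name, i.name, i.type_desc
ORDER BY o.name, ReservedMB DESC;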

And the results of that query should look something like the following (quick reminder that I created a new indexed view for the second demo and these results will be based on that second view):

Now that we know the index is present and consuming space, time to make a change to the view and see what happens. I am proceeding under the premise that I have determined that including the two XML columns in the view is completely unnecessary and is the cause of too much space consumption. I can reap the benefits of the view at a fraction of the cost if I ALTER the view and remove those columns. So I will proceed by issuing the following ALTER statement:
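As an illustration only (reusing the vPerson sketch from earlier rather than the second demo view), the ALTER is nothing more than the same definition minus the two XML columns:

ALTER VIEW dbo.vPerson
WITH SCHEMABINDING
AS
SELECT p.BusinessEntityID,
       p.PersonType,
       p.NameStyle,
       p.Title,
       p.FirstName,
       p.MiddleName,
       p.LastName,
       p.Suffix,
       p.EmailPromotion,
       p.rowguid,
       p.ModifiedDate
FROM Person.Person AS p;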

Take note of how long this ALTER command takes to complete – nearly instantaneous. That rocks! Right? Let’s look at the indexes to make sure they are still in working order and don’t require a defrag. To do that, I will re-run the query to check the indexes and sizes – for giggles.

This time when the query completes, I get a nice little surprise: I no longer have any indexes on that view. Changing the view definition instantly drops any indexes that were built on that view. If this is done in a production system, imagine the implications and problems that could arise.

Conclusion

In this article, I have covered three considerations that are often overlooked when looking to implement a materialized view: storage, changes to the view, and the band-aid effect. Overlooking the cost of storage can have serious repercussions on the production environment. Implementing an indexed view without attempting to tune the code first could end up just being a waste of resources. The last tidbit is that any change to an indexed view will drop the indexes on that view, an easily overlooked behavior. If you forget to recreate the indexes on the view after making changes, you could be looking at a production outage.

There are a couple more critical considerations for indexed views that are often overlooked as well. These additional considerations are blocking, deadlocking, and maintenance. These considerations will be discussed further in a future article.

For more reading enjoyment try the Back to Basics or Xevents series.

Connect To SQL Server – Back to Basics

The first critical task any data professional should ever learn how to do is how to connect to SQL Server. Without a connection to SQL Server, there is barely anything one could do to be productive in the data professional world (for SQL Server).

Yet, despite how common this requirement is and how easy the task should be, I find there is a frequent need to retrain professionals on how to connect to SQL Server. The connection could be attempted from any of the current tools, but for some reason it still perplexes many.

Let’s look at just how easy it is (should be) to connect to SQL Server (using both SQL Auth and Windows Auth).

Simplicity

First let’s take a look at the dialog box we would see when we try to connect to a server in SQL Server Management Studio (SSMS).

Circled in red we can see the two types of authentication I would like to focus on: “Windows Authentication” and “SQL Server Authentication”. These are both available from the dropdown box called Authentication. The default value here is “Windows Authentication”.

If I choose the default value for authentication or “Windows Authentication”, I only need to click on the connect button at the bottom of that same window. Upon closer inspection, it becomes apparent that the fields “User name:” and “Password:” are disabled and cannot be edited when this default value is selected. This is illustrated in the next image.

Notice that the fields circled in red are greyed out along with their corresponding text boxes. This is normal and is a GOOD thing. Simply click the connect button circled in green and then you will be connected if you have permissions to the specific server in the connection dialog.

Complicating things a touch is “SQL Server Authentication”. This is where many people get confused. I see many people trying to enter Windows credentials with this authentication type. Sure, we are authenticating to SQL Server, but the login used in this method is not a Windows account. The account used for this type of authentication is strictly one created explicitly inside of SQL Server, a type of login where the principal name and the password are both managed by SQL Server.
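To make the distinction concrete, here is a quick sketch (the login name and password are purely illustrative) of creating each type of login:

-- A SQL Server authentication login: the principal name and password are both
-- stored and managed inside SQL Server (name and password here are made up).
CREATE LOGIN AppReportingUser
    WITH PASSWORD = N'Str0ng&Unique!Passw0rd',
         CHECK_POLICY = ON;   -- still honors the Windows password policy rules

-- A Windows authentication login, by contrast, simply maps an existing domain
-- or local account; SQL Server stores no password for it.
-- CREATE LOGIN [MYDOMAIN\SomeUser] FROM WINDOWS;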

Let’s take a look at an example here.

Notice here that the text boxes are no longer greyed out and I can type a login and password into the respective boxes. Once done, I then click the “Connect” button and I will be connected (again, presuming I have been granted access and I typed the user name and password correctly).

What if I attempt to type a Windows username and password into these boxes?

If I click connect on the preceding image, I will see the following error.

This is by design and to be expected. The authentication methods are different. We should never be able to authenticate to SQL Server when selecting “SQL Server Authentication” and then providing Windows credentials. Windows credentials must be used with the “Windows Authentication” option. If you must run SSMS as a different Windows user, then I recommend reading this article.

The Wrap

Sometimes what may be ridiculously easy for some of us may be mind-blowing to others. Sometimes we may use what we think are common terms only to see eyes start to glaze over and roll to the backs of people’s heads. This just so happens to be one of those cases where launching an app as a different principal may be entirely new to the intended audience. In that vein, it is worthwhile to take a step back and “document” how the task can be accomplished.

Connecting to SQL Server is ridiculously easy. Despite the ease, I still find myself helping “Senior” level development and/or database staff with this task.

If you feel the need to read more connection related articles, here is an article and another on the topic.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

An Experiment with Deadlocks

Everything can be fixed with a query hint (*cough* directive), right? If a certain process is consistently causing deadlocks, a simple ROWLOCK hint can be added to prevent it, right?

Well, for whatever reason, there seems to be a myth out there that when deadlocks come a-knocking, you can just throw this little directive at them and all will be well. Today, we get to test that and show what will continue to happen.

First, let’s look at what ROWLOCK actually means:

Specifies that row locks are taken when page or table locks are ordinarily taken.

This seems like a fabulous idea if the deadlocks are occurring against a table involving a massive update. Let’s take a look at a small update scenario involving just a handful of records. For the setup, we will use the same setup used in a previous article by Wayne.
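That setup lives in Wayne’s article, so I will not reproduce it verbatim; the sketch below (with made-up table names) captures the same shape: two five-row tables and two sessions updating the same row in opposite order.

-- A minimal sketch of the repro shape (table names are made up; the actual
-- setup is in Wayne's article linked above).
CREATE TABLE dbo.DeadlockTest1 (id INT PRIMARY KEY, col1 INT, col2 INT);
CREATE TABLE dbo.DeadlockTest2 (id INT PRIMARY KEY, col1 INT, col2 INT);

INSERT INTO dbo.DeadlockTest1 (id, col1, col2)
VALUES (1, 1, 1), (2, 2, 2), (3, 3, 3), (4, 4, 4), (5, 5, 5);
INSERT INTO dbo.DeadlockTest2 (id, col1, col2)
VALUES (1, 1, 1), (2, 2, 2), (3, 3, 3), (4, 4, 4), (5, 5, 5);

/* Session 1 */
BEGIN TRANSACTION;
UPDATE dbo.DeadlockTest1 SET col1 = 10, col2 = 10 WHERE id = 3;
-- ...run the first update in session 2, then continue here:
UPDATE dbo.DeadlockTest2 SET col1 = 10, col2 = 10 WHERE id = 3;
COMMIT TRANSACTION;

/* Session 2 */
BEGIN TRANSACTION;
UPDATE dbo.DeadlockTest2 SET col1 = 20, col2 = 20 WHERE id = 3;
UPDATE dbo.DeadlockTest1 SET col1 = 20, col2 = 20 WHERE id = 3;
COMMIT TRANSACTION;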

Looking at the code, we can see there are only five rows in each of the tables. In addition, an update will be performed to both col1 and col2 in each table for one specific row. So we are keeping this to a singleton type of update, and we are able to force a deadlock with this setup. Not only does a deadlock occur consistently, we also see the following in the sys.dm_tran_locks DMV as well as in the deadlock graphs.

In my environment, I used sessions 51 and 54 consistently for the deadlock repro. In each of the tests, each spid obtained page locks as well as RID locks (row locks) that were waiting on each other in order to perform an update. This is what we saw when I ran the setup without the ROWLOCK directive. What if I used the ROWLOCK directive (are you guessing already that there would be no change, because the deadlock occurs on the lock held for the update that is waiting on the other update to complete)? Let’s take a look at that too!
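The hinted version of the updates looks like this (same made-up tables as the sketch above):

-- Same updates as before, now with the ROWLOCK directive added.
UPDATE dbo.DeadlockTest1 WITH (ROWLOCK) SET col1 = 10, col2 = 10 WHERE id = 3;
UPDATE dbo.DeadlockTest2 WITH (ROWLOCK) SET col1 = 10, col2 = 10 WHERE id = 3;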

The only change to this setup is that the ROWLOCK directive has been added to the update statements. Examining the sys.dm_tran_locks DMV reveals the same locks being held as was seen without the directive. This shouldn’t be too big of a surprise since the updates are against a single row.

In addition to the same locks being held, we continue to experience the same deadlock problem. Using an Extended Events session to trap deadlock information (similar to the article previously discussed), we can pull out some pretty good info. Let’s examine some of the deadlock data trapped by an XE session.
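For reference, the capture session itself can be as simple as the following sketch (the session name and file target are assumptions, not necessarily what the referenced article uses):

-- A minimal event session that traps both events used in this demo.
CREATE EVENT SESSION DeadlockCapture ON SERVER
    ADD EVENT sqlserver.lock_deadlock
        (ACTION (sqlserver.sql_text, sqlserver.session_id)),
    ADD EVENT sqlserver.xml_deadlock_report
    ADD TARGET package0.event_file (SET filename = N'DeadlockCapture.xel')
    WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

ALTER EVENT SESSION DeadlockCapture ON SERVER STATE = START;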

The results from the query against that session’s data show us the deadlock graph, the event data, as well as several other pieces of data already parsed from the session data for you. Looking at the session data, one can see that the sql_text from each of the queries demonstrates both the ROWLOCK and directive-free versions of the query. In this query you can also see that I did a little black magic to match up the two event types from the event session (lock_deadlock and xml_deadlock_report). I was then able to join the two together to produce one row per deadlock event and to see the sql_text alongside the deadlock graph on one row; otherwise, the sql_text is not produced with the deadlock_report event. I leave the rest of the query to the reader to discover and question.

From the EventDeadlockGraph column, we could click the cell and take a close look at the XML generated for the deadlock event. Further, if I choose to save the xml as an XDL file and then reopen it in SSMS, I can see the deadlock graphical report as shown in the following.

We see that row locks are still in effect for the update coming from both sides. This further supports that the directive really is just a waste of time in trying to combat this type of deadlock. This is one of those cases where the best option would be to optimize the code and work things out without trying to take a shortcut.

Wrapping Up

Look to optimize the code instead of trying to take a shortcut. In addition, take a look at the deadlocks, the locks held, and the code to get a better understanding of what is truly happening.

This article demonstrates briefly the power of Extended Events while diving into deadlocks. For more on using Extended Events, start reading here! This article may also be of interest.
