HealthySQL – A Review

How well do you know the health of your SQL Servers and databases? Ever wonder about the current health or even what the health was like a few months back? Sure, one could anecdotally say the databases and servers are healthy, but how do you quantify it?

Many have asked or pondered what to do to quantify a healthy SQL Server. How do you get the hard numbers to show whether the server is or is not healthy? How do you get the numbers to show that the server is performing just as well as, or better than, it did three months ago?

Now we have a way to answer those questions and more! Robert Pearl has finally published his book on Healthy SQL. In the book, Robert takes you on a journey from mapping out a plan, to capturing data, to storing that data, and then reporting on it. You will learn what to capture and what some of the various metrics actually mean (e.g., waits and indexes).

Throughout the book, Robert introduces you to various tools, some native to SQL Server and some from third parties. You will get a good primer on what each tool is and how to quickly use it as you become acquainted with the internals and the health of your database and server.

As you progress through the book and learn about what makes a database healthy, you will encounter a chapter on how to store the data and make a repository to track your database health. You will want to create a repository so you can progress into the sections on reporting about your database health. Who doesn’t want to market to management how healthy the databases are?

With the tools in this book, and the repository you will create, audits and discussions will become much easier. You will be able to implement processes to help make you more aware of the environment and help keep you from getting that annoying page in the middle of the night.

Take a look at the book and enjoy.  You can get it from Amazon here.

If you are a young DBA, new to being a DBA, an accidental DBA, or just any DBA looking for a little extra help in figuring out how to check and track the health of your server, it is worth reading this book.

Transaction Log Management

Categories: Book Reviews, SSC
Comments: No Comments
Published on: October 7, 2013

I have had the distinct pleasure of being asked to review a book by RedGate.  The book is by a friend and respected community member Gail Shaw (blog | twitter).  The book is a stairways style book and is titled “SQL Server Transaction Log Management.”



I read this book with the intent to provide an in-depth analysis and technical review.  I was happy to take on the challenge of reviewing this book and happy with what I gleaned from the book.

Let’s start with my overall impression of the book.  This book is ideal for every junior DBA and every accidental DBA out in the workforce.  If I need to repeat that statement I will, or you can flash back and reread it as many times as you need.

My next observation about this book is that it seems obvious that it was written with a community undertone.  There are numerous references to community resources and individuals who are ever-present in the community.  That community tone can be an extension of RedGate as a whole.  Community appears to be fairly important to RedGate overall.

I enjoyed reading the book, and I enjoyed the demo scripts.  An absolute must with this book is the downloadable demo scripts (see page 13 for the code sample links).  They help illustrate the points that Gail and Tony are making.

I do have a minor critique that is more along the lines of readability, and it may be a bit zealous: due to the font chosen for the paragraph text, I found it difficult to distinguish characters such as I and 1.  The numeral 1 in paragraph text is a different font and looks like an I, whereas the 1 in the chapter headings and image text actually looks like a numeral 1.  There are also a few cases where the font appears to change, making the text seem disjointed (an example of this font switching may be seen on pages 29 and 30).  Again, that is really a minor issue and nothing related to the technical content.

That said, something I like to do with books such as this is try to take something of interest from each of the chapters.  So, here is a list of points of interest for each chapter.

Chapter 1

Many DBAs (part-time, accidental, or even full-time), along with vendors (I am adding them to this group), do not back up their databases correctly.  Many in these groups “perform full backups on their databases, but they don’t perform transaction log backups” (page 21).

This book underscores that problem and helps show those groups how to implement transaction log backups and better manage their database transaction logs.
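To make the point concrete, here is a minimal sketch of the idea (the database name and backup paths are hypothetical, not from the book):

```sql
-- A full backup establishes the base of the log chain (run, say, nightly or weekly).
BACKUP DATABASE MyDb
    TO DISK = N'D:\Backups\MyDb_full.bak'
    WITH INIT;

-- In the FULL recovery model, frequent log backups keep the chain intact
-- and allow the log to truncate (run, say, every 15 minutes).
BACKUP LOG MyDb
    TO DISK = N'D:\Backups\MyDb_log.trn'
    WITH INIT;
```

Without that second statement, the log just keeps growing.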

Chapter 2

Another item that many DBAs neglect is managing log file fragmentation.  “Log file fragmentation can degrade the performance of SQL Server processes that need to read the log” (page 41).  On some databases, I have seen this degradation cause delays of 20-30 minutes or worse when bringing a database online.  Do you want to take that risk?
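If you want to see how fragmented a log is, something like this will count the virtual log files (the database name is a placeholder; sys.dm_db_log_info requires SQL Server 2016 SP2 or later):

```sql
-- Each row from sys.dm_db_log_info is one virtual log file (VLF);
-- a count in the thousands is a red flag.
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID(N'MyDb'));

-- Pre-2016 SP2 alternative: every row returned is one VLF.
-- DBCC LOGINFO;
```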

Chapter 3

This chapter covers a topic that I see happening on a regular basis – even with “seasoned” DBAs.  That topic concerns databases running in “pseudo-simple” mode: a database placed in the full recovery model that has never had a full backup taken.  This mode of operation is baffling to me.  I don’t quite understand why people place a database in full recovery model and then never take a full backup of it.  Until that first full backup is taken, the transaction log will continue to truncate as if the database were in the simple recovery model.

I also particularly enjoyed reading about some really easy ways to break a transaction log chain.  A few methods for breaking the chain are listed, presented more as a caution against using them than as a how-to.  One such method is switching from full recovery to simple recovery to try to control a runaway log.

One method that wasn’t mentioned (at least not until Chapter 7) was backing up the transaction log to the NUL device.  Backing up to the NUL device is just as destructive to the log chain as any method mentioned in the book.
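For illustration only (and as a caution, not a recommendation), this is all it takes; the database name is a placeholder:

```sql
-- CAUTION: this discards the log records instead of storing them.
-- It breaks the log backup chain just as a switch to SIMPLE would.
BACKUP LOG MyDb TO DISK = N'NUL';

-- If this has happened, take a full (or differential) backup immediately
-- to restart the chain.
BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_full.bak' WITH INIT;
```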

One thing this chapter does not delve into is the implications of breaking the log chain; by design, that is covered in Chapter 7.  One item that was not covered in great detail was log shipping.  Think of the implications for a mirrored or log-shipped database in the multi-terabyte range.  Do you want to be the one who has to copy a full backup cross-country to rebuild log shipping because you took a log backup to the NUL device?

Chapter 4

This chapter is summed up succinctly in the summary section of the chapter.  “If you wish to measure exposure to data loss in minutes rather than hours, don’t use SIMPLE model” (page 61).

Chapter 5

Have you ever heard the expression “measure twice, cut once”?  In this chapter we see a subtle reminder of the importance of double-checking these types of things.  Did you know that records in the backupset table can be manipulated?  Do you rely only on the results of queries against the backupset table to confirm that backups were taken?  That just might be a mistake.

This chapter shows the need to also query sys.database_recovery_status to confirm that log backups were taken.  If you are not double-checking your log backups, please start doing so.
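A sketch of that double-check (the database name is a placeholder):

```sql
-- What msdb's history says: the most recent log backup recorded.
SELECT TOP (1) bs.database_name, bs.backup_finish_date
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = N'MyDb'
  AND bs.type = 'L'            -- 'L' = transaction log backup
ORDER BY bs.backup_finish_date DESC;

-- What the database itself says: a NULL last_log_backup_lsn means no log
-- backup has been taken since the current log chain started.
SELECT DB_NAME(database_id) AS database_name,
       last_log_backup_lsn
FROM sys.database_recovery_status
WHERE database_id = DB_ID(N'MyDb');
```

If the two disagree, believe the database, not the history table.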

Chapter 6

This chapter covers the bulk-logged recovery model and some common uses for it.  Some concerns about using bulk-logged in various scenarios are also illustrated, along with a discussion of pros, cons, and best practices.  A major concern with bulk-logged is discussed, and for that you should read the book.

I particularly enjoyed the demo in this chapter on the space savings from using the bulk-logged model.  Have a run at the demo and enjoy what you learn from this chapter.

Chapter 7

Besides the script on page 141 for finding open transaction information and the warnings about CDC, I found the section on Mismanagement to be an absolute gem.

There is a lot of misinformation available on the internet, and misinformation concerning transaction logs can be quite dangerous.  This section discusses some of the more prominent problems that may be found through internet searches.  The debunking of some of these dangerous myths is a must-read.

Chapter 8

This chapter covers some of the golden rules of transaction log management.  A SQL Server database is NOT a set-it-and-forget-it type of venture.  By extension, a transaction log is even less of a set-it-and-forget-it undertaking.

One of the very first things debunked in this myth-busting chapter is the use of multiple transaction log files.  Simply put, don’t do it.  There is an edge case where adding a second log file is helpful – but once the issue is resolved, the additional log file MUST be removed.
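Cleaning up afterward looks roughly like this (the file and database names are hypothetical):

```sql
-- Once the emergency is over: a log backup usually clears the active log
-- records out of the extra file, after which it can be removed.
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn' WITH INIT;

-- This fails while the file still holds active log records; repeat the
-- log backup and retry if necessary.
ALTER DATABASE MyDb REMOVE FILE MyDb_log2;
```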

One thing you may find in your internet searches is a recommendation, deemed a best practice, to manage your transaction log growth explicitly.  This chapter discusses that and demonstrates the benefits of manually managing your transaction logs.

In this chapter we also see a renewed discussion of log fragmentation, and more things that are affected by a highly fragmented transaction log.  To underscore the importance of properly managing your transaction log and its number of log fragments, there is a very good demo script.  Check it out.
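Manual management mostly means sizing the log up front in a few large increments instead of letting hundreds of small autogrowths create the fragments. A sketch (the names and sizes are examples only, not recommendations from the book):

```sql
-- Grow the log deliberately: a few large, fixed-size growths keep the
-- VLF count low, unlike many tiny percentage-based autogrowths.
ALTER DATABASE MyDb
    MODIFY FILE (NAME = N'MyDb_log', SIZE = 8192MB, FILEGROWTH = 4096MB);
```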

Chapter 9

To top off all the great information, we conclude with something that is all too frequently overlooked by many (especially those in the earlier defined groups).  You must monitor your transaction log!!


There is a lot of great information throughout this book.  There are also some great demo scripts throughout the book.

One final note, in regard to something I often see vendors doing; it was touched upon in this book and warned against.  Please do not store backups of different types, or of different databases, in the same backup file.  This is a practice that will almost certainly set you up for a larger-scale disaster.  What if one backup within that mass accumulation in the same file is corrupt?

I enjoyed this short book.  I recommend it be on every junior or accidental DBA’s bookshelf.  I’d also recommend it be in every employer’s library for use by all IT employees.

SQLServerPedia Blogger Awards

Comments: No Comments
Published on: February 16, 2011

Several moons ago I learned that I had won one of the categories for the 2010 SQLServerpedia Blogger Awards.  You can read about that here.  Of all of the promises made in that post, I have yet to fulfill two of them (one is being fulfilled now).  I can’t fulfill the other promise because I do not have a cubicle at work – just an open work area.  That said, I am prominently displaying my trophy in my work area.

As you can see in the image, I have displayed my trophy (though the photo does not prove that it is at work).

Again, I am thankful to have won and I will continue to blog and try to put out useful information to the blogosphere.

Thanks to all who voted for my entry (which was really just one part of a series).

P.S.  I have a really good reason for this post coming out so late too.  My award was delivered in the middle of me moving from Vegas back to Utah.

Drumroll Please…

Comments: No Comments
Published on: November 29, 2010

Defensive Db Programming Chapter 10

After dragging this review on for months now, it is finally time to bring this bad boy home and wrap it up.  I have been working through a chapter-by-chapter review of the book by Alex Kuznetsov (Blog | Twitter).  Alex has been taking us through the process and various techniques to better program defensively.  This is the last installment in that series (yes, shed a little tear).  This chapter is devoted to teaching us how to better deal with Modifications that Survive Concurrency.  Some of the problems associated with this are:

  • lost modifications, a.k.a. lost updates – such problems occur when modifications performed by one connection are overwritten by another; these typically occur silently; no errors are raised
  • resource contention errors – such as deadlocks and lock timeouts
  • primary key and unique constraint violations – such problems occur when different modifications attempt to insert one and the same row. (p. 337)

Another way of stating these types of problems is that there is a “race condition.”  You may have heard this term thrown around by developers from time to time.  It isn’t made up; it is a very real potential problem that must be dealt with accordingly.

I can’t give this chapter enough props nor enough due through a short blog post.  It is a lengthy chapter and covers a very important topic (in my opinion).  Alex covers each of the race conditions and then shows how to resolve it, using real-world scenarios as you will have grown to expect by this point.  I will point out that I had not considered using rowversion to help resolve some race conditions.  That is at least one thing I have learned from reading this chapter.
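A rough sketch of the rowversion approach, not taken from the book verbatim (the table and column names are illustrative):

```sql
-- The rowversion column changes automatically on every update to the row.
CREATE TABLE dbo.Tickets
(
    TicketId int IDENTITY PRIMARY KEY,
    Status   varchar(20) NOT NULL,
    RowVer   rowversion
);

-- The reader captures the row's version along with its data...
DECLARE @TicketId int = 1, @RowVer binary(8);
SELECT @RowVer = RowVer FROM dbo.Tickets WHERE TicketId = @TicketId;

-- ...and the writer updates only if nothing changed in the meantime.
UPDATE dbo.Tickets
SET Status = 'Closed'
WHERE TicketId = @TicketId
  AND RowVer = @RowVer;   -- 0 rows affected means another connection won the race

IF @@ROWCOUNT = 0
    RAISERROR(N'Row was modified by another connection; retry.', 16, 1);
```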

Another thing that struck me and made me think relates to the T-SQL patterns that fail under high concurrency.  Two methods are scrutinized and a third method, MERGE, is proposed.  That is a subject about which I could learn a lot.

For the final time (since this series has come to a close), go get the book and read it.  You can find the rest of the articles by linking back through this link.

Defensive Db Programming Chapter 09

Comments: No Comments
Published on: November 22, 2010

It is down to the final two chapters of the book by Alex Kuznetsov.  Check out the previous chapter here.  The review of this book is certainly taking longer to produce than I had planned.  However, I think you will find that the book is still worth it and that this little series is also worth it.  I hope that this series is at least reaching a few more people here and there, and that more people may get a copy of “Defensive Database Programming”.

This chapter is about concurrent queries and isolation levels.  These seem to be a hot-button topic of late – at least, I have been hearing more and more talk about them than in the recent past.  Alex starts the chapter with a stark realization.  He states:

Even the simplest SELECT, against one table, may retrieve incorrect results when the base table is being modified at the same time. (p. 297)

Think about that statement for a bit.  I imagine it won’t take you long to realize that it is true.  I am certain that you have come up with several examples that underscore that statement.

Alex then progresses through the isolation levels, demonstrating what will happen under the described circumstances.  This chapter is not intended to explain the differences between the isolation levels, but merely to demonstrate what could happen – as has been the theme throughout the book to this juncture.  In these examples, Alex first describes a theoretical problem and then reproduces it with a real-world scenario.

Alex also outlines in this chapter some considerations for choosing the correct isolation level for your business needs.  Pros and cons are weighed, and the usefulness of each isolation level is discussed.  Alex also offers up the notion of using a query hint to issue a table lock when querying the database, but due to its reciprocal effects he states that it is almost never really an option.

Alex goes the extra mile in this chapter by discussing how we can minimize deadlocks.

In the end Alex makes a recommendation as to which Isolation level he would use.  Check it out and see for yourself.

Defensive Db Programming Chapter 08

Comments: 1 Comment
Published on: November 11, 2010

We are slowly getting to the end of this book by Alex Kuznetsov (Blog).  You can find more on this series by looking here.  We are now discussing chapter 8 and it is a good chapter.  This chapter delves into Error Handling.

While the use of TRY…CATCH certainly is the best way to handle errors in T-SQL, it is not without difficulties. (p. 259)

In this chapter, Alex discusses some of the problems associated with error handling and demonstrates how to resolve some of those problems.  We will also see how to use XACT_ABORT.

The first key to error handling is to prepare for unanticipated failure; that is the crux of it.  We anticipate certain behaviors and code for them; anything else needs to be handled through error handling so we can see what has happened.  Code sometimes fails, and queries against the data can also fail on occasion.  We need to plan for those failures.

When dealing with data modifications, there are occasions when explicit transactions are necessary due to the nature of the modification – we want to ensure that any and all changes to the data can be rolled back together.  There are other times when we want processing to halt immediately when an error is encountered.  In such a scenario, XACT_ABORT comes in useful.  Note, however, that this setting is probably best set explicitly ON or OFF depending on the needs.

A recommendation in this chapter is to use client-side error handling for all but the simplest cases, since error handling is far more robust in other languages than it is in T-SQL.  This is a recommendation that I support.  Error handling in T-SQL has become better over time (i.e., with the addition of TRY…CATCH) and is quite useful for some degree of error handling.  One particular realm where T-SQL error handling is lacking is code reuse: the TRY…CATCH block must be rewritten for each stored procedure in which you wish to enable error handling.
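The shape of that per-procedure boilerplate, as I would sketch it (not lifted from the book; the procedure name is hypothetical):

```sql
CREATE PROCEDURE dbo.DoModification
AS
BEGIN
    -- XACT_ABORT ON makes most errors doom the transaction outright.
    SET NOCOUNT, XACT_ABORT ON;
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... data modifications go here ...
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;
        THROW;   -- SQL Server 2012+; older versions must re-raise with RAISERROR
    END CATCH;
END;
```

This whole block has to be repeated in every procedure that needs it, which is exactly the code-reuse gap in T-SQL error handling.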

As always, this chapter covers some of the gotchas of error handling in T-SQL and gives examples of how to implement solutions as well as circumvent some of the problems.  Check it out!!

Blogger Awards

A few days ago I wrote about an election that was taking place (ok so it was a week ago at this point).  That election was the SQLServerpedia Awards.  My first post on that election can be found here.  I had wanted to write another post about the election and make a whole bunch of campaign promises – to help make it feel political.  One big difference is that I was going to run a positive campaign and just talk about what I would do if I were to win.

Well, I won in one of the categories and I never wrote that campaign promises post.  So, I will just tell you what I would have promised and then maybe do some of those things.

  1. I will display the trophy in my cubicle at work
  2. I will support my fellow bloggers as much as I can just as I have done in the past.  There are some exceptional bloggers out there.
  3. I will continue to work with SQL Server
  4. I will continue to blog!!!!
  5. I will also finish my series on the book review for which I won.  Then I will progress to the next book or two and write reviews for them as well.
  6. I will continue to provide posts to SQLServerPedia

Thanks to all who voted for me.  I appreciate the support.  I recommend checking out the other books in the Book Review category.  I also recommend that you read the blogs of the other people who were nominated for the SQL Choice Awards / SSP Awards.

In short, this award is great and I really appreciate it.  In the grand scheme of things I am happy to have received it and rest assured that it doesn’t change my blog or me – except that I will post a badge on my blog.  We also had many excellent DB professionals and bloggers who won in various other categories.

Thanks SQLServerpedia for the awards and putting on these awards for the community.

Defensive Db Programming Chapter 07

We are here again after another …um long lapse in time … and ready for another episode in this series.  Today we get to talk about chapter 7 in the book by Alex Kuznetsov (Blog).  You can find more on this series by looking here.

Advanced Use of Constraints

This chapter ties in well with chapters 4 and 6, and you may want to review those as you read this one.  Problems raised in those prior chapters are now discussed at length.  The first order of business is to revisit the ticket tracking system that was discussed in chapter 4.

Ticket Tracking

The system was left with business rules implemented via triggers in chapter 4.  We now explore using constraints in lieu of the triggers to enforce the business rules.  For the sake of testing, the environment should be reset and a new one built; Alex provides all of the scripts to do so.  As Alex progresses through this example, he lays out the assumptions and some of the problems with those assumptions, which are resolved through a combination of indexes and constraints.  Another item implemented was ON UPDATE CASCADE, to help ensure the business requirements could be met.  Alex talks about the implications for performance and the reasons for implementing the cascading update.  Make sure you read this chapter to find out more.

Inventory System

There is a substantial amount of discussion of a new example in this chapter: the inventory system and several of its use cases.  I recommend reading the chapter; I am not going to discuss it beyond that here.  There is a great collection of scripts to run through the example.


In this chapter Alex shows us how a combination of constraints can enforce complex business rules.  The use of constraints should be included as a viable option when trying to enforce business rules.

Once again, go get this book and read it.  It is well worth the read.

Defensive Db Programming Chapter 06

Comments: No Comments
Published on: August 30, 2010

We are here again after another week and ready for another episode in this series.  Today we get to talk about chapter 6 in the book by Alex Kuznetsov (Blog).  You can find more on this series by looking here.

Common Problems with Data Integrity

Alex explains to us that SQL Server has built-in tools to help maintain the integrity of the data.  He quickly lists three areas that can be useful in maintaining data integrity:

  1. Applications
  2. Constraints
  3. Triggers


Data integrity can be enforced from the application, though it should be noted that this can be an inefficient method of ensuring the integrity of the data.  If you try to do it from the application, be prepared for a little pain.


It is well known that using constraints is the most reliable way to enforce data integrity rules.

Using constraints in the database is helpful and quick.  If data does not meet the constraint criteria, we can prevent it from being added.  There are different kinds of constraints, and one should become familiar with them.  In this section, Alex also demonstrates the differences between the three states of a constraint.  It is important to understand the state of a constraint to ensure the integrity of the data is at the level you expect.
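For reference, the three states look something like this (the table and constraint names are illustrative):

```sql
-- Disabled: neither existing nor new data is checked.
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT CK_Orders_Qty;

-- Enabled but untrusted: new data is checked, existing rows were not re-verified.
ALTER TABLE dbo.Orders CHECK CONSTRAINT CK_Orders_Qty;

-- Enabled and trusted: WITH CHECK re-validates existing rows, so the
-- optimizer can rely on the constraint.
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT CK_Orders_Qty;

-- Verify the state.
SELECT name, is_disabled, is_not_trusted
FROM sys.check_constraints
WHERE parent_object_id = OBJECT_ID(N'dbo.Orders');
```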

There is a section devoted to UDFs in this chapter as well.  The section is within the Constraints section and should be read.


The chapter concludes with a discussion of using triggers to enforce data integrity.  Triggers are suitable in some areas where a constraint would not be.  Triggers do have their own problems that should be considered, though: some stem from how a trigger is coded, and some are inherent in the use of triggers.  It is well worth learning about these limitations by reading this section.  Test the samples provided and implement some of the suggestions in your own triggers.


Alex concludes this chapter with the following statement:

Data integrity logic in the application layer is too easily bypassed. The only completely robust way to ensure data integrity is to use a trusted constraint. Even then, we need to test our constraints, and make sure that they handle nullable columns properly.

We also have a nice set of reminders of what to include in our assumptions to be tested.  The short of it: check your constraints and test as many scenarios as necessary based on your assumptions.

Defensive Db Programming Chapter 05

Comments: No Comments
Published on: August 24, 2010

We are here again after another week and ready for another episode in this series.  Today we get to talk about chapter 5 in the book by Alex Kuznetsov (Blog).  You can find more on this series by looking here.

The title of the chapter this week is “Reusing T-SQL Code.”  In this chapter, Alex covers the topics of:

  • copying and pasting code
  • proper code reuse
  • using views to encapsulate simple queries
  • UDFs and parameterized queries
  • performance issues with UDFs
  • business logic through filtered indexes, triggers, and constraints

I am going to skip the discussion of copying and pasting code and the potential problems related to it; I think those should be highly obvious.  Code reuse should also bring up some obvious ideas of why to do it and how it can make your code better, and putting simple queries in views is also pretty straightforward.  For these first three topics, check out what Alex has to say about them in his book.

UDFs and parameterized queries

When it comes to code reuse, it is typically easier to reuse parameterized queries through an inline UDF.  That’s not to say that it cannot or should not be done through a stored procedure, but for many scenarios and requirements it is easier through a function.  Alex gives examples of some of the stored procedure limitations, such as sorting, INSERT…EXEC, and nesting.  There are, of course, limitations of inline UDFs that should be considered as well; Alex has those listed in his book.
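As a quick sketch of the idea (the schema is hypothetical), an inline UDF acts like a parameterized view that the optimizer can expand into the calling query:

```sql
CREATE FUNCTION dbo.CustomerOrders (@CustomerId int)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderId, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
);
GO

-- Reusable anywhere a table is allowed: joins, ORDER BY, INSERT ... SELECT.
SELECT o.OrderId, o.TotalDue
FROM dbo.CustomerOrders(42) AS o
ORDER BY o.OrderDate;
```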

Performance issues with UDFs

Alex makes a very good point in this section: when trying to reuse code, we have to pay attention to performance.  Poorly written UDFs can have a serious impact on the performance of the code, and that should be a great concern.  To demonstrate this, Alex has listed an example covering the bases.  Check out his code samples and test them.

There is also another great statement in this section.

Blanket statements do not belong in database programming.

Keep that in mind.  It is very difficult to make a blanket statement that applies in all cases.

The final section covers when to use filtered indexes, constraints, triggers, and stored procedures.  Proper use of each of these can help enforce business logic.  It should be noted that which to use will depend on requirements and database design.

Having a single source of the code will help to prevent unexpected bugs.  It will also help to save time on development and documentation.
