Defensive Db Programming Chapter 06

Comments: No Comments
Published on: August 30, 2010

We are here again after another week and ready for another episode in this series.  Today we get to talk about chapter 6 in the book by Alex Kuznetsov (Blog).  You can find more on this series by looking here.

Common Problems with Data Integrity

Alex explains to us that SQL Server has built-in tools to help maintain the integrity of the data.  He quickly lists three areas that can be useful in maintaining data integrity:

  1. Applications
  2. Constraints
  3. Triggers


Data integrity can be enforced from the application.  It should be noted that this could be an inefficient method of ensuring the integrity of the data.  If you try to do it from the application, be prepared for a little pain.


It is well known that using constraints is the most reliable way to enforce data integrity rules.

Using constraints in the database is helpful and quick.  If data does not meet the constraint criteria, we can prevent it from being added.  There are different kinds of constraints, and one should become familiar with each of them.  In this section, Alex also demonstrates the differences between the three states of constraints.  It is important to understand the state of a constraint to ensure the integrity of the data is at the level you expect.
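The three states Alex demonstrates are commonly described as enabled and trusted, enabled but untrusted, and disabled.  A minimal sketch of how a constraint moves between them (the table and constraint names here are made up for illustration):

```tsql
-- Hypothetical table for demonstration
CREATE TABLE dbo.Orders
(
    OrderID  INT IDENTITY PRIMARY KEY,
    Quantity INT NOT NULL
);

-- Enabled AND trusted: WITH CHECK validates existing rows as the constraint is added
ALTER TABLE dbo.Orders WITH CHECK
    ADD CONSTRAINT CK_Orders_Quantity CHECK (Quantity > 0);

-- Disabled: new rows are no longer validated at all
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT CK_Orders_Quantity;

-- Re-enabled but NOT trusted: existing rows were not re-validated
ALTER TABLE dbo.Orders WITH NOCHECK CHECK CONSTRAINT CK_Orders_Quantity;

-- Inspect the state; is_not_trusted = 1 means the optimizer cannot rely on it
SELECT name, is_disabled, is_not_trusted
FROM sys.check_constraints
WHERE parent_object_id = OBJECT_ID('dbo.Orders');

-- WITH CHECK CHECK re-validates every row and restores trusted status
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT CK_Orders_Quantity;
```

The `is_not_trusted` flag is the one that bites people: an untrusted constraint still blocks new bad rows, but it no longer guarantees anything about the rows already in the table.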

There is a section devoted to UDFs in this chapter as well.  The section is within the Constraints section and should be read.


The chapter concludes with a discussion of using triggers to enforce data integrity.  Triggers are suitable in some areas where a constraint would not be.  Triggers do have their own problems that should be considered, though.  Some of the problems come from how a trigger was coded, and some are inherent in the use of triggers.  It is well worth learning about these limitations by reading this section as well.  Test the samples provided and implement some of the suggestions in your own triggers.


Alex concludes this chapter with the following statement:

Data integrity logic in the application layer is too easily bypassed. The only completely
robust way to ensure data integrity is to use a trusted constraint. Even then, we need to
test our constraints, and make sure that they handle nullable columns properly.

We also have a nice set of reminders of what to include in our assumptions to be tested.  The short of it: check your constraints and test as many scenarios as necessary based on your assumptions.
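One assumption always worth testing is the nullable-column point from the quote above: a CHECK constraint whose predicate evaluates to UNKNOWN lets the row through.  A small sketch (table and names invented for illustration):

```tsql
CREATE TABLE dbo.Products
(
    ProductID INT IDENTITY PRIMARY KEY,
    Price     DECIMAL(10,2) NULL,
    CONSTRAINT CK_Products_Price CHECK (Price > 0)
);

-- Rejected: the predicate evaluates to FALSE
-- INSERT INTO dbo.Products (Price) VALUES (-5);

-- Accepted: NULL > 0 evaluates to UNKNOWN, which a CHECK constraint allows
INSERT INTO dbo.Products (Price) VALUES (NULL);

-- If NULLs should be rejected, say so explicitly in the predicate
DELETE FROM dbo.Products WHERE Price IS NULL;  -- clear the offending row first
ALTER TABLE dbo.Products DROP CONSTRAINT CK_Products_Price;
ALTER TABLE dbo.Products WITH CHECK
    ADD CONSTRAINT CK_Products_Price CHECK (Price > 0 AND Price IS NOT NULL);
```

Of course, making the column NOT NULL in the first place is usually the cleaner fix when the business rule allows it.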


Comments: 2 Comments
Published on: August 25, 2010

Not a sound from the pavement

Have you ever come across a SQL query that used to run faster?  Has that query just recently begun to run considerably slower?

I just ran into one of those situations.  I had a query that should have been running in seven minutes or less (string parsing nearly 4 million records).  When it started running much slower, I decided to pull out some monitoring scripts and start checking a few things on the server.  I wanted to check for the usual suspects first and then find what was causing the alleged problems.

If you touch me you’ll understand what happiness is

When problems arise in a database, it is good to have a toolbelt of scripts available to help in quickly “touching” / assessing the problem.  In this case, I first turned to a quick check of the requests currently active on the server.


My goal with this query is to quickly assess whether the query is still progressing.  For the query in question, I should see that it is steadily writing to the database.  I can also see the status of the query at a glance.  Running this in conjunction with sp_who2 is helpful in identifying, at a high level, issues that may be occurring.  Another option in this kind of situation is sp_whoisactive by Adam Machanic (Blog | Twitter), or the who2 script I wrote about here.  The idea is to get a quick glimpse at this point to see if there is a potential issue.
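A minimal sketch of that kind of progress check (hypothetical; the exact script I used isn't shown here, and this version assumes the DMVs available since SQL 2005):

```tsql
-- Quick look at active requests: status, waits, and I/O counters
SELECT r.session_id,
       r.status,            -- running, runnable, suspended, etc.
       r.command,
       r.wait_type,
       r.writes,            -- should climb steadily if the query is progressing
       r.reads,
       r.cpu_time,
       r.percent_complete   -- populated only for certain commands (e.g. backups)
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50;    -- skip most system sessions
```

Running it a few times over a minute or two tells you quickly whether the writes counter is moving or the request is stuck on a wait.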

In this case, I found nothing alarming or noteworthy from the initial queries.  Other queries could be run at this point to try to determine if there is a pressure point somewhere on your resources (memory, CPU).  I decided to take a quick glance at Task Manager just to get a quick visual of the server.  Looking at it, I saw that I had no memory available – all 96 GB were in use.  I proceeded to check the max memory setting on SQL Server.  As expected, it was still set at the default.  A little further prodding showed that the SQL Server process was consuming most of the memory available to the server.

I changed the max memory to 64 GB and immediately saw improvements.  The effect could also be seen in Task Manager.  No server restart or service restart was required; this change took effect immediately.  The poorly performing query was soon back to performing normally.
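For reference, the change itself can be scripted; the 65536 MB (64 GB) value here was specific to this server, and you would size it for your own hardware:

```tsql
-- 'max server memory (MB)' is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap SQL Server at 64 GB, leaving headroom for the OS and other processes
EXEC sp_configure 'max server memory (MB)', 65536;
RECONFIGURE;  -- takes effect immediately; no restart required
```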

I can smile at the old days

This should serve as a reminder to check your server settings from time to time and ensure that they are appropriately set.  This little change (and that was the only change) allowed my query to return to normal processing, which was more than 5 times faster than the problem range.  Happy troubleshooting.

SSSOLV: Twitter Compliant

Comments: No Comments
Published on: August 24, 2010

This is just a short blast about the Las Vegas user group called SSSOLV (Web | Twitter).  I am not certain why it took so long for us to do this, but we have finally gotten the UG onto a Twitter account.  We plan to use this medium to help blast UG news.

SSSOLV is now @S3OLV


Comments: No Comments
Published on: August 24, 2010

I was doing a little catching up on some blog reading today and came across a new post from Adam Machanic (Blog | Twitter).  The post is titled Who Are You and is asking for input from the readers.  Quite frankly, I think it is a great idea.  Adam is asking for people to do a little stand up and tell everybody a thing or two about themselves.  He wanted that info left in the comments, but I thought it would be better to write a post on the topic and link back to it.

First things first.  Why is this a good idea?  Well, it helps to know the readers, as Adam said.  There truly is a nice bit of satisfaction when somebody leaves a comment on your blog.  It is even better when you feel like you know that person.  Better yet is to know that there are people you know reading your blog – whether or not they comment.  There are plenty of people (judging from the comments) that Adam knows and that read his blog but may not leave a lot of comments.  It is also really helpful to know the interests of the readers and try to put out some stuff that may be of interest to them.  I see this being of particular help when there is a little writer's block.

What about me?

My name is Jason Brimhall and I have been doing database stuff for a little over 10 years.  My current title at my employer is Database Architect.  I am more than a database architect – I get to wear many hats.  I get to do the architect stuff, administrator stuff, warehouse stuff, and occasionally I get to write a stored proc or two.  I have been blogging now for just about 9 months.  I have administered 4500+ servers as well as 1TB+ databases.  I have worked on SQL 6.5 – SQL 2008 as well as a smattering of Oracle and MySQL (don't quiz me on those).  I live in Las Vegas and am the Vice President of the local Users Group (Web | Twitter) (which reminds me that I need to get that UG twitter account notice out).  Outside of work, I like to spend time with my children as well as play basketball and run.

Edit:  I forgot to tell you what my twitter handle is…@sqlrnnr

Defensive Db Programming Chapter 05

Comments: No Comments
Published on: August 24, 2010

We are here again after another week and ready for another episode in this series.  Today we get to talk about chapter 5 in the book by Alex Kuznetsov (Blog).  You can find more on this series by looking here.

The title of the chapter this week is “Reusing T-SQL Code.”  In this chapter, Alex covers the topics of:

  • copy and paste code
  • proper code reuse
  • Views to encapsulate simple queries
  • UDFs and parameterized queries
  • Performance issues with UDFs
  • Business logic through filtered indexes, triggers and constraints

I am going to skip the discussion of copy-and-paste code and the potential problems related to it.  I think those should be highly obvious.  Code reuse should also bring up some obvious ideas of why to do it and how it can make your code better.  Putting simple queries in views is also pretty straightforward.  For these first three topics, check out what Alex has to say about them in his book.

UDFs and parameterized queries

When it comes to code reuse, it is typically easier to reuse parameterized queries through an inline UDF.  That's not to say that it cannot or should not be done through a stored procedure.  For many scenarios and requirements, it is simply easier to do a parameterized query through a function.  Alex gives examples of some of the limitations, such as sorting, Insert/Exec, and nesting.  There are of course limitations that should be considered with inline UDFs as well; Alex has those listed in his book.
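A minimal sketch of the idea – an inline UDF is essentially a parameterized view (the table and column names here are made up):

```tsql
-- Inline table-valued function: the optimizer can expand it like a view
CREATE FUNCTION dbo.OrdersByCustomer (@CustomerID INT)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
);
GO

-- Reused like a table: composable with further filtering, sorting, and joins,
-- which is harder to do with a stored procedure's result set
SELECT TOP (10) *
FROM dbo.OrdersByCustomer(42)
ORDER BY OrderDate DESC;
```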

Performance issues with UDFs

Alex makes a very good point in this section.  When trying to reuse code, we have to pay attention to performance.  Poorly written UDFs can have a serious impact on the performance of the code, and that should be a great concern.  To demonstrate this, Alex has listed an example covering the bases.  Check out his code samples and test them.

There is also another great statement in this section.

Blanket statements do not belong in database programming.

Keep that in mind.  It is very difficult to make a blanket statement and have it apply in all cases.

The final section covers when to use filtered indexes, constraints, triggers, and stored procedures.  Proper use of each of these can help to enforce business logic.  It should be noted that the use of these will depend on requirements and database design.

Having a single source of the code will help to prevent unexpected bugs.  It will also help to save time on development and documentation.

Star Struck

Comments: No Comments
Published on: August 17, 2010

Have you ever run into an error and been puzzled as to why that error occurred?

Recently I have been working on a project to backfill a development and QA environment.  These environments are essential to help ensure the production database will continue to work after changes have been made.  They existed once upon a time but had been sorely neglected, and process was not followed.

For this process, I am using some software to help generate diff scripts between the environments.  Once the script has been generated I will execute that script in the appropriate environment (DEV or QA).  I am doing this process for all tables, procs, views, and functions.  This is also done iteratively database by database.


While performing this routine, occasionally an error is encountered due to the order in which objects need to be processed.  In cases such as that, I would typically alter my process to account for the error.  Usually that is not an issue if the tables are created first, then the views, and then the procs and functions.  Other times, the create/alter script must be evaluated to determine what is causing the failure.

One of these failures that caused me a lot of angst produced an error, in a few stored procedures, complaining that certain keywords were present.

That error message seems fair enough to evaluate.  Based on the error, one would expect the stored procedure to contain one of those keywords.  I picked a stored procedure out of the mix that was generating this error and checked its code.  Looking through the entire proc, there was no instance of those keywords.  This was temporarily puzzling.  I continued to investigate and found that there was a view being used in that stored procedure.  I decided to double-check the view and see if it might be the cause.  Inside the view I found a series of SELECT * queries unioned together.


At first sight, I saw a few things I didn't like, but I didn't immediately pick up on the table in the third select.  I saw the table name and moved on, thinking it must be ok.  Then I decided to sanity check more precisely what was there and saw that it was pointing to the same table name in a different database.  Lo and behold, in the other database the table did not have the same number of columns.  These tables are all pretty wide, at 100+ columns each.  The designer thought that every single column from each of these tables was needed and decided to take a shortcut.  Had the columns been explicitly named, I would have gotten a more useful error message – in this case, an “Invalid column name” error that names the missing column.
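A hypothetical reconstruction of the sort of view involved (every name here is invented; the real view is not reproduced):

```tsql
CREATE VIEW dbo.vwAllWidgets
AS
SELECT * FROM dbo.WidgetsCurrent
UNION ALL
SELECT * FROM dbo.WidgetsArchive
UNION ALL
SELECT * FROM OtherDb.dbo.WidgetsCurrent;  -- same table name, different database.
-- The column counts matched when the view was created; once the remote table's
-- schema drifted, every SELECT * query here broke at execution time.
```

Because a SELECT * view does not refresh its column metadata when an underlying table changes, the breakage shows up far away from the view itself – in my case, inside the stored procedures that referenced it.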

That little change in message could save time when troubleshooting a problem such as what I have described.  Granted, it does not tell me which database or table but at least I know which column is missing.  This is one little reason why I like to explicitly name the columns in my Select statements.

Another reason why I like to use the column names instead of a ” * ” is that I have seen queries improve dramatically in performance when compared.  In this little case, even though there are 100+ columns from each table to list out and all of the data would be returned either way, I see a 700 ms improvement in return time on just the “top 1000” records.  When I bump that up to the full result set required by the view, I tire of waiting for the results from either query.  Both are terrifically slow (each takes longer than 15 minutes to return the full result set).

But But But…

So why do people use SELECT * in their code?  It is a lot easier to type than having to type all of the column names.  Because it is fewer keystrokes it would also save time when writing code.  Many times, we may also use it when trying to get a glimpse of the system really quick.  It is also oftentimes used when doing a demo.

Should it be used?  There are cases for using that kind of code, such as in an “if exists” block.  There are cases where the performance impact is minimal or just as good as listing the columns, so it is fine there.  I use it when I am trying to do a quick and dirty statement while troubleshooting.  I don't like to put it into code to be consumed by applications or reused.  I like to know exactly what columns are being returned by my queries, and I don't feel it necessary to return more data than necessary (and consequently consume more resources).
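The “if exists” case mentioned above is one place SELECT * is genuinely harmless, because the engine only tests for the existence of a row and never materializes the columns:

```tsql
-- SELECT * is fine here; no column data is actually returned
IF EXISTS (SELECT * FROM dbo.Orders WHERE CustomerID = 42)
    PRINT 'Customer has orders';
```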

What are some cases you have found a select * to be better than explicitly naming the columns?

S3OLV August Recap

Tags: ,
Comments: 2 Comments
Published on: August 16, 2010

The SQL Server Society of Las Vegas meeting has come and gone once again.  This meeting is typically held on the second Thursday of each month.  Once the meeting got under way, it was a very good one and people were highly impressed.  Getting to the point where a good impression could be made was a different story.

This month was another one of those months where it seemed everything was doomed to go wrong.  Fortunately we were able to mitigate most of the problems.  The first problem was that the building resource we use got things fouled up and somehow did not get us on the schedule.  Fortunately, we had somebody there an hour ahead of schedule who was able to secure a room anyway.  The next problem came down to the overhead equipment: the projector had a burned-out bulb.  If the presentation had been an in-person presentation, this would have been a significant issue.  Fortunately we had Grant Fritchey presenting from the east coast via LiveMeeting.  That worked well because we could just use a couple of computers, dial up the LiveMeeting, and still get the presentation.  I had speakers (I had planned on needing them anyway so I could stream the meeting on the overhead through my laptop).  Then we had microphone problems.  I have no real answer for how we got the mic on our end working, except that after three tries through the LiveMeeting audio setup, it finally accepted the microphone and we were able to start interacting.

Grant gave a presentation on reading execution plans.  It was a good presentation and the participants enjoyed it.  I know from feedback of those in the building that they learned a lot and were impressed.  I recorded the LiveMeeting and am trying to find a location where the file can be placed.  It is too large for my site, and it is too large for the SSSOLV site to upload.  If anybody has some ideas on that, please let me know.

TSQL Tuesday 009 Roundup

Comments: 2 Comments
Published on: August 12, 2010

T-SQL Tuesday #009: Beach Time

I hope you enjoyed your time in the sun with your toes in the sand.  We are wrapping up this vacation with a little recap on what we learned.  There were some similarities and some differences amongst our vacationers.

I enjoyed reading the comments made this month and the methods employed by each person that came along for the party.

Without further ado, in order of submission, here is what people had to say.

Pinal Dave (Blog | Twitter | TSQLTuesday9)

Pinal teaches us that we need to stop thinking about the job when on vacation and that we don’t need to be a superhero.  He also tells us that we should keep the phone on, but use restraint.

Rob Farley (Blog | Twitter | TSQLTuesday9)

Rob states that “Just because you’re necessary doesn’t mean you’re important.”  He has a very good point with that.  Nobody should hoard all of the information about his/her job, project, or process.  Learn to share your knowledge and you will learn more in the process!!

Robert Davis (Blog | Twitter | TSQLTuesday9)

SQLSoldier has let it slip that he is working on some top secret project away from work.  What I like about this is that he is getting beach time by doing this other project.  It is not necessarily vacation, but it is a change, and there is a change in project focus for the week-long stints during which he will be doing this.

Benjamin Nevarez (Blog | Twitter | TSQLTuesday9)

Ben just returned from vacation in time to participate this month.  Ben takes us on vacation with him by sharing some of his pictures.  I think a good way to get ready for vacation is to busy one’s self looking at vacation spots. 🙂

Jeremy Carter (Blog | Twitter | TSQLTuesday9)

Jeremy brings up some solid points.  One of the best points is the change management piece.  Jeremy also touches on the hero mentality a bit.  I really like his closing sentence: “The real hero of the day will be missed because they are a true asset, not because the world stopped in their absence.”

Bob Pusateri (Blog | Twitter | TSQLTuesday9)

Bob teaches us in his entry this month that he spends a lot of time in a science museum.  When he is out at the museum he will delegate to a team member to fill in for certain projects or aspects of work.  Bob keeps it pretty simple – Documentation and Delegation are his keys to a successful vacation.

Steve Jones (Blog | Twitter | TSQLTuesday9)

Steve, with his heavy packing and all, makes sure he communicates a lot before he leaves the office.  It is fair to be available for a true emergency.  The person to whom you delegated your responsibilities should be the one to escalate to you.  Trust them to understand what a true emergency is (and hopefully you won’t have to train them further on the topic when you return).

Andy Lohn (Blog | Twitter | TSQLTuesday9)

There are a few things that are new in this blog from SQLQuill.  One, I like his avatar – pretty cool.  His blog theme looks good – the theme works well for his site.  Anyway, back to the topic.  When Andy leaves for vacation he sends a meeting request to the team.  No reminder is set, just a meeting request that shows the time as available.  Then he blocks it out on his calendar as out of the office with an out of office rule set.  Then he goes through his lists and checks off everything that needs to be done (knowledge transfer, tasks, etc).

Stef Bauer (Blog | Twitter | TSQLTuesday9)

Stef was also on vacation as the announcement went out for this month's meme.  Stef works in a small shop and is available to the manager should an emergency arise.  Stef recommends technology silence in order to enjoy vacation – thus Twitter goes bye-bye.  Stef also double-checks processes prior to leaving to make sure everything is in tip-top shape.

Josh Feierman (Blog | Twitter | TSQLTuesday9)

This is the first ever TSQLTuesday post by Josh.  Welcome to the party, Josh!  Josh believes in project visibility (that's a good thing).  With everything up to date in SharePoint, he feels a lot more comfortable taking off for vacation.  He also makes certain that he is unreachable via email; he will turn off email synchronization to the BlackBerry when on vacation.  I am strongly considering doing the same.

John Racer (Blog | Twitter | TSQLTuesday9)

John believes that systems that require constant hand-holding by the DBA are incorrectly designed.  There are tools available for making your life easier – use them.  Systems should be built with ease of maintenance in mind.  This is a valuable asset.  Just because it took you 5000 lines of TSQL to accomplish what somebody else could do in 10 lines does not make your code any better or more valuable to the company.

My Entry (Twitter | TSQLTuesday9)

I talk about some recent experiences around trying to get away from work for an extended weekend.  I had a funeral to attend that interfered with deadlines for work that I needed to meet.  I also talk about Beach Time as a state of mind rather than a destination or even being vacation.

That’s a Wrap

I hope you enjoyed this vacation filled week.  Let us all wrap things up with a feast in honor of a good vacation and a toast to many great accomplishments.

I hope to see you all next month for another round of TSQL Tuesday!

Defensive Db Programming Chapter 04

Comments: No Comments
Published on: August 11, 2010

This week we will be checking out the fourth chapter of this book by Alex.  This is a continuing effort that started with my blog post on the first chapter.  In this chapter, Alex delves into a couple of features, new with SQL 2005, that could affect your queries and alter the expected results.

The two features being described are:

  1. Snapshot Isolation
  2. Merge

In SQL 2005 we were introduced to the snapshot isolation features.  The default isolation level prior to SQL 2005 was READ COMMITTED.  One of the primary focus points with SNAPSHOT isolation discussed in this chapter is the use of triggers to enforce integrity and business rules.  Again, Alex goes to the extent of showing how this could be a problem, how to reproduce the problem, and what the expected results should have been, and he offers up a method to overcome the potential pitfalls.
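For reference, snapshot isolation is off by default and has to be enabled per database before a session can use it.  A quick sketch (`MyDb` is a placeholder name):

```tsql
-- Enable the optimistic isolation levels introduced in SQL 2005
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Optionally make READ COMMITTED use row versioning as well
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

-- In a session, opt in explicitly:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
-- Reads here see a consistent snapshot.  A trigger that re-reads the table
-- to enforce a rule may therefore not see concurrent committed changes,
-- which is the class of problem Alex demonstrates.
COMMIT;
```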

I will throw in one bit of warning here, though.  Read carefully through the triggers and the verbiage that Alex has used.  There is a troubleshooting trigger which Alex uses to demonstrate a point.  As a preface to the trigger, he states there is an embedded query in the trigger and that it should never be rolled to production.  If one reads this quickly, one might misread that and take it to say that a trigger should never be used in production (that is a myth out there in DBA land already as it is).  I think an inline comment in the trigger, explicitly pointing out the portion that should never be used in production, would be worth the effort and the page space.

As we read about the MERGE feature, we are reintroduced to @@ROWCOUNT and an issue that can arise in a trigger, where a MERGE can cause unwanted behavior.  I like this section of the chapter.  Alex does a good job of demonstrating this feature and has intrigued me in regards to MERGE.
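As I understand the wrinkle: a trigger fired by a MERGE sees @@ROWCOUNT for the whole statement, not just for the action that fired it, so a common defensive pattern can misfire.  A sketch (names made up; test against the book's examples):

```tsql
CREATE TABLE dbo.Target (ID INT PRIMARY KEY, Val INT);
GO

CREATE TRIGGER trg_Target_Upd ON dbo.Target
AFTER UPDATE
AS
BEGIN
    -- Common defensive pattern: bail out when no rows changed.
    -- Under MERGE, @@ROWCOUNT here reflects ALL rows touched by the
    -- statement (inserts + updates + deletes), so this check can lie.
    IF @@ROWCOUNT = 0 RETURN;
    PRINT 'Update trigger fired';
END;
GO

-- Safer inside a trigger: inspect the virtual tables instead
-- IF NOT EXISTS (SELECT * FROM inserted) RETURN;
```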

We see some good stuff once again in this chapter on Defensive Programming.  I recommend again that you read it.

R & R

It is once again time for the blog party known as TSQL Tuesday.  I am hosting this month and wanted to also participate.  The theme is “Beach Time” and this is TSQL Tuesday #9.  Why did I choose this theme?  There are a few reasons:

  1. There are numerous people in the community getting ready to change jobs, or who have recently done so.
  2. I was on the tail end of a mini-vacation / long weekend when I got the invite to host.
  3. To be able to take a day off for that long weekend, I wanted to make sure the loose ends were wrapped up so I would not need to work over the weekend.
  4. The song lyrics I mentioned in the theme announcement were stuck in my head and reminded me of vacation.

For this post, I want to cover what Beach Time means for me and some of the things I did recently in order to get a little R & R.

Beach Time

Beach Time means having peace of mind, for me.  I can’t go on vacation if there is doubt or concern about any of my projects (work, home, financial).  So, in order for me to truly have a vacation – Beach Time comes first.

I find it completely useless to go on vacation if I am going to be checking email or project statuses every 10 minutes.  There is no rest or relaxation in doing those things while I am supposed to be doing something else.  Vacation should be fun and enjoyable.  Thus, if I am to enjoy vacation, I need to do a few extra things in the office prior to leaving.

Going the Distance

When Adam (Blog | Twitter) asked me to host, I was on my little mini-vacation.  In order to get to that vacation, I had to prep myself and work for the one day absence.  I had to make sure I had built up a few things.

  1. Confidence
  2. Skills
  3. Processes
  4. Automation

Let me explain what I mean by each of these items.  I am sure it seems a bit odd to hear that confidence and skills must be built right before one takes a vacation.

I needed to build confidence in processes and systems that were recently deployed.  This required more testing and monitoring of the processes to ensure all would be well.  I worked on transferring knowledge of specific items to team members.  I wanted to make sure they knew what they were doing.  I also wanted them to know that they knew what they were doing and that they could do it.  This goes hand in hand with number one, all while building skills.  With the massive changes we were wrapping up (a complete network migration), some new processes needed to be developed and implemented.  We are using new technology and software, and the old methods just didn't work anymore or were unreliable in the first place.

In addition to these items, there were the work tasks that needed to be completed.  Since we were trying to finish the project by the weekend that I needed to be out of the office, I needed to have my tasks done that much sooner.  This meant a willingness to work longer days (ok, continue working longer days) and to try to work a little faster without making any mistakes.  This also required some increased communication and visibility with regard to the tasks and effort.

I also had a feeling that something might come up on the day I was scheduled to be out of the office.  I suspected that there might be a user request for a data dump.  In preparation, I built an SSIS package that would handle the most common requests from this particular user.  The package would be able to handle various request types from this user based on common requirements I had seen in the past.  A user would only need to change a few input variables and run the package.  The result was a simplified version of a process I had been using for that particular user, and it was something I could hand off to a team-mate.  This automation meant less time needed to be spent teaching the team-mate what might be needed; they could just execute the package.  This was something I had wanted to get done at some point anyway, and this was a very good time to introduce it into the mix.  The overall time savings of this implementation could be a couple of hours a week.  The overall savings in thought process, memory tracing, note tracing, and stress was substantial.  It was well worth the day or so that I put into creating it.

“Toes in the Water…”

Now that I have been able to accomplish some very good stuff by putting in the extra effort, I can relax a bit.  By planning ahead, communicating, working extra, and training – I can have that Beach Time that I wanted.  I was able to have the peace of mind needed to really take a vacation.  Taking a break from work sometimes requires going the extra mile just before the break happens, but it is well worth it if you don’t have to think about work at all while on vacation.
