Why All The Fuzz?

I really must thank Steve for his editorial on FizzBuzz.  It seemed like a good topic for some testing and comparison.  Then the comments started rolling in, providing more information and ideas to use for this article.  As trivial as it may seem, the topic brings up some very pertinent information for those of us in the Data Bizz.  The primary objective of this article is to compare and contrast methods and, hopefully, provide evidence for using one method over another.  The underlying objective is performance tuning your code.

A FizzBuzz is a really trivial topic to dispute.  The objective is to weed out those who can from those who can’t at the most basic level.  However, with the FizzBuzz, the interviewer has the opportunity to gain more insight into the prospective employee.  Before delving into the technical aspects, ponder the non-technical aspects first.  This sort of test helps determine whether the candidate can quickly assess requirements, is willing to gather more information, has organizational skills (though it tests these only minimally), shows mettle under pressure, and (at a very high level) reveals character.  Keep in mind that all of these things are only pieces of the puzzle.  There still remain the technical skills and the “team fit”.

This FizzBuzz requires that one write some code to generate a list of numbers.  In the list, multiples of 3 are replaced by Fizz, multiples of 5 are replaced by Buzz, and multiples of both 3 and 5 are replaced by FizzBuzz.  Those are the base requirements.  The unstated requirements are the ones that top-tier candidates will accommodate instinctively: performance, scalability, maintainability, and a set-based approach (since this is T-SQL, and SQL is optimized for set-based coding).  Sometimes this test is implemented with requirements stating to print out the results.  For my purposes, I will take that to simply mean display the results.  The methods would be distinctly different.

I will explore a few different methods that may be used to achieve the results.  Some answers are better than others.  The first is a nice long example by Gus (posted in the comments on the SSC editorial).

[codesyntax lang="tsql"]


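The original listing did not survive the move to this page, so here is a minimal sketch of the approach, reconstructed from the commentary below.  The names (@FizzBuzz, cFizz, and the Filler column) are my own, not Gus’s:

```tsql
-- A deliberately awkward rendition: a table variable (with an unnecessary
-- extra attribute), a WHILE loop to fill it, and a cursor to test each value.
DECLARE @FizzBuzz TABLE (N INT, Filler CHAR(1));  -- Filler is the needless attribute
DECLARE @i INT;
SET @i = 1;

WHILE @i <= 100  -- hard-coded record count
BEGIN
    INSERT INTO @FizzBuzz (N) VALUES (@i);
    SET @i = @i + 1;
END;

DECLARE @N INT;
DECLARE cFizz CURSOR FOR SELECT N FROM @FizzBuzz;
OPEN cFizz;
FETCH NEXT FROM cFizz INTO @N;
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @N % 15 = 0     PRINT 'FizzBuzz';
    ELSE IF @N % 3 = 0 PRINT 'Fizz';
    ELSE IF @N % 5 = 0 PRINT 'Buzz';
    ELSE               PRINT CAST(@N AS VARCHAR(10));
    FETCH NEXT FROM cFizz INTO @N;
END;
CLOSE cFizz;
DEALLOCATE cFizz;
```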
This was created in jest.  However, there are some important things to note in the code.  First, the use of a table variable is not necessary.  That table variable also carries an additional attribute that is unnecessary, even if one decided to use a temp table or table variable.  That may be a minor thing, but it can be considered sloppy coding, inattention to detail, and poor use of resources.  Second is the use of a while loop to populate that table.  Third is the use of a cursor to loop through the numbers in the table to perform the comparison for the FizzBuzz test.  The script was also hard-coded with the record counts to use.  To make it scalable and maintainable, one should parameterize those values.  These criticisms are nothing new to Gus.  It is important to note that he coded the query in this fashion intentionally, and he pointed out some of the criticisms himself.

Will this code work?  Yes, it will.  It produces the following IO stats and a looping execution plan (one plan for each iteration).

For 100 records, this takes roughly five seconds to run on my server.  100 records is a very small test, and thus larger tests are needed to gauge the performance and scalability factors.  Scaling up to one million records, this query was still running after six hours.

Next up is the following query:

[codesyntax lang="tsql"]


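This code block is also missing from the page; a sketch of the looping PRINT version described below might look like the following (variable names are my own):

```tsql
-- Loop through the numbers, assign the value to be printed, then print it.
DECLARE @i INT, @out VARCHAR(20);  -- a larger set would outgrow an INT counter
SET @i = 1;
WHILE @i <= 100  -- hard-coded; a parameter would make this maintainable
BEGIN
    SET @out = CAST(@i AS VARCHAR(20));
    IF @i % 3 = 0  SET @out = 'Fizz';
    IF @i % 5 = 0  SET @out = 'Buzz';
    IF @i % 15 = 0 SET @out = 'FizzBuzz';  -- at 15, this overwrites values just assigned
    PRINT @out;
    SET @i = @i + 1;
END;
```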
This version loops through the numbers, assigns the value to be printed, and then prints it.  See what happens when the counter hits 15?  We end up assigning a value and then overwriting that value.  This version also just prints the result rather than displaying the query results.  For 100 records, the execution is not bad.  There are no IO stats and no execution plan.  However, running this for one million records takes forty-five seconds.  If I needed a larger result set, I would also need to be careful of the INT variable.  This version would also need a change in order to use a variable for the number of records to build.

Next is a pretty nice looking CTE version.  There are many renditions of this one.

[codesyntax lang="tsql"]


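The listing itself is gone from the page, but a typical rendition of this recursive CTE approach, sketched from the description below, runs along these lines:

```tsql
-- Recursive CTE: still procedural under the covers, one row per recursion.
WITH nums (N) AS
(
    SELECT 1
    UNION ALL
    SELECT N + 1 FROM nums WHERE N < 100  -- the limiting WHERE clause
)
SELECT CASE WHEN N % 15 = 0 THEN 'FizzBuzz'
            WHEN N % 3  = 0 THEN 'Fizz'
            WHEN N % 5  = 0 THEN 'Buzz'
            ELSE CAST(N AS VARCHAR(10))
       END AS FizzBuzz
FROM nums
OPTION (MAXRECURSION 100);  -- redundant here; specify 0 for the one-million test
```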
This query performs far better than the previous queries.  Though we use a CTE here, we are still using procedural programming: this is a recursive CTE.  We are also lacking in the maintainability and scalability of this code.  Of lesser magnitude is the use of OPTION (MAXRECURSION 100) — just a little trick that should be noted.  The recursive definition has a limiting where clause, making the MAXRECURSION hint unnecessary for so few records.  Since this performs so well with 100 records, let’s proceed to the one million test.  Here is where we run into another trick that must be used: MAXRECURSION needs to be specified as 0, which removes the recursion cap.  Doing this will permit the query to attain one million.  The query now takes 15 seconds to complete.  Here are the IO stats and execution plan.

Now that I have shared some procedural based methods to accomplish this “simple” task, let’s explore some of the potential set-based methods.  The first uses a windowing function to create our number set.

[codesyntax lang="tsql"]


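With the original listing missing, here is a sketch of a windowing-function version consistent with the description below (the derived-table alias and ordering expression are my own choices):

```tsql
-- Set-based: ROW_NUMBER() over a cross join of sys.columns builds the number set.
SELECT CASE WHEN N % 15 = 0 THEN 'FizzBuzz'  -- modulo 15 covers multiples of both 3 and 5
            WHEN N % 3  = 0 THEN 'Fizz'
            WHEN N % 5  = 0 THEN 'Buzz'
            ELSE CAST(N AS VARCHAR(10))
       END AS FizzBuzz
FROM (SELECT TOP (100)
             ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N
      FROM sys.columns a
      CROSS JOIN sys.columns b) x;
```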
The first thing to note is the reduction in code here.  The solution is very simple and runs very rapidly.  By using a cross join against sys.columns, we are able to create a result set.  This cross join has limitations: not all sys.columns tables are created equally, and thus we would have to add another cross join in order to test one million records.  Also note that the modulo in the FizzBuzz test was changed to use a modulo 15, as documented in the short note at the end of the line.  The alterations to the query (to test one million records) are as follows.

[codesyntax lang="tsql"]


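The altered listing is also missing; under the same assumptions as the previous sketch, the one-million-record version would differ only in the TOP value and the extra cross join:

```tsql
-- Same shape as before; a third cross join guarantees enough rows for TOP (1000000).
SELECT CASE WHEN N % 15 = 0 THEN 'FizzBuzz'
            WHEN N % 3  = 0 THEN 'Fizz'
            WHEN N % 5  = 0 THEN 'Buzz'
            ELSE CAST(N AS VARCHAR(10))
       END AS FizzBuzz
FROM (SELECT TOP (1000000)
             ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N
      FROM sys.columns a
      CROSS JOIN sys.columns b
      CROSS JOIN sys.columns c) x;
```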
This query returns in ~14 seconds.  It is a set-based solution, though not quite yet in that upper echelon.  Some modifications need to be made to make it scalable and maintainable.  Also note that I am still employing sys.columns.  There are better sources for the row set.  One is master.sys.all_columns.  Another is a Numbers table.  Before going into the use of the Numbers table, here are the IO stats and execution plan.

Now for the Numbers table.  I will jump straight to the one-million-record test.  The query is not substantially different from the previous query.

[codesyntax lang="tsql"]


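Once more the listing is lost; a sketch of the Numbers-table version, assuming a pre-built dbo.Numbers table of sequential integers with a column named N (the table and column names are my assumptions), would be:

```tsql
-- Same FizzBuzz test, but the number set comes from a permanent Numbers table.
SELECT CASE WHEN N % 15 = 0 THEN 'FizzBuzz'
            WHEN N % 3  = 0 THEN 'Fizz'
            WHEN N % 5  = 0 THEN 'Buzz'
            ELSE CAST(N AS VARCHAR(10))
       END AS FizzBuzz
FROM dbo.Numbers          -- assumed pre-built table of sequential integers
WHERE N <= 1000000;       -- still hard-coded; a parameter would improve maintainability
```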
The runtime for this query is about the same as the previous query.  The IO stats and execution plan tell a slightly different story.  Still, note that this query is not quite the scalable or maintainable query I would like to see.

As you can see, Logical reads increased, and the execution plan is slightly better.  The cost of this new query is lower and makes it a little more desirable – despite the higher logical reads.

We are now at a point where we have progressed to where we can start playing a bit.  The idea now is to fine tune and tweak what I have to see if it can be made better.  The remainder of the queries and testing will be in the next article.  There is still plenty to cover. 🙂

4 thoughts on “Why All The Fuzz?”

  1. I’m pretty sure what is coming and I hope you also use a predefined ********. Don’t want to spoil the surprise.

  2. I notice you say “Doing this will permit the query to attain one million.” While this is great info, it is not the same as stating that it is limited only by the resources of the machine, if that is true. I tried 1.2M and it worked fine.

    Thanks for the article!

  3. Thanks Reid. Valid point. The wording could be more accurate there. I only meant it in the realm of performing our 1 million row test. Limitations of the machine will have an impact, and I also did not try to test it beyond 1 million. Why? No specific reason other than 1 million is where I typically test up to.

  4. Pingback: SQL Server Central
