Can you partition a temporary table?

Reading that title, you might sit and wonder why you would ever want to partition a temporary table.  I too would wonder the same thing.  That notwithstanding, it is an interesting question that I wanted to investigate.

The investigation started with a fairly innocuous venture into showing some features that do apply to temp tables but are commonly mistaken for limitations (i.e., assumed not to work with temp tables).  To show this, I set off to create a script with reproducible results that demonstrates these features.  I have included all of those demonstrations in the same script that answers the partitioning question.

In fact, let's just jump to that script now.

In the beginning (after dropping objects if they exist), I start by creating a temp table that attacks a couple of mythical limitations.  These mythical creatures are the claims that temp tables can't have indexes and that they can't have constraints.

In this script, I show that a temp table (#hubbabubba) can indeed have indexes created on it (clustered and nonclustered).  I also demonstrate the creation of two different kinds of constraints on the #hubbabubba table.  The two constraints are a primary key and a default constraint.  That stuff was easy!!
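The full script isn't reproduced inline here, but a minimal sketch of that first piece might look like the following.  The #hubbabubba name comes from the script; the column and constraint names are purely illustrative:

```sql
IF OBJECT_ID('tempdb..#hubbabubba') IS NOT NULL
    DROP TABLE #hubbabubba;

-- A temp table with a primary key (backed by a clustered index)
-- and a default constraint - two "impossible" features in one table.
CREATE TABLE #hubbabubba (
    SomeID    INT IDENTITY(1,1) NOT NULL,
    SomeDate  DATETIME NOT NULL
        CONSTRAINT DF_hubbabubba_SomeDate DEFAULT (GETDATE()),
    SomeValue VARCHAR(50) NULL,
    CONSTRAINT PK_hubbabubba PRIMARY KEY CLUSTERED (SomeID)
);

-- A nonclustered index works just as well.
CREATE NONCLUSTERED INDEX IX_hubbabubba_SomeDate
    ON #hubbabubba (SomeDate);
```

One caveat worth knowing: explicitly named constraints on temp tables must be unique within tempdb, so two sessions running a script like this concurrently would collide on the constraint names.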

To figure out whether one could partition a temporary table, I needed to do more than simply create a “test” temp table.  I had to create a partitioning function and a partitioning scheme, and then tie that partition scheme to a clustered index created after the table itself.  Really, these are the same steps as creating partitioning on a standard (non-temporary) table.

With the partition function, scheme, and table created, it was time to populate the table with enough random data to resemble a fair distribution.  You see, I created a partition function with a boundary for each month of the year 2014.  To see partitioning in action, I wanted to see data in each of the partitions.
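As a rough sketch of those steps (object and column names are my own invention, not necessarily those in the script), the setup could look like this.  Note that because temp tables live in tempdb, the partition function and scheme have to be created there:

```sql
USE tempdb;
GO

-- 12 boundary points (RANGE RIGHT) = 13 partitions: one per month of 2014,
-- plus an empty leading partition for anything before 2014.
CREATE PARTITION FUNCTION pf_Monthly2014 (DATETIME)
AS RANGE RIGHT FOR VALUES
    ('20140101','20140201','20140301','20140401','20140501','20140601',
     '20140701','20140801','20140901','20141001','20141101','20141201');

CREATE PARTITION SCHEME ps_Monthly2014
AS PARTITION pf_Monthly2014 ALL TO ([PRIMARY]);
GO

CREATE TABLE #parttest (
    SomeID   INT IDENTITY(1,1) NOT NULL,
    SomeDate DATETIME NOT NULL
);

-- Tie the clustered index to the partition scheme.
CREATE CLUSTERED INDEX CIX_parttest_SomeDate
    ON #parttest (SomeDate)
    ON ps_Monthly2014 (SomeDate);

-- Sprinkle random 2014 dates across the partitions.
INSERT INTO #parttest (SomeDate)
SELECT TOP (1000)
       DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 365, '20140101')
FROM sys.all_columns;
```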

That brings us to the final piece of the whole script.  Kendra Little produced a script for viewing distribution of data across the partitions so I used her script to demonstrate our data distribution.  If you run the entire script including the data distribution segment at the end, you will see that there are 13 partitions with each of the monthly partitions containing data.
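Her script is worth grabbing from the source; if you just want a quick row count per partition for a partitioned temp table (called #parttest here purely for illustration), a simple query against the catalog views in the same session does the trick:

```sql
SELECT p.partition_number,
       p.rows
FROM   tempdb.sys.partitions AS p
WHERE  p.object_id = OBJECT_ID('tempdb..#parttest')
  AND  p.index_id = 1          -- the clustered index
ORDER BY p.partition_number;
```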

The distribution of data into the different partitions demonstrates soundly that partitioning can not only be created on a temporary table but also put to use.  As for the secondary question of the day, “Why would you do that?” – I still do not know.  The only reason that pops into my mind is purely for demonstration purposes.  I can’t think of a production scenario where partitioning temporary data would be a benefit.  If you know of a use case, please let me know.

Indexing Just Got Easy

Published on: April 1, 2011

As database professionals, one of the things we should be familiar with is the use of indexes.  In SQL Server, an index helps to improve query performance – when properly implemented.  There are some penalties in play when an index is created.  These penalties are typically associated with insert, update, and delete operations.  Other penalties might revolve around a poorly defined index that causes more work to be done for a given query.  As database professionals, it is in our best interest to strike a harmony between benefit and penalty through analysis and tuning of these indexes.  We want the benefits to offset the penalties for the index to be of worth to the query and the system.

In SQL Server we have been given an array of tools over time to help us with this effort.  Microsoft gave us the Index Tuning Wizard in SQL 2000.  Then we got a decent upgrade with the Database Engine Tuning Advisor in SQL 2005.  Microsoft has been attempting to continue making a better, more agile product for the database professional so s/he can spend time on other equally important duties.  It appears that we now have a new tool to help us do just that.

First, let’s recap the predecessors.

Index Tuning Wizard

This tool was born in SQL 2000.  The intent was to give the Database Professional a view into the big picture, rather than looking at one index for one query in isolation from other tables, views, and queries.  When used properly, this tool could give the professional a decent idea of the indexes needed throughout the system.  A common faux pas was blanket acceptance of the proposed indexes – good or bad.  Yes, this tool did recommend many bad indexes.  You can learn more about the tool if you would like, from here.

Database Engine Tuning Advisor

This tool was introduced by Microsoft with SQL 2005.  DTA made significant advancements over its predecessor.  The main purpose is much the same as the Index Tuning Wizard’s.  DTA did produce more reliable results (IMHO) and seemed more efficient at doing it.  Unfortunately, there was still the possibility of getting some bad recommendations.  And yes, the bigger issue remained: blanket acceptance of all indexes.  You can learn more about this tool here.  This tool is available in SQL 2005, 2008, and 2008 R2.

GAIHA (pronounced like GAIA / GAYA)

GAIHA is Genetic Algorithm Index Heuristic Automation.  This name was chosen for multiple reasons, one of which is the pronunciation similarity with Gaya.  For some, Gaya connotes “enlightened.”  The hope for this product is to really help the database reach that next level and appear to be enlightened because it runs that much more seamlessly and effortlessly.

Rumor has it that GAIHA is on the docket to be released with a future version of SQL Server.  This tool has the same prime directive as its predecessors.  As the name implies, though – it’s like DTA on steroids.  Furthermore, it is entirely automated.  Just like you can enable auto-update of statistics, you will be able to enable GAIHA.  Much the same as new statistics get created based on data usage and volume, indexes will auto-create based on query usage.

The automation of GAIHA will use advanced algorithms and heuristics.  Think of the heuristics as recording usage patterns so the system learns, over and over again, how it is used.  That then triggers an algorithm to run hundreds of generations of possible indexes and pick the one that fits best.  Your query that ran in ten seconds today will run in ten milliseconds tomorrow.  What’s more, all you did was use the system.

Is there the potential for bad indexes?  Yes.  Would you still be able to create your own indexes?  Yes.  If enabled, would this be entirely maintenance free?  No.  I don’t imagine it to be entirely maintenance free.  Take auto-update of statistics, for example.  Though you may have it enabled, you still have to do manual statistics updates on occasion.  I envision the same kind of behavior in the future.


There you have it.  The tools keep getting better and better – especially when the next-generation tool is entirely fictional.  There is no such tool; it was completely derived from imagination.  Would it be nice to have?  Sure.  I envision the DTA being many magnitudes better if it were recreated to be similar to a Quest product called TOAD.  I liked the feature in TOAD from many moons ago that allowed me to input a query and see it refactored 100 different ways with varying performance gains.  That would be extremely useful for many Database Professionals.

If you would like to see a genetic algorithm at work in SQL Server, check out this blog by Gail Shaw.

Happy April Fools.

A Haunting TSQL Tuesday Tale

Published on: October 11, 2010

Time for the ghouls and goblins to come out of the woodwork once again for another tale of deception and tyranny.  OK, maybe not tyranny, but a haunting tale is definitely to be told this month while trick-or-treaters plan their routes and mend their costumes.  We have SQueeLing experts suiting up for the adult party known as TSQL Tuesday.

This month the vampires, ghosts, and zombies are all breaking it down at Sankar Reddy’s place (Blog | Twitter).  The ghoulish stories to be shared are to be myths from the underSQLworld.

Truncating a Table is Unrecoverable

Have you heard this one in the past?  If you truncate a table, you cannot undo the action like you can with a delete.  What is the difference between the two methods to remove data from a table?

When you truncate a table, the pages are deallocated with minimal locking of the resources.  When you use the delete statement, the data is deleted row by row with more locks.  Everybody knows that a delete can be rolled back.  Many people believe that a truncate cannot.

Well, at this point we really need a table to test.  So let’s create a table and perform a few simple tests.

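Any small table will do for this; something along these lines (the table and column names are purely illustrative) is enough:

```sql
IF OBJECT_ID('dbo.RollbackTest') IS NOT NULL
    DROP TABLE dbo.RollbackTest;

-- A small throwaway table for the delete/truncate comparison.
CREATE TABLE dbo.RollbackTest (
    TestID    INT IDENTITY(1,1) NOT NULL,
    TestValue VARCHAR(20) NOT NULL
);

-- Seed a handful of rows so we have something to remove.
INSERT INTO dbo.RollbackTest (TestValue)
VALUES ('alpha'), ('beta'), ('gamma'), ('delta');
```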


With the test table ready to go, we will begin with the baseline demonstration on the Delete command.  Here is the script to demonstrate that a Delete can be rolled back (but we already knew that).

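Assuming a populated test table such as dbo.RollbackTest (a name I'm using for illustration), the pattern is simply a delete inside an explicit transaction:

```sql
SELECT COUNT(*) AS RowsBefore FROM dbo.RollbackTest;

BEGIN TRANSACTION;
    DELETE FROM dbo.RollbackTest;
    SELECT COUNT(*) AS RowsDuring FROM dbo.RollbackTest;  -- 0: the data is gone
ROLLBACK TRANSACTION;

SELECT COUNT(*) AS RowsAfter FROM dbo.RollbackTest;  -- back to the original count
```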


Pretty simple and straightforward – the baseline is proven with that script.  The premise that a delete can be rolled back has been established.  Now, how about that truncate statement?  For the truncate statement, I will use the same simple script, substituting the appropriate commands.

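Same pattern, with TRUNCATE TABLE swapped in for the delete (again using the illustrative dbo.RollbackTest table):

```sql
SELECT COUNT(*) AS RowsBefore FROM dbo.RollbackTest;

BEGIN TRANSACTION;
    TRUNCATE TABLE dbo.RollbackTest;
    SELECT COUNT(*) AS RowsDuring FROM dbo.RollbackTest;  -- 0: pages deallocated
ROLLBACK TRANSACTION;

SELECT COUNT(*) AS RowsAfter FROM dbo.RollbackTest;  -- the truncate rolled back
```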


When this script is executed, we see that the data is indeed recoverable: the row count after the rollback matches the count from before the truncate.


I think this is one of those myths frequently floated throughout the SQL world due to another myth related to the Truncate statement.  It is often explained that you can’t recover a truncate because the truncate is not logged.  That is also obviously wrong (if it weren’t logged, how could you recover the table without a restore?).  The myth about Truncate being non-logged has already been debunked (as has this one – in a different series) by Paul Randal in his Myth-A-Day series.  The key is that the Truncate is logged and that you wrap the command in a transaction – committing only when you are ready.
