T-SQL Tuesday #016: Aggregates and Statistics

Published on: March 8, 2011

Data Size Collection and Analysis

By now you should be well acquainted with this phenomenon we call T-SQL Tuesday.  The party this month is being hosted by Jes Schultz Borland (Twitter | Blog).  She has challenged us to write something about aggregations and aggregate functions in SQL Server.

I stewed on this topic for a while, trying to figure out something that would be relevant yet a little unique.  I think that is the real challenge: finding an application of the topic that is somewhat unique, or at least informative for somebody.

It dawned on me finally that I already had a topic in queue waiting to be written.  It was supposed to be a follow-up to my entry for last month.  (You can read that entry here.)  I concluded that entry with an admission that I had hurried through the article to get it done in time.  Well, I was hoping to find the time to write up the rest of my process, and now it fits quite well with this month's theme.


Quickly, let's recap what I did in that post; then I will proceed from there to tie this month to last month (as far as T-SQL Tuesday goes).

In last month's entry, I shared a script (an ugly one) that I created to cycle through all of the tables and columns of a database to gather relevant data concerning the size of the data in my tables.  Well, actually I took a 2% sample of that information so I could run further statistical analysis.  The end goal was to use relevant data from an existing system and the R&D databases to create appropriate data sizes in the new database prior to releasing it to production.  With all of that data aggregated into a staging table, I was ready to begin the next phase.
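Since the actual script lives in that post, here is only a much-simplified sketch of the general idea; it is not the original script, and every object name in it (including the dbo.DataSizeSample staging table) is illustrative:

[codesyntax lang="tsql"]
-- Simplified sketch: build one INSERT per string column that records the
-- data length of roughly a 2% sample of rows into a staging table.
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'';

SELECT @sql = @sql + N'
INSERT INTO dbo.DataSizeSample (DatabaseName, TableName, ColumnName, DataLength)
SELECT DB_NAME(), ''' + t.name + N''', ''' + c.name + N''',
       DATALENGTH(' + QUOTENAME(c.name) + N')
FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N' TABLESAMPLE (2 PERCENT);'
FROM sys.tables AS t
     INNER JOIN sys.schemas AS s ON s.schema_id = t.schema_id
     INNER JOIN sys.columns AS c ON c.object_id = t.object_id
     INNER JOIN sys.types  AS ty ON ty.user_type_id = c.user_type_id
WHERE ty.name IN (N'char', N'nchar', N'varchar', N'nvarchar');

EXEC sys.sp_executesql @sql;
[/codesyntax]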

One thing I did not mention in that prior article was the creation of another table for this aggregation process.  I didn't mention it because it was populated by much the same process (though a considerably faster one, since it was doing far less work).  The prime objective of this table was to collect the max length of each column of each table of each database.  The structure is simple:
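Here is a minimal sketch of that table; the object and column names are illustrative rather than the exact ones I used:

[codesyntax lang="tsql"]
-- Illustrative sketch: staging table for the max observed length of every
-- column of every table of every database surveyed.
CREATE TABLE dbo.ColumnMaxLength
(
    DatabaseName  SYSNAME NOT NULL,   -- source database
    TableName     SYSNAME NOT NULL,   -- source table
    ColumnName    SYSNAME NOT NULL,   -- source column
    MaxDataLength INT     NOT NULL,   -- MAX(DATALENGTH(...)) observed for the column
    CONSTRAINT PK_ColumnMaxLength
        PRIMARY KEY (DatabaseName, TableName, ColumnName)
);
[/codesyntax]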

I mention this table now because I will be using it in my final aggregation.

Statistics and Aggregation

For my data analysis and trending, I wanted to find a simple distribution across quartiles.  A quartile is one of the three values that divide a range of data into four equal parts; quartiles are used in statistical analysis and are commonly part of a box plot.  Another statistical value that works well alongside the quartiles is the MAX value.  Since I had usable data from which to produce my quartile ranges, I used the following query to aggregate further, taking advantage of a function that provides the quartile.
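The query below is a sketch of that aggregation rather than my exact script; it assumes the illustrative names used above (dbo.DataSizeSample for the sampled lengths, dbo.ColumnMaxLength for the max lengths) plus a results table called dbo.DataStats:

[codesyntax lang="tsql"]
-- Sketch: bucket each sampled length into quartiles per database, table, and
-- column, then store the upper bound of each quartile next to the overall max.
WITH QuartileData AS
(
    SELECT  dss.DatabaseName,
            dss.TableName,
            dss.ColumnName,
            dss.DataLength,
            NTILE(4) OVER (PARTITION BY dss.DatabaseName, dss.TableName, dss.ColumnName
                           ORDER BY dss.DataLength) AS Quartile
    FROM    dbo.DataSizeSample AS dss
)
INSERT INTO dbo.DataStats
        (DatabaseName, TableName, ColumnName, Quartile, QuartileMaxLength, MaxDataLength)
SELECT  qd.DatabaseName,
        qd.TableName,
        qd.ColumnName,
        qd.Quartile,
        MAX(qd.DataLength),     -- upper bound of this quartile
        cml.MaxDataLength       -- overall max observed for the column
FROM    QuartileData AS qd
        INNER JOIN dbo.ColumnMaxLength AS cml
            ON  cml.DatabaseName = qd.DatabaseName
            AND cml.TableName    = qd.TableName
            AND cml.ColumnName   = qd.ColumnName
GROUP BY qd.DatabaseName, qd.TableName, qd.ColumnName, qd.Quartile, cml.MaxDataLength;
[/codesyntax]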

From this script, you can see that I am taking data from the first two staging tables and dumping it into a third table.  With this query I am taking advantage of the NTILE() function that is available in SQL Server (2005 and up).  And since the accumulated data covers more than one table, more than one database, and more than one column, I needed to partition that function based on those attributes.

With this data now available, I could see the trend of the data for any field that had been collected.  For instance, if I wanted to figure out the proper size (based on current data) of the phone number column in my new database, I could now query the DataStats table like this:
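A lookup along these lines would do it, again assuming the hypothetical dbo.DataStats layout sketched above:

[codesyntax lang="tsql"]
-- Show the quartile distribution for any column that looks like a phone number.
SELECT  DatabaseName,
        TableName,
        ColumnName,
        Quartile,
        QuartileMaxLength,
        MaxDataLength
FROM    dbo.DataStats
WHERE   ColumnName LIKE '%phone%'
ORDER BY DatabaseName, TableName, ColumnName, Quartile;
[/codesyntax]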

I would then be able to determine where that field exists, and how the data is distributed across the quartiles in comparison to the max data size for that field.  This helps to assign a data size to fields more intelligently, based on existing data.  I could quickly ascertain, for instance, that most of the data is within the second quartile, that there are few outliers in the third and fourth quartiles, and that maybe there is an extreme case where the max is way out of scope in comparison.  At this point I could make an educated judgement call as to an appropriate size based on distribution, outliers, and risk.
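To make that concrete with made-up numbers: suppose the phone number query came back with quartile upper bounds of 10, 12, 14, and 18 characters and a max of 50.  Most of the data fits comfortably within 14 characters, the fourth quartile tops out at 18, and the lone 50 is almost certainly bad data.  After checking that outlier, I would size the column near 18 (perhaps 20 for headroom) rather than blindly reserving 50.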


This exercise was a particularly challenging one, mostly due to the desire to create quartiles for the analysis.  I had wanted to break the data down by standard deviations for further analysis (and still may).  This approach is highly useful in the R&D or development phase; I wouldn't run the query from the first post on a production system, because it is a long-running process and can be resource intensive (I need to optimize it more).  This kind of script can really help you get better acquainted with the data as well.  I learned a lot by doing this and am looking forward to seeing how I can improve upon it.
