TSQL Challenge 63 – Update

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: November 14, 2011

If you recall, I like Sudoku.  I even posted a script for solving it via TSQL.  I went so far as to enter my script into a TSQL Challenge.  That all started way back in August.  Today, I have an update!!

I was notified this morning by BeyondRelational.com that I had earned a new badge.  Cool – what's the badge?  I clicked the link and it took me to this badge.
Huh?  I’m a winner of the SQL Sudoku Challenge?  Awesome!

Looking it over, I am winner #3.  This means I could have done better with my solution.   And looking at the other solution stats, it appears I will need to find time to see what the others did to make their solutions go sooooo fast.  I have some learning to do – woohoo.

So, now that means I need to post my solution.

[codesyntax lang=”tsql”]



Sadly, that is not the most recent version of the script that I had.  I had intended to submit this version, which is slightly faster.

[codesyntax lang=”tsql”]


Still, I am certain that (without having looked at the other winning solutions) this is not on par with the best solutions.  And I have a lot to learn.

Column Level Permissions

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: September 19, 2011

Did you know that you can grant permissions down to the column level in SQL Server?   Well, if you didn’t know that – you do now.

It is actually rather simple to grant permissions at the column level.  This can be demonstrated by the following script.

[codesyntax lang=”tsql”]


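A minimal demonstration of the idea might look like this (the table and user names here are hypothetical, not from the original script):

```sql
-- Hypothetical objects for illustration only
CREATE TABLE dbo.Employees (
    EmployeeID INT,
    FirstName  VARCHAR(50),
    Salary     MONEY
);
GO
CREATE USER ColTestUser WITHOUT LOGIN;
GO

-- Grant SELECT on only two of the three columns
GRANT SELECT ON dbo.Employees (EmployeeID, FirstName) TO ColTestUser;
GO

-- As ColTestUser, this succeeds...
EXECUTE AS USER = 'ColTestUser';
SELECT EmployeeID, FirstName FROM dbo.Employees;
-- ...but referencing Salary would fail with a column-level permission error:
-- SELECT Salary FROM dbo.Employees;
REVERT;
```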
If you want to check out more on that syntax, read here.

And then…

Why is it important to know that you can do this?  Well, it is quite possible you already have some of these permissions in place.  You may have inherited something like this.  Or maybe there is a business requirement that certain users or groups only have access to the data within certain columns.

That brings up a new problem: how do you find out which columns have specific permissions applied for which users?  That is actually pretty straightforward.  We can query the system views to determine column-level permissions.

[codesyntax lang=”tsql”]


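A simple form of that query might look something like this (a sketch built on sys.database_permissions; the posted script may differ):

```sql
SELECT  USER_NAME(dp.grantee_principal_id)  AS UserName,
        OBJECT_NAME(dp.major_id)            AS TableName,
        COL_NAME(dp.major_id, dp.minor_id)  AS ColumnName,
        dp.permission_name,
        dp.state_desc
FROM    sys.database_permissions dp
WHERE   dp.class = 1       -- class 1 = OBJECT_OR_COLUMN
  AND   dp.minor_id > 0;   -- minor_id > 0 means the grant is on a specific column
```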
The previous query is a really simple version of how to find this information.  As you can see, I am simply returning the UserName, TableName and ColumnName along with the permission in effect on that column.

You should also be able to see that the mapping between these system views is pretty straightforward: major_id maps to object_id, and minor_id maps to column_id.


This query can be of good use for determining the column-level permissions in place within your database.  Furthermore, you can use it simply to satisfy your curiosity about what has been put into effect in the databases you manage.

There are more complex methods to determine these permissions, and where there are more complex methods, I am sure there are also some easier ones.  Let me know what you do to query these permissions.

Send DBMail

Categories: News, Professional, SSC
Comments: No Comments
Published on: August 15, 2011

With SQL Server 2005, Microsoft improved the methods available for DBAs to send email from SQL Server.  The new method is called Database Mail.  If you want to send emails programmatically, you can now use sp_send_dbmail.  You can read all about that stored procedure here.

What I really want to share is one of the parameters introduced with sp_send_dbmail: @query.  As the online documentation states, you can put a query between single quotes and set the @query parameter equal to that query.  That is very useful.

Why am I bringing this up?  Doing something like this can be very useful for DBAs looking to create cost-effective monitoring solutions that require emailing result sets to themselves.  I ran across one scenario recently where a DBA was looking for help doing this very thing.  In this case, the query was quite simple.  He just wanted to get a list of databases with the size of those databases to be emailed.

Here is a quick and dirty of one method to do such a thing.

[codesyntax lang=”tsql”]


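A quick-and-dirty call along those lines might look like this (the profile name and recipient are placeholders you would substitute for your own):

```sql
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'DBMailProfile',       -- your Database Mail profile
    @recipients   = N'dba@example.com',
    @subject      = N'Database Sizes',
    @query        = N'EXEC sp_databases;';  -- results land in the email body
```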
As I said, this is a real quick and dirty example of how to send an email with query results.  The results of the query in the @query parameter (in this case) will be in the body of the email.  A slightly modified version of that first solution is as follows.

[codesyntax lang=”tsql”]


This is only slightly modified: I took the guts of sp_databases and dumped them into this query, removing the remarks column.  Why do this?  Well, to demonstrate two different methods of getting the same data through the @query parameter.  We can either pass a stored procedure to the parameter, or build an entire SQL statement and pass that instead.
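The inline-SQL flavor of the same call can be sketched like this (the query only approximates what sp_databases returns, minus the remarks column; profile and recipient are again placeholders):

```sql
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'DBMailProfile',
    @recipients   = N'dba@example.com',
    @subject      = N'Database Sizes',
    @query        = N'SELECT d.name           AS DATABASE_NAME,
                             SUM(mf.size) * 8 AS DATABASE_SIZE_KB  -- size is in 8 KB pages
                      FROM   sys.databases d
                      JOIN   sys.master_files mf
                        ON   mf.database_id = d.database_id
                      GROUP BY d.name;';
```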

This is just a simple little tool that can be used by DBAs.  Enjoy!


Comments: No Comments
Published on: August 10, 2011

It is TSQL Tuesda… err Wedn… err Tuesday for August 2011.  This month the party is a day later and bringing us oodles of failure.

Adam Machanic is hosting this month.  He thought it would be a good idea to poke and prod us while getting us to torture ourselves.  It seems to be a month long theme (as he alluded to in his announcement of the topic).

See, the topic this month requires us to share our failures.  More specifically we are to share our ugly code that we know better than to do.  Crap Code is a failure.  At least we have come to recognize that it was crap code and that it was indeed a failure.  The question now is this:  What have we done about what we learned?

I put a fair amount of thought into this topic.  I could come up with several possible experiences for this topic.  I kept running into a wall though.  That wall was recalling detailed facts and examples of those experiences.  Many of them were documented and left behind with former employers.  I no longer have the “dirt” on those dirty little secrets.

Then it hit me!  I had posted some blog entries about some of my experiences and was certain that something would be applicable.  You know what?  It’s a good thing I have this blog going.  Not only does it help me to learn many things and write better, it serves as a repository for introspection and recollection.  So, I will rely on a couple of posts from the early days of this blog to help write my entry for TSQL Tuesday this month.  And, I hope that what I share and what I learned from my experience will prove helpful to somebody else.

The FAILURE / Crap Code

If you want, you can read the entire first part here then skip to the next heading.  Or you can continue reading as I recount that article.

I had been asked to provide a report that presented data according to pay periods.  I had no calendar table, yet I needed to be able to compare dates passed to the report against this pay-period calendar.  Despite requests to the business for a list of those pay periods (which did not line up with the normal business pay periods) so I could create a calendar table, I got nothing.  I did, however, receive generic instruction as to the general timelines for these pay periods.  Based on this, I decided to get tricky and try to meet some basic guidelines that I created for myself.

  1. The pay periods need to be accessible to the query.
  2. No physical lookup table.
  3. Do not hard-code the data in the code.

Based on that, I came up with a pretty neat little solution.

[codesyntax lang=”tsql”]


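The general shape of that trick – generating the pay periods on the fly with a recursive CTE instead of a lookup table – was something like this (the anchor date and two-week period length are illustrative guesses, not the actual business values):

```sql
DECLARE @Anchor DATE = '2009-01-04';  -- hypothetical first pay-period start

WITH PayPeriods AS (
    -- anchor: the first pay period
    SELECT  1                        AS PeriodID,
            @Anchor                  AS PeriodStart,
            DATEADD(DAY, 13, @Anchor) AS PeriodEnd
    UNION ALL
    -- recursion: each period starts 14 days after the previous one
    SELECT  PeriodID + 1,
            DATEADD(DAY, 14, PeriodStart),
            DATEADD(DAY, 14, PeriodEnd)
    FROM    PayPeriods
    WHERE   PeriodStart < GETDATE()
)
SELECT PeriodID, PeriodStart, PeriodEnd
FROM   PayPeriods
OPTION (MAXRECURSION 0);   -- lift the default 100-level recursion cap
```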
This worked well – for a while.  Performance degraded over time, and then it didn't seem so cool.

Egg On Face

The update to that first article can be read in full here.  In short, that method really did not work out as well as I had hoped.  In the end, I had to create a table that would need to be maintained – at some point.  I populated the table with five years' worth of pay periods.  I hope somebody reads the documentation and takes care of that!

With the implementation of the table, I updated the CTE and the code being run by the report.  This updated version outperformed the first – even when the first was running fast.  Looking back on it now, I probably could have gotten away with an Itzik-style numbers CTE in lieu of that recursive CTE.  Again, that is another level of learning and could potentially be an improvement on that first round of crap code I offered up for that report.
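For comparison, an Itzik Ben-Gan style cross-joined numbers CTE builds the same periods without any recursion (dates again illustrative):

```sql
WITH E1(n) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t(n)),
     E2(n) AS (SELECT 1 FROM E1 a CROSS JOIN E1 b),   -- 100 rows
     E4(n) AS (SELECT 1 FROM E2 a CROSS JOIN E2 b),   -- 10,000 rows
     Nums(n) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4)
SELECT DATEADD(DAY, (n - 1) * 14, '2009-01-04') AS PeriodStart,
       DATEADD(DAY,  n * 14 - 1,  '2009-01-04') AS PeriodEnd
FROM   Nums
WHERE  n <= 130;   -- roughly five years of two-week periods
```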


I learned from that experience.  First, it was a pretty neat little trick.  Secondly, I really should have tested it more.  Thirdly, I can likely still learn from it and improve on it because I am getting better at writing faster code and testing my proposed solutions (and I keep learning new tips/tricks).  Obviously something like this has not deterred me.  Quite the opposite really.  Because of a little failure like this, I work harder to get better at what I do.

FK Hierarchy v 2.1

Categories: News, Professional, SSC
Comments: 1 Comment
Published on: August 9, 2011

Last month I published an update to my Foreign Key Hierarchy script.  Today, I am providing a new update for that script.  A friend (Rémi Grégoire) helped out with some mods to this script.  The change for this month is nothing too intrusive.  The script is now updated for those databases that are Case Sensitive.

[codesyntax lang=”tsql”]


This update should make the script more usable for anyone who is using it or is interested in using it.  Thanks to Rémi for taking the time to propose this update.

Activity Monitor and Profiler

Comments: 2 Comments
Published on: July 28, 2011

Today I came across a link to a neat little script for SQL 2005 / 2008 to help derive the head blocker in a blocking chain.  That script can be found here.  I was looking at the script and thought it looked pretty cool and also wondered why it might look a little familiar.  Not too big of a concern since many admin type scripts can look familiar.

Then I noticed that the title contained the words Activity Monitor.  I wondered to myself, was this taken from activity monitor?  So, I decided to play with Profiler for a bit to see if I could generate the same query.  The process really isn’t too difficult.  Here is what I did.

  1. Open Activity Monitor.  Simple enough to do.  Right click the instance (in SSMS) you wish to monitor, select Activity Monitor from the context menu.
  2. Open Profiler.  Again from within SSMS, click the tools menu and then select SQL Server Profiler.
  3. Start a new trace.  When starting the trace, ensure that you are connecting to the same instance on which you started Activity Monitor.  Select the TSQL-Replay template (other T-SQL templates will probably work; this is just the one I used).  Go to the Events Selection tab and click the Column Filters button.  Select "LoginName" and then expand "Like" on the right-hand side.  In the new text box, type the login you are using to authenticate to the SQL instance in question (domain\user or SQLLogin).
  4. Click run and browse through the results for a bit.  It shouldn’t take too long for that query to pop up.
  5. Once you find the query, make sure you stop the trace.
What is the point in this?  Well, as explained in that other blog, sometimes it is just better to run a query than to run Activity Monitor.  Furthermore, you can take this method and trap many of the queries that Microsoft built into that tool.  You can either use them for learning or troubleshooting.  This is just one more tool to have in your arsenal in order to become better at your job.

Foreign Key Hierarchy Update

Categories: News, Professional, Scripts, SSC
Comments: 1 Comment
Published on: July 11, 2011

Today I would like to revisit one of my older posts.  More precisely, the script in that post needs revisiting.  This is one of my favorite scripts, and I still have more ideas to implement with it.  The post/script in question can be found here.

In revisiting this script, I simplified it a bit.  I also had to work on a problem with it that occurs in Hierarchies with circular dependencies.  Quite frankly, that was a huge pain in the butt.  There are some tricks out there to try and help with it – but I was having a hard time getting any of them to work in this scenario.  I also updated the script to better handle self-referencing objects.

When you have circular references, and are trying to recurse the tree via a CTE, an instant blocker comes into play.  You can only reference the anchor of the Recursive CTE once.  Fixing a circular reference would be many times easier if you could reference the anchor twice.

In the end, the biggest hint to getting this to work came from this post.  For it to work, I needed to find which combination of fields would work best.  I finally settled on using the Object_ID to help reduce my pain, with the following in the anchor:

[codesyntax lang=”tsql”]


and the following in the recursive definition.

[codesyntax lang=”tsql”]


You can see that I am concatenating into a string for this column.  This seems to work well for the purpose of eliminating those circular references.
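Putting the two pieces together, the cycle-breaking path column works roughly like this (a simplified sketch over the foreign-key catalog views, not the full script):

```sql
WITH Hierarchy AS (
    -- anchor: start the path string with the child table's object_id
    SELECT  fk.parent_object_id,
            fk.referenced_object_id,
            CAST(fk.parent_object_id AS VARCHAR(MAX)) AS TreePath
    FROM    sys.foreign_keys fk
    UNION ALL
    -- recursion: append each newly visited object_id onto the path string
    SELECT  fk.parent_object_id,
            fk.referenced_object_id,
            h.TreePath + ',' + CAST(fk.parent_object_id AS VARCHAR(20))
    FROM    sys.foreign_keys fk
    JOIN    Hierarchy h
      ON    fk.referenced_object_id = h.parent_object_id
    -- stop when an object_id already appears in the path (circular reference)
    WHERE   CHARINDEX(',' + CAST(fk.parent_object_id AS VARCHAR(20)) + ',',
                      ',' + h.TreePath + ',') = 0
)
SELECT  OBJECT_NAME(parent_object_id)     AS ChildTable,
        OBJECT_NAME(referenced_object_id) AS ParentTable,
        TreePath
FROM    Hierarchy;
```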

Other adjustments to the script are not quite as significant but there is a small performance gain to be seen by these subtle changes.  The most notable is the change to remove the two joins out to sys.columns in order to get the column names of the Parent and Child objects.  In lieu of these joins, I am using the COL_NAME() function.  This little change came to me thanks to a little work done last week on my statistics script that you can read here.
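The COL_NAME() substitution is simple enough – instead of joining out to sys.columns twice, the column names come straight from the function:

```sql
-- One row per foreign key column pair; no joins to sys.columns needed
SELECT  COL_NAME(fkc.parent_object_id, fkc.parent_column_id)         AS ChildColumn,
        COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id) AS ParentColumn
FROM    sys.foreign_key_columns fkc;
```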

The final notable change comes in the naming of the CTEs in this script.  I decided to rename the CTEs to something a bit more meaningful.  In this case, Hierarchy and Ancestry are much more appropriate.

Without further ado, here is the next major revision of that script.

[codesyntax lang=”tsql”]


I hope you will play with this script, test it out and make recommendations or even find bugs with it and let me know.

SQL Statistics – another Spin

Comments: 5 Comments
Published on: July 1, 2011

I was reading a blog by Pinal Dave that was about using catalog views to obtain information about stats for the entire database.  While reading the blog, I was reminded about an interesting tidbit of information I had seen once upon a time concerning statistics naming in SQL Server.

This got me looking for that information and burrowing down the rabbit hole.  I found the reference for the naming convention concerning auto generated statistics.  That reference is from Paul Randal and can be found here.  In that article, Paul outlines the five parts of the name of an auto-created statistic – with each part being separated by an underscore.  This got the wheels churning a bit more.

I had to do a bit more research to put all the pieces together.  Based on the naming convention, I knew I needed to convert hex to integer so I could see the object id (column or table).  You might think this would be straightforward, but the value in the name of the statistic is not a true hex value – it is a hex string, and it needs a bit of manipulation to convert from string to hex.

After some searching, I found the solution.  Peter Debetta created this handy little function for this very purpose.  I found that script here.  After some testing, and comparing results I was pleased with how well it worked.  Now that I had the pieces necessary, I could venture further down that little rabbit hole.
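As an aside, on SQL Server 2008 and later the built-in CONVERT can handle the hex-string-to-binary step itself – style 2 means "hex string without a 0x prefix."  The value below is a made-up example of one name part, not taken from a real statistic:

```sql
DECLARE @HexPart VARCHAR(8) = '7F60ED59';  -- hypothetical part of an auto-stats name
-- style 2: interpret the string as hex digits with no 0x prefix
SELECT CONVERT(INT, CONVERT(VARBINARY(4), @HexPart, 2)) AS ObjectId;
```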

First though, there is one more piece that I want to discuss.  Remember the naming scheme for those auto-created statistics?  Each element of the name is separated by an underscore.  I decided to use that underscore as my string delimiter and implement a string-split function to break out the name parts.  Any number of splitter functions would work.  I chose to use the same function that was written about by Jeff Moden here.

Now all of the pieces have been referenced and are in play.  Let’s take a look at the query and some of those catalog views.

[codesyntax lang=”tsql”]


I started this little script out with three quick CTEs.  The first is simply to gather the auto created stats.  It is in this first CTE that the string splitter is utilized.  The next two help me to separate out the data so I can work with it.  The second CTE is to manage the data related to the column part of the statistics name.  In this CTE, you will note that I employ the use of that Hex function.  The third CTE is just like the second, but it treats the data related to the table part of the statistics name.

When splitting the data in that first CTE, and due to the naming format, we see that the column is always the fourth part of the name and the table is the fifth part.  The first part is really an empty result due to the name leading off with an underscore.

Once the data is grouped into those CTEs, I can then return it in a friendlier result set.  Here I use more functions that are built in with SQL server.  This reduces the number of catalog views to which I need to JOIN.  And since the CTEs are only working with statistics that are auto generated, I need to UNION in another query to pull in those statistics that are not auto created.  Note that this query only looks at two catalog views.
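That second query is roughly this shape (a sketch; the posted version includes a few more columns):

```sql
-- Just two catalog views: sys.stats and sys.stats_columns
SELECT  OBJECT_NAME(s.object_id)              AS TableName,
        s.name                                AS StatsName,
        COL_NAME(sc.object_id, sc.column_id)  AS ColumnName,
        s.auto_created
FROM    sys.stats s
JOIN    sys.stats_columns sc
  ON    sc.object_id = s.object_id
 AND    sc.stats_id  = s.stats_id
WHERE   OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1;  -- skip system objects
```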

What is the point of such an elaborate query in the first part when I can simply use the second part to do all of it?  Not much, other than to prove a point: there is consistency between the queries, and there is consistency between the naming of those statistics and the object data that can be found in those catalog views.

Any of these methods will return the desired results concerning statistics in the database.  You have essentially two methods displayed by me, and then the method employed by Pinal.  Have fun with it!!

CTE, Recursion and Math

Comments: No Comments
Published on: May 23, 2011

Earlier this month we had a TSQL Tuesday on the topic of CTEs.  I bailed on my submission because I already posted some CTE examples and was bone dry on what I could write about.

Later I saw a request for some help creating combinations and permutations from TSQL.  I thought about the request and finally came up with the idea of using CTEs to do it.  The idea came about from seeing multiple suggestions in that thread and then combining some of the pieces together.  Seeing that the submitted code used a factorial along with the sum of all integers from 1 to a given integer (a summation), I decided to incorporate both into my solution.

I thus set out to create a solution that would use a CTE to generate those permutations, the summation, and the factorial.  In this article I will just be focusing on how I came to produce the factorial and summation using a little bit of Recursive CTE.  Also, we will see an alternative solution to calculating the summation.  For help in setting up the structure of a recursive CTE, you may want to refer to this article.

Here is the script first, and then I will explain later.

[codesyntax lang=”tsql”]


As you can see, I am using several CTEs in this script.  The first group of CTEs creates a dynamic Tally (numbers) table.  The last two CTEs perform the same calculation but with a bit of filtering applied.  When you look at them, you should note that the first one casts the factorial to a NUMERIC data type, while the last casts it to a FLOAT data type.  This isn't strictly required – it is more of an aesthetic choice on my part.  Any number greater than 33 will use the FLOAT data type, and anything 33 or less will use the NUMERIC data type.

The FLOAT data type will support those smaller numbers too – that's not an issue.  The NUMERIC data type, however, will not support the larger values (it is limited to 38 digits of precision).  We start running into arithmetic overflow at 34! and must use FLOAT at that point.  The problem is that I prefer to avoid the scientific-notation style of displaying the numbers except when necessary.  For simplicity's sake, you could eliminate the filtering between the CTEs and simply use FLOAT across the board.

Since I am trying to filter between the two and create a single row result set, I also have a second issue that is created by this code.  That issue is resolved and seen when performing the final select from the CTEs.  If the query requires the use of FLOAT, I will get an error message with an arithmetic overflow again unless I Convert the Numeric and Float results into something that is compatible between the two.  I chose to use VarChar(100) in this case.

Note, also, that I am using a FULL OUTER Join in the final query.  This is due to the fact that I will see results in either the CTEMathN or CTEMathF CTE but not both.

Now down into the nitty gritty.  Let’s look at CTEMathN to see how we are getting the factorial and the summation.

[codesyntax lang=”tsql”]


First we have the required anchor definition.  Second we have the recursion definition joined by the Union All and referencing the same CTE where both of these pieces reside.

The factorial is easy enough.  Since n! equals n × (n-1)!, we can multiply n by the factorial computed for the previous n in the sequence.  We get both of those values through our join, where we join back out to the Tally CTE with an incremented number (here I join CTETally.n to CTEMathN.n + 1).  This ensures we will move through the record set (or recurse).

The same principle applies for the summation.  We add each number in the sequence from 1 to n to retrieve our result.  So, much the same as the factorial, I add n to the previous value of n and proceed from there through the record set.

Note also that I threw an additional limiting factor on the where clause.  I want to make sure that I do not try to recurse through a record set that includes numbers outside of my range.

Now, since I only really care about the factorial and summation for the number I inserted in the variable at the beginning, I want to make sure that I have a filter added to the final select clause.  Here is that final filter…

[codesyntax lang=”tsql”]


This ensures a single record is returned, by selecting only the last iteration.  Iteration is another column in the CTEs that is used during the recursion process; it simply increments by one with each pass.
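Stripped of the FLOAT/NUMERIC branching, the whole pattern can be sketched like this (variable and CTE names mirror the description above, but the full script differs):

```sql
DECLARE @n INT = 10;

WITH CTETally (n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM CTETally WHERE n < @n
),
CTEMathN AS (
    -- anchor definition
    SELECT 1 AS n,
           CAST(1 AS NUMERIC(38,0)) AS Factorial,
           1 AS Summation,
           1 AS Iteration
    UNION ALL
    -- recursion definition: multiply for the factorial, add for the summation
    SELECT t.n,
           m.Factorial * t.n,
           m.Summation + t.n,
           m.Iteration + 1
    FROM   CTETally t
    JOIN   CTEMathN m ON t.n = m.n + 1   -- step through the sequence
    WHERE  t.n <= @n                     -- stay within the range
)
SELECT n, Factorial, Summation
FROM   CTEMathN
WHERE  Iteration = @n;   -- only the final iteration
```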

Pretty easy eh?

Now, if all you need is a summation and it is not used to control result sets (like mine will be in the next bit where we discuss permutations), then you could rip out the aggregation stuff and just use this in the final select statement…

[codesyntax lang=”tsql”]


That would be in place of this bit…

[codesyntax lang=”tsql”]


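That replacement is just the closed-form summation, which needs no recursion at all:

```sql
DECLARE @n INT = 10;
SELECT (@n * (@n + 1)) / 2 AS Summation;  -- Gauss's formula; 55 for @n = 10
```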
Have fun with it!!

T-SQL Tuesday #18 – CTEs

Comments: 3 Comments
Published on: May 10, 2011

To CTE or not to CTE, that is the Question

So my post really has nothing to do with the title.  This is not a post to help you determine whether a CTE is appropriate or not.  Or is it?

This month, we have the 18th installment in the TSQLTuesday series.  We are being hosted by Bob Pusateri  (Blog|@SQLBob) this month.  The essence of the topic this month is around CTEs (common table expressions).  There are a great many uses for a CTE in SQL server and this was a nice addition to the product.

One thing I like about CTEs is how much cleaner the code looks to me.  Another benefit for me is the recursion that is available through the use of a CTE.  An observation about CTEs is that a common use would be to use them to replace inline derived tables (which lends to cleaner looking code for some).

Think Think Think

I gave this topic a good long thought.  As I thought about the topic, I came to the conclusion that I had nothing new or unique on the subject.  I did however have some scripts that I posted once upon a time that would work very well for this topic.  Though it is a bit of a cop out, it is an appropriate solution for this month.

In the case of the CTE that I have chosen, there are multiple CTEs being used.  I use the CTE to recurse through data, and then to recurse that same data again – in reverse.  This particular script was created to traverse through system catalogs and create a hierarchy of table relationships.  I use this hierarchy to better understand the structure of the database and the interrelationship of the data between objects.  It is a cheap way of mapping out the objects in an effort to better understand it.

I had thought about using this script for a different TSQLTuesday once upon a time, but thought better of it then.  Since the original post is more than a year old, it is a good time to bring it up and use it again.  Without further ado, you can read about that script and CTE here.  I hope you enjoy.
