Index Info

Categories: News, Professional, Scripts
Comments: 2 Comments
Published on: March 31, 2010

I recently came across an interesting script that helps show index info and usage.  As I began to dissect the script, I noticed that it was intended for SQL 2005 and beyond but was using SQL 2000 compatibility views.  I decided to take it upon myself to update the script to use the SQL 2005 objects rather than the compatibility objects.

The design of the script is to give a basic view of the usage of each index and then to evaluate the cost of that index.  If updates to an index far outweigh the queries that read from it, the index may not be useful, and its maintenance cost probably outweighs its benefit.

The original query employed dbo.sysobjects and dbo.sysindexes.  The use of dbo.sysindexes was merely to calculate the size of the index; the use of dbo.sysobjects seemed like an oversight.  The sticky point I ran into was calculating the size of the index: figuring out a method to execute the entire query and have it perform as well as the original.  The first attempt yielded a query that was absolutely abysmal.

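The original script is not preserved in this post, but the subquery approach can be sketched as follows (column choices and alias names here are my own, not the original author's):

[codesyntax lang="tsql"]
-- Sketch of the subquery approach (illustrative, not the original script).
-- The size of each index is computed inline via sys.dm_db_index_physical_stats,
-- which is invoked once per row returned -- hence the abysmal performance.
SELECT  o.name AS TableName,
        i.name AS IndexName,
        u.user_seeks + u.user_scans + u.user_lookups AS UserReads,
        u.user_updates AS UserUpdates,
        ( SELECT SUM(ps.page_count) * 8
          FROM   sys.dm_db_index_physical_stats(DB_ID(), i.[object_id],
                                                i.index_id, NULL, 'LIMITED') ps
        ) AS IndexSizeKB
FROM    sys.dm_db_index_usage_stats u
JOIN    sys.indexes i
        ON  i.[object_id] = u.[object_id]
        AND i.index_id    = u.index_id
JOIN    sys.objects o
        ON  o.[object_id] = i.[object_id]
WHERE   u.database_id   = DB_ID()
AND     o.is_ms_shipped = 0;
[/codesyntax]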
The method I used to calculate the size with the SQL 2005 objects was the function sys.dm_db_index_physical_stats, employed directly as a subquery in the query posted above.  This version takes in excess of seven minutes to return results, so I moved on to find a better method.

The next attempt was to move the subquery into a CTE.

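A sketch of that rewrite (again illustrative, not the original script): the function is now called once for the whole database inside a CTE, rather than once per output row.

[codesyntax lang="tsql"]
-- Sketch of the CTE rewrite (illustrative, not the original script).
WITH IndexSize AS (
    SELECT  ps.[object_id], ps.index_id,
            SUM(ps.page_count) * 8 AS IndexSizeKB
    FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
    GROUP BY ps.[object_id], ps.index_id
)
SELECT  o.name AS TableName,
        i.name AS IndexName,
        u.user_seeks + u.user_scans + u.user_lookups AS UserReads,
        u.user_updates AS UserUpdates,
        s.IndexSizeKB
FROM    sys.dm_db_index_usage_stats u
JOIN    sys.indexes i   ON i.[object_id] = u.[object_id] AND i.index_id = u.index_id
JOIN    sys.objects o   ON o.[object_id] = i.[object_id]
LEFT JOIN IndexSize s   ON s.[object_id] = i.[object_id] AND s.index_id = i.index_id
WHERE   u.database_id   = DB_ID()
AND     o.is_ms_shipped = 0;
[/codesyntax]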
This version returns in about thirty seconds: still not acceptable, but substantially better.  The original query returned in roughly three seconds.  It is also important to note that both queries return the same result set, so the only concern is performance.  The delay in this query revolves entirely around the use of the function.  Armed with the knowledge that the real delay is in that function, there needed to be an alternative method to pull back that information.  I decided to use a temp table to stage the data; with the data staged, I could test the efficiency of the rest of the query directly.

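The temp-table version can be sketched like this (illustrative, not the original script): the expensive function call is staged once, and the usage query then joins against the staged data.

[codesyntax lang="tsql"]
-- Sketch of the temp-table rewrite (illustrative, not the original script).
IF OBJECT_ID('tempdb..#IndexSize') IS NOT NULL
    DROP TABLE #IndexSize;

-- Stage the expensive function call once.
SELECT  ps.[object_id], ps.index_id,
        SUM(ps.page_count) * 8 AS IndexSizeKB
INTO    #IndexSize
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
GROUP BY ps.[object_id], ps.index_id;

-- The usage query now joins against the staged data.
SELECT  o.name AS TableName,
        i.name AS IndexName,
        u.user_seeks + u.user_scans + u.user_lookups AS UserReads,
        u.user_updates AS UserUpdates,
        s.IndexSizeKB
FROM    sys.dm_db_index_usage_stats u
JOIN    sys.indexes i    ON i.[object_id] = u.[object_id] AND i.index_id = u.index_id
JOIN    sys.objects o    ON o.[object_id] = i.[object_id]
LEFT JOIN #IndexSize s   ON s.[object_id] = i.[object_id] AND s.index_id = i.index_id
WHERE   u.database_id   = DB_ID()
AND     o.is_ms_shipped = 0;
[/codesyntax]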
The population of the temp table consumes 97% of the query cost; the prior version of the query also showed a cost of 97% associated directly with the function.  If I populate the temp table prior to executing the remainder of the script, I can compare the performance of this script against the original.  Compared that way, this version runs in about one second.

The next step from here would be to employ a method that populates a table in an "Admin" database.  By using a pre-populated table, I can maintain good performance while gathering index usage information.
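As a sketch of that idea (the AdminDB name and table schema here are assumptions, not part of the original post), a scheduled job could refresh a permanent staging table, and the usage query would then join against it instead of the temp table:

[codesyntax lang="tsql"]
-- Hypothetical sketch: refresh a permanent staging table in an "Admin" database.
-- AdminDB.dbo.IndexSize is an assumed name; run this from a scheduled job.
TRUNCATE TABLE AdminDB.dbo.IndexSize;

INSERT INTO AdminDB.dbo.IndexSize ([object_id], index_id, IndexSizeKB)
SELECT  ps.[object_id], ps.index_id,
        SUM(ps.page_count) * 8
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
GROUP BY ps.[object_id], ps.index_id;
[/codesyntax]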

2 Comments
  1. […] This new idea was born out of necessity for me about a year ago.  I was having a hard time finding why my primary file group was still so large after moving all of the User objects out of that filegroup and into new filegroups.  I couldn’t find the tying factor – until I checked for BLOBs that were created in the Primary Filegroup by default.  To move them was a manual process and is a different topic from the crux of this one.  Since the initial report to find the BLOBs, I have evolved it somewhat to try and find the size of each object, as well as to report on the usage related with each.  To do this, I needed to use a method I posted about in a different post about finding some Index Information. […]

  2. See SQLServerCentral for some changes I made to this (original) script.
    Always nice to see how we (the community) help each other!
