Primary Key Discovery

Categories: News, Professional, Scripts
Comments: 5 Comments
Published on: January 5, 2010

As part of my documentation process for a large database, I needed a method to quickly show me all of the Primary Keys in the database. Not only did I want the PKs, but I also wanted to know the data type of each Primary Key column.

I began by fiddling with some queries of my own, then searched Google for something that might be fitting. I found a couple of queries (one very similar to what I already had). The queries used different views, which sparked a question for me: which method was better?

So let’s start with the base query.
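(The original script is in the download at the end of this article; as a sketch, a query along these lines against the INFORMATION_SCHEMA views returns the same three columns discussed below.)

```sql
-- Sketch only: list each Primary Key constraint with its table and column(s),
-- using the ANSI-standard INFORMATION_SCHEMA views.
SELECT  kcu.TABLE_NAME,
        kcu.CONSTRAINT_NAME,
        kcu.COLUMN_NAME
FROM    INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
        INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
            ON  tc.CONSTRAINT_NAME = kcu.CONSTRAINT_NAME
            AND tc.TABLE_SCHEMA    = kcu.TABLE_SCHEMA
            AND tc.TABLE_NAME      = kcu.TABLE_NAME
WHERE   tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
ORDER BY kcu.TABLE_NAME, kcu.ORDINAL_POSITION;
```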

From this query, I can get the Column Name, the Constraint Name (PK), and the Table to which it belongs.

Execution Plan for that Query:

With the Following IO Stats:

And here is the alternative base query:
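(Again as a sketch, with the actual script in the download: the alternative approach queries the sys catalog views instead, filtering on the `is_primary_key` flag.)

```sql
-- Sketch only: the same result set via the sys catalog views.
SELECT  OBJECT_NAME(i.object_id)    AS TableName,
        i.name                      AS ConstraintName,
        c.name                      AS ColumnName
FROM    sys.indexes i
        INNER JOIN sys.index_columns ic
            ON  i.object_id = ic.object_id
            AND i.index_id  = ic.index_id
        INNER JOIN sys.columns c
            ON  ic.object_id = c.object_id
            AND ic.column_id = c.column_id
WHERE   i.is_primary_key = 1
ORDER BY TableName, ic.key_ordinal;
```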

The Execution Plan is much larger, so I have made the XML Plan available for download: Alt BaseQuery ExecutionPlan.

And the IO stats:

From both queries, we can see that they return the same data and the same number of rows, and for this particular run both complete in under 1 second (109 ms for Query 1 and 156 ms for Query 2).  For this base query, both can be deemed acceptable.  However, I want to emphasize that Query 2 took nearly 50% longer to process in this test.

Now that we have a baseline, I need to retrieve the data type for each of the columns in each primary key.  This was part of my initial self-prescribed requirements.  Adding the additional components is a little different for each query.  Once they are added, I will do the final comparison in a single batch to illustrate more pointedly the differences in processing between these queries.

Query 1 needs the following additions (lines 3, and 9-13)
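(The referenced line numbers are in the downloadable script; the gist of the addition, sketched here, is a join to INFORMATION_SCHEMA.COLUMNS to pick up each key column's data type.)

```sql
-- Sketch only: Query 1 extended with the data type of each key column.
SELECT  kcu.TABLE_NAME,
        kcu.CONSTRAINT_NAME,
        kcu.COLUMN_NAME,
        col.DATA_TYPE
FROM    INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
        INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
            ON  tc.CONSTRAINT_NAME = kcu.CONSTRAINT_NAME
            AND tc.TABLE_SCHEMA    = kcu.TABLE_SCHEMA
            AND tc.TABLE_NAME      = kcu.TABLE_NAME
        INNER JOIN INFORMATION_SCHEMA.COLUMNS col
            ON  kcu.TABLE_SCHEMA = col.TABLE_SCHEMA
            AND kcu.TABLE_NAME   = col.TABLE_NAME
            AND kcu.COLUMN_NAME  = col.COLUMN_NAME
WHERE   tc.CONSTRAINT_TYPE = 'PRIMARY KEY';
```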

Query 2 needs the following changes (Lines 18, and 22-26)
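(For the catalog-view version, the equivalent change, again only sketched here with the real script in the download, is a join to sys.types to resolve each column's type name.)

```sql
-- Sketch only: Query 2 extended with the data type via sys.types.
SELECT  OBJECT_NAME(i.object_id)    AS TableName,
        i.name                      AS ConstraintName,
        c.name                      AS ColumnName,
        t.name                      AS DataType
FROM    sys.indexes i
        INNER JOIN sys.index_columns ic
            ON  i.object_id = ic.object_id
            AND i.index_id  = ic.index_id
        INNER JOIN sys.columns c
            ON  ic.object_id = c.object_id
            AND ic.column_id = c.column_id
        INNER JOIN sys.types t
            ON  c.user_type_id = t.user_type_id
WHERE   i.is_primary_key = 1;
```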

When I run these in tandem, the query plan for Query 1 changes somewhat, and the plan for Query 2 changes a little more.  Comparing the two, I see that Query 2 accounts for 72% of the batch cost, and Query 1 for the remaining 28%.

Here is the XML Plan for both for download: Comparison_Plans.

Now the IO Stats:

Notice the one table that is substantially and surprisingly different between the two sets.  In Query 1, sysiscols has 2119 scans and 4243 logical reads, both substantially higher than Query 2's 229 scans and 459 logical reads.  Despite this, Query 1 still executes better than Query 2 during these tests.  The result could change depending on the conditions used in testing.  If I were to consider only the IO stats, I might well opt for Query 2.  However, when I consider all factors used in these tests, my results tell me to use Query 1.

Download Scripts For this Article here.

5 Comments
  1. […] Part I, I discussed a couple of scripts that could be used to help discover some vital information about […]

  2. I apologise, but, in my opinion, you are not right. Write to me in PM.

  3. Jason Brimhall says:

    I am interested in learning where you disagree with this post.

  4. quoted on SQL Server Central says:

    […] Part I and Part II of the series, I discussed documenting and discovering Primary Keys and Clustered […]

