Dedupe Data CTE

Comments: 2
Published on: March 21, 2011

As a DBA working with SQL Server, you will run into data issues that need to be fixed.  One such issue is the duplication of data.  Duplicate data can happen for a number of reasons; one common reason is the absence of constraints such as Primary Keys.

Luckily, there are ways for the Data Professional to clean up such a quagmire.  First, let’s set the stage with a really simple example.

Here, we simply create a table and populate it with two columns of randomized data.  This should give us plenty of duplicates and a good enough test bed to demonstrate how to quickly fix this issue.  Due to the random nature of the inserts, some values will end up with a significant number of duplicates while others may have no duplicates at all.
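A minimal sketch of such a setup follows (the table name dbo.DupeTest, its column names, and the 10,000-row size are illustrative assumptions, not necessarily what I used originally):

    -- No Primary Key or unique constraint, so duplicates can creep in.
    CREATE TABLE dbo.DupeTest (
        Col1 INT,
        Col2 INT
    );

    -- Populate with randomized data.  Restricting the random values to a
    -- small range (0-49) virtually guarantees plenty of duplicates.
    INSERT INTO dbo.DupeTest (Col1, Col2)
    SELECT TOP (10000)
           ABS(CHECKSUM(NEWID())) % 50,
           ABS(CHECKSUM(NEWID())) % 50
    FROM sys.all_columns AS ac1
    CROSS JOIN sys.all_columns AS ac2;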

Let’s check our work and see how well we did with the duplicate creation.
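A quick pair of counts works for that check (a sketch against the assumed dbo.DupeTest table; the gap between the two numbers is the duplication):

    -- Total rows in the table.
    SELECT COUNT(*) AS TotalRows
    FROM dbo.DupeTest;

    -- Distinct value pairs; anything less than TotalRows means dupes.
    SELECT COUNT(*) AS DistinctPairs
    FROM (SELECT DISTINCT Col1, Col2 FROM dbo.DupeTest) AS d;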

Each time I have run this, I have come up with a good test bed, and it should work well for you too.  Now we can work on eliminating those duplicates.  To better illustrate that we have dupes, and how they are distributed, let's have a look at this query.
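A simple aggregate does the trick (again assuming the dbo.DupeTest table from the setup sketch):

    -- One row per duplicated value pair, with its occurrence count.
    SELECT Col1, Col2, COUNT(*) AS DupeCount
    FROM dbo.DupeTest
    GROUP BY Col1, Col2
    HAVING COUNT(*) > 1
    ORDER BY DupeCount DESC;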

This query will yield one row per duplicated value pair, along with a count of how many times each pair occurs.

Similarly, I can run a query contained within a CTE and take advantage of the ROW_NUMBER() function to help find where I may have dupes.
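Such a query might look like this (a sketch using the assumed table; the CTE name DupeFinder is arbitrary):

    -- ROW_NUMBER() numbers each row within its duplicate group, so any
    -- row with RowNum > 1 is an extra copy.
    WITH DupeFinder AS (
        SELECT Col1, Col2,
               ROW_NUMBER() OVER (PARTITION BY Col1, Col2
                                  ORDER BY Col1, Col2) AS RowNum
        FROM dbo.DupeTest
    )
    SELECT Col1, Col2, RowNum
    FROM DupeFinder
    WHERE RowNum > 1;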

Now that we know for certain that we have a large number of dupes, let's finally work on eliminating them.  We can use a query just like the last one, altered a bit so that it deletes the dupes.  Under normal circumstances, I would make absolutely certain that the tables affected by these deletes were backed up first.  You can accomplish that either by copying the data in the table into a new table or by running a database backup.
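Here is a sketch of both the safety copy and the delete (same assumed table; dbo.DupeTest_Backup is a hypothetical name):

    -- Safety net: copy the data to a new table before deleting.
    SELECT *
    INTO dbo.DupeTest_Backup
    FROM dbo.DupeTest;

    -- Deleting through the CTE deletes from the underlying table,
    -- removing every row after the first in each duplicate group.
    WITH DupeFinder AS (
        SELECT Col1, Col2,
               ROW_NUMBER() OVER (PARTITION BY Col1, Col2
                                  ORDER BY Col1, Col2) AS RowNum
        FROM dbo.DupeTest
    )
    DELETE FROM DupeFinder
    WHERE RowNum > 1;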

Really easy script.  This will delete every duplicate record from the source table while keeping one copy of each.  You can validate that by rerunning either of the first two queries used to demonstrate that duplicate data existed: the two counts should now match, and the grouping query should return no rows.

Since SQL Server 2005 introduced CTEs and the ROW_NUMBER() function, removing bad duplicate data has become substantially easier.  This example should help to demonstrate that, as well as provide a method to create a repeatable test while learning the technique.

2 Comments
  1. Dude, you saved me! This post helped me solve a problem in 30 seconds. Well done.
