Dedupe Data CTE

Published on: March 21, 2011

As a DBA working with SQL Server, you will run into data issues that need to be fixed.  One such issue is duplicate data.  Duplicate data can happen for a number of reasons, one of which is the absence of constraints such as Primary Keys.

Luckily, there are ways for the Data Professional to clean up such a quagmire.  First, let’s set the stage with a really simple example.

CREATE TABLE DupeDemo (DemoID INT, SomeInt INT)

INSERT INTO DupeDemo
SELECT TOP 10000
	DemoID  = ABS(CHECKSUM(NEWID())) % 100 + 1,
	SomeInt = ABS(CHECKSUM(NEWID())) % 50 + 1
	FROM Master.dbo.SysColumns t1
	CROSS JOIN Master.dbo.SysColumns t2

Here, we are simply creating a table and populating it with two columns of randomized data.  This should give us a significant amount of duplicate data and a good enough test bed to demonstrate how to quickly fix the issue.  Because the inserts are random, some DemoID/SomeInt combinations will appear many times while others may not be duplicated at all.
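As a quick aside, the ABS(CHECKSUM(NEWID())) % N + 1 pattern is what squeezes each random value into the range 1 through N.  Here is a minimal sketch of that pattern on its own (the @UpperBound variable is just for illustration):

-- NEWID() produces a random uniqueidentifier, CHECKSUM() hashes it to an INT,
-- ABS() strips the sign, and % @UpperBound + 1 maps the result into 1..@UpperBound.
DECLARE @UpperBound INT
SET @UpperBound = 100

SELECT ABS(CHECKSUM(NEWID())) % @UpperBound + 1 AS RandomValue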

Let’s check our work and see how well we did with the duplicate creation.

SELECT *
	FROM DupeDemo
	ORDER BY DemoID,SomeInt

Each time I have run this, I have come up with a good test bed.  It should work relatively well for you as well.  Now, we can work on eliminating those duplicates.  To better illustrate that we have dupes, and how they are distributed, let's have a look at this query.

SELECT DemoID,SomeInt, COUNT(SomeInt) AS Counted
	FROM DupeDemo
	GROUP BY DemoID,SomeInt
	ORDER BY DemoID,SomeInt

This query will yield one row per DemoID/SomeInt pair along with a count of how many times that pair occurs.
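If you would rather see only the combinations that actually repeat, a slight variation on that query (just a quick sketch on my part, not strictly necessary) filters on the count:

-- Show only the DemoID/SomeInt combinations that occur more than once
SELECT DemoID, SomeInt, COUNT(*) AS Counted
	FROM DupeDemo
	GROUP BY DemoID, SomeInt
	HAVING COUNT(*) > 1
	ORDER BY DemoID, SomeInt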

Similarly, I can run a query contained within a CTE that takes advantage of the ROW_NUMBER() function to help find where I may have dupes.

WITH dedupe AS (
	SELECT DemoID, SomeInt
		,ROW_NUMBER() OVER (PARTITION BY DemoID, SomeInt ORDER BY SomeInt) AS Rownum
		FROM DupeDemo
		)
SELECT * FROM dedupe
	WHERE Rownum > 1
	ORDER BY DemoID, SomeInt

Now that we know for certain that we have a large number of dupes, let's finally work on eliminating them.  We can use a query just like the last one, altered a bit so that it deletes the dupes.  Under normal circumstances, I would make absolutely certain that the tables affected by these deletes were backed up.  You can accomplish that by either copying the data in the table into a new table or by running a database backup.
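For example, copying the data into a new table before the delete might look something like this (DupeDemo_Backup is simply a name I picked for the copy; the commented BACKUP command, database name, and path are illustrative as well):

-- Keep a copy of the rows before deleting anything.
-- DupeDemo_Backup is an arbitrary name for this sketch.
SELECT *
	INTO DupeDemo_Backup
	FROM DupeDemo

-- Alternatively, take a database backup (database name and path are placeholders).
-- BACKUP DATABASE MyDatabase TO DISK = 'C:\Backups\MyDatabase.bak'

With a safety net in place, the delete itself follows.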

WITH dedupe AS (
	SELECT DemoID, SomeInt
		,ROW_NUMBER() OVER (PARTITION BY DemoID, SomeInt ORDER BY SomeInt) AS Rownum
		FROM DupeDemo
		)
DELETE dedupe
	WHERE Rownum > 1

Really easy script.  This will delete all source table records that are dupes, leaving one row per DemoID/SomeInt pair.  You can validate that by rerunning either of the first two queries used to demonstrate that duplicate data existed.

Since SQL Server 2005, removing bad duplicate data has become substantially easier.  This example should help demonstrate that, as well as provide a method to create a repeatable test while learning the technique.
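As a closing thought, once the table is clean, one way to keep the duplicates from creeping back in, assuming the DemoID/SomeInt pair really should be unique, is to add a constraint along the lines of the Primary Keys mentioned at the start.  A sketch of that idea:

-- Prevent future duplicates on the now-clean table.
-- The constraint name, and the assumption that the DemoID/SomeInt pair should be unique,
-- are mine for this sketch.
ALTER TABLE DupeDemo
	ADD CONSTRAINT UQ_DupeDemo_DemoID_SomeInt UNIQUE (DemoID, SomeInt)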
