Removing duplicate rows in PostgreSQL

04.2017

After some time of absence due to a high workload, I finally got around to writing more about an issue that has been on my mind for quite a while: removing duplicate rows.

It happens from time to time that PostgreSQL support clients accidentally load data twice or otherwise manage to duplicate data (human error, application bugs, etc.). The main question is: how can the problem be fixed? How can developers handle duplicate data and clean things up? This post tries to answer exactly that question.

Detecting and finding duplicate entries

To demonstrate how duplicates can be found, I have prepared some sample data:
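The original listing is missing here; a minimal sketch of such a setup might look like this (the table name t_location is taken from the comment thread below, while the single-column layout is an assumption):

```sql
-- hypothetical sample table; the real column layout may differ
CREATE TABLE t_location (
    name    text
);
```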

The idea is to store a couple of German cities:
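For instance (the exact list of cities is an assumption, apart from the two duplicated ones discussed below):

```sql
-- load some German cities; 'Dortmund' and 'Frankfurt' are inserted twice
INSERT INTO t_location (name)
VALUES ('Berlin'),
       ('Dortmund'),
       ('Dortmund'),
       ('Frankfurt'),
       ('Frankfurt'),
       ('München');
```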

Obviously some cities show up more than once:
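A simple aggregation reveals them (again assuming the hypothetical t_location table):

```sql
-- every name appearing more than once is a duplicate
SELECT name, count(*)
FROM   t_location
GROUP  BY name
HAVING count(*) > 1;
```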

“Dortmund” and “Frankfurt” are the ones that have been added more than once and need a proper cleanup.

Identifying rows

Before we can tackle the problem head-on, I want to tell you about a thing called “ctid”, which PostgreSQL uses to identify the physical location of a row in a table:
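The listing itself is not shown here; a query of the following shape (assuming the hypothetical t_location table) exposes the ctid of every row:

```sql
-- ctid is a system column holding the physical (block, tuple) position
SELECT ctid, * FROM t_location;
```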

(0, 1) means that we are looking at the first row of the first block. (0, 2) is the second row in the first 8k block, and so on. The trouble is: there is no such thing as “DELETE … LIMIT 1”. Therefore we have to find a way to use the “hidden” columns to solve the problem. The ctid gives us a free identifier, which can help to delete just a single row.

Here is a query that returns the first incarnation of each row:
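A sketch of such a query, assuming the hypothetical t_location table with a single name column, aggregates on the smallest ctid per value:

```sql
-- the lowest ctid per name marks the first physical incarnation
SELECT min(ctid), name
FROM   t_location
GROUP  BY name;
```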

Now that we have identified all the duplicate rows in the table, it is possible to write a DELETE statement that just kills the undesired data:
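Such a statement might look like this (again a sketch over the assumed t_location table): every row whose ctid is not the smallest one for its name is removed.

```sql
-- keep the first physical incarnation of every city, delete the rest
DELETE FROM t_location
WHERE  ctid NOT IN (SELECT min(ctid)
                    FROM   t_location
                    GROUP  BY name);
```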

As you can see, the two rows we don't want anymore are removed from the table. Mind that the ctid is not a static thing: UPDATE statements, VACUUM FULL, etc. can actually change it. Therefore it is highly recommended to perform this operation only if you are the only one on the system and the table is locked appropriately.


In order to receive regular updates on important changes in PostgreSQL, subscribe to our newsletter, or follow us on Twitter, Facebook, or LinkedIn.

7 years ago

Very useful and easy to understand. This will go to my personal documentation 🙂

7 years ago

Hey, thanks for the article. Take a look at this one as well

Hans-Jürgen Schönig
7 years ago
Reply to  Vyacheslav

This one is known too. However, we should not assume that we have unique IDs. If there is no primary key, the assumption might be false 🙂

7 years ago

Thanks for pointing out how to use tuple notation. Before that I would convert the row to JSON with row_to_json(*). I didn't know that this is unnecessary.

Golokesh Patra
6 years ago


DROP TABLE IF EXISTS aNewEmptyTemporaryOrBackupTable;

CREATE TABLE aNewEmptyTemporaryOrBackupTable
AS SELECT DISTINCT * FROM originalTableContainingDuplicates;

TRUNCATE TABLE originalTableContainingDuplicates;

INSERT INTO originalTableContainingDuplicates
SELECT * FROM aNewEmptyTemporaryOrBackupTable;

DROP TABLE aNewEmptyTemporaryOrBackupTable;


The 1st query ensures that if you already have a backup/temporary table of the original table containing duplicates, it is dropped first.

The 2nd query creates a new (temporary/backup) table holding the unique entries of the original table, so the new table is the same as the original table minus the duplicate entries.

The 3rd query truncates (empties) the original table.

The 4th query inserts all the unique entries from the temporary table back into the original table, which has just been truncated (so it contains no data). After this query is executed, the original table is populated with the unique data that was in the temporary table.

The 5th query drops the now unnecessary temporary table.

So the end result is: the original table has only unique entries and no duplicates.

Salah Jubeh
7 years ago

You can delete duplicates without a CTE and GROUP BY clause by using a DELETE … USING statement. The following is much faster than using GROUP BY and less complicated.

delete from t_location a using t_location b where (a.*) = (b.*) and a.ctid < b.ctid;

4 years ago

Hi Hans-Jürgen, very clever approach, thanks.
But what if, instead of deleting those rows, we would like to identify them by adding a field?
That would allow us to merge duplicates or select the one that really must be deleted...
Imagine you have a list of contacts; among the duplicated ones, some operators have added info to some records but not to others (records with some data and records without).
I'm actually struggling to do this in SQL (Postgres, actually), while it is relatively easy in a spreadsheet.


Douglas H. Bradshaw
5 years ago

That's really helpful -- I didn't know about ctid. Thanks!

CYBERTEC PostgreSQL International GmbH
Römerstraße 19
2752 Wöllersdorf

+43 (0) 2622 93022-0
