Adding an index can decrease SELECT performance

June 2018
A bad query plan ...
 © Laurenz Albe 2018


We all know that you have to pay a price for a new index you create — data modifying operations will become slower, and indexes use disk space. That's why you try to have no more indexes than you actually need.

But most people think that SELECT performance will never suffer from a new index. The worst that can happen is that the new index is not used.

However, this is not always true, as I have seen more than once in the field. I'll show you such a case and tell you what you can do about it.

An example

We will experiment with this table:
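The original listing is not preserved in this copy of the post. The following is a hedged reconstruction: the table and column names (`skewed`, `sort`, `category`, `interesting`) are taken from the rest of the post and its comments, but the exact data distribution is an assumption and will not reproduce the precise row counts quoted below.

```sql
-- Hypothetical reconstruction: names come from the rest of the post;
-- the data distribution is an assumption.
CREATE TABLE skewed (
   sort        integer NOT NULL,
   category    integer NOT NULL,
   interesting boolean NOT NULL
);

-- A million rows, 1000 per category; whether a row is "interesting"
-- is skewed with respect to the sort order (interesting rows are
-- rare at the beginning of it).
INSERT INTO skewed (sort, category, interesting)
SELECT i, i % 1000, i > 200000
FROM generate_series(1, 1000000) AS i;

CREATE INDEX skewed_category_idx ON skewed (category);

VACUUM (ANALYZE) skewed;
```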

We want to find the first twenty interesting rows in category 42:
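The query listing is likewise missing here; given the description, it presumably had this shape:

```sql
SELECT * FROM skewed
WHERE category = 42
  AND interesting
ORDER BY sort
LIMIT 20;
```

Running it with `EXPLAIN (ANALYZE)` shows the plan discussed below.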

This performs fine:

PostgreSQL uses the index to find the 1000 rows with category 42, filters out the ones that are not interesting, sorts them, and returns the top 20. Five milliseconds is fine.

A new index makes things go sour

Now we add an index that can help us with sorting. That is definitely interesting if we often have to find the top 20 results:
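The comments below refer to this index as `skewed_sort_idx`, so the statement was presumably:

```sql
CREATE INDEX skewed_sort_idx ON skewed (sort);
```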

And suddenly, things are looking worse:

What happened?

PostgreSQL thinks that it will be faster if it examines the rows in sort order using the index until it has found 20 matches. But it doesn't know how the matching rows are distributed with respect to the sort order, so it is not aware that it will have to scan 69042 rows until it has found its 20 matches (the execution plan reports Rows Removed by Filter: 69022).

What can we do to get the better plan?

PostgreSQL v10 added extended statistics to track how the values in different columns are correlated, but they do not track where the matching values lie within the sort order, so they will not help us here.
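For illustration, this is what such extended statistics look like (a sketch; the statistics name is made up). Functional dependency statistics record that two columns are correlated, but not where the matching rows sit in the `sort` order, which is exactly the information the planner would need:

```sql
-- Sketch: extended statistics (PostgreSQL v10+). These capture the
-- correlation between "category" and "interesting", but not the
-- position of matching rows in the "sort" order.
CREATE STATISTICS skewed_dep_stats (dependencies)
   ON category, interesting FROM skewed;
ANALYZE skewed;
```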

There are two workarounds:

  1. Drop the index that misleads PostgreSQL. If that is possible, it is a simple solution. But usually one cannot do that, because the index is either used to enforce a unique constraint, or it is needed by other queries that benefit from it.
  2. Rewrite the query so that PostgreSQL cannot use the offending index. Of the many possible solutions for this, I want to present two:
    • A subquery with OFFSET 0:

      This makes use of the fact that OFFSET and LIMIT prevent a subquery from being “flattened”, even if they have no effect on the query result.

    • Using an expression as sort key:

      This makes use of the fact that PostgreSQL cannot deduce that sort + 0 is the same as sort. Remember that PostgreSQL is extensible, and you can define your own + operator!
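Concretely, the two rewrites described above could look like this (a sketch, assuming the example query; the original listings are not preserved in this copy):

```sql
-- Variant 1: OFFSET 0 prevents the subquery from being "flattened",
-- so the planner cannot use the index on "sort" for the outer ORDER BY.
SELECT *
FROM (SELECT * FROM skewed
      WHERE category = 42
        AND interesting
      OFFSET 0) AS q
ORDER BY sort
LIMIT 20;

-- Variant 2: the expression "sort + 0" cannot be matched to the
-- plain index on "sort".
SELECT * FROM skewed
WHERE category = 42
  AND interesting
ORDER BY sort + 0
LIMIT 20;
```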


In order to receive regular updates on important changes in PostgreSQL, subscribe to our newsletter, or follow us on Twitter, Facebook, or LinkedIn.

Joshua Otwell
5 years ago

Your blog posts are great. The drawings that kick most of them off really are a nice addition and set them apart. Thanks for sharing.

Rob Johnson
6 years ago

Alternatively, you could create the second index as a conditional index, and leave the query as-is:

CREATE INDEX skewed_sort_idx ON skewed (sort) where interesting;

With the conditional index, the original query finished in 8 ms on my machine. My timings in the other two cases matched yours, so this conditional index still degrades performance, but not as much.

6 years ago
Reply to Rob Johnson

You are right, that is also an alternative.

Luca Veronese
5 years ago

The best index here would be
CREATE INDEX skewed_sort_idx ON skewed (category, sort) where interesting;

CYBERTEC PostgreSQL International GmbH