In 99% of all cases the PostgreSQL planner does an excellent job of optimizing your queries and making sure that you enjoy high performance and low response times. The infrastructure behind this is both sophisticated and robust. However, there are some corner cases which can turn out to be quite nasty: sometimes the planner simply has no real chance and has to rely on guesswork to come up with a reasonable plan.
The question now is: How can a user figure out where things go wrong? Here is an example …
Dissecting a query …
The goal here is to see how a simple query can be dissected and analyzed. Here is the query we want to optimize:
SELECT * FROM pg_stats;
Looks like a simple thing, doesn’t it? Well, not quite … Let us take a look at the definition of this system view:
SELECT ...
       ...
FROM   pg_statistic s
       JOIN pg_class c ON c.oid = s.starelid
       JOIN pg_attribute a ON c.oid = a.attrelid
                          AND a.attnum = s.staattnum
       LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE  NOT a.attisdropped
       AND has_column_privilege(c.oid, a.attnum, 'select'::text);
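By the way, it is easy to look this up yourself. psql's \d+ command displays the full definition of a view, and so does the pg_get_viewdef() function (the column list is abbreviated above for readability):

test=# \d+ pg_stats
test=# SELECT pg_get_viewdef('pg_stats'::regclass, true);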
Still, this looks like a simple query. Let us take a look at the execution plan:
test=# explain SELECT * FROM pg_stats;
                                             QUERY PLAN
----------------------------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=47.28..135.42 rows=6 width=535)
   ->  Hash Join  (cost=47.14..133.69 rows=6 width=475)
         Hash Cond: ((a.attrelid = c.oid) AND (a.attnum = s.staattnum))
         Join Filter: has_column_privilege(c.oid, a.attnum, 'select'::text)
         ->  Seq Scan on pg_attribute a  (cost=0.00..68.19 rows=2419 width=70)
               Filter: (NOT attisdropped)
         ->  Hash  (cost=40.75..40.75 rows=426 width=421)
               ->  Hash Join  (cost=14.64..40.75 rows=426 width=421)
                     Hash Cond: (s.starelid = c.oid)
                     ->  Seq Scan on pg_statistic s  (cost=0.00..20.26 rows=426 width=349)
                     ->  Hash  (cost=10.95..10.95 rows=295 width=72)
                           ->  Seq Scan on pg_class c  (cost=0.00..10.95 rows=295 width=72)
   ->  Index Scan using pg_namespace_oid_index on pg_namespace n  (cost=0.13..0.19 rows=1 width=68)
         Index Cond: (oid = c.relnamespace)
 Planning time: 1.746 ms
(15 rows)
Again, this one looks pretty sane. As always, we need to read the plan from the inside out. The first thing to see is the sequential scan on pg_class, which is expected to return 295 rows. Then come the next sequential scan and the hash joins stacked on top. With overall costs of roughly 135 units and an estimate of just 6 rows, the plan looks very promising.
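In case you are wondering what those abstract cost units actually are: they are expressed in multiples of a handful of planner parameters, with the cost of reading one page sequentially (seq_page_cost = 1) serving as the baseline. The relevant settings can be inspected directly:

test=# SELECT name, setting
       FROM   pg_settings
       WHERE  name IN ('seq_page_cost', 'random_page_cost',
                       'cpu_tuple_cost', 'cpu_index_tuple_cost',
                       'cpu_operator_cost');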
Let us look at what happens underneath. “explain analyze” will reveal what is really going on:
test=# explain analyze SELECT * FROM pg_stats;
                                             QUERY PLAN
---------------------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=47.28..135.42 rows=6 width=535) (actual time=9.284..14.197 rows=400 loops=1)
   ->  Hash Join  (cost=47.14..133.69 rows=6 width=475) (actual time=9.245..13.215 rows=400 loops=1)
         Hash Cond: ((a.attrelid = c.oid) AND (a.attnum = s.staattnum))
         Join Filter: has_column_privilege(c.oid, a.attnum, 'select'::text)
         ->  Seq Scan on pg_attribute a  (cost=0.00..68.19 rows=2419 width=70) (actual time=0.006..3.282 rows=2328 loops=1)
               Filter: (NOT attisdropped)
         ->  Hash  (cost=40.75..40.75 rows=426 width=421) (actual time=0.827..0.827 rows=400 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 143kB
               ->  Hash Join  (cost=14.64..40.75 rows=426 width=421) (actual time=0.133..0.538 rows=400 loops=1)
                     Hash Cond: (s.starelid = c.oid)
                     ->  Seq Scan on pg_statistic s  (cost=0.00..20.26 rows=426 width=349) (actual time=0.011..0.106 rows=400 loops=1)
                     ->  Hash  (cost=10.95..10.95 rows=295 width=72) (actual time=0.113..0.113 rows=299 loops=1)
                           Buckets: 1024  Batches: 1  Memory Usage: 31kB
                           ->  Seq Scan on pg_class c  (cost=0.00..10.95 rows=295 width=72) (actual time=0.002..0.050 rows=299 loops=1)
   ->  Index Scan using pg_namespace_oid_index on pg_namespace n  (cost=0.13..0.19 rows=1 width=68) (actual time=0.001..0.001 rows=1 loops=400)
         Index Cond: (oid = c.relnamespace)
 Planning time: 0.471 ms
 Execution time: 14.294 ms
(18 rows)
Again, the idea is to read things from the inside out. Each node now carries two brackets: one for “cost” and one for “actual time”. The important thing here is that costs are measured in abstract units, while “actual time” is measured in milliseconds. What we see here is that the planner estimated 295 rows for pg_class and we got 299 rows in reality. Pretty good, no? The time spent on this operation is close to zero, which is a very good sign. Let us move up the plan now: the sequential scan on pg_statistic expects 426 rows. We get slightly fewer (400), which is pretty good as well.
Moving up the plan, the sequential scan on pg_attribute is also close to what PostgreSQL has predicted: 2419 estimated versus 2328 actual rows is perfectly reasonable.
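Before moving on, a quick note: “explain analyze” is not the end of the line. EXPLAIN accepts additional options; BUFFERS, for example, adds shared-buffer hit and read counts to every node, which helps to tell caching effects apart from genuine planner problems (output omitted here):

test=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM pg_stats;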
When things start to be a little off …
Higher up in the plan, however, we can see a couple of interesting lines:
->  Hash Join  (cost=47.14..133.69 rows=6 width=475) (actual time=9.245..13.215 rows=400 loops=1)
      Hash Cond: ((a.attrelid = c.oid) AND (a.attnum = s.staattnum))
      Join Filter: has_column_privilege(c.oid, a.attnum, 'select'::text)
The estimate here is just 6 rows; in reality, however, we get a set of 400 rows. What a whopping difference: the planner is off by a factor of more than 60.
Why does it matter? If an estimate is far too low, it can easily happen that the planner puts a “nested loop” on top of the misjudged node, because a nested loop is cheap when only a handful of rows is expected. With just 400 rows we cannot expect issues here. But what if you end up with a nested loop grinding through a million rows when the planner expected just 500? It can easily turn out to be a disaster.
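To get a feeling for how easily this can happen, here is a minimal sketch (t_demo is a made-up table used purely for illustration). The planner keeps statistics for plain columns, but for an expression such as a modulo calculation it has nothing to work with and has to fall back to a built-in default selectivity:

test=# CREATE TABLE t_demo (id int);
test=# INSERT INTO t_demo SELECT * FROM generate_series(1, 1000000);
test=# ANALYZE t_demo;
test=# EXPLAIN SELECT * FROM t_demo WHERE (id % 10) = 0;

In my experience the estimate for a condition like this ends up at a tiny fraction of the table, even though 10% of all rows actually match. If a join sits on top of such a node, the error propagates upwards and can flip the join strategy.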
So, how do we end up with this wrong estimate? The problem here is the join filter. To see whether access is allowed or not, the view calls the has_column_privilege() function. The point is that PostgreSQL has absolutely no statistics about the return values of this function, so it has to guess. As I am logged in as a superuser, I happen to be able to read everything, and the filter removes nothing at all. And voilà: this is where the wrong estimate comes from.
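The same effect can be observed outside of pg_stats. Whenever a boolean-returning function shows up as a filter, the planner simply assumes that some fixed fraction of the rows will survive (one third, as far as I know, unless the function provides its own selectivity estimate), no matter what the function really does:

test=# EXPLAIN SELECT *
       FROM   pg_class
       WHERE  has_table_privilege(oid, 'SELECT');

The row estimate will be a fixed share of pg_class, even though a superuser will in reality see every single row.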
Of course, in my example this is not dangerous at all. It just shows nicely how such a misestimate can turn into a hidden bomb.