UPDATED March 2023: In this post, we’ll focus our attention on PostgreSQL performance and detecting slow queries. Performance tuning is not just about adjusting postgresql.conf or making sure your kernel parameters are set correctly. Performance tuning also means that you first need to find performance bottlenecks, isolate slow queries and understand what the system is doing in more detail.

Your best friend to detect slow queries: pg_stat_statements

After all these years, I still strongly believe that the best and most efficient way to detect performance problems is to make use of pg_stat_statements, an excellent extension shipped with PostgreSQL which is used to inspect general query statistics. It helps you instantly figure out which queries cause bad performance and how often they are executed.

What is performance tuning?

Over the years, I have begun to get the impression that for most people “tuning” is limited to adjusting some magical PostgreSQL parameters. Sure, parameter tuning does help, but I can assure you, there is no such thing as “speed = on”. It does not exist and it most likely never will. Instead, we have to fall back on some pretty “boring techniques” such as inspecting queries to figure out what is going on in your database.

Where does database load come from?

There is one law of nature that has been true for the past 20 years and will most likely still hold true 20 years from now:

Queries cause database load

And slow queries (though not only those) are the main source of such load.

Armed with this important, yet highly under-appreciated wisdom, we can use pg_stat_statements to figure out which queries have caused the most load, and tackle those – instead of wasting time on irrelevant guess work.

Installing pg_stat_statements in PostgreSQL

As mentioned above, pg_stat_statements comes as part of PostgreSQL. All you have to do to enjoy the full benefit of this exceptionally useful extension and take a bite out of your slow queries is to enable it.

The first thing you have to do is to change shared_preload_libraries in postgresql.conf:

shared_preload_libraries = 'pg_stat_statements'

Then restart PostgreSQL.
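
Since shared_preload_libraries is only read at server start, it is worth verifying the setting once the instance is back up. Assuming no other libraries are preloaded, a quick check from psql should look like this:

test=# SHOW shared_preload_libraries;
 shared_preload_libraries 
---------------------------
 pg_stat_statements
(1 row)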

Finally, the module can be enabled in your desired database:

test=# CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION

This last step deploys a view which we will use to inspect the data collected by the pg_stat_statements machinery.

Taking a look at pg_stat_statements

pg_stat_statements provides a ton of useful information. Here is the definition of the view, as of PostgreSQL 15. Note that the view has been growing over the years; more and more vital information has been added as PostgreSQL is steadily extended:

test=# \d pg_stat_statements
                      View "public.pg_stat_statements"
         Column         |       Type       | Collation | Nullable | Default 
------------------------+------------------+-----------+----------+---------
 userid                 | oid              |           |          | 
 dbid                   | oid              |           |          | 
 toplevel               | boolean          |           |          | 
 queryid                | bigint           |           |          | 
 query                  | text             |           |          | 
 plans                  | bigint           |           |          | 
 total_plan_time        | double precision |           |          | 
 min_plan_time          | double precision |           |          | 
 max_plan_time          | double precision |           |          | 
 mean_plan_time         | double precision |           |          | 
 stddev_plan_time       | double precision |           |          | 
 calls                  | bigint           |           |          | 
 total_exec_time        | double precision |           |          | 
 min_exec_time          | double precision |           |          | 
 max_exec_time          | double precision |           |          | 
 mean_exec_time         | double precision |           |          | 
 stddev_exec_time       | double precision |           |          | 
 rows                   | bigint           |           |          | 
 shared_blks_hit        | bigint           |           |          | 
 shared_blks_read       | bigint           |           |          | 
 shared_blks_dirtied    | bigint           |           |          | 
 shared_blks_written    | bigint           |           |          | 
 local_blks_hit         | bigint           |           |          | 
 local_blks_read        | bigint           |           |          | 
 local_blks_dirtied     | bigint           |           |          | 
 local_blks_written     | bigint           |           |          | 
 temp_blks_read         | bigint           |           |          | 
 temp_blks_written      | bigint           |           |          | 
 blk_read_time          | double precision |           |          | 
 blk_write_time         | double precision |           |          | 
 temp_blk_read_time     | double precision |           |          | 
 temp_blk_write_time    | double precision |           |          | 
 wal_records            | bigint           |           |          | 
 wal_fpi                | bigint           |           |          | 
 wal_bytes              | numeric          |           |          | 
 jit_functions          | bigint           |           |          | 
 jit_generation_time    | double precision |           |          | 
 jit_inlining_count     | bigint           |           |          | 
 jit_inlining_time      | double precision |           |          | 
 jit_optimization_count | bigint           |           |          | 
 jit_optimization_time  | double precision |           |          | 
 jit_emission_count     | bigint           |           |          | 
 jit_emission_time      | double precision |           |          |

The danger here is that people get lost in the sheer volume of information. Therefore, it makes sense to process this data a bit to extract the parts which are really useful.

Making use of pg_stat_statements in PostgreSQL

To make it easier for our readers to extract as much information as possible from pg_stat_statements, we have compiled a couple of queries that we have found useful over the years.
The most important one is used to find out which operations are the most time-consuming.

Here is the query:

test=# SELECT substring(query, 1, 40) AS query, calls,
              round(total_exec_time::numeric, 2) AS total_time,
              round(mean_exec_time::numeric, 2) AS mean_time,
              round((100 * total_exec_time / sum(total_exec_time)
                    OVER ())::numeric, 2) AS percentage
       FROM pg_stat_statements
       ORDER BY total_exec_time DESC
       LIMIT 10;
                  query                   |  calls  | total_time | mean_time | percentage 
------------------------------------------+---------+------------+-----------+------------
 SELECT  * FROM t_group AS a, t_product A | 1378242 |  128913.80 |      0.09 |      41.81
 SELECT  * FROM t_group AS a, t_product A |  900898 |  122081.85 |      0.14 |      39.59
 SELECT relid AS stat_rel, (y).*, n_tup_i |      67 |   14526.71 |    216.82 |       4.71
 SELECT $1                                | 6146457 |    5259.13 |      0.00 |       1.71
 SELECT  * FROM t_group AS a, t_product A | 1135914 |    4960.74 |      0.00 |       1.61
 /*pga4dash*/                            +|    5289 |    4369.62 |      0.83 |       1.42
 SELECT $1 AS chart_name, pg              |         |            |           | 
 SELECT attrelid::regclass::text, count(* |      59 |    3834.34 |     64.99 |       1.24
 SELECT  *                               +|  245118 |    2040.52 |      0.01 |       0.66
 FROM    t_group AS a, t_product          |         |            |           | 
 SELECT count(*) FROM pg_available_extens |     430 |    1383.77 |      3.22 |       0.45
 SELECT   query_id::jsonb->$1 AS qual_que |      59 |    1112.68 |     18.86 |       0.36
(10 rows)

What you see here is how often a certain type of query has been executed, and how many milliseconds of total execution time we have measured. Finally, I have put the numbers into context and calculated the percentage of total runtime for each type of query. Also, note that I have used substring to make the query shorter. The reason for that is to make it easier to display the data on the website. In real life, you usually want to see the full query.
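
Sorting by total runtime is not the only useful angle. If you are hunting for queries which are individually slow rather than merely frequent, the same view can be sorted by mean execution time instead. Here is a minimal variation of the query above, using the mean_exec_time and max_exec_time columns from the view definition:

test=# SELECT substring(query, 1, 40) AS query, calls,
              round(mean_exec_time::numeric, 2) AS mean_time,
              round(max_exec_time::numeric, 2) AS max_time
       FROM pg_stat_statements
       ORDER BY mean_exec_time DESC
       LIMIT 10;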

The upshot

The main observation in my example is that the first two queries already account for roughly 80% of the total runtime. In other words: the rest is practically irrelevant. One can also see that the slowest individual query by far (= the loading process) has absolutely no significance in the bigger picture. The real power of this query is that we instantly spot the most time-consuming operations.
However, sometimes it is not only about runtime; often, I/O is the real issue. pg_stat_statements can help you with that one as well. But let’s first reset the content of the view:

test=# SELECT pg_stat_statements_reset();
 pg_stat_statements_reset 
---------------------------
 
(1 row)
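
As a side note: since PostgreSQL 12, pg_stat_statements_reset() also accepts an optional user ID, database ID and query ID, so statistics can be discarded selectively instead of wiping everything. For example, to reset only the statistics belonging to the current database:

test=# SELECT pg_stat_statements_reset(
           0,
           (SELECT oid FROM pg_database WHERE datname = current_database()),
           0);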

Measuring I/O time is simple: the track_io_timing parameter can be turned on to capture this vital KPI. You can enable it in postgresql.conf for the entire server, or simply adjust it at the database level if you want more fine-grained data:

test=# ALTER DATABASE test SET track_io_timing = on;
ALTER DATABASE
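
Keep in mind that ALTER DATABASE ... SET only affects sessions opened after the change. In a fresh connection to the test database, the setting can be verified like this:

test=# SHOW track_io_timing;
 track_io_timing 
-----------------
 on
(1 row)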

In this example, the parameter has been set for this one specific database. The advantage is that we can now inspect I/O performance and see whether we have an I/O or a CPU problem:

test=# SELECT substring(query, 1, 30),
              total_exec_time,
              blk_read_time,
              blk_write_time
       FROM pg_stat_statements
       ORDER BY blk_read_time + blk_write_time DESC
       LIMIT 10;
           substring            |  total_exec_time   |   blk_read_time    | blk_write_time 
--------------------------------+--------------------+--------------------+----------------
 SELECT relid AS stat_rel, (y). | 14526.714420000004 |        9628.731881 |              0
 SELECT attrelid::regclass::tex | 3834.3388820000005 | 1800.8131490000003 |       3.351335
 FETCH 100 FROM c1              |   593.835973999964 | 143.45405699999006 |              0
 SELECT   query_id::jsonb->$1 A |        1112.681625 |  72.39612800000002 |              0
 SELECT   oid::regclass, relkin |         536.750372 | 57.409583000000005 |              0
 INSERT INTO deep_thinker.t_adv |  90.34870800000012 | 46.811619999999984 |              0
 INSERT INTO deep_thinker.t_thi |  72.65854599999999 |          43.621994 |              0
 create database xyz            |          97.532209 |          32.450164 |              0
 WITH x AS (SELECT c.conrelid:: | 46.389295000000004 | 25.007044999999994 |              0
 SELECT * FROM (SELECT relid::r | 511.72187599999995 | 23.482600000000005 |              0
(10 rows)

What we have done here is to compare the total_exec_time (= total execution time) to the time actually spent on I/O. If the I/O time is a significant fraction of the overall time, we are bound by the I/O system; if it is not, additional disks (= more IOPS) won’t be beneficial. track_io_timing is therefore essential to determine whether the bottleneck is a CPU or a disk problem.
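
To make the comparison easier to read, the I/O share can also be expressed as a percentage of the total execution time. The following is just a sketch built on the columns shown above; it assumes track_io_timing was enabled while the statistics were collected:

test=# SELECT substring(query, 1, 30),
              round(total_exec_time::numeric, 2) AS total_time,
              round(((blk_read_time + blk_write_time) * 100
                     / total_exec_time)::numeric, 2) AS io_percent
       FROM pg_stat_statements
       WHERE total_exec_time > 0
       ORDER BY blk_read_time + blk_write_time DESC
       LIMIT 10;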
But there is more: if you are looking for good performance, it makes sense to consider temporary I/O as a potential factor. temp_blks_read and temp_blks_written are the important columns here. But keep in mind that simply throwing work_mem at temporary I/O is usually not the solution. In normal everyday operations, the real problem is often a missing index. A query to track down such statements follows below.
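
A query in the same spirit as the ones above can rank statements by temporary I/O. Again, this is only a sketch using the columns listed in the view definition:

test=# SELECT substring(query, 1, 30),
              calls,
              temp_blks_read,
              temp_blks_written
       FROM pg_stat_statements
       ORDER BY temp_blks_read + temp_blks_written DESC
       LIMIT 10;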

Finally…

pg_stat_statements is an important topic, but it is often overlooked. Hopefully more and more PostgreSQL users will spread the word about this great extension! If you want to find out more about performance, we have tons of other useful tips for you. Maybe you want to check out our post about 3 ways to detect slow queries, or find out about checkpoints and checkpoint performance in PostgreSQL.

Also, try watching some of the exciting new videos on our YouTube channel. We post fresh content there on a regular basis. If you enjoy the videos, like us and subscribe!
