Query performance in PostgreSQL 13 RC1

September 2020

By Kaarel Moppel - By the time you read this blog post, the new PostgreSQL version will probably already have been officially released for wider usage... but it seems some eager DBA already installed last week's Release Candidate 1 and took it for a spin 😉 The "spin", though, takes 3 days to run for my scripts, which is why I didn't want to wait for the official release.

As this is an RC and some things could still change, this is just a very brief post with some synthetic pgbench test numbers from my testing, plus a mini conclusion at the end.

The source code statistics, by the way, again look very similar to last year's v12 release, with a very slight decrease in the number of changes (which, on the other hand, could just mean they were chunked into larger pieces), so one might assume that we'll get a very stable release overall.

Test setup

  • Test hardware: my good old testing workstation - 4 CPUs, 16GB RAM, SATA SSD
  • OS: a clean install of Ubuntu Server 18.04.5 LTS (Bionic Beaver)
  • Postgres: v12.4 and v13rc1, self-compiled, all default postgresql.conf parameters except a few "standard" tunings:
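The exact postgresql.conf snippet didn't survive into this copy of the post. A typical set of such "standard" tunings for a 16GB machine might look like the sketch below - all values are my assumption; only synchronous_commit = off is confirmed by the text that follows:

```
# postgresql.conf - hypothetical "standard" tunings (values assumed,
# only synchronous_commit = off is confirmed by the post)
shared_buffers = 4GB                 # ~25% of RAM
checkpoint_completion_target = 0.9   # spread out checkpoint I/O
max_wal_size = 8GB                   # fewer forced checkpoints
random_page_cost = 1.25              # SSD storage
synchronous_commit = off             # don't wait for WAL flush on commit
```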

Note, though, that the last parameter, synchronous_commit, is not really a standard setting... but disabling it for testing generally makes sense, as we're not really interested in testing the I/O syncing capabilities; rather, we want to push through more "transactions per second" to possibly spot some algorithmic differences. Also note that the pgbench "scaling factor" was selected so that the generated dataset would mostly fit into memory, for the same purpose.


All timings were measured via pg_stat_statements, which should give the most accurate information.
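For reference (the exact query isn't shown in the post), pulling the per-statement timings looks something like this - note that in v13 the timing columns of pg_stat_statements were renamed, e.g. total_time / mean_time / stddev_time became total_exec_time / mean_exec_time / stddev_exec_time:

```sql
-- v13 column names; on v12 use mean_time / stddev_time instead
SELECT query,
       calls,
       mean_exec_time,
       stddev_exec_time
FROM   pg_stat_statements
ORDER  BY total_exec_time DESC
LIMIT  5;
```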

  • Pgbench default with Foreign keys

1 transaction = 5 operations: 3x UPDATE, 1x INSERT, 1x SELECT, 4h runtime per PG version / scale pair, 2 concurrent clients.

TPS change for scale 200 (fits into shared buffers): 4662 vs 4660 ~ 0%
TPS change for scale 1200 (75% in RAM): 249.6 vs 247.9 ~ -0.7%

Timing info for the most costly operation: UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2

Version | Scale                     | Mean time (ms) | Change % | Stddev (ms) | Change %
12.4    | 200 (fits shared buffers) | 0.0291         | -        | 3.47        | -
13rc1   | 200 (fits shared buffers) | 0.0275         | -5.5     | 2.85        | -17.8
12.4    | 1200 (75% in RAM)         | 7.515          | -        | 19.38       | -
13rc1   | 1200 (75% in RAM)         | 7.562          | +0.6     | 20.93       | +8.0


  • Pgbench read-only

1 transaction = 1 indexed SELECT, 4h runtime per PG version / scale pair, 4 concurrent clients.

Query: SELECT abalance FROM pgbench_accounts WHERE aid = $1

Version | Scale             | Mean time (ms) | Change % | Stddev (ms) | Change %
12.4    | 500 (fits RAM)    | 0.0085         | -        | 0.0027      | -
13rc1   | 500 (fits RAM)    | 0.0087         | +2.6     | 0.0030      | +11.6
12.4    | 1200 (75% in RAM) | 0.0976         | -        | 0.1043      | -
13rc1   | 1200 (75% in RAM) | 0.1020         | +4.5     | 0.1093      | +4.8


  • Typical analytical queries

These SELECT-s are some home-brewed queries that I threw together a couple of years ago for the purpose of testing the freshly released v11 in a very similar manner. The queries do some bulkier aggregations of the kind one sees quite often, on a larger subset of (or all of) the data rows.

Scaling factor = 500 (data fits 100% in RAM), clients = 2

Query | Mean time v12.4 (ms) | Mean time v13rc1 (ms) | Change % | Stddev v12.4 (ms) | Stddev v13rc1 (ms) | Change %
Q1    | 2384.2               | 2118.5                | -11.1    | 314.05            | 365.22             | +16.3
Q2    | 12198                | 14552                 | +19.3    | 169.4             | 272.3              | +60.8
Q3    | 17584                | 14036                 | -20.2    | 1458.0            | 1636.1             | +12.2
Q4    | 4725.7               | 4527.4                | -4.2     | 1099.8            | 1118.4             | +1.7

First impression

So what do we make of the test numbers? Well... they look good in general - if we sum up all the percentage "wins" for the mean operation times, we get a net 14% speedup!
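That 14% is just the sum of the mean-time "Change %" columns from the three tables above - a back-of-the-envelope aggregate, not a weighted average. A quick sanity check:

```python
# Sum the mean-time "Change %" values from the three test tables above.
# Negative = faster in 13rc1.
mean_changes = [
    -5.5, +0.6,                  # pgbench default with foreign keys
    +2.6, +4.5,                  # pgbench read-only
    -11.1, +19.3, -20.2, -4.2,   # analytical queries Q1-Q4
]
net = sum(mean_changes)
print(f"net mean-time change: {net:+.1f}%")  # -14.0% -> ~14% overall speedup
```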

By the way, 14% is actually more than I observed in previous years, so I'm pretty excited about this release now. The only thing holding me back from being even more excited is that, on the other hand, the sum of all percentage standard deviation changes increased quite noticeably, by +97%! Though this increase mostly comes from our analytical query nr. 3, doing a "DISTINCT COUNT" that could be rewritten to be more efficient (which most people of course don't do...), it still seems to hint that there is now some more jitter in play with the improved algorithms. Or it might just be my test rig, of course... so I'm waiting to see some other people's results in the near future as well.
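The actual Q3 isn't listed in the post, but the rewrite hinted at above typically looks something like this (table and column chosen for illustration from the standard pgbench schema): count(DISTINCT col) forces a single sort-based aggregate, while the subquery form lets the planner pick a hash aggregate and parallel workers.

```sql
-- slower form: DISTINCT inside the aggregate (single-process, sort-based)
SELECT count(DISTINCT bid) FROM pgbench_accounts;

-- usually faster rewrite: de-duplicate first, then count
SELECT count(*)
FROM   (SELECT DISTINCT bid FROM pgbench_accounts) AS uniq;
```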

So in the end - some test items were a bit slower, others faster... and most importantly, there seem to be no grave problems. Anything seriously broken would probably have been caught by the project's build farm anyway, so I'm not really worried on that front.

And in real life, in the end it mostly comes down to how you access your data and how well it fits into shared buffers or RAM. When moving beyond that, we see order-of-magnitude fall-offs, so any 10 or 20% algorithmic improvement will be powerless there anyway. But still, the v13 release will be a great one, with indexing improvements at the forefront - I hope to write about those soon as well. And in the meantime, you all start preparing for that imminent version upgrade, mkay!


In order to receive regular updates on important changes in PostgreSQL, subscribe to our newsletter, or follow us on Twitter, Facebook, or LinkedIn.
