By Kaarel Moppel – By the time you read this blog post, the new PostgreSQL version will most likely already be officially released to the public for wider usage… but it seems some eager DBA already installed last week’s Release Candidate 1 and took it for a spin 😉 The “spin”, though, takes 3 days to run with my scripts, which is why I didn’t want to wait for the official release.

As this is an RC and some things could still change, this is just a very brief posting with the synthetic pgbench test numbers from my runs laid out for you, and a mini conclusion at the end.

The source code statistics, by the way, again look very similar to last year’s v12 release, with a very slight decrease in the number of changes (which, on the other hand, could simply be chunked into larger pieces), so one might assume that overall we will get a very stable release.

git diff --shortstat REL_12_4..REL_13_RC1
3109 files changed, 257376 insertions(+), 198880 deletions(-)

git log --oneline REL_12_4..REL_13_RC1 | wc -l
2216

Test setup

Test hardware was my good old testing workstation: 4 CPUs, 16GB RAM, SATA SSD
OS: a clean install of Ubuntu Server 18.04.5 LTS (Bionic Beaver)
Postgres: v12.4 and v13rc1, self-compiled, with all default postgresql.conf parameters except a few “standard” tunings:

shared_buffers=4GB
logging_collector=on
shared_preload_libraries='pg_stat_statements'
checkpoint_completion_target=0.9
synchronous_commit=off

Note, though, that the last parameter, synchronous_commit, is not really a standard setting… but disabling it for testing generally makes sense, as we’re not really interested in testing the IO syncing capabilities here; we rather want to push through as many “transactions per second” as possible, to have a better chance of spotting algorithmic differences. Also note that the pgbench “scaling factor” was selected so that the generated dataset would fit mostly into memory, for the same purpose.
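
For reference, the datasets were generated with pgbench’s initialization mode. The exact commands aren’t shown here, so the following is just a sketch of what such an initialization typically looks like (the --foreign-keys flag only applies to the “default with foreign keys” test variant):

# initialize a dataset at the chosen scale factor (here 1200);
# --foreign-keys adds the FK constraints used by the default test variant
pgbench -i --foreign-keys -s 1200 pgbench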

Results

All timings were measured via pg_stat_statements, which should give the most accurate information.
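
For example, mean and standard deviation per statement can be pulled out with a query along these lines (a sketch; note that on v13 the timing columns were renamed to mean_exec_time, stddev_exec_time and total_exec_time):

/* v12.4 column names shown; adjust for v13 as noted above */
SELECT query, calls,
       round(mean_time::numeric, 4)   AS mean_ms,
       round(stddev_time::numeric, 4) AS stddev_ms
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;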

  • Pgbench default with Foreign keys

1 transaction = 5 operations: 3x UPDATE, 1x INSERT, 1x SELECT, 4h runtime per PG version / scale pair, 2 concurrent clients.
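
The invocation for such a run would look roughly like below; this is my reconstruction based on the description above, not the exact command used.

# default TPC-B-like script, 2 clients, 4 hours (14400 s) per version / scale combination
pgbench -c 2 -T 14400 pgbench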

TPS improvement for scale 200 (Shared buffers): 4662 vs 4660 ~ 0%
TPS improvement for scale 1200: 249.6 vs 247.9 ~ -0.7%

Timing info for the most costly operation: UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2

Version | Scale                | Mean time (ms) | Change % | Stddev (ms) | Change %
12.4    | 200 (Shared Buffers) | 0.0291         |          | 3.47        |
13rc1   | 200 (Shared Buffers) | 0.0275         | -5.5     | 2.85        | -17.8
12.4    | 1200 (75% in RAM)    | 7.515          |          | 19.38       |
13rc1   | 1200 (75% in RAM)    | 7.562          | +0.6     | 20.93       | +8.0

 

  • Pgbench read-only

1 transaction = 1 indexed SELECT, 4h runtime per PG version / scale pair, 4 concurrent clients.

Query: SELECT abalance FROM pgbench_accounts WHERE aid = $1
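
This corresponds to pgbench’s built-in “select-only” script, so the runs were presumably along these lines (again a sketch, not the literal command used):

# built-in select-only script (-S), 4 clients, 4 hours
pgbench -S -c 4 -T 14400 pgbench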

Version | Scale             | Mean time (ms) | Change % | Stddev (ms) | Change %
12.4    | 500 (fits RAM)    | 0.0085         |          | 0.0027      |
13rc1   | 500 (fits RAM)    | 0.0087         | +2.6     | 0.0030      | +11.6
12.4    | 1200 (75% in RAM) | 0.0976         |          | 0.1043      |
13rc1   | 1200 (75% in RAM) | 0.1020         | +4.5     | 0.1093      | +4.8

 

  • Typical analytical queries

These SELECTs are some home-brewed queries that I threw together some years ago to test the freshly released v11 in a very similar manner. The queries perform the kind of bulkier aggregations that one sees quite often, over a larger subset of the data rows or over all of them.

Scaling factor = 500 (data fits 100% in RAM), clients = 2

/* 1st let’s create a copy of the accounts table to be able to test massive JOIN-s */
 
CREATE TABLE pgbench_accounts_copy AS SELECT * FROM pgbench_accounts;
 
CREATE UNIQUE INDEX ON pgbench_accounts_copy (aid);
 
/* Query 1 */
 
SELECT avg(abalance) FROM pgbench_accounts JOIN pgbench_branches USING (bid) WHERE bid % 2 = 0;
 
/* Query 2 */
 
SELECT COUNT(DISTINCT aid) FROM pgbench_accounts;
 
/* Query 3 */
 
SELECT count(*) FROM pgbench_accounts JOIN pgbench_accounts_copy USING (aid) WHERE aid % 2 = 0;
 
/* Query 4 */
 
SELECT sum(a.abalance) FROM pgbench_accounts a JOIN pgbench_accounts_copy USING (aid) WHERE a.bid % 10 = 0;
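
Such custom statements can be fed to pgbench via script files; a hypothetical harness for one of the queries could look like this (the file name and flags are my assumption, not the exact setup used):

# run a custom query file with 2 clients for 4 hours
pgbench -f query1.sql -c 2 -T 14400 pgbench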

Query | Mean time v12.4 (ms) | Mean time v13rc1 (ms) | Change % | Stddev v12.4 (ms) | Stddev v13rc1 (ms) | Change %
Q1    | 2384.2               | 2118.5                | -11.1    | 314.05            | 365.22            | +16.3
Q2    | 12198                | 14552                 | +19.3    | 169.4             | 272.3             | +60.8
Q3    | 17584                | 14036                 | -20.2    | 1458.0            | 1636.1            | +12.2
Q4    | 4725.7               | 4527.4                | -4.2     | 1099.8            | 1118.4            | +1.7

First impression

So what do we think of the test numbers? Well… they look good in general: if we sum up all the percentage “winnings” for the mean operation times (i.e. the signed “Change %” values for the mean times in the tables above), we end up with roughly a 14% speedup!

By the way, 14% is more than was observed in previous years, so I’m pretty excited about this release. The only thing holding me back from being even more excited is the fact that, on the other hand, the sum of all the percentage standard deviation changes increased quite noticeably, by +97%! This increase mostly comes from analytical query nr. 2, the “DISTINCT COUNT”, which could be rewritten to be more efficient (but which most people of course don’t do…); still, it seems to hint that there is now a bit more jitter in play with the improved algorithms. Or it might just be my test rig, of course… so I’m waiting to see other people’s results in the near future to find out what they get.
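
For the record, the usual rewrite for such a DISTINCT count is to push the de-duplication into a subquery, which often lets the planner choose a cheaper aggregation strategy. A sketch (not something that was measured in this test):

/* an often faster equivalent of SELECT COUNT(DISTINCT aid) FROM pgbench_accounts */
SELECT count(*)
FROM (SELECT DISTINCT aid FROM pgbench_accounts) AS uniq_aids;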

So, in the end: some test items were a bit slower, others faster… and most importantly, there seem to be no grave problems. Anything truly serious would probably have been caught and reported by the project’s “test farm” anyway, so I’m not really worried about that…

And in real life, it mostly comes down to how you access your data and how well it fits into shared buffers or RAM. Once you move beyond that, we see orders-of-magnitude fall-offs, so any 10 or 20% algorithmic improvement is powerless there anyway. But still, the v13 release will be a great one, with the indexing improvements at the forefront; I hope to write about those soonish as well. And in the meantime, you all can start preparing for that imminent version upgrade, mkay!

 

