
Some weeks ago, Beta 1 of the upcoming Postgres 10 was released. Besides the very visible change to the version numbering scheme (no more X.Y.Z), the release notes promised - alongside other very cool and long-awaited features like built-in Logical Replication and Table Partitioning - many performance improvements, especially in the area of analytics. The release notes mentioned gains of up to 40% for cases with a large number of rows (which could of course mean different things to different people), together with a note to test it out and report back. So now I have found the time to do exactly that, and I'm again just laying out the numbers from running some analytical queries for you to evaluate.

Test queries

Postgres has excellent analytical support, so choosing what exactly to test without going through the v10 Git changelog in detail made me pause for a moment. But I decided to keep it simple this time (hopefully I'll go deeper for the final Release Candidate) and conjured up three quite simple SELECT queries, based on hints from the release notes and on the schema generated by our good old friend pgbench.

For pbench "scale" I chose 100, meaning 10 Million rows (1.3GB) will be populated into the pgbench_accounts table. This gives enough rows but still fits into memory so that runtimes are more predictable.

Hardware & Server Config

I spun up a modest c3.xlarge instance on AWS (4 cores, Intel Xeon E5-2680 v2 @ 2.8GHz, 7.5GB RAM, RHEL 7.3), but as we're not testing hardware, it should be fine. The disk doesn't really matter here, as it's a read-only test and everything will fit into RAM. For additional fairness, my test script also alternated the queries between 9.6 and 10 Beta, to try to suppress the effects of possible background load jitter on a shared virtual machine.

For running Postgres, I used the official Postgres packages from the YUM repo for both 9.6.3 and 10 Beta 1, just changing the port to 5433 for 10 Beta.
Concerning server settings, I left everything at the defaults, except for a few changes applied to both clusters, with the reasons documented as comments.
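Purely as an illustration (these are not necessarily the exact values from my setup), the kind of deviations from the defaults for an in-memory, read-only test could look roughly like this:

    -- illustrative values only, not the exact settings from the test setup
    ALTER SYSTEM SET shared_buffers = '2GB';   -- keep the ~1.3 GB data set fully cached (needs a restart)
    ALTER SYSTEM SET work_mem = '256MB';       -- let aggregations and joins work without spilling to disk
    SELECT pg_reload_conf();                   -- apply the reloadable settings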

Results

After running my test script (1 hour of analytical queries on both clusters), which basically does all the work out of the box and can be found here, I got the numbers below. For generating the percentage differences, by the way, you could use this query here.

Query   Instance   Mean time (s)   % Change   Stddev time (s)   % Change
SUM     9.6        9.74                       3.86
SUM     10beta1    9.28            -5.0       2.05              -87.9
JOIN    9.6        5.05                       2.13
JOIN    10beta1    5.98            +15.5      1.53              -39.0
CUBE    9.6        10.35                      1.53
CUBE    10beta1    8.75            -18.3      1.93              +21.0
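For reference, a minimal sketch of one way to calculate such percentage differences - assuming a hypothetical results table results(query_name, pg_ver, runtime_s) holding the individual timings - could look like this:

    SELECT query_name,
           avg(runtime_s) FILTER (WHERE pg_ver = '9.6')     AS mean_96,
           avg(runtime_s) FILTER (WHERE pg_ver = '10beta1') AS mean_10,
           -- percentage change of the 10 Beta 1 mean runtime vs. the 9.6 baseline
           round((100.0 * (avg(runtime_s) FILTER (WHERE pg_ver = '10beta1')
                         - avg(runtime_s) FILTER (WHERE pg_ver = '9.6'))
                        / avg(runtime_s) FILTER (WHERE pg_ver = '9.6'))::numeric, 1) AS pct_change
    FROM results
    GROUP BY query_name
    ORDER BY query_name;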

So what do these numbers tell us? Two queries out of three have improved – that's good, of course. Any significant changes? Not really – only for the grouping sets query. But the advertised "unique join" improvement definitely did not show up; sadly, quite the opposite. As a consolation – at least it's now more predictably slow, as the standard deviation fell by about 40% 🙂

But to sum it up – the general impression is still positive, as the aggregate runtime over all queries still improved by about 5%. One could argue that 5% is not much, but the fact is that Postgres has matured over decades, already uses very good algorithms, and big improvements can mostly come only from venturing into the "parallel worlds".

NB! Not to forget - it's a BETA release and these numbers don't mean too much in the long run.

A drop of tar on another front

After I was done with the analytical queries, I got hungry for more and thought I'd check out how normal read queries perform. So I also ran quick "pgbench --select-only" tests (single index scans) for different client/scale settings... and actually noticed that, measured in TPS, 10 Beta was on average consistently a couple of percent slower! Not a big difference of course, given that we're still dealing with beta software here, but it did make me a bit sceptical. So if anyone has time, please test it out yourself (for example using my script here) and see whether this was some setup glitch on my side or something else – or just comment what you think of this test. If you have any questions, you can also contact us directly.

We wrote about our hatchling Open Source PostgreSQL monitoring tool pgwatch2 some months ago here and here, but now I think it deserves another nudge, as a lot of new features have found their way into the code.

For our new readers - pgwatch2 is a simple but flexible Postgres monitoring tool, aiming to provide a nice balance between features, usability and out-of-the-box experience, concentrating on nice graphs using Grafana. Getting started with pgwatch2, by the way, couldn't be easier and takes just minutes thanks to Docker - please check the links above for more details.

So, find a listing of those new features below. Please feel free to check out the new version and let us know on GitHub if you're still missing something in the tool or are having difficulties with something! Thanks a lot!

Project GitHub link - here

List of changes

Support for Graphite as a metrics storage database

Previously, only InfluxDB was supported as the metrics datastore - now, when running the pgwatch2 daemon, it is possible to specify that metrics should be stored in Graphite instead. Use the --datastore / --graphite-host / --graphite-port parameters for that; metrics will then be stored under the "pgwatch2" namespace. Sadly, there are no out-of-the-box dashboards for Graphite though, so you need to compose them manually, based on the InfluxDB ones...

SSL support for Grafana and the Web UI

Previously, one could really recommend the out-of-the-box image only for local / LAN / datacenter usage, but now, with SSL support, one can expose both the Web UI (for configuring the gathered metrics and hosts) and Grafana (graphs on the gathered metrics) over the Internet as well - given you set some strong passwords, of course. A new self-signed (aka "snake oil") SSL certificate will be generated on every container startup when the corresponding flags are set, so it should be relatively OK.

Support for the upcoming PostgreSQL version 10

New SQL text versions for some metrics. This mostly has to do with the "*xlog*" functions being renamed to "*wal*".
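As a small example of what that means in practice: the "current WAL position" used by some metrics is read with differently named functions on the two versions:

    -- PostgreSQL 9.6 and older:
    SELECT pg_current_xlog_location();
    -- PostgreSQL 10 (the "xlog" terminology, including the pg_xlog directory, was renamed to "wal"):
    SELECT pg_current_wal_lsn();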

Support for beta/devel versions of Postgres as well as EDB Postgres Advanced Server

This actually only required reading and transforming the Postgres server version number in a universal way.

New "change detection" feature and according dashboard

When the new "change_events" metric is enabled, the pgwatch2 daemon will track the signature of tables/views/indexes, stored procedure code and values of server configuration parameters and will log an "change event" when any changes detected. These events graphed together with the "CPU load" and "Rollback ratio" graphs should help to quickly pinpoint problems arising from new schema rollouts or config changes by the DBAs.

(Screenshot: the new pgwatch2 "change detection" dashboard)
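To give a rough idea of the principle (this is not pgwatch2's exact metric SQL), a hash-based "signature" covering, for example, all user stored procedures could be gathered like this - if the hash differs between two gatherings, some procedure code was changed:

    -- one stable hash over all non-system stored procedure definitions
    SELECT md5(string_agg(p.oid::text || ':' || md5(p.prosrc), ',' ORDER BY p.oid)) AS sproc_signature
    FROM pg_proc p
    JOIN pg_namespace n ON n.oid = p.pronamespace
    WHERE n.nspname NOT IN ('pg_catalog', 'information_schema');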

The daemon is now compiled inside Docker. This of course increases the image size, but it makes experimenting and small behaviour adjustments easier.

Histograms and heatmaps now possible! See demo here.

More info on backup/restore and metrics setup.

 

pgwatch2 is constantly being improved and new features are added. Learn more.
