
Reduce WAL by increasing checkpoint distance

03.2020

If you are running PostgreSQL in production, you may already have noticed that adjusting checkpoints is a very effective way to tune the server and improve overall database performance. However, there is more to it: increasing checkpoint distances can also reduce the amount of WAL that is created in the first place. That is right: it is not just a performance issue, it also has an impact on the volume of transaction log created.

You want to know why? Let us take a closer look!

Creating a test installation

For my test I am using a normal PostgreSQL 12 installation. To set it up, I am running initdb by hand:
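A minimal invocation looks like this (the data directory path is just an example):

    initdb -D /path/to/data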

Then I have adjusted two parameters in postgresql.conf:
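The point is a deliberately small max_wal_size combined with asynchronous commits; something along these lines (the numbers here are illustrative, not necessarily the ones from the original run):

    # postgresql.conf -- illustrative values
    max_wal_size = 128MB        # tiny value to force very frequent checkpoints (default: 1GB)
    synchronous_commit = off    # speeds up the benchmark; does not change the amount of WAL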

and restarted the server:
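This can be done with pg_ctl (again, the data directory path is an example):

    pg_ctl -D /path/to/data restart

As a reader points out in the comments below, a reload would actually be enough for these two parameters.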

First of all, I have reduced max_wal_size to make sure that PostgreSQL checkpoints a lot more often than it would in the default installation. Performance will drop like a stone. The second parameter tells PostgreSQL to use asynchronous commits. This does not affect WAL generation; it simply speeds up the benchmark so we get results faster.

Finally, I am using pgbench to create a test database containing 10 million rows (= 100 x 100,000):
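In pgbench terms, a scale factor of 100 produces 100 x 100,000 = 10,000,000 rows in pgbench_accounts (the database name is an assumption):

    createdb test
    pgbench -i -s 100 test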

Running a PostgreSQL benchmark

Let us run a test now: we want to run 10 x 1 million transactions. If you want to repeat the tests on your own database, make sure the test runs long enough. Let us take a look at the test, sketched below:
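Ten concurrent clients running 1 million transactions each corresponds to something like this (the thread count and database name are assumptions):

    pgbench -c 10 -j 4 -t 1000000 test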

As you can see we managed to run roughly 13,700 transactions per second (on my personal laptop). Let us take a look at the amount of WAL created:

The way to figure out the current WAL position is to call the pg_current_wal_insert_lsn() function:
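A minimal call looks like this (the LSN shown is only an example of the format, not the value from this test):

    test=# SELECT pg_current_wal_insert_lsn();
     pg_current_wal_insert_lsn
    ----------------------------
     1/672EA538
    (1 row)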

To most people this raw LSN format might be incomprehensible, and therefore it is easier to transform the value into a readable number. The pg_size_pretty() function will return it in a readily understandable format:
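Subtracting the zero position turns the LSN into a byte count; this is the same query a reader posted in the comments below. Note that this only yields the total WAL volume because the cluster was freshly initialized:

    SELECT pg_size_pretty(pg_current_wal_insert_lsn() - '0/00000000'::pg_lsn);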

Increasing checkpoint distances

Let us stop the database, delete the data directory and initialize it again. This time my postgresql.conf parameters will be set to way higher values:
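According to the discussion in the comments, the values in question were on this order:

    max_wal_size = 150GB
    checkpoint_timeout = 1d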

Then we start the database again, populate it with data, and run pgbench with the exact same parameters. Keep in mind that to repeat the process it is necessary to run a fixed number of transactions instead of running the test for a certain amount of time. This way we can fairly compare the amount of WAL created.

Let us take a look and see how much WAL has been created this time:
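We can reuse the same calculation as before:

    -- after this run: roughly 5.3 GB
    SELECT pg_size_pretty(pg_current_wal_insert_lsn() - '0/00000000'::pg_lsn);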

Wow, only 5.3 GB of WAL. This is a major improvement, but why did it happen?

Understanding WAL creation

The idea of the transaction log is to make sure that, in case of a crash, the system will be able to recover nicely. However, PostgreSQL cannot keep the transaction log around forever. The purpose of a checkpoint is to ensure that all data is safely stored in the data files before the corresponding WAL can be removed from disk.

Here comes the trick: if a block is modified for the first time after a checkpoint, it has to be logged completely; the entire block is written to the WAL (a so-called full-page write). Every additional change to that block does not require the entire block to be written; only the changes have to be logged, which is, of course, a lot less.
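A rough way to observe this yourself is to watch the insert LSN around a manual checkpoint (a sketch; the table t and the size comments are illustrative assumptions):

    CREATE TABLE t (x int);
    INSERT INTO t VALUES (1);
    CHECKPOINT;
    SELECT pg_current_wal_insert_lsn();   -- remember this position
    UPDATE t SET x = x + 1;               -- first change after the checkpoint: full page image, ~8 kB of WAL
    SELECT pg_current_wal_insert_lsn();   -- the position jumps by roughly a page
    UPDATE t SET x = x + 1;               -- further change: only a small record, a few dozen bytes
    SELECT pg_current_wal_insert_lsn();   -- the position moves far less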

This is the reason for the major difference we see here. By increasing the checkpoint distances to a really high value, we basically reduced checkpoints to an absolute minimum, which in turn reduces the number of “first blocks” that have to be written in full.
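To verify how rarely checkpoints happen after the change, the pg_stat_bgwriter view keeps counters for timed and requested checkpoints:

    SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;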

Changing WAL settings is not just good for performance in the long run – it will also reduce the volume of WAL.

Further reading

If you want to know more about PostgreSQL and want to learn something about VACUUM check out my blog about this important topic.

Also, if you like our posts you might want to share them on LinkedIn or on Facebook. We are looking forward to your comments.

3 responses to “Reduce WAL by increasing checkpoint distance”

  1. Great article!

    I didn't know that you can calculate how much WAL data is generated by using the pg_current_wal_insert_lsn() function. This query:
    SELECT pg_size_pretty(pg_current_wal_insert_lsn() - '0/00000000'::pg_lsn);
    looks like something that should be tracked by a monitoring system (its rate, at least).

    Two small suggestions for simple improvements to the article:
    ⚫ Changing max_wal_size and synchronous_commit doesn't require a restart; a reload will do the job.
    ⚫ It would be great if the article mentioned that setting max_wal_size = 150GB and checkpoint_timeout = 1d means that a checkpoint will be created after 150 GB of WAL has been generated, or after 1 day, whichever comes first. This means that in case of a PostgreSQL crash, the server may need to read up to 150 GB (or 1 day) of WAL before it becomes ready to accept connections.

    • In case the function pg_wal_lsn_diff() never came to your attention:

      SELECT pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_insert_lsn(), '0/00000000'));

