UPDATED September 2023: If you use transaction log shipping (or Point-In-Time-Recovery) in PostgreSQL to keep your data safe, you surely already know about pg_basebackup. The idea behind pg_basebackup is to allow users to create a binary copy of the data which can serve as the basis for Point-In-Time-Recovery. However, we recently saw some support cases related to improper use of pg_basebackup, so let's clear up some of the myths surrounding this command and shed some light on how things really work. Note that the information in this blog applies to PostgreSQL 10 and higher; everything described here therefore works for PostgreSQL 15, PostgreSQL 16 and later as well.

What does pg_basebackup do?

Basically, pg_basebackup is a way to “tar” data over a database connection. When you start pg_basebackup, it will wait for the server to do a checkpoint, and then start to copy the Postgres data directory over. In most databases, a checkpoint is a fairly fundamental thing: if you want to know more, find out about PostgreSQL checkpoints in our blog.
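By default, pg_basebackup waits for a spread checkpoint, so it may sit idle for a while before any data is copied. A sketch of both variants (hostname and target directory are placeholders; this assumes a reachable server and a replication-capable user):

```shell
# Default: wait for a spread checkpoint. pg_basebackup may appear to hang
# until the checkpoint finishes (governed by checkpoint_timeout and
# checkpoint_completion_target on the server).
pg_basebackup -D /target_dir -h master.example.com

# Request an immediate checkpoint instead, so copying starts right away -
# at the cost of a short I/O burst on the donor server:
pg_basebackup -D /target_dir -h master.example.com --checkpoint=fast
```

--checkpoint=fast is handy for ad-hoc backups; for regularly scheduled backups on a busy system, the default spread checkpoint is gentler on I/O.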

Once the checkpoint is done, data is copied over while the “donor” server (usually the primary, but a replica works too) remains fully operational. Thousands of transactions (reads and writes) might happen while the base backup is running. What that means is that the data files you are copying are incomplete, inconsistent, and basically not usable on their own. Yes, you have read this correctly: we are copying data while it is being modified, and this produces broken, incomplete and inconsistent data files.

However, let us reflect a bit on these “errors”. The content of the base backup is something like a mixture of the data as it was at the beginning of the base backup and of the data as it was at the end of the base backup. Since there is a transaction log entry for every change, we can always repair any inconsistencies using the PostgreSQL transaction log. In other words: corrupted base backup + WAL = consistent data.
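This is essentially the same mechanism crash recovery uses. For Point-In-Time-Recovery, you point the restored base backup at a WAL archive and let the server replay the log. A minimal sketch, assuming WAL was archived to /archive and PostgreSQL 12 or higher (archive path and target time are placeholders; on PostgreSQL 10 and 11, the settings go into recovery.conf instead):

```shell
# Settings added to postgresql.conf in the restored data directory:
#   restore_command = 'cp /archive/%f %p'
#   recovery_target_time = '2023-09-01 12:00:00'

# An empty recovery.signal file tells the server to perform
# archive recovery on startup:
touch /target_dir/recovery.signal
pg_ctl -D /target_dir start
```

During startup, the server replays WAL until the data files are consistent (and, if a target is set, until that point in time) before accepting connections.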

Making PostgreSQL backups self-contained and ready for use

By default, a base backup does not contain the WAL which was created while the base backup was running. In other words, unless NOTHING happened during the base backup (which is highly unlikely), the backup cannot be restored to a consistent state without a WAL archive. There is no guarantee that the copy is usable on its own. For example, an autovacuum process could start during the backup, change the data, and leave the copied files inconsistent.

To make sure that the base backup contains enough WAL to reach at least a consistent state, I recommend adding --wal-method=stream when you call pg_basebackup. It will open a second stream which fetches the WAL created during the backup. The advantage is that the backup now has everything it needs for a consistent restore without an external WAL archive.
Here is my favorite method of running pg_basebackup:

pg_basebackup -D /target_dir \
   -h master.example.com --checkpoint=fast \
   --wal-method=stream -R

-R will automatically set the necessary parameters to run streaming replication. For Point-In-Time-Recovery, -R is neither necessary nor useful. Once this is done, we can already start the replica. Since the WAL was streamed along with the data, we’ll have a consistent database, which is guaranteed to be free of corruption.
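Concretely, on PostgreSQL 12 and higher, -R appends the connection settings to postgresql.auto.conf in the target directory and creates an empty standby.signal file; on PostgreSQL 10 and 11, it writes a recovery.conf instead. The result looks roughly like this (connection details are placeholders):

```shell
# /target_dir/postgresql.auto.conf - excerpt written by -R (PostgreSQL 12+):
#   primary_conninfo = 'host=master.example.com user=repl ...'

# /target_dir/standby.signal - empty marker file that tells the
# server to start up as a streaming replica:
ls /target_dir/standby.signal
```

Because both files live inside the backup, the replica can be started with a plain pg_ctl start, with no manual configuration.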

Backup throttling in PostgreSQL

By default, pg_basebackup operates at full speed. This can cause trouble, because it may saturate your network connection or your disks. The way to avoid this is to throttle pg_basebackup in the first place:

 -r, --max-rate=RATE    maximum transfer rate to transfer data directory
                        (in kB/s, or use suffix "k" or "M")

Throttling your backups will of course increase your backup times, but in exchange, your network connections will have some spare capacity.
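For example, to cap the transfer at roughly 50 MB/s (the value is illustrative; pick a rate that leaves headroom for regular traffic):

```shell
pg_basebackup -D /target_dir \
   -h master.example.com --checkpoint=fast \
   --wal-method=stream -R \
   --max-rate=50M
```

Note that --max-rate throttles the transfer of the data directory; WAL transferred via --wal-method=stream is not subject to the limit.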
