© Laurenz Albe 2019
“Durability”, the D of ACID, demands that a committed database transaction remains committed, no matter what. For normal outages like a power failure, this is guaranteed by the transaction log (WAL). However, if we want to guarantee durability even in the face of more catastrophic outages that destroy the WAL, we need more advanced methods.
This article discusses how to use pg_receivewal to maintain durability even under dire circumstances.
Archiving WAL with archive_command
The “traditional” method of archiving the transaction log is the archive_command parameter in postgresql.conf. The DBA has to set this parameter to a command that archives a WAL segment after it is completed.
Popular methods include:
- Call a command like cp (copy on Windows) to copy the file to network attached storage like NFS.
- Call a command like rsync to copy the file to a remote machine.
- Call an executable from your favorite PostgreSQL backup software.
The important thing is that the archived WAL segment is stored somewhere other than on the database server.
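As an illustration, a minimal setup in postgresql.conf could look like the following sketch (the /walarchive path is just an example; the command pattern follows the one shown in the PostgreSQL documentation):

```
archive_mode = on
# %p is the path of the completed WAL segment, %f its file name;
# the "test" guards against overwriting an already archived segment
archive_command = 'test ! -f /walarchive/%f && cp %p /walarchive/%f'
```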
I have a redundant distributed storage system, do I still need to store WAL archives somewhere else?
Yes, because there is still a single point of failure: the file system.
If the file system becomes corrupted through a hardware or software problem, all the redundant distributed copies of your WAL archive can vanish or get corrupted.
If you believe that this is so unlikely that it borders on the paranoid: I have seen it happen.
A certain level of professional paranoia is a virtue in a DBA.
archive_command isn’t good enough
If your database server is destroyed so that its disks are no longer available, you will still lose some committed transactions: the transactions in the currently active WAL segment. Remember that PostgreSQL usually archives a WAL segment only once it is full, so up to 16MB worth of committed transactions can vanish with the active WAL segment.
To reduce the impact, you can set archive_timeout: that sets the maximum time between WAL archivals. But for some applications even that isn’t good enough: if you cannot afford to lose a single transaction even in the event of a catastrophe, WAL archiving just won’t do the trick.
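For example, to make sure that no more than one minute’s worth of WAL can be lost, you could set (the value is just an illustration):

```
# force a WAL segment switch, and hence archival, after at most 60 seconds
archive_timeout = 60
```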
pg_receivewal comes to the rescue
PostgreSQL 9.2 introduced pg_receivexlog, which was renamed to pg_receivewal in v10. This client program opens a replication connection to PostgreSQL and streams WAL, just like streaming replication does. But instead of applying the information to a standby server, it writes it to disk. This way, it creates a copy of the WAL segments in real time. The partial WAL segment that pg_receivewal is currently writing has the extension .partial to distinguish it from completed WAL segments. Once the segment is complete, pg_receivewal renames it.
pg_receivewal is an alternative to WAL archiving that avoids the gap between the current and the archived WAL location. It is a bit more complicated to manage and monitor, because it is a separate process and should run on a different machine.
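A sketch of how you might run it (host name, user, slot name and target directory are made-up examples): using a replication slot keeps the server from recycling WAL that has not yet been streamed.

```shell
# one-time setup: create a replication slot, then exit
pg_receivewal -h db.example.com -U replicator --slot=walarchive --create-slot

# stream WAL segments into the archive directory
pg_receivewal -h db.example.com -U replicator --slot=walarchive -D /walarchive
```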
pg_receivewal and synchronous replication
By default, replication is asynchronous, so pg_receivewal can still lose a split second’s worth of committed transactions in the case of a crash. If you cannot even afford that, you can switch to synchronous replication. That guarantees that not a single committed transaction can get lost, but it comes at a price:
- Since every commit requires a round trip to pg_receivewal, commits take significantly longer. This limits the number of writing transactions your system can support. Keep the network latency low!
- If you have only a single synchronous standby server (pg_receivewal acts as a standby here), the availability of your system is reduced, because PostgreSQL won’t commit any more transactions if your only synchronous standby is unavailable. To avoid that problem, you need at least two synchronous standbys.
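Assuming two pg_receivewal instances running with the --synchronous option (which makes pg_receivewal flush each write to disk immediately and report it back), the primary’s postgresql.conf could contain something like this sketch; the standby names are examples and must match the application_name of each pg_receivewal connection:

```
synchronous_commit = on
# require both WAL archivers to confirm each commit
synchronous_standby_names = 'FIRST 2 (walarchiver1, walarchiver2)'
```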
Archive recovery and partial WAL segments
Now if the worst has happened and you need to recover, you have to make sure to restore the partial WAL segments as well. In the simple case where you archive to an NFS mount, the restore_command could be as simple as this:
restore_command = 'cp /walarchive/%f %p || cp /walarchive/%f.partial %p'
With careful design and a little effort, you can set up a PostgreSQL system that can never lose a single committed transaction even under the most dire circumstances. Integrate this with a high availability setup for maximum data protection.