Whenever rows in a PostgreSQL table are updated or deleted, dead rows are left behind.
VACUUM gets rid of them so that the space can be reused. If a table doesn’t get vacuumed, it will get bloated, which wastes disk space and slows down sequential table scans (and – to a smaller extent – index scans).
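To see this in action, you can generate dead rows yourself. The following is a sketch; the table name vacme matches the example used later in this article:

```sql
-- Create a table and update every row once:
-- each UPDATE leaves a dead row version behind.
CREATE TABLE vacme (id integer);
INSERT INTO vacme SELECT * FROM generate_series(1, 50000);
UPDATE vacme SET id = id + 1;

-- VACUUM removes the 50000 dead row versions so the space can be reused:
VACUUM vacme;
```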
VACUUM also takes care of freezing table rows to avoid problems when the transaction ID counter wraps around, but that’s a different story.
Normally you don’t have to take care of all that, because the autovacuum daemon built into PostgreSQL does that for you. To find out more about enabling and disabling autovacuum, read this post.
Problems with vacuum: bloated tables
If your tables get bloated, the first thing to check is whether autovacuum has processed them:
SELECT schemaname, relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_all_tables
ORDER BY n_dead_tup
    / (n_live_tup
       * current_setting('autovacuum_vacuum_scale_factor')::float8
       + current_setting('autovacuum_vacuum_threshold')::float8)
    DESC
LIMIT 10;
If your bloated table does not show up here, n_dead_tup is zero and last_autovacuum is NULL, you might have a problem with the statistics collector.
If the bloated table is right there on top, but last_autovacuum is NULL, you might need to configure autovacuum to be more aggressive so that it gets done with the table.
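For example, autovacuum can be made more aggressive for a single table by lowering its scale factor. The value below is illustrative, not a recommendation:

```sql
-- Vacuum "vacme" as soon as roughly 1% of its rows are dead,
-- instead of the default 20% (autovacuum_vacuum_scale_factor = 0.2).
ALTER TABLE vacme SET (autovacuum_vacuum_scale_factor = 0.01);
```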
But sometimes the result will look like this:
 schemaname |   relname    | n_live_tup | n_dead_tup |   last_autovacuum
------------+--------------+------------+------------+---------------------
 laurenz    | vacme        |      50000 |      50000 | 2018-02-22 13:20:16
 pg_catalog | pg_attribute |         42 |        165 |
 pg_catalog | pg_amop      |        871 |        162 |
 pg_catalog | pg_class     |          9 |         31 |
 pg_catalog | pg_type      |         17 |         27 |
 pg_catalog | pg_index     |          5 |         15 |
 pg_catalog | pg_depend    |       9162 |        471 |
 pg_catalog | pg_trigger   |          0 |         12 |
 pg_catalog | pg_proc      |        183 |         16 |
 pg_catalog | pg_shdepend  |          7 |          6 |
(10 rows)
Here autovacuum has recently run, but it didn’t free the dead tuples!
We can verify the problem by running
test=> VACUUM (VERBOSE) vacme;
INFO:  vacuuming "laurenz.vacme"
INFO:  "vacme": found 0 removable, 100000 nonremovable row versions in 443 out of 443 pages
DETAIL:  50000 dead row versions cannot be removed yet, oldest xmin: 22300
There were 0 unused item pointers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s.
Why does VACUUM not remove the dead rows?
VACUUM can only remove those row versions (also known as “tuples”) that are no longer needed. A tuple is not needed if the transaction ID of the deleting transaction (as stored in the xmax system column) is older than the oldest transaction still active in the PostgreSQL database (or in the whole cluster for shared tables).
This value (22300 in the VACUUM output above) is called the “xmin horizon”.
There are four things that can hold back this xmin horizon in a PostgreSQL cluster:
Long-running transactions
You can find those transactions and their xmin values with the following query:
SELECT pid, datname, usename, state, backend_xmin, backend_xid
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
   OR backend_xid IS NOT NULL
ORDER BY greatest(age(backend_xmin), age(backend_xid)) DESC;
You can use the pg_terminate_backend() function to terminate the database session that is blocking your VACUUM.
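A sketch of such a termination; the pid 12345 is a placeholder, use a pid returned by the query above:

```sql
-- Terminate the whole session that holds back the xmin horizon.
SELECT pg_terminate_backend(12345);
```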
Abandoned replication slots
A replication slot is a data structure that keeps the PostgreSQL server from discarding information that is still needed by a standby server to catch up with the primary.
If replication is delayed or the standby server is down, the replication slot will prevent VACUUM from deleting old rows.
You can find all replication slots and their xmin values with this query:
SELECT slot_name, slot_type, database, xmin
FROM pg_replication_slots
ORDER BY age(xmin) DESC;
You can use the pg_drop_replication_slot() function to drop replication slots that are no longer needed.
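For example (the slot name standby_slot is a placeholder; use a name returned by the query above):

```sql
-- Drop an abandoned replication slot so it no longer holds back VACUUM.
SELECT pg_drop_replication_slot('standby_slot');
```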
Note: This can only happen with physical replication if hot_standby_feedback = on. For logical replication there is a similar hazard, but only system catalogs are affected; examine the catalog_xmin column in that case.
Orphaned prepared transactions
During two-phase commit, a distributed transaction is first prepared with the PREPARE TRANSACTION statement and then committed with the COMMIT PREPARED statement.
Once a transaction has been prepared, it is kept “hanging around” until it is committed or aborted. It even has to survive a server restart! Normally, transactions don’t remain in the prepared state for long, but sometimes things go wrong and a prepared transaction has to be removed manually by an administrator.
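A minimal sketch of the two-phase commit flow; the transaction identifier 'mytx' and the accounts table are made up for illustration:

```sql
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- Prepare the transaction; it now survives crashes and restarts
-- until it is explicitly committed or rolled back:
PREPARE TRANSACTION 'mytx';

-- Later, possibly from a different session:
COMMIT PREPARED 'mytx';   -- or ROLLBACK PREPARED 'mytx'
```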
You can find all prepared transactions and their xmin values with the following query:
SELECT gid, prepared, owner, database, transaction AS xmin
FROM pg_prepared_xacts
ORDER BY age(transaction) DESC;
You can use the ROLLBACK PREPARED SQL statement to remove prepared transactions.
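For example, assuming the query above returned a prepared transaction with gid 'mytx':

```sql
-- Remove the orphaned prepared transaction so VACUUM can make progress.
ROLLBACK PREPARED 'mytx';
```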
Standby server with hot_standby_feedback = on
Normally, the primary server in a streaming replication setup does not care about queries running on the standby server. Thus, VACUUM will happily remove dead rows which may still be needed by a long-running query on the standby, which can lead to replication conflicts. To reduce replication conflicts, you can set hot_standby_feedback = on on the standby server. Then the standby will keep the primary informed about the oldest open transaction, and VACUUM on the primary will not remove old row versions still needed on the standby.
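Enabling the feedback is a one-liner on the standby; a sketch (hot_standby_feedback only requires a configuration reload, not a restart):

```sql
-- Run on the standby server:
ALTER SYSTEM SET hot_standby_feedback = on;
SELECT pg_reload_conf();
```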
To find out the xmin of all standby servers, you can run the following query on the primary server:
SELECT application_name, client_addr, backend_xmin
FROM pg_stat_replication
ORDER BY age(backend_xmin) DESC;
Read more about PostgreSQL table bloat and autocommit in my post here.