Merry xmax!
Based on an image by Robo Android under the Creative Commons Attribution 2.0 license

xmax is a PostgreSQL system column that is used to implement Multiversion Concurrency Control (MVCC). The documentation is somewhat terse:

The identity (transaction ID) of the deleting transaction, or zero for an undeleted row version. It is possible for this column to be nonzero in a visible row version. That usually indicates that the deleting transaction hasn’t committed yet, or that an attempted deletion was rolled back.

While this is true, the presence of “weasel words” like “usually” indicates that there is more to the picture. This is what I want to explore in this article.

The two meanings of xmax

I’ll follow the PostgreSQL convention of using the word “tuple” for “row version” (remember that PostgreSQL implements MVCC by holding several versions of a row in the table).

xmax is actually used for two purposes in PostgreSQL:

  • It stores the transaction ID (“xid”) of the transaction that deleted the tuple, like the documentation says. Remember that UPDATE also deletes a tuple in PostgreSQL!
  • It stores row locks on the tuple.

This is possible because a tuple cannot be locked and deleted at the same time: normal locks are only held for the duration of the transaction, and a tuple is deleted only after the deleting transaction has committed.

Storing row locks on the tuple itself has one vast advantage: it avoids overflows of the “lock table”. The lock table is a fixed-size area that is allocated in shared memory during server startup and could easily be too small to hold all the row locks from a bigger transaction. To cope with this, you’d need techniques like “lock escalation” that are difficult to implement, impact concurrency and lead to all kinds of nasty problems.

There is also a downside to storing row locks in the tuple: each row lock modifies the table, and the modified blocks have to be written back to persistent storage. This means that row locks lead to increased I/O load.

But a number of questions remain:

  • How can you tell which of the two meanings xmax has in a tuple?
  • How can you tell whether xmax is valid or not?
  • How exactly are row locks stored?

We will dive deeper in the rest of this article to answer these questions.

An example

In the following, I’ll use a simple schema for demonstration. I am using PostgreSQL v10, but this hasn’t changed in the last couple of releases.

CREATE TABLE parent(
   p_id integer PRIMARY KEY,
   p_val text
);

CREATE TABLE child(
   c_id integer PRIMARY KEY,
   p_id integer REFERENCES parent(p_id),
   c_val text
);

INSERT INTO parent (p_id, p_val)
   VALUES (42, 'parent');

Now let’s look at the relevant system columns:

session1=# SELECT ctid, xmin, xmax, p_id, p_val FROM parent;

 ctid  | xmin  | xmax | p_id | p_val  
-------+-------+------+------+--------
 (0,1) | 53163 |    0 |   42 | parent
(1 row)

This is the simple view we expect to see: ctid is the physical location of the tuple (Block 0, item 1), xmin contains the ID of the inserting transaction, and xmax is zero because the row is alive.

Now let’s start a transaction in session 1 and delete the row:

session1=# BEGIN;
session1=# DELETE FROM parent WHERE p_id = 42;

Then session 2 can see that xmax has changed:

session2=# SELECT ctid, xmin, xmax, p_id, p_val FROM parent;

 ctid  | xmin  | xmax  | p_id | p_val  
-------+-------+-------+------+--------
 (0,1) | 53163 | 53165 |   42 | parent
(1 row)

But wait, we change our mind in session 1 and undo the change:

session1=# ROLLBACK;

To find out what xmax means in this case, let’s call in the cavalry.

pageinspect comes to the rescue

PostgreSQL comes with a “contrib” module called pageinspect that can be used to examine the actual contents of table blocks. It is installed with

CREATE EXTENSION pageinspect;

We’ll use two of its functions:

  • get_raw_page: reads one 8kB block from the table’s data file
  • heap_page_item_attrs: for each tuple in a data block, this returns the tuple metadata and data

Needless to say, these functions are superuser only.

heap_page_item_attrs returns an integer field named t_infomask that contains several flags, some of which tell us the meaning of xmax. To get the full story, you’ll have to read the code in src/include/access/htup_details.h.

Let’s have a look at table block 0, which contains our tuple:

session2=# SELECT lp, 
       t_ctid AS ctid,
       t_xmin AS xmin,
       t_xmax AS xmax,
       (t_infomask & 128)::boolean AS xmax_is_lock,
       (t_infomask & 1024)::boolean AS xmax_committed,
       (t_infomask & 2048)::boolean AS xmax_rolled_back,
       (t_infomask & 4096)::boolean AS xmax_multixact,
       t_attrs[1] AS p_id,
       t_attrs[2] AS p_val
FROM heap_page_item_attrs(
        get_raw_page('parent', 0), 
        'parent'
     );  

-[ RECORD 1 ]----+-----------------
lp               | 1
ctid             | (0,1)
xmin             | 53163
xmax             | 53165
xmax_is_lock     | f
xmax_committed   | f
xmax_rolled_back | f
xmax_multixact   | f
p_id             | \x2a000000
p_val            | \x0f706172656e74

The attributes p_id and p_val are displayed in binary form.

The information in the tuple doesn’t tell us whether the transaction that set xmax has been committed or rolled back, so we (and PostgreSQL when it inspects the tuple) still don’t know what to make of xmax. That is because PostgreSQL does not update the tuple when a transaction ends.

To resolve that uncertainty, we’d have to look at the commit log that stores the state of each transaction. The commit log is persisted in the pg_xact subdirectory of the PostgreSQL data directory (pg_clog in older versions).
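
As an aside, PostgreSQL v10 added the txid_status() function, which reports what the commit log has recorded for a given transaction. Here is a small sketch: the function expects the 64-bit, epoch-qualified transaction ID, so passing the plain xmax value like this only works as long as the transaction ID epoch is still zero, and only for transactions that are not too old.

-- ask the commit log about the transaction that set xmax in our example;
-- in this scenario the result should be 'aborted'
SELECT txid_status(53165);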

A SELECT that modifies data

There is no SQL interface for reading the commit log files directly, but when any database transaction reads the tuple and looks up the commit log, it will persist the result in the tuple so that the next reader does not have to do it again (this is called “setting the hint bits”).

So all we have to do is to read the tuple:

session2=# SELECT ctid, xmin, xmax, p_id, p_val FROM parent;

 ctid  | xmin  | xmax  | p_id | p_val  
-------+-------+-------+------+--------
 (0,1) | 53163 | 53165 |   42 | parent
(1 row)

This changes the information stored in the tuple. Let’s have another look with pageinspect:

-[ RECORD 1 ]----+-----------------
lp               | 1
ctid             | (0,1)
xmin             | 53163
xmax             | 53165
xmax_is_lock     | f
xmax_committed   | f
xmax_rolled_back | t
xmax_multixact   | f
p_id             | \x2a000000
p_val            | \x0f706172656e74

The SELECT statement has set the flags on the tuple, and now we can see that xmax is from a transaction that was rolled back and should be ignored.

As an aside, that means that the first reader of a tuple modifies the tuple, causing surprising write I/O. This is annoying, but it is the price we pay for instant COMMIT and ROLLBACK. It is also the reason why it is a good idea to either use COPY … (FREEZE) to bulk load data or to VACUUM the data after loading.
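
If you want to try the COPY … (FREEZE) approach, here is a minimal sketch (the staging table name and the file path are made up): FREEZE only works if the table was created or truncated in the same transaction, and loading the rows already frozen means that later readers don’t have to consult the commit log or set hint bits.

BEGIN;
CREATE TABLE parent_load (LIKE parent);
COPY parent_load FROM '/tmp/parent.csv' (FORMAT csv, FREEZE);
COMMIT;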

Now we know how to determine if xmax is from a valid transaction or not, but what about row locks?

Row locks and xmax

Rows are locked by data modifying statements, but there is a simple way to lock a row without inserting or deleting tuples:

session1=# BEGIN;
session1=# SELECT * FROM parent WHERE p_id = 42 FOR UPDATE;

 p_id | p_val  
------+--------
   42 | parent
(1 row)

Now what does pageinspect tell us?

-[ RECORD 1 ]----+-----------------
lp               | 1
ctid             | (0,1)
xmin             | 53163
xmax             | 53166
xmax_is_lock     | t
xmax_committed   | f
xmax_rolled_back | f
xmax_multixact   | f
p_id             | \x2a000000
p_val            | \x0f706172656e74

We see that the row is locked. In this case, it is a FOR UPDATE lock, but the query does not distinguish between the lock modes for simplicity’s sake. You’ll notice that xmax again is neither committed nor rolled back, but we don’t care because we know it is a row lock.

xmax is set to 53166, which is the transaction ID of the locking transaction. Let’s close that transaction to continue:

session1=# COMMIT;

PostgreSQL does not have to set hint bits here — if xmax contains a row lock, the row is active, no matter what the state of the locking transaction is.
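
If you do want to distinguish the lock modes, you can decode a few more bits. The following is only a sketch: the bit values (16 for HEAP_XMAX_KEYSHR_LOCK, 64 for HEAP_XMAX_EXCL_LOCK, 80 for the combination of both, and 8192 for HEAP_KEYS_UPDATED in t_infomask2) are taken from src/include/access/htup_details.h as of v10, so verify them against your version. The decoding only makes sense while xmax_is_lock is true and xmax is not a multixact.

SELECT lp,
       t_xmax AS xmax,
       (t_infomask & 128)::boolean AS xmax_is_lock,
       CASE
          -- both lock bits set: FOR SHARE
          WHEN t_infomask & 80 = 80 THEN 'FOR SHARE'
          -- exclusive lock bit plus "keys updated": FOR UPDATE
          WHEN t_infomask & 64 = 64 AND t_infomask2 & 8192 = 8192
             THEN 'FOR UPDATE'
          WHEN t_infomask & 64 = 64 THEN 'FOR NO KEY UPDATE'
          WHEN t_infomask & 16 = 16 THEN 'FOR KEY SHARE'
       END AS lock_mode
FROM heap_page_item_attrs(
        get_raw_page('parent', 0),
        'parent'
     );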

If you think you have seen it all, you are in for a surprise.

Multiple locks on a single row

In the previous example we have seen that PostgreSQL stores the transaction ID of the locking transaction in xmax. This works fine as long as only a single transaction holds a lock on that tuple. With exclusive locks like the one that SELECT … FOR UPDATE takes, this is always the case.

But PostgreSQL also knows other row locks, for example the FOR KEY SHARE lock that is taken on the destination of a foreign key constraint to prevent concurrent modification of the keys in that row. Let’s insert some rows in the child table:

session1=# BEGIN;
session1=# INSERT INTO child (c_id, p_id, c_val)
   VALUES (1, 42, 'first');

session2=# BEGIN;
session2=# INSERT INTO child (c_id, p_id, c_val)
   VALUES (2, 42, 'second');

Now let’s look at our parent row again:

-[ RECORD 1 ]----+-----------------
lp               | 1
ctid             | (0,1)
xmin             | 53163
xmax             | 3
xmax_is_lock     | t
xmax_committed   | f
xmax_rolled_back | f
xmax_multixact   | t
p_id             | \x2a000000
p_val            | \x0f706172656e74

That “3” in xmax cannot be a transaction ID (transaction IDs keep counting up, and we are already past 53000), and the xmax_multixact flag is set.

This is the ID of a “multiple transaction object”, called “multixact” in PostgreSQL jargon for lack of a better word. Such objects are created whenever more than one transaction locks a row, and their IDs are also counted up (you can tell that this database needs few of them). Multixacts are persisted in the pg_multixact subdirectory of the data directory.

You can get information about the members of a multixact with the undocumented pg_get_multixact_members function:

session2=# SELECT * FROM pg_get_multixact_members('3');
  xid  | mode  
-------+-------
 53167 | keysh
 53168 | keysh
(2 rows)

Now you really know what is in an xmax!

Another way to get rid of unused indexes...
© Laurenz Albe 2018

Why should I get rid of unused indexes?

Everybody knows that a database index is a good thing because it can speed up SQL queries. But this does not come for free.

The disadvantages of indexes are:

  • Indexes use up space. It is not unusual for database indexes to use as much storage space as the data themselves. And the kind of reliable, fast storage you want for a database is not necessarily cheap.
    The space used up by indexes also increases the size and duration of physical backups.
  • Indexes slow down data modification. Whenever you INSERT into or DELETE from a table, all indexes have to be modified, in addition to the table itself (the “heap”).
    And it is much more expensive to modify the complicated data structure of an index than the heap itself, which has its name precisely because it is basically an unordered “pile” of data (and as everybody knows, maintaining order is more work than having a mess). Modifying an indexed table can easily be an order of magnitude more expensive than modifying an unindexed table.
  • Indexes prevent HOT updates. Because of the architecture of PostgreSQL, every UPDATE causes a new row version (“tuple”) to be written, and that causes a new entry in every index on the table.
    This behavior has been dubbed “write amplification” and has drawn a lot of fire. This undesirable effect can be avoided if a) the new tuple fits into the same table block as the old one and b) no indexed column is modified. Then PostgreSQL creates the new tuple as a “Heap Only Tuple” (hence HOT), which is much more efficient and also reduces the work VACUUM has to do (see the query after this list for a way to check how many of your updates are HOT).
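
One way to see how well HOT updates work for your tables is the HOT update ratio in pg_stat_user_tables; low percentages on heavily updated tables can point to indexes that get in the way:

SELECT relname,
       n_tup_upd,
       n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_update_pct
FROM pg_stat_user_tables
ORDER BY n_tup_upd DESC
LIMIT 10;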

The many uses of indexes

Now we know that we don’t want unnecessary indexes. The problem is that indexes serve so many purposes that it is difficult to determine if a certain index is needed or not.

Here is a list of all benefits of indexes in PostgreSQL:

  1. Indexes can speed up queries that use indexed columns (or expressions) in the WHERE clause.
    Everybody knows that one!
    The traditional B-tree index supports the <, <=, =, >= and > operators, while the many other index types in PostgreSQL can support more exotic operators like “overlaps” (for ranges or geometries), “distance” (for words) or regular expression matches.
  2. B-tree indexes can speed up the max() and min() aggregates.
  3. B-tree indexes can speed up ORDER BY clauses.
  4. Indexes can speed up joins. This depends on the “join strategy” chosen by the optimizer: hash joins, for example, will never make use of an index.
  5. A B-tree index on the origin of a FOREIGN KEY constraint avoids a sequential scan when rows are deleted (or keys modified) in the target table. A scan on the origin of the constraint is necessary to make sure that the constraint will not be violated by the modification.
  6. Indexes are used to enforce constraints. Unique B-tree indexes are used to enforce PRIMARY KEY and UNIQUE constraints, while exclusion constraints use GiST indexes.
  7. Indexes can provide the optimizer with better value distribution statistics.
    If you create an index on an expression, ANALYZE and the autoanalyze daemon will not only collect statistics for the data distribution in table columns, but also for each expression that occurs in an index. This helps the optimizer to get a good estimate for the “selectivity” of complicated conditions that contain the indexed expression, which causes better plans to be chosen. This is a widely ignored benefit of indexes! (A short demonstration follows this list.)
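
Here is a short demonstration of that last point with a made-up table: after an index on lower(x) is created, ANALYZE also gathers statistics for that expression, and they show up in pg_stats under the index name.

CREATE TABLE stattest (x text);
INSERT INTO stattest SELECT i::text FROM generate_series(1, 10000) AS i;
CREATE INDEX stattest_lower_idx ON stattest (lower(x));
ANALYZE stattest;

-- the expression statistics appear under the index name
SELECT tablename, attname, n_distinct
FROM pg_stats
WHERE tablename = 'stattest_lower_idx';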

Find the unused indexes!

The following query that we at Cybertec use will show you all indexes that serve none of the above mentioned purposes.

It makes use of the fact that all uses of indexes in the above list with the exception of the last two result in an index scan.

For completeness’ sake, I have to add that the parameter track_counts has to remain “on” for the query to work, otherwise index usage is not tracked in pg_stat_user_indexes. But you should not turn that parameter off anyway, because then autovacuum would stop working too.

To find the indexes that have never been used since the last statistics reset with pg_stat_reset(), use

SELECT s.schemaname,
       s.relname AS tablename,
       s.indexrelname AS indexname,
       pg_relation_size(s.indexrelid) AS index_size
FROM pg_catalog.pg_stat_user_indexes s
   JOIN pg_catalog.pg_index i ON s.indexrelid = i.indexrelid
WHERE s.idx_scan = 0      -- has never been scanned
  AND 0 <> ALL (i.indkey) -- no index column is an expression
  AND NOT EXISTS          -- does not enforce a constraint
         (SELECT 1 FROM pg_catalog.pg_constraint c
          WHERE c.conindid = s.indexrelid)
ORDER BY pg_relation_size(s.indexrelid) DESC;

Some remarks:

  • Don’t run this query on your test database, but on the production database!
  • If your software is running at several customer sites, run the query on all of them.
    Different users have different ways of using the software, which can cause different indexes to be used.
  • You can replace s.idx_scan = 0 in the query with a different condition, e.g. s.idx_scan < 10. Indexes that are very rarely used are also good candidates for removal.
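
Once you are confident that an index serves no purpose, you can drop it. For example (the index name here is made up), DROP INDEX CONCURRENTLY avoids blocking concurrent queries on the table, but note that it cannot run inside a transaction block:

DROP INDEX CONCURRENTLY webshop.orders_status_idx;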

Why VACUUM?

Unlock the tremendous energy of the vacuum!
© xkcd.com (Randall Munroe) under the Creative Commons Attribution-NonCommercial 2.5 License

Whenever rows in a PostgreSQL table are updated or deleted, dead rows are left behind. VACUUM gets rid of them so that the space can be reused. If a table doesn’t get vacuumed, it will get bloated, which wastes disk space and slows down sequential table scans (and – to a smaller extent – index scans).

VACUUM also takes care of freezing table rows to avoid problems when the transaction ID counter wraps around, but that’s a different story.

Normally you don’t have to take care of all that, because the autovacuum daemon built into PostgreSQL does that for you.

The problem

If your tables get bloated, the first thing you check is whether autovacuum has processed them or not:

SELECT schemaname, relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_all_tables
ORDER BY n_dead_tup
    / (n_live_tup
       * current_setting('autovacuum_vacuum_scale_factor')::float8
          + current_setting('autovacuum_vacuum_threshold')::float8)
     DESC
LIMIT 10;

If your bloated table does not show up here because n_dead_tup is zero and last_autovacuum is NULL, you might have a problem with the statistics collector.

If the bloated table is right there on top, but last_autovacuum is NULL, you might need to configure autovacuum to be more aggressive so that it gets done with the table.
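
One way to do that for a single table is to lower its autovacuum thresholds with storage parameters (the table name and the values here are only illustrative):

ALTER TABLE bloated_table SET (
   autovacuum_vacuum_scale_factor = 0.01,
   autovacuum_vacuum_threshold = 1000,
   autovacuum_vacuum_cost_delay = 2
);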

But sometimes the result will look like this:

 schemaname |    relname   | n_live_tup | n_dead_tup |   last_autovacuum
------------+--------------+------------+------------+---------------------
 laurenz    | vacme        |      50000 |      50000 | 2018-02-22 13:20:16
 pg_catalog | pg_attribute |         42 |        165 |
 pg_catalog | pg_amop      |        871 |        162 |
 pg_catalog | pg_class     |          9 |         31 |
 pg_catalog | pg_type      |         17 |         27 |
 pg_catalog | pg_index     |          5 |         15 |
 pg_catalog | pg_depend    |       9162 |        471 |
 pg_catalog | pg_trigger   |          0 |         12 |
 pg_catalog | pg_proc      |        183 |         16 |
 pg_catalog | pg_shdepend  |          7 |          6 |
(10 rows)

Here autovacuum has recently run, but it didn’t free the dead tuples!

We can verify the problem by running VACUUM (VERBOSE):

test=> VACUUM (VERBOSE) vacme;
INFO:  vacuuming "laurenz.vacme"
INFO:  "vacme": found 0 removable, 100000 nonremovable row versions in
       443 out of 443 pages
DETAIL:  50000 dead row versions cannot be removed yet,
         oldest xmin: 22300
There were 0 unused item pointers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s.

Why won’t VACUUM remove the dead rows?

VACUUM can only remove those row versions (also known as “tuples”) that are not needed any more. A tuple is not needed if the transaction ID of the deleting transaction (as stored in the xmax system column) is older than the oldest transaction still active in the PostgreSQL database (or the whole cluster for shared tables).

This value (22300 in the VACUUM output above) is called the “xmin horizon”.

There are three things that can hold back this xmin horizon in a PostgreSQL cluster:

  1. Long-running transactions:

    You can find those and their xmin value with the following query:

    SELECT pid, datname, usename, state, backend_xmin
    FROM pg_stat_activity
    WHERE backend_xmin IS NOT NULL
    ORDER BY age(backend_xmin) DESC;
    

    You can use the pg_terminate_backend() function to terminate the database session that is blocking your VACUUM.

  2. Abandoned replication slots:

    A replication slot is a data structure that keeps the PostgreSQL server from discarding information that is still needed by a standby server to catch up with the primary.

    If replication is delayed or the standby server is down, the replication slot will prevent VACUUM from deleting old rows.

    You can find all replication slots and their xmin value with this query:

    SELECT slot_name, slot_type, database, xmin
    FROM pg_replication_slots
    ORDER BY age(xmin) DESC;
    

    Use the pg_drop_replication_slot() function to drop replication slots that are no longer needed.

    Note: This can only happen with physical replication if hot_standby_feedback = on. For logical replication there is a similar hazard, but only system catalogs are affected. Examine the column catalog_xmin in that case.

  3. Orphaned prepared transactions:

    During two-phase commit, a distributed transaction is first prepared with the PREPARE statement and then committed with the COMMIT PREPARED statement.

    Once a transaction has been prepared, it is kept “hanging around” until it is committed or aborted. It even has to survive a server restart! Normally, transactions don’t remain in the prepared state for long, but sometimes things go wrong and a prepared transaction has to be removed manually by an administrator.

    You can find all prepared transactions and their xmin value with the following query:

    SELECT gid, prepared, owner, database, transaction AS xmin
    FROM pg_prepared_xacts
    ORDER BY age(transaction) DESC;
    

    Use the ROLLBACK PREPARED SQL statement to remove prepared transactions.

About sequences

Sequences are used to generate artificial numeric primary key columns for tables.
A sequence provides a “new ID” that is guaranteed to be unique, even if many database sessions are using the sequence at the same time.

Sequences are not transaction safe, because they are not supposed to block the caller. That is not a shortcoming, but intentional.

As a consequence, a transaction that requests a new value from the sequence and then rolls back will leave a “gap” in the values committed to the database. In the rare case that you really need a “gap-less” series of values, a sequence is not the right solution for you.
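
A quick demonstration of the gap left behind by a rollback (the sequence name is made up):

CREATE SEQUENCE demo_seq;

BEGIN;
SELECT nextval('demo_seq');   -- returns 1
ROLLBACK;

SELECT nextval('demo_seq');   -- returns 2; the value 1 is lost for good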

PostgreSQL’s traditional way of using sequences (nextval('my_seq')) differs from the SQL standard, which uses NEXT VALUE FOR <sequence generator name>.

New developments in PostgreSQL v10

Identity columns

PostgreSQL v10 has introduced the standard SQL way of defining a table with an automatically generated unique value:

GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( sequence_options ) ]

Here is an example:

CREATE TABLE my_tab (
   id bigint GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
   ...
);

Behind the scenes, this uses a sequence, and it is roughly equivalent to the traditional

CREATE TABLE my_tab (
   id bigserial PRIMARY KEY,
   ...
);

which is a shorthand for

CREATE SEQUENCE my_tab_id_seq;

CREATE TABLE my_tab (
   id bigint PRIMARY KEY DEFAULT nextval('my_tab_id_seq'::regclass),
   ...
);

ALTER SEQUENCE my_tab_id_seq OWNED BY my_tab.id;

The problem with such a primary key column is that the generated value is a default value, so if the user explicitly inserts a different value into this column, it will override the generated one.

This is usually not what you want, because it will lead to a constraint violation error as soon as the sequence counter reaches the same value. Rather, you want the explicit insertion to fail, since it is probably a mistake.

For this you use GENERATED ALWAYS:

CREATE TABLE my_tab (
   id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
   ...
);

You can still override the generated value, but you’ll have to use the OVERRIDING SYSTEM VALUE clause for that, which makes it much harder for such an INSERT to happen by mistake:

INSERT INTO my_tab (id) OVERRIDING SYSTEM VALUE VALUES (42);

New system catalog pg_sequence

Before PostgreSQL v10, a sequence’s metadata (starting value, increment and others) were stored in the sequence itself.

This information is now stored in a new catalog table pg_sequence.

The only data that remain in the sequence are the data changed by the sequence manipulation functions nextval, currval, lastval and setval.
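
You can see that split for yourself, using the my_tab_id_seq sequence from the example above:

-- the static definition now lives in the pg_sequence catalog
SELECT seqstart, seqincrement, seqmax, seqcycle
FROM pg_sequence
WHERE seqrelid = 'my_tab_id_seq'::regclass;

-- the sequence relation itself only keeps the changing state
SELECT last_value, log_cnt, is_called FROM my_tab_id_seq;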

Transactional DDL for sequences

A sequence in PostgreSQL is a “special table” with a single row.

In “normal tables”, an UPDATE does not modify the existing row, but writes a new version of it and marks the old version as obsolete. Since sequence operations should be fast and are never rolled back, PostgreSQL can be more efficient by just modifying the single row of a sequence in place whenever its values change.

Since prior to PostgreSQL v10 all metadata of a sequence were kept in the sequence (as explained in the previous section), this had the down side that ALTER SEQUENCE, which also modified the single row of a sequence, could not be rolled back.

Since PostgreSQL v10 has given us pg_sequence, and catalog modifications are transaction safe in PostgreSQL, this limitation could be removed with the latest release.

Performance regression with ALTER SEQUENCE

When I said above that ALTER SEQUENCE has become transaction safe just by introducing a new catalog table, I cheated a little. There is one variant of ALTER SEQUENCE that modifies the values stored in a sequence:

ALTER SEQUENCE my_tab_id_seq RESTART;

If only some variants of ALTER SEQUENCE were transaction safe and others weren’t, this would lead to surprising and buggy behavior.

That problem was fixed with this commit:

commit 3d79013b970d4cc336c06eb77ed526b44308c03e
Author: Andres Freund <andres@anarazel.de>
Date:   Wed May 31 16:39:27 2017 -0700

    Make ALTER SEQUENCE, including RESTART, fully transactional.
    
    Previously the changes to the "data" part of the sequence, i.e. the
    one containing the current value, were not transactional, whereas the
    definition, including minimum and maximum value were.  That leads to
    odd behaviour if a schema change is rolled back, with the potential
    that out-of-bound sequence values can be returned.
    
    To avoid the issue create a new relfilenode fork whenever ALTER
    SEQUENCE is executed, similar to how TRUNCATE ... RESTART IDENTITY
    already is already handled.
    
    This commit also makes ALTER SEQUENCE RESTART transactional, as it
    seems to be too confusing to have some forms of ALTER SEQUENCE behave
    transactionally, some forms not.  This way setval() and nextval() are
    not transactional, but DDL is, which seems to make sense.
    
    This commit also rolls back parts of the changes made in 3d092fe540
    and f8dc1985f as they're now not needed anymore.
    
    Author: Andres Freund
    Discussion: https://postgr.es/m/20170522154227.nvafbsm62sjpbxvd@alap3.anarazel.de
    Backpatch: Bug is in master/v10 only

This means that every ALTER SEQUENCE statement will now create a new data file for the sequence; the old one gets deleted during COMMIT. This is similar to the way TRUNCATE, CLUSTER, VACUUM (FULL) and some ALTER TABLE statements are implemented.

Of course this makes ALTER SEQUENCE much slower in PostgreSQL v10 than in previous releases, but you can expect this statement to be rare enough that it should not cause a performance problem.

However, there is this old blog post by depesz that recommends the following function to efficiently get a gap-less block of sequence values:

CREATE OR REPLACE FUNCTION multi_nextval(
   use_seqname text,
   use_increment integer
) RETURNS bigint AS $$
DECLARE
   reply bigint;
BEGIN
   PERFORM pg_advisory_lock(123);
   EXECUTE 'ALTER SEQUENCE ' || quote_ident(use_seqname)
           || ' INCREMENT BY ' || use_increment::text;
   reply := nextval(use_seqname);
   EXECUTE 'ALTER SEQUENCE ' || quote_ident(use_seqname)
           || ' INCREMENT BY 1';
   PERFORM pg_advisory_unlock(123);
   RETURN reply;
END;
$$ LANGUAGE 'plpgsql';

This function returns the last value of the gap-less sequence value block (and does not work correctly when called on a newly created sequence).

Since this function calls ALTER SEQUENCE not only once but twice, you can imagine that every application that uses it a lot will experience quite a performance hit when upgrading to PostgreSQL v10.

Fortunately you can achieve the same thing with the normal sequence manipulation functions, so you can have a version of the function that will continue performing well in PostgreSQL v10:

CREATE OR REPLACE FUNCTION multi_nextval(
   use_seqname regclass,
   use_increment integer
) RETURNS bigint AS $$
DECLARE
   reply bigint;
   lock_id bigint := (use_seqname::bigint - 2147483648)::integer;
BEGIN
   PERFORM pg_advisory_lock(lock_id);
   reply := nextval(use_seqname);
   PERFORM setval(use_seqname, reply + use_increment - 1, TRUE);
   PERFORM pg_advisory_unlock(lock_id);
   RETURN reply + use_increment - 1;
END;
$$ LANGUAGE plpgsql;

If you want to get the first value of the sequence value block, use RETURN reply;

Note that both the original function and the improved one use advisory locks, so they will only work reliably if the sequence is only used with that function.

I recently had an interesting support case that shows how the cause of a problem can sometimes be where you would least suspect it.

About table bloat

After an UPDATE or DELETE, PostgreSQL keeps old versions of a table row around. This way, concurrent sessions that want to read the row don’t have to wait. But eventually this “garbage” will have to be cleaned up. That is the task of the autovacuum daemon.

Usually you don’t have to worry about that, but sometimes something goes wrong. Then old row versions don’t get deleted, and the table keeps growing. Apart from the wasted storage space, this will also slow down sequential scans and – to some extent – index scans.

To get rid of the bloat, you can use VACUUM (FULL) and other tools like pg_squeeze. But it is important to find and fix the cause of the table bloat so that it does not reappear.
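
For example, a rewrite of a bloated table could look like this (the table name is made up; keep in mind that VACUUM (FULL) holds an ACCESS EXCLUSIVE lock on the table while it runs):

SELECT pg_size_pretty(pg_total_relation_size('bloated_table'));
VACUUM (FULL) bloated_table;
SELECT pg_size_pretty(pg_total_relation_size('bloated_table'));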

The problem

I got called by a customer who experienced table bloat in the pg_attribute system catalog table that contains the table column metadata.

This can happen if table columns get modified or dropped frequently. Most of the time, these are temporary tables which are automatically dropped when the session or transaction ends.

The customer made heavy use of temporary tables. But they have several Linux machines with databases that experience the same workload, and only some of those had the problem.

Searching the cause

I went through the list of common causes for table bloat:

  • Database transactions that remain open (state “idle in transaction”).
    These will keep autovacuum from cleaning up row versions that have become obsolete after the start of the transaction.
  • A rate of data modification that is so high that autovacuum cannot keep up.
    In this case, the correct answer is to make autovacuum more aggressive.

Neither was the case; the second option could be ruled out because it would have caused bloat on all machines, not only on some.

Then I had a look at the usage statistics for the affected table:

dbname=> SELECT * FROM pg_stat_sys_tables
dbname->          WHERE relname = 'pg_attribute';

-[ RECORD 1 ]-------+-------------
relid               | 1249
schemaname          | pg_catalog
relname             | pg_attribute
seq_scan            | 167081
seq_tup_read        | 484738
idx_scan            | 1506941
idx_tup_fetch       | 4163837
n_tup_ins           | 0
n_tup_upd           | 0
n_tup_del           | 0
n_tup_hot_upd       | 0
n_live_tup          | 0
n_dead_tup          | 0
n_mod_since_analyze | 0
last_vacuum         |
last_autovacuum     |
last_analyze        |
last_autoanalyze    |
vacuum_count        | 0
autovacuum_count    | 0
analyze_count       | 0
autoanalyze_count   | 0

This confirms that autovacuum has never run. But more interesting is that we get a hint why it didn’t run:

PostgreSQL thinks that the number of dead tuples (row versions that could be removed) is 0, so it didn’t even try to remove them.

The statistics collector

A suspicion became certainty after I found the following message in the server logs:

using stale statistics instead of current ones because stats collector is
not responding

The statistics collector process is the PostgreSQL background process that collects usage statistics.

PostgreSQL backends send statistics about their activity through a UDP socket on localhost, which is created at PostgreSQL startup time. The statistics collector reads from the socket and aggregates the collected statistics.

Closing in on the problem

The statistics collector was running:

918     1 /usr/pgsql-10/bin/postmaster -D /var/lib/pgsql/10/data/
947   918 postgres: logger process   
964   918 postgres: checkpointer process   
965   918 postgres: writer process   
966   918 postgres: wal writer process   
967   918 postgres: autovacuum launcher process   
968   918 postgres: stats collector process   
969   918 postgres: bgworker: logical replication launcher

To see what the statistics collector was doing and to spot any problems it had, I traced its execution:

# strace -p 968
strace: Process 968 attached
epoll_pwait(3,

The statistics collector was waiting for messages on the UDP socket, but no messages were coming through!

I had a look at the UDP socket:

# netstat -u -n -p
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address   Foreign Address State       PID/Program name    
udp6       0      0 ::1:59517       ::1:59517       ESTABLISHED 918/postmaster

Nothing suspicious so far.

But when I tried the same thing on a machine that didn’t have the problem, I got something different:

# netstat -u -n -p
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address   Foreign Address State       PID/Program name    
udp        0      0 127.0.0.1:46031 127.0.0.1:46031 ESTABLISHED 9303/postmaster

It turned out that on all systems that experienced table bloat, the statistics collector socket was created on the IPv6 address for localhost, while all working systems were using the IPv4 address!

But all machines had IPv6 disabled for the loopback interface:

# ifconfig lo
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 6897  bytes 2372420 (2.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6897  bytes 2372420 (2.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Nailing the bug to the wall

PostgreSQL uses the POSIX function getaddrinfo(3) to resolve localhost.

Since PostgreSQL knows quite well that it is important to have a working statistics collection, it will loop through all the addresses returned by that call, create a UDP socket and test it until it has a socket that works.

So we know that IPv6 must have worked when PostgreSQL was started!

Further analysis revealed that IPv6 was disabled during the boot sequence, but there was a race condition:
Sometimes IPv6 would be disabled before PostgreSQL started, sometimes afterwards. And these latter machines were the ones where the statistics collector stopped working and tables got bloated!

After changing the boot sequence to always disable IPv6 before starting PostgreSQL, the problem was fixed.

Conclusion

This shows (again) how the cause of a computer problem can be in an entirely different place than you would suspect at first glance.

It also shows why a good database administrator needs to know the operating system well.

There are a couple of different ways to implement database encryption – commonly at the operating system, filesystem, file or column level, leaving aside transport-level encryption, which has been supported for 15 years. Each of those approaches counters a different threat model, and one can easily imagine that for databases, which were originally not designed with encryption in mind, it is not exactly easy to agree on one way of doing things in the first place – and then there are certainly a lot of technical pitfalls along the way. But today, after a lot of work in co-operation with a client, we’re really glad to announce that our PostgreSQL instance-level encryption patch mentioned some months ago is now ready for download and use!

How does it work?

The high-level idea behind the patch is to store all the files making up a PostgreSQL cluster on disk in encrypted format (data-at-rest encryption) and then decrypt blocks as they are read from disk into shared buffers. As soon as a block is written out from shared buffers to disk again, it will be encrypted automatically. This requires, firstly, that the database is initialized (initdb) with encryption in mind and, secondly, that the server has access to the encryption key during startup; the key can be provided in two ways – through an environment variable or through a new “pgcrypto.keysetup_command” parameter. Note that the entire instance is encrypted, so in some sense this implementation is comparable with filesystem or partition level encryption.

Sample setup

1. Build the Postgres code with the patch applied and also install “contrib”.

2. Initialize the cluster by setting the encryption key and then calling initdb


read -sp "Postgres passphrase: " PGENCRYPTIONKEY

export PGENCRYPTIONKEY=$PGENCRYPTIONKEY

initdb --data-encryption pgcrypto --data-checksums -D cryptotest

3. Start the server

3.1. Set the PGENCRYPTIONKEY environment variable before starting the server (in our case it’s already there but not so after opening a new console)


export PGENCRYPTIONKEY=topsecret
pg_ctl -D cryptotest start

3.2. Optionally, instead of feeding the encryption key from an environment variable, one could implement a custom and more secure key reading procedure via the new “pgcrypto.keysetup_command” postgresql.conf parameter. This parameter will be executed as a Bash command by the Postgres server process, and it needs to return a string in the format “encryptionkey=…” with a key part of 64 hex characters. This way, pg_ctl can be used as normal.

Details

For those more technically interested – the encryption algorithm used is 128-bit AES in XTS mode, sometimes also called XTS-AES. It’s a block cipher with a “tweak” for extra security, basically military-grade standard stuff, so you shouldn’t worry about the mechanics too much, but rather about how to keep your encryption key safe and how to safely feed it to Postgres when the server starts up. The key needs to be provided to the server during every startup, and when it doesn’t match (a known value kept encrypted in the control file – visible under “Data encryption fingerprint” when calling pg_controldata – will be decrypted and compared), the server will refuse to start.

More or less everything will be encrypted – heap files (tables, indexes, sequences), the xlog (as it also contains data), the clog, and temporary files generated during execution of larger queries.

The main encryption/decryption code itself is placed into the existing “pgcrypto” module, extending it with functions like “pgcrypto_encrypt_block” and “pgcrypto_decrypt_block” and adding some other setup functions, but the storage manager and more than a dozen other files also had to be changed.

Expectations on performance

Naturally, one agrees to compromise on performance when going for encryption, as there are no free lunches. Here things are a bit fuzzy – one can expect a very considerable performance hit if your workload is IO-oriented (logging etc). On the other hand, on typical server hardware, when the active dataset stays more or less in shared buffers, the performance penalty will be very minor and not noticeable.

To get a better picture I ran 5 different types of “pgbench” workloads (avg. of 3 runs of 10 min each, 4 concurrent clients) and the results can be seen below. To conjure up a “typical workload” I’m using pgbench here in “--skip-some-updates” mode, giving us basically a 1:3 read-write ratio.

Hardware: AWS i2.xlarge (4 vCPUs, 2.5 GHz, Intel Xeon E5-2670v2, 30.5 GiB memory, 1 x 800 GB SSD XFS-formatted)

Workload                                                 | Without encryption | With encryption | Penalty
---------------------------------------------------------+--------------------+-----------------+--------
Bulk insert (pgbench init on scale 200, ~2.6GB of data)  | 32 s               | 75 s            | 134%
Read only from shared_buffers                            | 21657 TPS          | 21462 TPS       | 0.9%
Read-write (1:3 ratio) fitting into shared buffers       | 3600 TPS           | 3154 TPS        | 12%
Read-only not fitting into shared buffers                | 19876 TPS          | 12328 TPS       | 38%
Read-write (1:3 ratio) not fitting into shared buffers   | 3418 TPS           | 2685 TPS        | 21%

As we can see, the picture is pretty non-linear here, with the performance difference jumping from 1% to 134% for different tests, with an average of 16% for a typical business application and an average penalty of less than 10% when things fit into shared buffers. This all means that encryption performance is very workload specific and sensitive to the number of pages moved between shared buffers and disk (cache ratio) as well as to the pure CPU power available for the number crunching, so it’s quite difficult to draw a final conclusion here beyond stating that the performance decrease for typical use cases is very acceptable in my opinion, with some room for future optimizations (starting to use AES-NI, i.e. hardware accelerated encryption instructions supported by all modern CPUs).

All in all, Postgres now has a new and viable encryption option available, and in some environments it could well be the simplest approach for securing your data. As always, we would appreciate your feedback and questions! Download here.