serial is a popular pseudo data type in PostgreSQL which is often used to generate auto-increment columns. However, it can lead to problems which are often underestimated. So what is the issue? Some of you might have already seen the following error message in real life:

bigint=# INSERT INTO t_overflow (dummy) VALUES ('abc');
ERROR: nextval: reached maximum value of sequence "t_overflow_id_seq" (2147483647)

What happened here is that the underlying integer column has consumed all of its values: the sequence reached its maximum, and PostgreSQL cannot increment it anymore. But how did we get there in the first place?

Sequences: Running out of values produces an error

Let’s create a table using the “serial” data type:

bigint=# CREATE TABLE t_overflow (id serial, dummy text);

What happens here is that PostgreSQL will create an int4 column, which is limited to 32 bits (a maximum value of 2147483647):

bigint=# \d t_overflow
Table "public.t_overflow"
Column | Type    | Collation | Nullable | Default
id     | integer |           | not null | nextval('t_overflow_id_seq'::regclass)
dummy  | text    |           |          |
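Under the hood, the “serial” shorthand is roughly equivalent to creating the sequence and the default value by hand. The following sketch shows the expanded form (the sequence name follows PostgreSQL’s tablename_columnname_seq convention):

bigint=# CREATE SEQUENCE t_overflow_id_seq;
bigint=# CREATE TABLE t_overflow (
             id    integer NOT NULL DEFAULT nextval('t_overflow_id_seq'),
             dummy text
         );
bigint=# ALTER SEQUENCE t_overflow_id_seq OWNED BY t_overflow.id;

The OWNED BY clause ensures the sequence is dropped automatically when the column (or table) is dropped.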

The newly created sequence serves as a default value and will increment as soon as we insert into the “id” column as shown in the next example:

bigint=# INSERT INTO t_overflow (dummy) VALUES ('abc'), ('cde') RETURNING *;
id | dummy
1  | abc
2  | cde
(2 rows)


The “serial” column has produced two ids for us: 1 and 2. Therefore the current value of the sequence in this session is 2:

bigint=# SELECT currval('t_overflow_id_seq');
currval
2
(1 row)

Note that “currval” does not return the most recent value issued by the sequence overall – it returns the most recent value issued by the sequence IN THIS session.
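As a consequence, calling “currval” in a session which has not called “nextval” yet does not return a value at all – it raises an error. For example, in a freshly opened connection:

bigint=# SELECT currval('t_overflow_id_seq');
ERROR: currval of sequence "t_overflow_id_seq" is not yet defined in this session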

Usually a sequence can last for quite a while. However, if your system is really busy, you need to be aware of the fact that a sequence backing a “serial” column is not unlimited: it ends at 2147483647, i.e. after roughly 2 billion values. We can explicitly set the value using “setval” to simulate this behavior:

bigint=# SELECT setval('t_overflow_id_seq', 2147483646);
setval
2147483646
(1 row)

There is still one value left …

bigint=# INSERT INTO t_overflow (dummy) VALUES ('efg') RETURNING *;
id          | dummy
2147483647  | efg
(1 row)


But finally an error will be issued:

bigint=# INSERT INTO t_overflow (dummy) VALUES ('ghi') RETURNING *;
ERROR: nextval: reached maximum value of sequence "t_overflow_id_seq" (2147483647)

Why “reached maximum value” matters

Why is this critical? In many cases it means that we have produced a table with roughly 2 billion entries. That is not a problem for PostgreSQL as such – but it is a problem if the table uses the id as a primary key, because changing the column from int4 to int8 requires an ALTER TABLE, which takes a table lock.
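To spot the danger before it strikes, you can check how close your sequences are to their maximum. Since PostgreSQL 10, the pg_sequences view exposes this information (last_value is NULL for sequences which have never been used):

bigint=# SELECT schemaname, sequencename, last_value, max_value
         FROM pg_sequences;

A simple monitoring check comparing last_value to max_value gives you plenty of warning time before a sequence runs out.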

Unfortunately, a change from int4 to int8 requires a complete rewrite of the table, which can lead to major locking issues and, effectively, a downed application. As rewriting such a large table takes time, this is a non-trivial issue from an availability point of view.
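For completeness, this is what the command in question looks like. Be aware that it rewrites the entire table and holds an ACCESS EXCLUSIVE lock – blocking both reads and writes – for the full duration:

bigint=# ALTER TABLE t_overflow ALTER COLUMN id TYPE bigint;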

The solution to the problem is “bigserial” (also spelled “serial8”), which automatically produces a 64-bit integer column. In practice you will never run out of 64-bit values. It therefore makes sense to think ahead and not use int4 columns for large tables – they can turn into a time bomb.
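Using “bigserial” from the start, the column comes out as int8. The table name below is just an example for illustration:

bigint=# CREATE TABLE t_safe (id bigserial, dummy text);
bigint=# \d t_safe
Table "public.t_safe"
Column | Type   | Collation | Nullable | Default
id     | bigint |           | not null | nextval('t_safe_id_seq'::regclass)
dummy  | text   |           |          |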

Finally …

Sequences are an important feature of every commonly used database system. If you want to learn more about sequences and gaps in sequences in general, consider checking out our blog post about this topic.

To read more about getting ALTER TABLE right, see my recent blog post.