
PostgreSQL: You might need to increase max_locks_per_transaction

01.2020

“out of shared memory”: Some of you might have seen that error message in PostgreSQL already. But what does it really mean, and how can you prevent it? The problem is actually not as obscure as it might seem at first glance. max_locks_per_transaction is the critical configuration parameter you need to adjust to avoid trouble.

“out of shared memory”: When it happens

Most of the shared memory used by PostgreSQL is of a fixed size. This is true for the I/O cache (shared buffers) and for many other components as well. One of those components has to do with locking. If you touch a table inside a transaction, PostgreSQL has to track your activity to ensure that a concurrent transaction cannot drop the table you are about to touch. Tracking activity is important because you want to make sure that a DROP TABLE (or some other DDL) has to wait until all reading transactions have terminated. The trouble is, you have to store information about tracked activity somewhere, and this is exactly the point you have to understand.

Let us run a simple script:
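A minimal sketch of such a script, run in psql (the table name prefix tab_ is made up; any name will do):

    BEGIN;

    SELECT 'CREATE TABLE tab_' || id || ' (id int)'
    FROM generate_series(1, 20000) AS id;

    \gexec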

The script starts a transaction and generates 20,000 CREATE TABLE statements. It simply generates SQL, which is then executed automatically (\gexec treats each value returned by the previous SQL statement as an SQL statement of its own and runs it).

Let us see what the SELECT statement produced ...
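Assuming the sketch above, the result is simply a list of SQL statements:

               ?column?
    ------------------------------
     CREATE TABLE tab_1 (id int)
     CREATE TABLE tab_2 (id int)
     ...
    (20000 rows)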

And now let us see what PostgreSQL does:
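Once \gexec starts executing the generated statements, things go fine for a while, until the lock table fills up (the exact point of failure depends on your configuration):

    CREATE TABLE
    CREATE TABLE
    ...
    ERROR:  out of shared memory
    HINT:  You might need to increase max_locks_per_transaction.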

After a few thousand tables, PostgreSQL will error out: “out of shared memory”. What you can see is that we created all those tables in a single transaction. PostgreSQL had to lock them all and eventually ran out of memory. Remember: the database uses a fixed-size shared memory area to store those locks.

The logical question is: what is the size of this memory area? Two parameters come into play:

The number of locks we can keep in shared memory is max_connections x max_locks_per_transaction (strictly speaking, PostgreSQL sizes the lock table for max_locks_per_transaction * (max_connections + max_prepared_transactions) objects).
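A quick way to check the resulting capacity on your own system (a sketch; lock_table_slots is just a column alias):

    SELECT current_setting('max_locks_per_transaction')::int
         * (current_setting('max_connections')::int
            + current_setting('max_prepared_transactions')::int) AS lock_table_slots;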

Keep in mind that row-level locks are NOT relevant here. You can easily run something like the following (a hypothetical statement locking every row of a big table) …
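    SELECT * FROM t_big FOR UPDATE;   -- t_big is a made-up table holding millions of rows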

… without running out of memory because row locks are stored on disk and not in RAM. Therefore the number of tables is relevant – not the number of rows.

Inspecting pg_locks

How can you figure out what is currently going on? To demonstrate what you can do, I have prepared a small example:

First of all, you can create a simple table (the name t_demo used below is just an illustration):
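    CREATE TABLE t_demo (id int);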
As you might know, in PostgreSQL names are not relevant at all. Internally, only numbers count. To fetch the object ID of a simple table, try the following statement:
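For instance, sticking with the hypothetical t_demo from above:

    SELECT oid, relname
    FROM pg_class
    WHERE relname = 't_demo';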

In my example, the object id is 232787. Let us figure out where this number pops up:
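A sketch of how to see that: read from the table inside an open transaction, then search pg_locks for the object ID while the transaction is still running:

    BEGIN;
    SELECT count(*) FROM t_demo;

    SELECT relation, locktype, mode, granted
    FROM pg_locks
    WHERE relation = '232787'::oid;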

Since we are reading from the table, you can see that PostgreSQL has to keep an ACCESS SHARE LOCK which only ensures that the table cannot be dropped or modified (= DDL) in a way that harms concurrent SELECT statements.
The more tables a transaction touches, the more entries pg_locks will have. In the case of heavy concurrency, the sheer number of entries can become a problem.

PostgreSQL partitioning and how it relates to “out of shared memory”

If you are running a typical application, “out of shared memory” errors are rare, because the overall number of relevant locks is usually quite low. However, if you rely heavily on excessive partitioning, life is different. In PostgreSQL, a partition is basically a normal table, and it is treated as such. Therefore, locking can become an issue.

Let us take a look at the following example:
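What follows is a minimal sketch matching the description below (the names and range bounds are made up):

    BEGIN;

    CREATE TABLE t_part (id int) PARTITION BY RANGE (id);

    SELECT 'CREATE TABLE t_part_' || id
           || ' PARTITION OF t_part FOR VALUES FROM ('
           || id || ') TO (' || (id + 1) || ')'
    FROM generate_series(1, 1000) AS id;

    \gexec

    SELECT count(*) FROM t_part;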

First of all, a parent table is created. Then, 1000 partitions are added. For the sake of simplicity, each partition is only allowed to hold exactly one row, but let’s not worry about that for now. Following that, a simple SELECT statement is executed; because it has no WHERE clause, such a statement is guaranteed to read all partitions.

The following listing shows which SQL the script has generated to create partitions:
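Given the sketch above, the generated SQL looks roughly like this:

                                 ?column?
    -------------------------------------------------------------------
     CREATE TABLE t_part_1 PARTITION OF t_part FOR VALUES FROM (1) TO (2)
     CREATE TABLE t_part_2 PARTITION OF t_part FOR VALUES FROM (2) TO (3)
     ...
    (1000 rows)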

After running the final SELECT from the sketch above, the important observation is now (the exact count may be off by a lock or two, e.g. for the lock taken by reading pg_locks itself):
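    SELECT count(*) FROM pg_locks WHERE mode = 'AccessShareLock';

     count
    -------
      1002
    (1 row)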

When to change max_locks_per_transaction in regard to partitioning

PostgreSQL already needs more than 1000 locks to do this. Partitioning will therefore increase the usage of this shared memory area and make “out of shared memory” errors more likely. If you are using partitioning HEAVILY, it can make sense to increase max_locks_per_transaction.
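A sketch of how to raise the value (256 is just an example; the parameter only takes effect after a server restart):

    ALTER SYSTEM SET max_locks_per_transaction = 256;
    -- or edit postgresql.conf directly; restart PostgreSQL afterwards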

Finally …

In case you are interested in Data Science and Machine Learning, you can check out Kevin Speyer’s post on “Reinforcement Learning” which can be found here.

 


In order to receive regular updates on important changes in PostgreSQL, subscribe to our newsletter, or follow us on Facebook or LinkedIn.

4 responses to “PostgreSQL: You might need to increase max_locks_per_transaction”

  1. It should also be noted that if a query touches only a few partitions, PostgreSQL locks only those partitions.


    commit; begin;

    # SELECT count(*) FROM t_part where id=11;
    count
    -------
    0
    (1 row)

    # SELECT count(*) FROM pg_locks WHERE mode = 'AccessShareLock';
    count
    -------
    3
    (1 row)

    # SELECT count(*) FROM t_part where id=12;
    count
    -------
    0
    (1 row)

    # SELECT count(*) FROM pg_locks WHERE mode = 'AccessShareLock';
    count
    -------
    4
    (1 row)

    # SELECT count(*) FROM t_part where id>=950;
    count
    -------
    0
    (1 row)

    # SELECT count(*) FROM pg_locks WHERE mode = 'AccessShareLock';
    count
    -------
    55
    (1 row)

  2. We have this problem in our database. High lock counts, high CPU, and DB crashes happen when we add more partitions for upcoming months. Each new monthly/weekly partition has sub-partitions (exactly 11 sub-partitions per partition); older partitions didn't have sub-partitions, they are added for new partitions only. The problem started recently, after adding partitions with sub-partitions for the next 3 months.

    SELECT queries on this table cause high CPU consumption, which saturates the CPU and slows down the queries, leading to a load pile-up. As a side effect, we see far too many locks, about 10K times more than normal when the load piles up.

    I'm planning to do a perf test to find a solution to improve the performance of the queries against this table. A few months back, I tried increasing the parameter "max_locks_per_transaction" for a problem in a different table in a different DB, to solve similar CPU/slowness issues, but increasing this parameter didn't help. So we had to detach very old partitions, which reduced the number of partitions in the table, and CPU consumption came down.

    Still, I would like to evaluate this parameter once again in a perf environment. Should I consider tuning any other parameter, or is there another solution to my problem?

    Thanks in advance. It would be great if you could help with my problem.

    Regards, Boopathi L

    • The system can also run out of shared memory in parallel queries.

      It is impossible to diagnose your problem with the information you gave; you might want to buy some consulting and have someone look at your problem in depth.

      It seems like the underlying problem is the long query time; perhaps you can start working on that.

      • Thanks, Laurenz, for the response.

        There were no long-running transactions in our case. Our problem is a highly concurrent workload on the table. When more partitions have to be locked for every query/transaction, CPU usage increases, and when the CPU saturates, the queries slow down, and that's leading to a pile-up in the load.

        What I would like to sort out here is how to reduce the CPU consumed by locks when a query execution has to lock many partitions. Right now we don't have the sub-partition's partition key column in the WHERE clause of the query. Because of this, every execution has to take locks on multiple partitions.

        We could solve this by having the sub-partition's partition key columns in the WHERE clause, but we can't do that for some reasons, so we need to solve this in another way.

