
This week I started my preparations for one of my talks in Madrid. The topic is: "Joining 1 million tables". Actually 1 million tables is quite a lot and I am not sure if there is anybody out there who has already tried to do something similar. Basically the idea is to join 1 million tables containing just a single integer column (and no data). My idea was to come up with a join like this:
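Something along these lines - the real statement is of course generated, and the table and column names shown here are just placeholders:

    -- hypothetical sketch of the statement's structure
    SELECT *
    FROM t_1
        JOIN t_2 ON (t_1.id = t_2.id)
        JOIN t_3 ON (t_2.id = t_3.id)
        -- ... the same pattern repeated roughly 2 million lines ...
        JOIN t_1000000 ON (t_999999.id = t_1000000.id);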

To give you an impression of the size of the problem I am trying to solve - here is some data on the SQL:

The statement itself is 54 MB in size and is around 2 million lines long.

The first observation is that the standard planner, which does an exhaustive search, clearly cannot handle this. The number of potential join orders simply skyrockets - even way below a thousand tables the system stops responding, because planning simply takes too long.

So, I focused my tests on genetic query optimization (GEQO), which proved to be a lot better suited for this little competition. Joining a couple hundred tables with GEQO is actually possible straight away, without having to change any parameters at all. This was pretty impressive, as in real life you would never attempt this kind of operation. Based on some runtime estimations made while gradually increasing the number of tables, it seems that 1 million tables could actually be joined in roughly 1575 days. The thing is just: I don't have time to run this for 5 years, as I have to produce my slides for Madrid fairly soon ;).

GEQO is highly configurable, so some parameters can be changed to speed things up by making the genetic optimization process a bit lazier. That way I managed to cut the runtime down to roughly one year. To speed things up even more, it seems I will have to inspect the backend process and see what is going on ... stay tuned 😉
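Just to give an impression of the kind of knobs involved: these are the GEQO-related parameters one might lower to make the optimizer less ambitious. The values shown here are merely examples, not the exact settings used for this experiment:

    SET geqo = on;
    SET geqo_threshold = 12;      -- use GEQO once 12 or more tables are joined
    SET geqo_effort = 1;          -- 1 to 10; lower means less planning effort
    SET geqo_pool_size = 10;      -- keep only a small pool of candidate plans
    SET geqo_generations = 10;    -- run only a few iterations of the genetic algorithm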

More recent blog posts about joins can be found here, in our join-related blog section.

Writing a complex database server like PostgreSQL is not an easy task. Memory management in particular is an important area which needs special attention. Internally, PostgreSQL makes use of so-called “memory contexts”. The idea of a memory context is to organize memory in groups, which are arranged hierarchically. The main advantage is that, in case of an error, all relevant memory can be freed at once instead of having to track every single allocation.
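To illustrate the concept, here is a minimal sketch of how backend code typically uses the memory context API (simplified, and assuming a reasonably recent PostgreSQL source tree):

    /* simplified sketch of the memory context API */
    #include "postgres.h"
    #include "utils/memutils.h"

    static void
    memory_context_demo(void)
    {
        /* create a new context as a child of the currently active one */
        MemoryContext demo_ctx = AllocSetContextCreate(CurrentMemoryContext,
                                                       "demo context",
                                                       ALLOCSET_DEFAULT_SIZES);

        /* make it the active context: palloc() now allocates from it */
        MemoryContext old_ctx = MemoryContextSwitchTo(demo_ctx);

        char *buffer = palloc(1024);    /* lives inside "demo context" */
        (void) buffer;

        /* switch back and drop the whole context: this frees buffer and
           every other allocation made in it with one single call */
        MemoryContextSwitchTo(old_ctx);
        MemoryContextDelete(demo_ctx);
    }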

Understanding PostgreSQL memory contexts can be useful for solving a bunch of interesting support cases. Here is an example: recently we stumbled across a problem in which a database server was constantly running out of memory and ended up being killed by the OOM killer over and over again. The backend processes kept increasing their memory consumption for no obvious reason. How can a problem like this be approached?

GDB comes to the rescue

GDB can come to the rescue and solve the riddle of memory consumption nicely. The basic procedure works as follows:

• Create a core dump of the process in question
• Come up with a GDB macro to debug memory
• Run the macro

The first part is actually quite simple. To extract a core dump of a running process we have to find out the process ID first:
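Any of the usual tools will do. For instance (the output will of course look different on every system):

    # list the PostgreSQL processes running on the box
    ps ax | grep postgres

Alternatively, calling pg_backend_pid() inside the session you want to inspect returns the PID of that backend directly.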

In this example the process ID of the process we want to inspect is 1999 (a simple, idle local backend).
Then it is time to create the core file. gcore can do exactly that for you:
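Something along these lines should do the trick, 1999 being the PID determined above:

    # attach to the backend and dump its memory to a core file
    gcore 1999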

The beauty here is that gcore is just a simple shell script calling some gdb magic internally. The result will be a core file we can then make use of:
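By default the dump ends up in a file named after the PID:

    # gcore writes the core file as core.<pid> in the current directory
    ls -l core.1999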

The harder part

Then comes the harder part: writing the gdb macro to debug those memory contexts. gdb has a scripting language to handle exactly that. Here is the code:
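What follows is a rough sketch of such a macro rather than the exact script used for this case. It assumes PostgreSQL 13 or later, where MemoryContextData carries a mem_allocated counter; on older versions the size has to be derived differently:

    # walk_contexts.gdb - sketch; assumes PostgreSQL 13+ (mem_allocated field)
    set max-user-call-depth 100000

    define walk_contexts
      if $arg0 != 0
        # print the name of the context and the memory accounted to it
        printf "%s: %lu bytes\n", $arg0->name, (unsigned long) $arg0->mem_allocated
        # first descend into the children, then move on to the next sibling
        walk_contexts $arg0->firstchild
        walk_contexts $arg0->nextchild
      end
    end

    walk_contexts TopMemoryContext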

The last line in the file is the actual call executing the code just written: walk_contexts will walk through those memory contexts, starting at the TopMemoryContext.

To run the script, the following line will be useful. The script can simply be piped into gdb, and the result will list information about memory consumption:
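Assuming the macro has been saved as walk_contexts.gdb and the core file sits in the current directory, the call might look like this (adjust the path of the postgres binary to your installation):

    gdb /usr/local/pgsql/bin/postgres core.1999 < walk_contexts.gdb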

The output is actually quite long, so I decided to remove a couple of lines. What you see here is how the memory contexts are organized and how much memory sits in each of them.
If you happen to see a context which uses an insane amount of memory, that will definitely bring you one step closer to finding the root cause of a memory-related problem.
