Temporary tables are a core feature of SQL and are commonly used by people around the globe. PostgreSQL provides a nice implementation of temporary tables, which has served me well over the years. An interesting question arises when using them: what happens if a temporary table has the same name as a “real” table? The answer might be quite surprising ...
Let us create two tables which have the same name:
test=# CREATE TABLE x (id int);
CREATE TABLE
One table is a “real” one and the other one is temporary:
test=# CREATE TEMP TABLE x (a int, b int);
CREATE TABLE
The first vital observation is that both tables can actually be created without any problems. PostgreSQL will put those tables into two totally separate namespaces:
test=# \dt
          List of relations
  Schema   | Name | Type  | Owner
-----------+------+-------+-------
 pg_temp_2 | x    | table | hs
(1 row)

test=# \dt public.
       List of relations
 Schema | Name | Type  | Owner
--------+------+-------+-------
 public | x    | table | hs
(1 row)
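A quick way to see that search order, assuming the session above, is current_schemas(true), which also lists the implicit schemas. On my machine the output looks something like this (the temp schema number will vary):

test=# SELECT current_schemas(true);
        current_schemas
-------------------------------
 {pg_temp_2,pg_catalog,public}
(1 row)

The temporary schema comes first, which explains the behavior below.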
The crucial point is that the temporary table has precedence over the persistent table.
When “x” is selected, the temporary table comes first ...
test=# SELECT * FROM x;
 a | b
---+---
(0 rows)
… unless the table name is explicitly prefixed with the schema:
test=# SELECT * FROM public.x;
 id
----
(0 rows)
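If you are ever unsure which table an unqualified name resolves to, the system catalog can tell you; a small sketch (the schema name depends on your session's temp schema):

test=# SELECT relnamespace::regnamespace
       FROM pg_class
       WHERE oid = 'x'::regclass;
 relnamespace
--------------
 pg_temp_2
(1 row)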
The recommendation is absolutely clear here: don't use temporary tables with the same name as persistent tables. All it does is cause trouble and pain.
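The same resolution rule applies to DDL, which is where the real damage tends to happen: in the session above, an unqualified DROP TABLE removes the temporary table, not the persistent one:

test=# DROP TABLE x;
DROP TABLE
test=# SELECT * FROM x;
 id
----
(0 rows)

Only a second DROP TABLE x (or an explicit DROP TABLE public.x) would remove the persistent table.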
For further advice on managing tables in PostgreSQL, see our latest blog posts tagged “table”.
UPDATED August 2023: Tuning shared_buffers in PostgreSQL is a pretty fundamental thing to do when you set up a high-performance server. In most cases, adding more memory speeds things up considerably, and many tests have shown exactly that. However, there are some corner cases which can cause significant performance issues.
DROP TABLE

One of those corner cases is the massive use of DROP TABLE. One might argue that DROP TABLE is a rare thing to do. However, in many applications it is not. DROP TABLE is used massively by countless applications and can turn out to be a bit of a showstopper.

Why is it an issue? The problem is that during DROP TABLE all cached blocks are forced out of the cache. The bigger the cache, the bigger the effort.
The following test shows what this can actually mean in real life.
SET synchronous_commit TO off;
BEGIN;
CREATE TABLE x (id int);
INSERT INTO x VALUES (1);
DROP TABLE x;
COMMIT;
To make sure that the test does not become disk-bound, flushing requirements are relaxed by turning off synchronous_commit. Then a basic table is created, populated, and dropped again.
The following shell script runs this transaction via pgbench with different settings for shared_buffers:
#!/bin/sh

DB=test

for x in '8 MB' '32 MB' '128 MB' '1 GB' '8 GB'
do
	pg_ctl -D /tmp/db -l /dev/null -o "--shared_buffers='$x'" start
	sleep 1
	echo "tps for $x"
	psql -c 'SHOW shared_buffers' $DB
	pgbench --file=run.sql -j 1 -c 1 -T 10 $DB 2> /dev/null
	pg_ctl -D /tmp/db stop
	sleep 1
done
For each setting, the script starts a database instance and runs the transaction shown before (stored in run.sql) through pgbench for 10 seconds.
[hs@laura benchmark]$ ./x.sh | grep tps
tps for 8 MB
tps = 1390.858135 (including connections establishing)
tps = 1391.123414 (excluding connections establishing)
tps for 32 MB
tps = 1341.244741 (including connections establishing)
tps = 1341.745728 (excluding connections establishing)
tps for 128 MB
tps = 1193.329832 (including connections establishing)
tps = 1193.796699 (excluding connections establishing)
tps for 1 GB
tps = 532.483333 (including connections establishing)
tps = 532.702853 (excluding connections establishing)
tps for 8 GB
tps = 104.238781 (including connections establishing)
tps = 104.280531 (excluding connections establishing)
As you can see, the number of transactions per second drops dramatically, down to around 100 TPS - roughly 7.5% of the maximum value reached in this test.
The reason for this horrible drop is that during DROP TABLE, PostgreSQL will clean the table's blocks out of shared_buffers. The trouble is: the larger the shared_buffers area actually is, the longer this will take. For most applications this is not a real issue. However, if your application is heavily built on creating and dropping tables for whatever reason, this can be a showstopper and kill performance.
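If you want to see how much of the cache a table occupies before it is dropped, the pg_buffercache extension (shipped with PostgreSQL as a contrib module) can help; a minimal sketch, reusing the table name x from the example above:

test=# CREATE EXTENSION IF NOT EXISTS pg_buffercache;
CREATE EXTENSION
test=# SELECT count(*) AS buffers
       FROM pg_buffercache
       WHERE relfilenode = pg_relation_filenode('x');

Note that the cost of DROP TABLE is driven by the size of shared_buffers itself: PostgreSQL has to scan the whole cache to find the table's buffers, no matter how few of them there are.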
Of course, creating and dropping so many temporary tables should not happen, but from experience we have seen that it actually does.
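If the application can keep the table structure around, one common mitigation - sketched here under that assumption - is to create the temporary table once per session and clear it per transaction instead of dropping and recreating it:

CREATE TEMP TABLE x (id int) ON COMMIT DELETE ROWS;

BEGIN;
INSERT INTO x VALUES (1);
COMMIT;   -- rows are removed at commit, but the table stays

This avoids the repeated DROP TABLE and therefore the repeated scan of shared_buffers.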
To understand how that works, see the update about vacuum truncation in our blog post on table bloat.
In order to receive regular updates on important changes in PostgreSQL, subscribe to our newsletter, or follow us on Facebook or LinkedIn.