I have been wondering for a while about the optimal block size in PostgreSQL. Usually the default value of 8 kB has proven to be beneficial for most applications. However, after listening to a talk about MonetDB, I got interested in testing various workloads with different block sizes – especially with block sizes larger than PostgreSQL’s current maximum of 32 kB.
We came up with a prototype patch to support larger block sizes in PostgreSQL, which can be downloaded from here: PATCH
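For reference, in stock PostgreSQL the block size is a compile-time option, passed to configure in kilobytes (a power of 2 up to 32 in unpatched sources). A build with a non-default block size looks roughly like this; the database name used for verification is a placeholder:

```shell
# Build PostgreSQL with 32 kB blocks (the unpatched maximum).
./configure --with-blocksize=32
make && make install

# After initdb and server start, verify the compiled-in block size
# (reported in bytes):
psql -c "SHOW block_size;"
```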
Very large PostgreSQL blocks revisited
It has turned out that for a default workload, as created by pgbench, larger blocks are not beneficial. We ran tests with block sizes of 2 kB, 4 kB, 8 kB, 16 kB and 1 MB. Across the conventional sizes the results were pretty much the same. The bottom line is: supersize blocks are clearly never beneficial for the typical case.
Our first results look like this:
[Chart: pgbench TPS by block size – Samsung SSD 840, 500 GB]
All results were produced on a 4-core Intel i5 CPU with 2.67 GHz and 16 GB RAM. A single Samsung SSD served as the storage device.
Settings used for pgbench:
• Scale factor: 10
• Concurrent connections: 4
• Transactions: 10,000 per connection
• Fillfactor: 70%
• 128 MB shared buffers
• Default PostgreSQL config
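The settings above correspond roughly to the following pgbench invocation (the database name `test` is a placeholder; `shared_buffers = 128MB` is set in postgresql.conf):

```shell
# Initialize the pgbench tables: scale factor 10, fillfactor 70%.
pgbench -i -s 10 -F 70 test

# Run the benchmark: 4 concurrent connections,
# 10,000 transactions per connection.
pgbench -c 4 -t 10000 test
```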
The main point of interest here is that supersize blocks actually destroy performance, for a handful of reasons: among other things, every point lookup has to read and cache an entire block, so oversized blocks inflate I/O and waste shared buffers.
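The read-amplification effect is easy to estimate: if a random point lookup pulls one uncached block into shared buffers to fetch a single row of roughly 100 bytes (an assumed approximation of a pgbench_accounts row), the bytes read per byte needed grow linearly with the block size. A back-of-the-envelope sketch:

```shell
#!/bin/sh
# Back-of-the-envelope read amplification for random point lookups.
# Assumption: each lookup touches one uncached block (worst case for OLTP),
# and a pgbench_accounts row is roughly 100 bytes.
ROW_SIZE=100

for bs in 2048 4096 8192 16384 1048576; do
    # Integer bytes-read-per-row-needed ratio for this block size.
    amp=$((bs / ROW_SIZE))
    echo "${bs} B block: ${amp}x amplification"
done
```

With 1 MB blocks the sketch yields an amplification four orders of magnitude above the useful payload, which is consistent with the collapse seen in the OLTP numbers.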
NOTE: This is a standard OLTP test.