I have been wondering for a while about the optimal block size in PostgreSQL. Usually the default value of 8 kB has proven to be beneficial for most applications. However, after listening to a talk about MonetDB, I got interested in testing various workloads with different block sizes – especially block sizes larger than PostgreSQL's current maximum of 32 kB.
We came up with a prototype patch to support larger block sizes in PostgreSQL, which can be downloaded here: PATCH
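The block size is fixed at build time, so each run requires its own build. A sketch of the build step, assuming the patch keeps PostgreSQL's standard --with-blocksize configure switch (which takes the size in kB and, in stock sources, only accepts values up to 32):

```shell
# Apply the patch first for block sizes beyond 32 kB,
# then build with the desired block size (value in kB):
./configure --with-blocksize=16
make
make install
```

Each binary built this way can only open clusters initialized with the same block size, so a fresh initdb is needed per configuration.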

Very large PostgreSQL blocks revisited

It has turned out that for a default workload as created by pgbench, larger blocks are not beneficial. We ran tests with block sizes of 2k, 4k, 8k, 16k and 1 MB. For the common sizes the results were fairly close together – only the supersized blocks collapsed. The bottom line is: supersize blocks are clearly not beneficial for the typical case.

Our first results look like this:

Samsung SSD 840, 500 GB        TPS

blocksize = 2 kB             147.9
blocksize = 4 kB             141.7
blocksize = 8 kB             133.9
blocksize = 16 kB            127.2
blocksize = 1 MB              42.5

All results were produced on a 4-core Intel i5 CPU at 2.67 GHz with 16 GB of RAM. A single Samsung SSD served as the storage device.

Settings used for pgbench:
• Scale factor: 10
• Concurrent connections: 4
• Transactions: 10,000 per connection
• Fillfactor: 70%
• 128 MB shared buffers
• Default PostgreSQL config
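The settings above map directly onto pgbench's command line. A sketch of the invocation (the database name bench is an assumption; shared buffers are set in postgresql.conf):

```shell
# postgresql.conf: shared_buffers = 128MB, everything else left at default

# Initialize the test database: scale factor 10, fillfactor 70%
pgbench -i -s 10 -F 70 bench

# Run the benchmark: 4 concurrent connections, 10,000 transactions each
pgbench -c 4 -t 10000 bench
```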

The most interesting point here is that using supersize blocks basically destroys performance for this kind of workload.
NOTE: This is a standard OLTP test.
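One plausible contributor is write amplification: a pgbench transaction changes only a few small rows, but PostgreSQL reads and writes whole blocks (and, after a checkpoint, logs a full-page image to WAL). A back-of-envelope sketch, where the assumed figure of three dirtied heap blocks per transaction (one each for pgbench_accounts, pgbench_tellers and pgbench_branches, ignoring indexes and WAL) is our estimate, not a measurement:

```shell
# Assume ~3 dirtied heap blocks per transaction; each dirtied block
# must eventually be written out in full, regardless of block size.
for bs in 2048 8192 1048576; do
    echo "block size ${bs}: ~$((bs * 3)) bytes dirtied per transaction"
done
```

At the 8k default this is roughly 24 kB of dirty data per transaction; with 1 MB blocks it balloons to roughly 3 MB for the same tiny transaction, which fits the collapse visible in the table above.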