PostgreSQL TDE is an open source version of PostgreSQL which encrypts data before it is stored on disk. It is therefore a more secure and thus more enterprise-ready variation of the database. That’s why CYBERTEC PostgreSQL Enterprise Edition (PGEE) also relies heavily on encryption.
PostgreSQL performance analysis
People often ask about the performance differences between encrypted and unencrypted PostgreSQL. To answer this question, we have conducted a comprehensive performance analysis to shed some light on this important topic, and to give users a way to compare various PostgreSQL settings and their performance. Our tests were run using the following hardware and software components:
CPU: AMD Ryzen 5950X — 16 cores / 32 threads, 72MB cache, support for AES-NI
MEMORY: 64GB RAM — DDR4 2666 ECC dual channel
STORAGE: SSD Samsung 980 PRO — NVMe, PCIe v4 ×4
simple lvm volume, XFS filesystem, exclusive use
Software in use
OS: Linux CentOS 7.9.2009, kernel 5.10.11-1.el7.elrepo.x86_64
PostgreSQL: postgresql13-13.1-3PGDG.rhel7.x86_64 from PGDG repo
PostgreSQL 13.1 TDE: built locally with the standard compile options used for PGDG packages using base GCC 4.8.5 20150623 (Red Hat 4.8.5-44)
PostgreSQL 13.1 TDE: built locally with the standard compile options used for PGDG packages using software collection devtoolset-9 GCC 9.3.1 20200408 (Red Hat 9.3.1-2) and -march=znver2
We have used a few scripts to run these benchmarks in an automated way. You can download the tools for free from our GitHub page.
Since these tests were run on a solid-state drive without proper TRIM management, a second round of all of these analyses is being prepared, with proper TRIM (-o discard mount option) and scale factors adjusted to better emphasize the impact of data set size relative to available memory:
- small (order of magnitude smaller than shared buffers) – scale 12
- just below shared buffers limit (90% of shared buffers) – scale 900
- larger than shared buffers but still fits in RAM (120% of shared buffers) – scale 1600
- larger than RAM (120% of RAM) – scale 5000
Other parameters and tooling will remain the same.
Benchmarking methodology for PostgreSQL
All tests were performed locally using a freshly initialized, empty cluster (fresh initdb). Caching effects on the PostgreSQL side are therefore not relevant.
The following configurations were tested:
- stock — standard PostgreSQL installation using binaries from PGDG packages.
- TDE-4 — PostgreSQL TDE installation with encryption disabled, using binaries compiled with gcc 4.8
- TDE-9 — PostgreSQL TDE installation with encryption disabled, using binaries compiled with gcc 9.3 and optimized for znver2
- TDE-4-e — PostgreSQL TDE installation with encryption enabled, using binaries compiled with gcc 4.8
- TDE-9-e — PostgreSQL TDE installation with encryption enabled, using binaries compiled with gcc 9.3 and optimized for znver2
Note that we did not just compare standard PostgreSQL with PostgreSQL TDE, but also tried various compiler versions. You will find out that the compiler does indeed have a major impact, which might come as a surprise to many.
The following PostgreSQL configuration has been used (postgresql.conf). These variables have been determined using the CYBERTEC configurator:
max_connections = 100
unix_socket_directories = '/var/run/postgresql'
port = 15432
shared_preload_libraries = 'pg_stat_statements'
shared_buffers = 16GB
work_mem = 64MB
maintenance_work_mem = 620MB
effective_cache_size = 45GB
effective_io_concurrency = 100
huge_pages = try
track_io_timing = on
track_functions = pl
wal_level = logical
max_wal_senders = 10
synchronous_commit = on
checkpoint_timeout = '15 min'
checkpoint_completion_target = 0.9
max_wal_size = 1GB
min_wal_size = 512MB
wal_compression = on
wal_buffers = -1
wal_writer_delay = 200ms
wal_writer_flush_after = 1MB
bgwriter_delay = 200ms
bgwriter_lru_maxpages = 100
bgwriter_lru_multiplier = 2.0
bgwriter_flush_after = 0
max_worker_processes = 16
max_parallel_workers_per_gather = 8
max_parallel_maintenance_workers = 8
max_parallel_workers = 16
parallel_leader_participation = on
enable_partitionwise_join = on
enable_partitionwise_aggregate = on
jit = on
Database sizes used during the benchmark were as follows (pgbench scale factor: on-disk size):
- scale 15: 244–287MB
- scale 20: 332–404MB
- scale 25: 400–458MB
- scale 30: 479–543MB
- scale 35: 557–629MB
- scale 50: 793–880MB
- scale 500: 7849–8028MB
- scale 5000: 78409–78756MB (exceeds total available memory)
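As a quick sanity check on the sizes above, we can compute how much disk space one pgbench scale unit costs (using the upper bounds of the measured ranges; the per-unit cost shrinks slightly at larger scales as fixed overhead is amortized):

```python
# Measured upper-bound sizes from the list above: scale factor -> size in MB.
measured = {50: 880, 500: 8028, 5000: 78756}

def mb_per_scale(scale: int) -> float:
    """Disk cost of one pgbench scale unit, derived from the measured sizes."""
    return measured[scale] / scale

for scale in measured:
    print(f"scale {scale}: {mb_per_scale(scale):.1f} MB per scale unit")
```

This yields roughly 16–18 MB per scale unit, which is a handy rule of thumb when sizing a pgbench data set against shared_buffers or RAM.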
For each scale factor, the following benchmarks were measured (using default options):
- standard read-only benchmark (pgbench built-in)
- standard TPC-like read/write benchmark (pgbench built-in) – before each TPC run, the database was re-indexed to avoid the impact of index usage on results
Concurrency is a major factor if you want to benchmark in a professional environment. The following settings were used:
- 16 — number of physical cores
- 32 — number of hardware threads
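The resulting benchmark matrix can be sketched as one pgbench invocation per (script, client count) pair. The port matches the postgresql.conf above; the run duration (-T) and the database name ("bench") are illustrative assumptions, not values taken from the article:

```python
# Hypothetical sketch of the benchmark matrix described above.
CLIENTS = [16, 32]  # physical cores / hardware threads

def pgbench_cmd(clients: int, read_only: bool, duration_s: int = 300) -> str:
    # -S selects pgbench's built-in SELECT-only script; without it, pgbench
    # runs the default TPC-B-like read/write script. Before each read/write
    # run the article re-indexes the database.
    script = "-S " if read_only else ""
    return (f"pgbench -p 15432 {script}-c {clients} -j {clients} "
            f"-T {duration_s} bench")

for clients in CLIENTS:
    print(pgbench_cmd(clients, read_only=True))
    print(pgbench_cmd(clients, read_only=False))
```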
PostgreSQL benchmarking results
Read-only benchmarks for PostgreSQL TDE
The following section contains the results of our investigation. Let’s start with a read-only benchmark and a scale factor of 50, which translates to 880 MB of data (small database):
Take a look at the first row (12 connections). Standard PostgreSQL will yield 598,000 read-only transactions per second, which is a good number. In general, we have found that AMD CPUs are really efficient these days, and greatly outperform IBM POWER9 and many others. What we do notice is that the compiler makes a real difference: 651k TPS vs. 598k TPS is a whopping 9% improvement which basically comes for “free”. gcc 9 is doing really well here.
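The "whopping 9%" quoted above is simply the relative TPS gain of the gcc 9.3 / -march=znver2 build over the stock build:

```python
# Relative improvement from 598k to 651k read-only TPS.
baseline_tps = 598_000   # stock PostgreSQL build
optimized_tps = 651_000  # gcc 9.3 build optimized for znver2
improvement = (optimized_tps - baseline_tps) / baseline_tps
print(f"{improvement:.1%}")  # ~8.9%, rounded up to 9% in the text
```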
The following image shows the basic architecture of PostgreSQL TDE:
What is also important to see is that there is basically no difference between encrypted and non-encrypted runs. The reason for that is simple: PostgreSQL encrypts 8k blocks before sending them to disk and decrypts them when a block is fetched from the operating system. Therefore, there is no difference between the encrypted and unencrypted performance. What you see are, basically, fluctuations, which are totally expected.
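The flow just described can be sketched as follows. This is a conceptual illustration only: a keyed-keystream XOR stands in for the real block cipher (the real implementation uses a proper cipher, which is why AES-NI support on the CPU matters) so that the example stays dependency-free. The point is the boundary: each 8 kB page is encrypted just before it is written and decrypted just after it is read, so nothing above the buffer manager ever sees ciphertext.

```python
import hashlib

BLOCK_SIZE = 8192  # PostgreSQL page size

def _keystream(key: bytes, block_no: int, length: int) -> bytes:
    # Toy keystream derived from the key and block number -- NOT a real cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + block_no.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_page(key: bytes, block_no: int, page: bytes) -> bytes:
    ks = _keystream(key, block_no, len(page))
    return bytes(a ^ b for a, b in zip(page, ks))

decrypt_page = encrypt_page  # XOR with the keystream is its own inverse

# Round trip: encrypt on the way to disk, decrypt on the way back.
page = b"row data".ljust(BLOCK_SIZE, b"\x00")
key = b"secret key"
assert decrypt_page(key, 7, encrypt_page(key, 7, page)) == page
```

Because the work happens per 8 kB block at the storage boundary, a purely in-memory workload pays essentially nothing for encryption, which is exactly what the read-only numbers show.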
Let’s take a look at the same data. In this case, we are using a scale factor of 500, which translates to 8 GB of data:
The picture is pretty similar to what we observed with the small database. We are still 100% in RAM and therefore there is no real performance difference between PostgreSQL TDE with encryption on or encryption off. What you see are mostly fluctuations:
The final test was conducted with a scale factor of 5000. The important part is that the database has grown so large that there is no way to keep it in RAM anymore. Thus we can observe a major drop in performance:
You have to keep in mind that going to disk is many times more expensive than doing some pointer arithmetic in shared memory. A lot of latency is added to each individual query, which leads to low TPS if there are not enough connections. On SSDs, this effect can be reduced somewhat by adding more concurrency. However, adding concurrency only helps up to the point where the capacity of the disk is exceeded.
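The latency argument above follows Little's law: sustained throughput is roughly the number of in-flight clients divided by per-transaction latency. The latency figures below are illustrative assumptions, not measurements from this benchmark:

```python
def tps(clients: int, latency_s: float) -> float:
    # Little's law: throughput = concurrency / latency.
    return clients / latency_s

print(f"in-memory  (~50 us): {tps(32, 0.00005):,.0f} TPS")
print(f"SSD-bound (~500 us): {tps(32, 0.0005):,.0f} TPS")
# Adding clients raises TPS only until the disk's IOPS capacity is saturated.
```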
Read-write benchmarks for PostgreSQL TDE
After this brief introduction to read-only benchmarks, we can focus our attention on read-write workloads. We use the standard mechanisms provided by pgbench here.
The first thing we notice is that performance is a lot lower. There are two reasons for that:
- A read-write transaction in pgbench has many more commands
- PostgreSQL has to flush (fsync) changes to disk on commit
Let’s take a look at the data:
The price of encryption starts to tell. We see that the performance of the encrypted variants can be significantly lower than that of the non-encrypted setup.
The picture is similar in case the amount of data is increased to a pgbench scale factor of 500:
Now let’s inspect the test using a scale-factor of 5000. What we clearly see is that overall performance dramatically decreases. The reason is of course that a lot of the data has to come from the disk. We therefore see a fair amount of disk wait and latency:
Note that all results have to be taken with a grain of salt. Run-times can always vary slightly; this is especially true if SSDs are in use, and we have seen that over and over again. The performance level you can expect will also depend heavily on PostgreSQL's cache hit rate. Remember: every time a block is written to disk or read into shared buffers, the encryption/decryption magic has to happen. It can therefore make a lot of sense to run PostgreSQL TDE with a higher shared_buffers setting than you normally would, in order to reduce the performance impact of encryption.
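A postgresql.conf sketch of that idea; the sizes here are illustrative assumptions, not a tuned recommendation:

```
# Trade OS cache for PostgreSQL cache, so fewer pages cross the
# encryption boundary. Sizes are illustrative only.
shared_buffers = 24GB   # raised from the 16GB used in the benchmark
huge_pages = try        # large shared_buffers benefit from huge pages
```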
If you are aiming for better PostgreSQL database performance, we suggest checking out our consulting services. We can help you speed up your database considerably. We offer timely delivery, professional handling, and over 20 years of PostgreSQL experience. Contact us >>
Also: check out PostgreSQL Transparent Data Encryption for transparent and cryptographically safe data (cluster) level encryption. TDE helps you to secure your most valuable asset: your data. Check out TDE >>