Tag: pgbench
pg_dump compression specifications in PostgreSQL 16
What is pg_dump compression? pg_dump is a PostgreSQL utility for backing up a local or remote PostgreSQL database. It creates a logical backup: either a plain-text file of SQL commands for recreating the database, or an archive file that can be restored with the pg_restore utility. The archive file can be used to restore […]
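A hedged sketch of the PostgreSQL 16 compression-specification syntax (`method:detail`) the post discusses; `mydb` and the output file names are placeholders, and these commands assume a reachable server:

```
pg_dump -Fc --compress=zstd:9 -f mydb.dump mydb   # custom format, zstd at level 9
pg_dump -Fd --compress=lz4 -f mydb_dir mydb       # directory format, lz4
pg_dump -Z gzip:6 -f mydb.sql.gz mydb             # plain format, gzip-compressed output
```

PostgreSQL 16 added lz4 and zstd as pg_dump compression methods alongside gzip, with an optional level after the colon.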
PostgreSQL: The power of a SINGLE missing index
Index missing? When an index is missing, good performance won’t be kissing a PostgreSQL user looking for efficiency but instead feels like a legacy. To satisfy a DBA’s desire and thirst, let us load some data first. pgbench is the tool of the day but the next listing will explain that anyway: Loading millions of […]
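A sketch of the kind of before/after experiment the post runs (assumed workflow, not necessarily the post's exact query or index; `mydb` is a placeholder and a running server is assumed); pgbench's standard schema puts `scale × 100,000` rows into `pgbench_accounts`:

```
pgbench -i -s 100 mydb     # initialize ~10 million rows in pgbench_accounts
psql mydb -c "EXPLAIN ANALYZE SELECT * FROM pgbench_accounts WHERE abalance > 0;"
psql mydb -c "CREATE INDEX ON pgbench_accounts (abalance);"
psql mydb -c "EXPLAIN ANALYZE SELECT * FROM pgbench_accounts WHERE abalance > 0;"
```

The second EXPLAIN should show an index scan replacing the sequential scan, which is the "single missing index" effect in miniature.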
Query performance in PostgreSQL 13 RC1
By Kaarel Moppel – By the time you read this blog post, the new PostgreSQL version will probably already have been officially released for wider usage… but it seems some eager DBA already installed last week’s Release Candidate 1 and took it for a spin 😉 The “spin” though takes 3 days to run for my […]
The mysterious “backend_flush_after” configuration setting
By Kaarel Moppel – The “backend_flush_after” PostgreSQL server configuration parameter was introduced some time ago, in version 9.6. It has been flying under the radar and had not caught my attention previously. However, someone recently pasted me (I’m not on Twitter) a tweet from Andres Freund, one of the Postgres core developers. The tweet basically […]
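For reference, the parameter is set in postgresql.conf (or via ALTER SYSTEM); a minimal fragment, with the value here chosen purely for illustration:

```
# Ask backends to initiate writeback of their own dirty writes to disk
# once this much has accumulated; 0 (the default) disables the feature.
# Specified in multiples of 8kB pages, up to a maximum of 2MB.
backend_flush_after = 256kB
```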
A formula to calculate “pgbench” scaling factor for target DB size
Pgbench is a very well-known and handy built-in tool that Postgres DBAs can use for quick performance benchmarking. Its main functionality/flow is super simple, but it also has some great optional features, like running custom scripts and specifying different probabilities for them. One can also use bash commands to fill query variables, for example. But […]
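As a back-of-the-envelope version of the sizing problem the post tackles: a freshly initialized pgbench database takes very roughly 16 MB per scale-factor unit (an assumption here; the post derives a more precise formula). For a 10 GB target:

```shell
# Rough scale-factor estimate, assuming ~16 MB on disk per scale unit.
target_mb=$((10 * 1024))       # 10 GB target expressed in MB
scale=$((target_mb / 16))      # MB-per-unit rule of thumb
echo "scale factor: $scale"
```

This prints `scale factor: 640`, i.e. `pgbench -i -s 640` as a first approximation before refining with actual measured sizes.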