Needless to say, security is a topic that nobody in the wider IT industry can ignore nowadays, with a constant flow of reports on data breaches of various scales. Most such cases don’t result from direct attacks against databases though, but rather from targeting application / API / web server problems, as database servers are usually just not directly exposed to the Internet. And even if they were, PostgreSQL installations at least have a good starting point there, as the default settings are sane enough to listen only to local connections, preventing the most embarrassing kind of headlines. But of course PostgreSQL setups can also be compromised in many ways, so here’s a short listing of possible threats and some simple suggestions to mitigate them where possible.

Unnecessary PostgreSQL superuser access

This, my friends, is the #1 point of danger in my opinion, so an appropriate time to repeat the classic – ”with great power comes great responsibility”. Superuser accounts are of course needed for maintenance, so they cannot really be disabled…but mostly they are vastly overused, even for simple things like normal schema evolution, which can (mostly) be perfectly managed with standard “owner privileges”. But OK, about the threats – there’s the obvious one: pumping out or dropping all table data. The most important thing to remember though – superusers can execute arbitrary commands on the OS level, under the PostgreSQL process owner’s privileges. The easiest way to do that is to use the relatively unknown PROGRAM variation of the COPY command, which is also nicely documented as such. Some other methods to achieve the same are quite hidden and sneaky:

  • Achieving physical access to the machine with COPY PROGRAM

This attack assumes that the server has been set up for passwordless communication / data transfer over SSH using default settings (which is very common). The trick itself is disturbingly easy – just create a table with a single text field and execute something like this:

krl@postgres=# create table ssh(data text);
CREATE TABLE
krl@postgres=# \copy ssh from program 'base64 -w0 ~/.ssh/id_rsa'
COPY 1

The rest should be obvious – decode the key into a local private key file (remember to chmod it to 600)…and “ssh” to the machine. This of course assumes you know the operating system user under which Postgres is running – most often it will be the default “postgres”. From there a malicious user can easily find some security holes on old unpatched systems and possibly advance to “root”, doing additional harm outside of PostgreSQL – meaning a possible “game over” situation for the whole business. FYI – COPY PROGRAM works with Postgres 9.3 and newer, and sadly there’s no configuration flag to disable it. The functionality is only available to superusers though.

  • “archive_command” abuse

When archiving is enabled, this seemingly innocent parameter can be abused to do nasty things. An example:

ALTER SYSTEM SET archive_command = 'rm -rf /home/postgres';
SELECT pg_reload_conf();
SELECT pg_switch_wal();    -- oopsie

  • “file_fdw” arbitrary program “reads”

From version 10 it’s possible to define “virtual” tables that gather data from an external program’s output, with “legal” use cases encompassing reading compressed files and maybe some web service usage…but the option can be abused the same way as in the previous example – Postgres just executes the given command “as is”.

CREATE EXTENSION file_fdw;
CREATE SERVER csv FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE test(data text) SERVER csv OPTIONS (program 'rm -rf /home/postgres');
SELECT * FROM test; -- #$@&%*!

  • Untrusted PL-languages

When, for example, PL/Pythonu is installed, it is also child’s play to execute arbitrary commands:

CREATE FUNCTION evil_stuff()
 RETURNS void
 LANGUAGE plpythonu
AS $SQL$
import subprocess
subprocess.call(["rm", "-rf", "/var/lib/postgresql/"])
$SQL$;
SELECT evil_stuff();
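
To see whether any such languages are even installed, the pg_language catalog can be queried – a quick audit sketch (the DROP is of course only for languages nobody actually uses):

```sql
-- list installed procedural languages; lanpltrusted = false means "untrusted"
SELECT lanname, lanpltrusted FROM pg_language;
-- if an untrusted language is not really needed, get rid of it
DROP EXTENSION IF EXISTS plpythonu;
```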

Mitigation: limiting superuser actions is very difficult (the provided threat examples, excluding COPY PROGRAM, can be mitigated with careful setup though), so the only real strategy is – applications should be designed so that superuser access is not needed. And this should be very doable for 99% of users, as for example fully managed PostgreSQL services, where you never get superuser access, have become quite popular among developers. Basic principle – hand out superuser access only to those who also have physical server access, as one can basically draw an equals sign between the two (see the section about COPY PROGRAM above).
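
As a sketch of the “owner privileges” approach mentioned above – a plain non-superuser role owning the application schema handles normal schema evolution just fine (the role and schema names here are made up for illustration):

```sql
-- a login role without any superuser powers...
CREATE ROLE app_owner LOGIN NOSUPERUSER NOCREATEDB NOCREATEROLE;
-- ...owning the application's schema, so it can CREATE / ALTER / DROP
-- objects there, but cannot touch the OS or other databases
CREATE SCHEMA app AUTHORIZATION app_owner;
```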

Brute force password guessing

Second place on my personal threat list goes to weak passwords. What makes a password weak is probably debatable…but for me it’s any shortish (<10 characters) word or name from the English-speaking domain, maybe combined with a number – i.e. anything that is too short and not randomly generated. The problem is that for such passwords there are quite a few “top” lists available for download after some short Googling, like the “Top 1 Million most common passwords” here.
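
As an illustration of the idea (not a real password policy), a minimal check against length and a common-passwords list could look like this – the tiny inline set stands in for the downloaded 1M-entry file:

```python
# Stand-in for the "top 1 million" list mentioned above
COMMON_PASSWORDS = {"password1", "qwerty123", "letmein"}

def is_weak(password: str, min_len: int = 10) -> bool:
    """Flag passwords that are too short or on a common-passwords list."""
    return len(password) < min_len or password.lower() in COMMON_PASSWORDS

print(is_weak("password1"))        # True - short and straight off the list
print(is_weak("Xk93!vR2qLm8#Fd"))  # False - long and random
```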

So what can you do with such lists in a PostgreSQL context? Well, quite a bit…given that, weirdly enough, Postgres by default has no protection against brute-force password guessing! And brute force is extremely easy to perform, for example with something like this (disclaimer: you might get into trouble if you run this against a database you don’t control):

sudo apt install -y hydra
curl -s https://raw.githubusercontent.com/danielmiessler/SecLists/master/Passwords/Common-Credentials/10-million-password-list-top-1000000.txt > top-1m-pw.txt
hydra -l postgres -P top-1m-pw.txt postgres://mydb:5432
...
[STATUS] 17389.00 tries/min, 17389 tries in 00:01h, 982643 to do in 00:57h, 16 active
...
# … and maybe after some time, voila!

Couldn’t be easier. On my laptop, connecting to ‘localhost’, the tool managed ~290 connection attempts per second…which is not too much though, it being a generic framework. With some simple custom Go code (single-threaded only, so lots of room to improve) I managed to double that…so in short, if you have a really weak password, it could take less than an hour before someone’s eyes start glowing with excitement :)
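
To put those numbers in perspective, a quick back-of-the-envelope calculation with the ~290 attempts per second measured above (a rate that of course varies with hardware, network and connection overhead):

```python
# Rough time to exhaust a frequency-sorted password list at the
# single-connection rate observed on my laptop
ATTEMPTS_PER_SEC = 290
WORDLIST_SIZE = 1_000_000  # the "top 1 million" list

seconds = WORDLIST_SIZE / ATTEMPTS_PER_SEC
print(f"full list exhausted in ~{seconds / 3600:.1f} hours")
# a weak password sits near the top of such a list, so the
# expected wait in practice is far shorter than the full scan
```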

Mitigation: 1) Use long, random passwords. 2) Install and configure the relatively unknown “auth_delay” contrib extension, which will pause communication for a moment (configurable) when a login attempt fails, making password guessing slower even when opening many connections in parallel. Attention, RDS Postgres users – funnily enough, the extension is not whitelisted there, so better make sure your passwords are good. 3) Actively monitor the server logs – full-power brute-forcing generates many GBs of FATAL log entries per hour, which should be picked up by a good monitoring setup.
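
For reference, wiring up auth_delay is just two lines in postgresql.conf plus a server restart (the 500 ms value is an arbitrary illustration):

```
# postgresql.conf
shared_preload_libraries = 'auth_delay'   # needs a server restart
auth_delay.milliseconds = 500             # pause after each failed login attempt
```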

Man in the middle attacks

Out of the box, PostgreSQL is not immune to this quite well-known security risk, where the main threat is that, given access to the network for longer periods, someone can with some work intercept / proxy all communication and also reverse engineer the password hashes (given the default md5 hashes).

Mitigation: 1) As a minimum, force SSL connections. Just remember that enabling “ssl” in postgresql.conf doesn’t cut it – it needs to be additionally enforced in pg_hba.conf. 2) Phase out “md5” passwords in favor of the newer and more secure “scram-sha-256” – sadly not all drivers support it yet, as it’s relatively new :/ 3) When the highest confidence is needed – employ certificates, i.e. use the “verify-ca“ or “verify-full” SSL modes when connecting.
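
Points 1 and 2 put into config form could look roughly like this (the 0.0.0.0/0 address range is illustrative – tighten it for your network):

```
# postgresql.conf – hash new passwords with SCRAM instead of md5
password_encryption = scram-sha-256

# pg_hba.conf – require SSL + SCRAM for remote connections,
# and explicitly reject anything arriving without SSL
hostssl   all   all   0.0.0.0/0   scram-sha-256
hostnossl all   all   0.0.0.0/0   reject
```

Note that existing md5 hashes stay md5 until the password is re-set after changing password_encryption.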

Loose backups or logs

That’s a classic mistake. I’ve seen plenty of setups with very tight “online” access…but where at the end of the day Postgres backups or server logs (server logs can possibly also contain plain-text passwords!) are placed onto a backup server together with some application server logs, where more people have access to them, for example to analyze some other aspects of application behaviour. In encrypted form this would of course be somewhat OK, but just using a different folder and hoping for the best is a disaster waiting to happen.

Mitigation: 1) Clearly separate database backups / logs from anything else so that only DBAs would have access. 2) Use “at rest” encryption.
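
As a sketch of point 2 for logical dumps – symmetric GnuPG encryption right in the backup pipeline, so no plaintext copy ever touches the backup server (the database name, file paths and passphrase handling are illustrative):

```shell
# dump and encrypt in one pipe; only the .gpg file gets shipped off-host
pg_dump mydb | gpg --batch --symmetric --cipher-algo AES256 \
    --passphrase-file /etc/backup/dump.pass -o /backups/mydb.dump.gpg
```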

Disk space attack

This one might also be a minor one, as it assumes a malicious inside job or an accident by an unaware Postgres user. The fact is that with default settings, all users with login rights – even the normal unprivileged ones – can create both persistent and temporary objects (mostly tables) in the default “public” schema. That is of course not bad “per se” and adds convenience, but can again be abused – to create a table and insert so many rows into it (think generate_series) that server IO is depleted or the disk space eventually runs out and Postgres crashes.

Mitigation: 1) Don’t use the “public” schema at all for important applications and create app-specific schemas instead. 2) When already using the “public” schema, then for those systems where temporary / auxiliary tables should not actually be needed for normal users, strip the CREATE rights from everyone besides maybe the schema owner. In SQL this would be something like:

REVOKE CREATE ON SCHEMA public FROM PUBLIC;
-- then hand out grants for those who really need them
GRANT CREATE ON SCHEMA public TO app_owner_role;

3) For more or less similar reasons, also set the “temp_file_limit” parameter to some reasonably low value. The parameter caps the disk space that the temporary files of a single query can occupy at any time. This only covers transparent temp files spilled by heavy queries though, not temporary tables.
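
Setting it could look like this (the 5GB value is purely illustrative – pick something matching your disk headroom):

```sql
ALTER SYSTEM SET temp_file_limit = '5GB';
SELECT pg_reload_conf();
```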

Distributed Denial of Service attacks for publicly exposed DBs

As a last threat – as with any service accessible over the Internet, databases too could fall under a DDoS attack, if you’ve managed to annoy some competing companies or interest groups enough. And here there’s actually nothing that can be done on the Postgres side – even if you configure the host-based access rules (pg_hba.conf) to “reject” all connection attempts from unwanted IP ranges, it would still be too much for the server to handle. This kind of filtering needs to happen one level lower in the stack, but luckily there are lots of software products / services acting as gateways to your DB (just Google “cloud based ddos protection”), filtering packets coming from known botnets and such. Some IaaS providers, on whose hardware your VMs are possibly running, also often have some basic protection built in. With DDoS, the only consolation is that you’re not losing any data, but “just” revenue / customers.

Mitigation: 1) Try to avoid direct public access to your database. 2) Invest in some anti-DDoS service.
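
For point 1, the filtering indeed has to happen below Postgres – for example with a host firewall rule that only lets an internal range reach the Postgres port at all (an nftables sketch; the table / chain names and the 10.0.0.0/8 range are assumptions about your setup):

```shell
# drop Postgres traffic from everything outside the internal range
nft add rule inet filter input tcp dport 5432 ip saddr != 10.0.0.0/8 drop
```

This of course only helps against stray connection attempts, not a real volumetric DDoS – for that, the upstream filtering services mentioned above are the way to go.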

So that was my security checklist. I hope it made you think, and maybe also go and check / change some server settings 🙂 Thanks for reading, and please let me know in the comments section if I missed something important.