When I recently stumbled on an article comparing the main properties and read-write performance of two very popular pieces of caching software, often used to speed up applications heavy on database queries, I immediately felt an itch: how would PostgreSQL vs Redis vs Memcached actually look on the performance side? That is, what if one just skipped the cache and hit the database directly?
Especially after a first look, I wasn't too impressed with the numbers presented for Redis and Memcached. That could also be because the actual test data, extracted from a linked paper, was from 2016, which is surely ages in "internet years". Either way, I was curious to get some kind of comparison point, so I quickly threw together a small benchmarking script in Python and let my workstation get to work.
Why Python, you might ask, if the original test used Java? Well, Java is just one of my least favorite languages for database projects, and I have the impression that Python is very popular both for the web and for ad-hoc database scripting, so it's probably a good fit here. I might be wrong on that, of course.
Some characteristics of my test setup:
CREATE UNLOGGED TABLE kv_test(key text, value int);
CREATE INDEX ON kv_test (key);

-- pseudo-code from the Python script
for $ROWS in [1000, 10000, 100000, 1000000]:
    truncate kv_test
    generate $ROWS random keys
    for $i := 1 .. $ROWS:
        insert into kv_test values ($rand[i], $rand[i])
    vacuum analyze the table
    for $i := 1 .. $ROWS:
        select * from kv_test where key = $rand[i]
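The pseudo-code above could be fleshed out roughly like this in Python. This is a minimal sketch, assuming the psycopg2 driver, a reachable database with the kv_test table already created, and 16-character random keys; the original script may well differ in these details.

```python
# Sketch of the benchmark loop; assumes psycopg2 and an existing kv_test table.
import random
import string
import time

def gen_keys(n):
    # the "generate $ROWS random values" step from the pseudo-code
    alphabet = string.ascii_lowercase + string.digits
    return ["".join(random.choices(alphabet, k=16)) for _ in range(n)]

def run_round(conn, rows):
    keys = gen_keys(rows)
    cur = conn.cursor()
    cur.execute("TRUNCATE kv_test")
    t0 = time.monotonic()
    for i, key in enumerate(keys):
        cur.execute("INSERT INTO kv_test VALUES (%s, %s)", (key, i))
    write_ms = (time.monotonic() - t0) * 1000
    cur.execute("VACUUM ANALYZE kv_test")  # VACUUM needs autocommit mode
    t0 = time.monotonic()
    for key in keys:
        cur.execute("SELECT * FROM kv_test WHERE key = %s", (key,))
        cur.fetchone()
    read_ms = (time.monotonic() - t0) * 1000
    return write_ms, read_ms
```

With a live instance one would do something like `conn = psycopg2.connect("dbname=bench")`, set `conn.autocommit = True` (so the VACUUM works), and loop `run_round(conn, rows)` over the four row counts.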
Here are my results, represented in a similar way to the original numbers. The calculated time to write the key-value pairs is in milliseconds.
NUMBER OF RECORDS
Database            | 1,000 | 10,000 | 100,000 | 1,000,000
Redis (v3.0.7)      | 34    | 214    | 1,666   | 14,638
Memcached (v1.4.14) | 23    | 100    | 276     | 2,813
PostgreSQL (v13.3)  | 29.6  | 304    | 2,888   | 31,230
The calculated time to read key-value pairs (ms).
NUMBER OF RECORDS
Database            | 1,000 | 10,000 | 100,000 | 1,000,000
Redis (v3.0.7)      | 8     | 6      | 8       | 8
Memcached (v1.4.14) | 9     | 14     | 14      | 30
PostgreSQL (v13.3)  | 0.026 | 0.028  | 0.027   | 0.029
In short, the numbers looked surprisingly (even suspiciously) good for Postgres on the read test! I can't imagine how the original test managed to get such high single-digit-millisecond results for random key reads across the whole dataset. In my test, I only saw 1 ms+ worst cases for the biggest row count. This data can, by the way, also be looked up in the "pg_stat_statements" snapshot table called "results".
Sadly, I still can't get too ecstatic, as a lot of information was missing on exactly how the original tests were performed, so it might have been a bit of an "apples to oranges" situation in the end, I'm afraid. The average key readout times for Redis/Memcached seemed just way too slow compared to Postgres. I suspect they used a remote machine for the cache, although the paper didn't mention it and talked about a single Core i7 node.
But on the writing speed of key-value data: Postgres doesn't really compete at higher row counts. 🙁 This was more or less expected, though! Why? A full-blown relational database engine like Postgres goes to great lengths to ensure we cannot insert invalid data violating some constraint, plus there's WAL writing (minimal for unlogged tables, but still) and the on-disk-format overhead: internal columns, alignment padding, some index bloat on page splits, etc. All of that amplifies writes a lot! So I think the results still aren't bad in the end: only about 2x slower than Redis for the 100k and 1M row counts. The main idea of caches is that they're only useful if we read from them much more than we write/update anyway!
But however the comparison with the other systems looks in absolute numbers, it was good to see that the relative stability of Postgres response times under growing datasets was very good, even beating Memcached, which deteriorated 2x at 1M rows! This suggests that the algorithms selected for PostgreSQL are mathematically sound and well implemented.
What can definitely be said: Postgres performance is at least good enough for a "caching use case" with smaller row counts. So for your next project, it might be worth asking whether you really need another external component.
You can find more of my posts. Cheers.
I'm confused about the "PostgreSQL vs Redis vs Memcached: Read Operation" table. The header indicates figures are in "ms".
In the last column Redis has 8, Memcached has 30, PostgreSQL has 0.029.
Did milliseconds/microseconds get mixed up here? Surely PostgreSQL can't be 3 orders of magnitude faster.
The source code to benchmark PostgreSQL was linked in the article, but not the ones for Redis/Memcached?
That timing sounds correct for Postgres for getting a single row from a table with that many rows in it.
For example, this is me running a query (SELECT 1 FROM sale WHERE sale_id = 'xxx';) against a table with 1,292,320 rows in it.
I'll let others remark on the rest.
To get usefully realistic benchmarks here, the database/cache instances need to be on another host, because wire time is very, very significant. For example, within EC2, inter-host latency is typically around 1ms round-trip (but variable, roughly 0.8ms to 1.6ms depending on the AZ of each host), whereas the time it takes Postgres to read a cached primary key lookup should be no more than 0.1ms. Once you take wire time into account, much less of the theoretical performance of memcached or Redis is actually usable.
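To quantify that wire-time point, one can time a no-op round trip from the client side. A small sketch; the `execute` callable here is a placeholder for whatever sends one request and waits for the reply (e.g. a `SELECT 1` over psycopg2, or a Redis `PING`):

```python
import statistics
import time

def round_trip_ms(execute, n=100):
    # Median client-observed round-trip time in milliseconds;
    # `execute` is any callable that sends one request and blocks on the reply
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        execute()
        samples.append((time.perf_counter() - t0) * 1000)
    return statistics.median(samples)
```

Running this against a local and a remote instance and subtracting gives a rough estimate of the pure network cost.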
Ditto what Noah Yetter wrote.
Another kind of test would have been useful: measuring the maximum calls-per-second rate for a small payload, which is the typical use case for Redis and Memcached. redis-benchmark can provide some test measurements for specific Redis actions, too.
Imagine using redis for keeping state for a circuit-breaker, usage counting, usage limiting, rate limiting, etc. and you'll see the advantage of having CPS benchmarks to compare these datastore technologies.
It seems that you are choosing just about the worst way to write data to postgres here. Using better approaches (e.g. using the copy statement: https://www.postgresql.org/docs/9.2/sql-copy.html) coupled with the TimeScaleDB engine (https://www.timescale.com/), I've been able to achieve writes to tables with 50-100 columns at a rate of 100,000-200,000 rows per second. These techniques should easily consume 1,000,000 rows of key-value pairs in under 4 seconds, quite possibly much less. (I don't work either for postgres or timescale, just a big fan of both of these products.)
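For reference, a bulk load via COPY could look like this with psycopg2. This is a sketch under assumptions: the table name matches the article's kv_test, and the buffer-building helper is mine, not from the article's script.

```python
import io

def kv_copy_buffer(pairs):
    # Render (key, value) pairs as tab-separated text, the default
    # format expected by COPY ... FROM STDIN
    buf = io.StringIO()
    for key, value in pairs:
        buf.write(f"{key}\t{value}\n")
    buf.seek(0)
    return buf

def copy_kv_rows(conn, pairs):
    # One COPY statement instead of one INSERT per row
    with conn.cursor() as cur:
        cur.copy_expert("COPY kv_test (key, value) FROM STDIN",
                        kv_copy_buffer(pairs))
    conn.commit()
```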
>Redis (v3.0.7)
Really? Current version is 6.2 https://redis.io/download
If you used async write/read, then for Redis you should compare with pipelining.
With simple key-value read/write, Memcached should be faster than Redis, since it supports multi-threading.
So the numbers look strange.
If you really want to compare reading/writing a lot of items from Redis (1M or so), then consider using pipelining and a Redis edition with multi-threading support.
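For illustration, pipelined writes with the redis-py client could look like this (a sketch; the batch size of 1000 is arbitrary, and the chunking helper is mine):

```python
def chunked(items, size):
    # Split a list into consecutive batches of at most `size` elements
    for i in range(0, len(items), size):
        yield items[i:i + size]

def pipelined_set(client, pairs, batch=1000):
    # `client` is assumed to be a redis.Redis instance; each batch of
    # SET commands goes out in a single network round trip
    for group in chunked(list(pairs), batch):
        pipe = client.pipeline(transaction=False)
        for key, value in group:
            pipe.set(key, value)
        pipe.execute()
```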
Is there a place for a Redis simulator built on... PostgreSQL? With ACID and much more 🙂
Hello. I am not clear on what you put in the above write operation table. On my box I get similar read and write figures:
= Averages for 1000 rows measured from DB side =
ins_mean (ms): 0.00885111910000002, calculated total (s): 0.00885111910000002
sel_mean (ms): 0.007435678299999974, calculated total (s): 0.007435678299999974
Interesting topic. But I feel that doing the test with a single-connection client (as I understood it) makes it completely useless. I doubt a single client can saturate the bandwidth of any of the three test subjects, and the level of saturation may differ wildly between them. For any real server, query response times have to be acceptable, but what you really care about is max queries/sec.
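A rough way to probe that max-queries/sec figure from Python would be to hammer the server from several threads at once. A sketch, where `run_query` stands in for one blocking request against any of the three systems:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def parallel_qps(run_query, clients=8, seconds=2.0):
    # Count how many queries `clients` concurrent threads complete
    # within the given wall-clock window, and return queries/second
    deadline = time.monotonic() + seconds

    def worker():
        done = 0
        while time.monotonic() < deadline:
            run_query()
            done += 1
        return done

    with ThreadPoolExecutor(max_workers=clients) as pool:
        futures = [pool.submit(worker) for _ in range(clients)]
        total = sum(f.result() for f in futures)
    return total / seconds
```

Note that with CPython's GIL this mostly measures I/O-bound concurrency, which happens to be exactly the database-client case.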