Maybe some of you have seen Kaarel Moppel's talk about pg_crash at pgconf.eu 2017 in Warsaw. For those who didn't have the chance to attend this year's European conference, I decided to come up with a little blog post introducing you to the blessings of our new module: pg_crash.
pg_crash and what it is good for
PostgreSQL is known to be a solid database, so why would anybody want to crash the server over and over again? Well, pg_crash is a good way to really put your infrastructure to the test. Various scenarios come to mind:
- Check if your applications can handle database crashes
- Check if your high-availability solution can take crashes
- Test PostgreSQL itself
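The first scenario is worth a quick illustration: an application only survives repeated crash testing if it reconnects instead of giving up on the first broken connection. Here is a minimal, driver-agnostic sketch of such retry logic; `flaky_connect` is a made-up stand-in for your real driver's connect call, and the fixed backoff is just an assumption you would tune for your setup:

```python
import time

def connect_with_retry(connect, max_attempts=5, delay=0.1):
    """Call connect() until it succeeds or max_attempts is exhausted.

    `connect` is any zero-argument callable that returns a connection
    object, e.g. a thin wrapper around your database driver's connect().
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # give up after the last attempt
            time.sleep(delay)  # simple fixed backoff; tune as needed

# Simulated "database" that fails twice before accepting connections,
# roughly what a client sees while the server restarts after a crash.
failures = [ConnectionError("server closed the connection"),
            ConnectionError("server closed the connection")]

def flaky_connect():
    if failures:
        raise failures.pop()
    return "connection established"

print(connect_with_retry(flaky_connect))  # succeeds on the third attempt
```

Running a crash-injection tool for hours is exactly what flushes out applications that lack this kind of loop.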
While testing PostgreSQL itself is a noble thing to do, it is not too likely that you will actually hit a bug inside PostgreSQL. However, is the same true for your infrastructure? The advantage of pg_crash is that you can run it over a prolonged period and perform thousands of tests. Running that many tests by hand is simply not feasible, so some cases would inevitably go untested. pg_crash does the job for you and therefore offers a lot more than plain old manual testing: it helps ensure that all your reconnects work and that no corner cases are missed.
The same applies to testing your PostgreSQL cluster and high-availability infrastructure. We at Cybertec have set up countless clusters. Many years ago we used manual checklists to walk through all potential error scenarios with the customer, one after the other. While this is surely a solid, traditional way to go, using pg_crash offers way more options: we can simply let pg_crash torture the system for a night or a full day and see whether we are still fine hundreds of crashes later. Again, it gives us the chance to provide customers with a lot more quality.
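To give an impression of how such a module is typically wired into a server: pg_crash runs inside PostgreSQL, so it follows the usual pattern of being loaded at server start. The snippet below is only a sketch; the commented-out setting name is a hypothetical placeholder, not the module's real parameter, so please check the project documentation for the actual names:

```
# postgresql.conf -- load the module at server start
shared_preload_libraries = 'pg_crash'

# hypothetical example setting (placeholder name, see the README
# for the real configuration parameters):
# pg_crash.crash_interval = '5min'
```

After changing shared_preload_libraries, a server restart is required for the setting to take effect.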
So if you want to try out pg_crash or simply learn more, check out our GitHub page for more information.
Happy segfault 😉