Month: September 2013
Sequences – transactional behavior
In this posting I would like to describe some important aspects of PostgreSQL sequences. In our daily work we have noticed that some people are not fully aware of the implications described in this section. Database sequences are database objects from which multiple users can generate unique numbers. Unique means that there are no duplicates […]
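The teaser hints at the transactional behavior of sequences; a minimal sketch of the best-known implication (sequence values are never rolled back, so aborted transactions leave gaps):

```sql
-- nextval() is not transactional: a ROLLBACK does not give the number back
CREATE SEQUENCE seq_a;

BEGIN;
SELECT nextval('seq_a');   -- returns 1
ROLLBACK;

SELECT nextval('seq_a');   -- returns 2, not 1: the aborted call left a gap
```

This is deliberate: handing out numbers without locking is what makes sequences fast under concurrency, at the price of gap-free numbering.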
Import stock market data into PostgreSQL
Many people I know invest privately in the stock market. Some of them just want to get rich – some are saving for their retirement and some are just doing it for fun. What I have noticed is that the internet is full of people who want to import stock market data into […]
Storing network information
Given my experience in my daily work as a PostgreSQL professional, I have the impression that most people are still not aware that PostgreSQL can handle IP addresses – and network information in general – pretty nicely.
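A quick sketch of what "pretty nicely" means here, using the built-in `inet` and `cidr` types (table and sample addresses are made up for illustration):

```sql
-- inet stores a host address (optionally with a netmask), cidr a network
CREATE TABLE hosts (
    addr    inet
);

INSERT INTO hosts VALUES ('192.168.1.10'), ('192.168.1.42'), ('10.0.0.1');

-- the << operator checks containment: which hosts are inside a network?
SELECT addr FROM hosts WHERE addr << '192.168.1.0/24'::cidr;
```

Because the types understand netmasks, queries like this work without any string parsing on your side.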
Monitoring: Keeping an eye on old transactions
To handle transactions PostgreSQL uses a mechanism called MVCC (Multi Version Concurrency Control). The core idea of this machinery is to allow the storage engine to keep more than one version of a row.
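The monitoring angle follows directly from MVCC: as long as an old transaction is open, VACUUM cannot remove the row versions it might still see. A simple sketch for spotting such transactions via `pg_stat_activity` (column names as of PostgreSQL 9.2+):

```sql
-- list the oldest open transactions; long-lived ones block cleanup of
-- dead row versions and make tables bloat
SELECT pid,
       now() - xact_start AS xact_age,
       query
FROM   pg_stat_activity
WHERE  xact_start IS NOT NULL
ORDER  BY xact_start
LIMIT  5;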
PostgreSQL 9.3: new functionality
PostgreSQL 9.3 has just been released and we have already received a lot of positive feedback for the new release. Many people are impressed by what has been achieved recently and are already eager to enjoy those new features. As always, the new release brings a great deal of new functionality and many improvements.
PostgreSQL Vim integration: Finally …
Last time I came up with the idea of writing a Vim script for PostgreSQL so that people can edit data quickly and easily, directly in Vim. The goal is to export a table directly to Vim, modify it and load it back in. This can come in handy when you want to edit a […]
From PostgreSQL directly to Vim
Some (obvious) ideas can strike you when you are just sitting around at an airport. This is exactly what happened to me yesterday in Berlin. In some cases it can be quite handy to dump a (reasonably) small database, edit it with vi and replay it. As a passionate (and fundamentalist) user […]
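The dump/edit/replay cycle mentioned above can be sketched in three commands (the database names `test` and `test_edited` are placeholders; adjust connection options to your setup):

```shell
# dump a reasonably small database to a plain SQL file
pg_dump test > dump.sql

# edit the dump by hand
vi dump.sql

# replay the edited dump into a fresh database
createdb test_edited
psql -d test_edited -f dump.sql
```

This only works comfortably while the dump stays small enough to navigate in an editor.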
Speeding up “min” and “max”
Indexes are a perfect tool to find a certain value or some kind of range in a table. It is possible to speed up a query many times by avoiding a sequential scan on a large table. This kind of behavior is widely known and can be observed in any relational database system. What is […]
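What the teaser is getting at: with a btree index, `min()` and `max()` do not need to scan the table at all – PostgreSQL can read a single entry from the corresponding end of the index. A minimal sketch (table and index names are made up):

```sql
CREATE TABLE t_demo (id int);
INSERT INTO t_demo SELECT generate_series(1, 1000000);

-- without an index, min/max force a scan over one million rows
CREATE INDEX idx_id ON t_demo (id);

-- with the index in place, the planner fetches just the first and last
-- index entries; EXPLAIN shows the plan it chose
EXPLAIN SELECT min(id), max(id) FROM t_demo;
```

Comparing the `EXPLAIN` output before and after creating the index makes the difference obvious.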