
The goal of the PostgreSQL optimizer is to provide you with a plan that executes as fast as possible and returns all the data as quickly as possible. In short: the game is about the overall execution time of queries.

However, there are cases in which you are not interested in receiving the entire result as fast as possible. Just consider the following example: Imagine going through a catalog. Do you really want to read all 649 pages? Do you really want to go through 10,000 products? Not really. You open a catalog, you read the first handful of products, and that's it.

So, in many cases it is not about the speed of the entire dataset but about the performance of the first couple of records.

Here is an example:

We generate a data set consisting of 10 million rows. Make sure that the data is not stored in the table in sorted order; otherwise this example will not work (due to high correlation estimates).
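One possible way to set this up is sketched below (table and column names are just examples); the ORDER BY random() shuffles the rows so that the physical order does not correlate with the id column:

    CREATE TABLE t_test (id int);

    INSERT INTO t_test
        SELECT x
        FROM generate_series(1, 10000000) AS x
        ORDER BY random();      -- shuffle so the rows are not stored in sorted order

    ANALYZE t_test;             -- make sure the planner has fresh statistics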

Then we create a simple index:
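For the sketch above this might simply be:

    CREATE INDEX idx_id ON t_test (id);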

Let us run a simple query now:
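For instance, a query asking for the whole table in sorted order; looking at its plan with EXPLAIN will typically show an explicit sort step rather than an index scan, because the physical row order does not match the index:

    EXPLAIN SELECT * FROM t_test ORDER BY id;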

The point here is that PostgreSQL will sort the data. Most likely this is going to be the fastest plan here. But if we sort the data, it means that the first rows will only be delivered to the client once the entire data set has been sorted.

Going back to our catalog example, this means: we have to read and prepare the entire catalog just to look at a handful of rows at the beginning. Response times will be slow and, more importantly, we would not read the whole thing anyway.

cursor_tuple_fraction in action

In PostgreSQL a cursor can be used to fetch large amounts of data. Cursors are frequently used for batch processing. However, there is one more important thing to know about cursors: by default, a cursor will optimize for the first 10% of the data. This behavior can even be changed by setting cursor_tuple_fraction.

Here is an example showing how things can be changed:
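The parameter can be lowered for the current session; 0.0001 is just an arbitrary small value (the default is 0.1, i.e. 10%):

    SET cursor_tuple_fraction TO 0.0001;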

In this case we have told the cursor to expect only a very small fraction of the data to be fetched.

We can declare the cursor now:
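A sketch could look like this (the cursor name mycur is arbitrary; note that a cursor without WITH HOLD only lives inside a transaction):

    BEGIN;
    DECLARE mycur CURSOR FOR
        SELECT * FROM t_test ORDER BY id;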

Fetching the row will be supersonic fast:
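For example:

    FETCH NEXT FROM mycur;      -- returns the first row almost instantly
    FETCH 4 FROM mycur;         -- grab a couple more rows
    COMMIT;                     -- ends the transaction and closes the cursor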

Instead of sorting all the data at once, PostgreSQL will return the first rows extremely fast. Sure, looping through the entire cursor will be slower, but there is no need to sort all the data (saving on disk I/O), and the result is delivered without an initial delay, which can be crucial in many cases.

When cleaning up some old paperwork this weekend, I stumbled over a very old tutorial on regular expressions. In fact, I received this little handout during a UNIX course I attended voluntarily during my first year at university. It seems that those two days really changed my life; the price tag was 100 Austrian schillings, which translates to something like 7 euros in today's money.

While looking over this old thing, I noticed a nice example showing how to test regular expression support in grep. Over the years I had almost forgotten this little test. Here is the idea: There is no single, unique way to transliterate the name of Libya's former dictator. According to this example there are around 30 ways to do it:
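I cannot reproduce the exact expression from the handout here, but a widely circulated variant of this test uses a pattern along the following lines (it matches spellings such as "Muammar Gaddafi", "Moammar Gadhafi" or "Muammar al-Qadhafi"):

    M[ou]'?am+[ae]r .*([AEae]l[- ])?[GKQ]h?[aeu]+([dtz][dhz]?)+af[iy]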

Of course I couldn't resist trying it in PostgreSQL to see if things work just like 18 years ago.

Regular Expressions in PostgreSQL

In PostgreSQL there are a handful of operators to work with regular expressions:

    ~      Matches regular expression, case-sensitive
    ~*     Matches regular expression, case-insensitive
    !~     Does not match regular expression, case-sensitive
    !~*    Does not match regular expression, case-insensitive

In our case we have a case-sensitive regular expression, so the ~ operator should do the job. To make sure we get the right result, we will add one more row to the data set:
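Assuming the spellings live in a hypothetical table t_gaddafi with a single text column called name (both names are made up for this sketch), the setup plus the extra control row, which must not match, could look like this:

    CREATE TABLE t_gaddafi (name text);

    INSERT INTO t_gaddafi VALUES
        ('Muammar Gaddafi'),
        ('Moammar Gadhafi'),
        ('Muammar al-Qadhafi'),
        ('Muammar el-Qaddafi');

    -- one row which must NOT be returned by the regular expression
    INSERT INTO t_gaddafi VALUES ('Elvis Presley');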

Let's give it a try now:
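Using the pattern from above with the case-sensitive ~ operator, the query could look like this (the single quote inside the pattern has to be doubled because of SQL string quoting):

    SELECT name
    FROM   t_gaddafi
    WHERE  name ~ 'M[ou]''?am+[ae]r .*([AEae]l[- ])?[GKQ]h?[aeu]+([dtz][dhz]?)+af[iy]';

    -- 'Elvis Presley' is not part of the result; all other rows are returned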

WOW 🙂 Things work just like 20 years ago, without any changes and without a single problem. I guess that can be called a “safe investment”. Code which still works after 20+ years without any changes can be considered good code!

Read more about coding topics in this blog about case-insensitive pattern matching by Laurenz Albe.



Referring to my previous blog post about the amount of xlog written by PostgreSQL, I wanted to clarify what I meant when talking about bypassing the PostgreSQL transaction log.

Normal WAL / xlog writes

Whenever data is changed inside PostgreSQL, the change must be written to the xlog before it is written to the underlying table. The reason for that is simple: imagine you are doing a large INSERT, but the power goes out while you are writing the data to the table. The result would be an incomplete record somewhere in the middle of the table. Index entries might be missing as well. In short: there would be a serious risk of corruption.

To avoid that, PostgreSQL writes all changes to the xlog to make sure that a table / index / etc. can always be repaired based on the xlog.

Optimizations

However, it is not always necessary to write to the xlog.

Imagine the following scenario:
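A sketch of such a transaction could look roughly like this (the table name and the file path are only placeholders):

    BEGIN;

    -- the table and therefore its data file only exist inside this transaction
    CREATE TABLE t_bulk (id int, payload text);

    -- bulk loading into the brand-new file; with wal_level = minimal the
    -- row data does not have to go through the xlog
    COPY t_bulk FROM '/path/to/data.csv' WITH (FORMAT csv);

    COMMIT;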

In this case the transaction will not be seen by others until we commit it, so we don't have to worry about concurrency. If we commit, we can simply keep the COMPLETE new data file; if the transaction fails, we throw the freshly created data file away. In no situation does the entire content of the table have to be written to the xlog. This kind of optimization can speed things up dramatically, especially in the case of very large transactions.

However, there are more cases in which PostgreSQL can skip the transaction log. Consider this one:
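Sticking with the hypothetical table from the previous sketch, the second scenario could look like this:

    BEGIN;

    -- TRUNCATE locks the table and gives it a new, empty data file (relfilenode)
    TRUNCATE t_bulk;

    -- the reload goes into the fresh file and can bypass the xlog
    COPY t_bulk FROM '/path/to/data.csv' WITH (FORMAT csv);

    COMMIT;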

In this case the TRUNCATE does the trick. It locks the table to make sure that nobody else can modify it, and as soon as the first new row comes in, PostgreSQL creates a new data file (= new relfilenode). At the end of the transaction we then have two choices: if we can commit safely, we keep the COMPLETE new data file. In case of a ROLLBACK, we keep the complete old data file.

Of course, this can only be done if you are not using streaming replication (wal_level = minimal).
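As a reminder of how the settings hang together: wal_level = minimal requires that no WAL senders are configured, and both parameters only take effect after a restart. A sketch using ALTER SYSTEM (available since PostgreSQL 9.4) could be:

    ALTER SYSTEM SET wal_level = 'minimal';
    ALTER SYSTEM SET max_wal_senders = 0;
    -- both parameters require a restart of PostgreSQL to take effect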

However, if you have a single-node system, bypassing the transaction log is a pretty neat optimization and can speed things up considerably.
