
We are thrilled to announce that the FINAL version of pgwatch2 v1.9 is now ready for your production environment! pgwatch provides a secure, open-source, flexible, self-contained PostgreSQL metrics monitoring/dashboarding solution. pgwatch2 v1.9 supports monitoring PG versions 9.0 to 14 out of the box. Upgrade or install pgwatch today to see how fantastic it really is. Download it from GitHub.

pgwatch2 v1.9 - Feature Highlights

pgwatch supports monitoring PostgreSQL versions 9.0 to 14 out of the box

New Features

It has been a long time since our previous release, and even though pgwatch2 v1.9 is not considered a major upgrade, it contains a lot of impressive changes.

First of all, we have a new maintainer for pgwatch2. That's me, Pavlo Golub. I hope Kaarel Moppel, the author of pgwatch2, will still work on the project - but now without the extra burden of support and maintenance.

All binaries and Docker images are now built with the fresh Go v1.17.

We now have full PostgreSQL v14 support and support for the latest Grafana v8 visualization solution. For those who still use the previous versions of Grafana, we also added support for v7 and v6 dashboards.

To improve our CI/CD experience and automate some tasks, we've introduced three GitHub Actions in a series of planned workflows:

CodeQL analysis stands guard over code security. It will thoroughly investigate each pull request and each push.

It's no surprise that many issues opened against open-source projects never get a follow-up contribution from the original poster. We have enabled a Close Stale Issues and PRs workflow to keep up with the continuous storm of new issues and PRs. It will mark issues and PRs with no activity as stale, and eventually close them.

We added many more cool features, such as new gatherer options (e.g. --try-create-listed-exts-if-missing) and new metrics for monitoring "wait_events".

Performance

pgwatch2 v1.9 - Additional Features

You'll find many other new features and improvements in pgwatch2 v1.9. Some may be more important to your use case than those highlighted above. For the latest documentation, see here.

pgwatch2 v1.9 - What's Changed

New Contributors

Full Changelog: v1.8.5...v1.9.0

See also our new blog about using pgwatch v1.9 with Google Cloud PostgreSQL!

MobilityDB is on the move

Since my last article about MobilityDB, the overall project has further developed and improved.
This is a very good reason to invest some time and showcase MobilityDB's rich feature stack by analyzing historical flight data from OpenSky-Network, a non-profit organisation which has been collecting air traffic surveillance data since 2013.

From data preparation and data visualization to analysis, this article covers all the steps needed to quick-start analyzing spatio-temporal data with PostGIS and MobilityDB together.
For a basic introduction to MobilityDB, please check out the previous MobilityDB post first.

The article is divided into four sections: software requirements, data allocation and preparation, analysis, and outlook.

Software requirements

First, you need a current PostgreSQL database, equipped with PostGIS and MobilityDB. I recommend using the latest releases available for your OS of choice, even though MobilityDB works with older releases of PostgreSQL and PostGIS, too. Alternatively, if you don't want to build MobilityDB from scratch, use a Docker container from https://registry.hub.docker.com/r/codewit/mobilitydb.
Second, you'll need a tool to copy our raw flight data, served as huge csv files, to our database. For this kind of task, I typically use ogr2ogr, a command-line tool shipped with GDAL.
Lastly, to visualize our results graphically, we'll utilize our good old friend "Quantum GIS" (QGIS), a feature-rich GIS client which is available for various operating systems.

Here's a quick summary of my setup:

Data allocation and preparation

The foundation of our analysis is historical flight data, offered by OpenSky-Network free of charge for non-commercial usage. OpenSky offers snapshots of the previous Monday's complete state vector data for the last 6 months. These data sets cover information about time (update interval is one second), icao24, lat/lon, velocity, heading, vertrate, callsign, onground, alert/spi, squawk, baro/geoaltitude, lastposupdate and lastcontact. A detailed description can be found here.

Enough theory; let's start by downloading historical datasets for our analysis. The foundation for this post is a set of 24 csv files covering flight data for 2022-02-28, which can be downloaded here.

We'll continue by creating a new PostgreSQL database, with extensions for PostGIS and MobilityDB enabled. The database will store our state vectors.
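In SQL, this step boils down to something like the following minimal sketch (the database name flightanalysis matches the import command further down; CASCADE pulls in the PostGIS dependency automatically):

CREATE DATABASE flightanalysis;
\c flightanalysis
CREATE EXTENSION mobilitydb CASCADE;  -- installs PostGIS as a dependency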

From OpenSky's dataset descriptions, we inherit a data structure for a staging table, which stores unaltered raw state vectors.
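A sketch of what such a staging table could look like; the column names follow OpenSky's field list quoted above, while the data types are my assumptions (ogr2ogr auto-detects types during the import anyway):

CREATE TABLE tvectors (
    time           bigint,            -- unix epoch, 1 second resolution
    icao24         text,
    lat            double precision,
    lon            double precision,
    velocity       double precision,
    heading        double precision,
    vertrate       double precision,
    callsign       text,
    onground       boolean,
    alert          boolean,
    spi            boolean,
    squawk         text,
    baroaltitude   double precision,
    geoaltitude    double precision,
    lastposupdate  double precision,
    lastcontact    double precision
);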

From a directory containing the uncompressed csv files, state vectors can now be imported to our flightanalysis database as follows:

for f in *.csv; do
  ogr2ogr -f PostgreSQL PG:"user=postgres dbname=flightanalysis" "$f" -oo AUTODETECT_TYPE=YES -nln tvectors --config PG_USE_COPY YES
done

How many vectors did we import?

flightanalysis=# select count(*) from tvectors;
  count
----------
 52261548
(1 row)

To analyse our vectors with MobilityDB, we must now turn our native position data into trajectories.
MobilityDB offers various data types to model trajectories, such as tgeompoint, which represents a temporal geometry point type.

To track individual flights, trajectories must be generated by grouping vectors by icao24 and callsign, ordered by time.

Initially, timestamps (column time) are represented as a unix timestamp in our staging table. For ease of use, we'll translate unix timestamps to timestamps with time zones first.
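Assuming the staging table sketched above, one simple way to do this is to add a derived column (the column name ts is my choice):

ALTER TABLE tvectors ADD COLUMN ts timestamptz;
UPDATE tvectors SET ts = to_timestamp(time);  -- unix epoch -> timestamp with time zone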

Finally, we can create our trajectories by aggregating locations by icao24 and callsign ordered by time.
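Roughly like this - a sketch only, assuming the MobilityDB 1.0 constructor names tgeompointinst() and tgeompointseq() (newer releases spell them differently) and the ts column added above:

CREATE TABLE trajectories AS
SELECT icao24,
       callsign,
       tgeompointseq(
           array_agg(
               tgeompointinst(ST_SetSRID(ST_MakePoint(lon, lat), 4326), ts)
               ORDER BY ts)
       ) AS trip
FROM   tvectors
WHERE  lon IS NOT NULL AND lat IS NOT NULL   -- real data also needs de-duplication of
GROUP  BY icao24, callsign;                  -- identical timestamps (see the note on noisy data below)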

To visualize our aggregated vectors in QGIS, we must extract the vectors' raw geometries from the tgeompoint column, as this data type is not natively supported by QGIS.
To create a geometrically simplified version for better performance, we'll utilize ST_Simplify on top of trajectory(). It's worth mentioning that MobilityDB also offers simplification methods of its own, which simplify the whole trajectory and not only its geometry.
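A sketch of that extraction step, based on the trajectories table created above (trajectory() turns a tgeompoint into a plain PostGIS geometry; the simplification tolerance is an arbitrary choice):

CREATE TABLE trajectories_geom AS
SELECT icao24,
       callsign,
       trajectory(trip)                    AS geom,
       ST_Simplify(trajectory(trip), 0.05) AS geom_simplified  -- tolerance in degrees
FROM   trajectories;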

From the images below, you can see the vast number of vectors these freely available datasets contain just for one single day.

You can see in the image that the data is noisy, and must be further cleaned and filtered. Gaps in coverage and recording lead to trajectories spanning the world - and subsequently to wrong and misleading results. Nevertheless, we'll skip this extensive cleansing step today (we might cover it in a separate blog post) and move on with our analysis, focusing on a rather small area.

Figure 1: Flight vectors world, 2022-02-28

Analysis

Let's investigate the trajectories and play through some interesting scenarios.

Number of distinct airframes (identified by icao24)
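Assuming the trajectories table sketched above:

SELECT count(DISTINCT icao24) AS airframes FROM trajectories;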

Average flight duration
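One possible way to express this, assuming MobilityDB's duration() accessor (naming differs slightly between releases):

SELECT avg(duration(trip)) AS avg_flight_duration
FROM   trajectories;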

Flight vectors intersecting with Iceland between [2022-02-28 00:00:00+00, 2022-02-28 03:00:00+00]

For this kind of analysis, download and import country borders from Natural Earth first.
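With the borders loaded, a sketch of the intersection query could look like this. The table and column names country_borders, name and geom are assumptions about that import, and atPeriod()/period are the MobilityDB 1.0 spellings:

SELECT t.icao24, t.callsign
FROM   trajectories t,
       (SELECT geom FROM country_borders WHERE name = 'Iceland') AS iceland
WHERE  ST_Intersects(
           trajectory(atPeriod(t.trip,
               period '[2022-02-28 00:00:00+00, 2022-02-28 03:00:00+00]')),
           iceland.geom);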

Figure 2: Vectors intersecting with Iceland

Duration of flyover for flight vectors intersecting with Iceland between [2022-02-28 00:00:00+00, 2022-02-28 03:00:00+00]
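A sketch along the same lines: clip each trajectory to Iceland with atGeometry() and measure how long the clipped part lasts (again MobilityDB 1.0 naming; the country_borders table is the assumed import from above):

SELECT icao24, callsign, duration(trip_over_iceland) AS flyover_duration
FROM (
    SELECT t.icao24, t.callsign,
           atGeometry(
               atPeriod(t.trip,
                   period '[2022-02-28 00:00:00+00, 2022-02-28 03:00:00+00]'),
               (SELECT geom FROM country_borders WHERE name = 'Iceland')
           ) AS trip_over_iceland
    FROM trajectories t
) AS clipped
WHERE trip_over_iceland IS NOT NULL
ORDER BY flyover_duration DESC;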

Figure 3: Clipped vectors intersecting with Iceland
Figure 4: Clipped vector intersecting with Iceland, icao24 '4cc2c5', callsign 'ICE1046'

Border crossing for individual airframe and callsign while intersecting Iceland between [2022-02-28 00:00:00+00, 2022-02-28 03:00:00+00]
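A hedged sketch of how this can be checked: atGeometry() returns a sequence set, and each sequence is one continuous stretch inside Iceland, so more than one sequence means the aircraft left and re-entered the country (numSequences() is the accessor name assumed here):

SELECT numSequences(
           atGeometry(
               atPeriod(trip, period '[2022-02-28 00:00:00+00, 2022-02-28 03:00:00+00]'),
               (SELECT geom FROM country_borders WHERE name = 'Iceland'))
       ) AS continuous_segments_inside_iceland
FROM   trajectories
WHERE  icao24 = '4cc2c5' AND callsign = 'ICE1046';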

Note that in this case, the plane did not leave the country during this trip.

Figure 5: Border crossing, icao24 '4cc2c5', callsign 'ICE1046'

Outlook

MobilityDB's rich feature stack drastically simplifies analysis of spatio-temporal data. If I may be allowed to imagine further uses for it, various businesses come to mind where it could be applied in a smart and efficient manner - toll systems, surveillance and logistics, just to name a few. Finally, its upcoming release will pave the way for new usages, from research and test utilization to production scenarios.

I hope you enjoyed this session. Learn more about using PostGIS by checking out my blog on spatial datasets based on the OpenStreetMap service.

In case you want to learn more about PostGIS and MobilityDB, just get in touch with the folks from CYBERTEC. Stay tuned!

Spot a missing PostgreSQL index

Missing indexes are a key ingredient if you are looking for the perfect recipe to ruin performance in the most efficient way possible. 🙂 However, if you want to ensure that your database performs well, and if you are generally not in favor of user complaints, better watch out for missing indexes and make sure that all relevant tables are properly taken care of. One PostgreSQL index can make all the difference in performance.

To help you understand, I have compiled this little guide about how to locate missing indexes, what you can do to fix them, and how to achieve good database performance.

Setting up a test database

In order to demonstrate how to find missing indexes, I have to first create a test database. One way to do it is to use pgbench:
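Roughly like this - the database name and scale factor are my assumptions; a scale factor of 50 gives pgbench_accounts 5 million rows, which lines up with the row counts quoted further down:

createdb test_db
pgbench -i -s 50 test_db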

What happens here is that pgbench just provided us with a little sample database which contains 4 tables.

The largest one is pgbench_accounts:
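A quick way to verify that (not necessarily the listing used in the original benchmark):

SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid::regclass)) AS total_size
FROM   pg_class
WHERE  relname LIKE 'pgbench%' AND relkind = 'r'
ORDER  BY pg_total_relation_size(oid::regclass) DESC;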

This database is perfectly indexed by default, so we have to drop some indexes in order to find something we can fix later:
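The statement behind that presumably looks like this (pgbench names the constraint pgbench_accounts_pkey by default):

ALTER TABLE pgbench_accounts DROP CONSTRAINT pgbench_accounts_pkey;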

We have simply dropped the primary key, which is, internally, nothing other than a unique index that does not allow NULL entries.

Running a demonstration benchmark

Before we start running our benchmark to see how bad performance really gets, you need to make sure that the most important tool to handle performance problems is installed and active: pg_stat_statements. Without pg_stat_statements, tracking down performance problems is unnecessarily hard.

Therefore, consider performing the following steps to install pg_stat_statements:
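The steps boil down to preloading the module and creating the extension in the database:

-- 1) in postgresql.conf (a restart is required afterwards):
--    shared_preload_libraries = 'pg_stat_statements'

-- 2) then, inside the database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;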

Once this is done, we are ready to run our benchmark. Let's see what happens:
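The run looked roughly like this; the duration is an assumption, while the connection and thread counts are taken from the numbers below:

pgbench -c 10 -j 10 -T 60 test_db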

Despite opening 10 connections (-c 10) and providing pgbench with 10 threads (-j 10), we managed to run 4 - yes, 4 - transactions per second. One might argue that hardware is the problem, but it's definitely not:

This is a modern, 8-core machine. Even if the clock speed were 10 times as high, we would have topped out at 40 transactions per second. That's still way below what you would expect.

pg_stat_user_tables: An important monitoring view for your PostgreSQL indexes

The first clue that indexes might be missing can be found in pg_stat_user_tables. The following table contains the relevant columns:
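You can look at exactly these columns as follows:

SELECT schemaname, relname, seq_scan, seq_tup_read, idx_scan
FROM   pg_stat_user_tables;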

What we see here is the name of the table (relname) including the schemaname. Then we can see how often our table has been read sequentially (seq_scan) and how often an index has been used (idx_scan). Finally, there is the most relevant information: seq_tup_read. So what does that mean? It tells us how many rows the system had to process to satisfy all those sequential scans. This number is really important, and I cannot stress it enough: if "a lot" is read "really often", it will lead to an insane entry in the seq_tup_read column. In other words, it means we had to process an enormous number of rows to read the table sequentially again and again.

Now, let's run a really important query:
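The exact query is not reproduced here, but it boils down to something like this sketch:

SELECT schemaname,
       relname,
       seq_scan,
       seq_tup_read,
       seq_tup_read / seq_scan AS avg_seq_tup_read,
       idx_scan
FROM   pg_stat_user_tables
WHERE  seq_scan > 0
ORDER  BY seq_tup_read DESC
LIMIT  10;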

This one is true magic. It returns those tables which have been hit by sequential scans the most and tells us how many rows a sequential scan has hit on average. In the case of the top entry, a sequential scan has read 5 million rows on average, and indexes were not used at all. This gives us a clear indicator that something is wrong with this table. If you happen to know the application, a simple \d will uncover the most obvious problems. However, let us dig deeper and confirm our suspicion.

pg_stat_statements: Finding slow queries

As stated before, pg_stat_statements is really the gold standard when it comes to finding slow queries. Usually, those tables which show up in pg_stat_user_tables will also rank high in some of the worst queries shown in pg_stat_statements.

The following query can uncover the truth:
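Something along these lines (column names as in PostgreSQL 13 and later; total_exec_time and mean_exec_time are reported in milliseconds):

SELECT substring(query, 1, 50) AS short_query,
       calls,
       round(total_exec_time::numeric, 2) AS total_exec_time_ms,
       round(mean_exec_time::numeric, 2)  AS mean_exec_time_ms
FROM   pg_stat_statements
ORDER  BY total_exec_time DESC
LIMIT  10;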

Wow, the top query has an average execution time of 1.721 seconds! That is a lot. If we examine the query, we see that there is only a simple WHERE clause which filters on "aid". If we take a look at the table, we discover that there is no index on "aid" - which is fatal from a performance point of view.

Further examination of the second query will uncover precisely the same problem.

Improve your PostgreSQL indexing and benchmarking

Let's deploy the index and reset pg_stat_statements as well as the normal system statistics created by PostgreSQL:
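Presumably something like this - recreating the primary key we dropped earlier and resetting both sets of statistics:

ALTER TABLE pgbench_accounts ADD PRIMARY KEY (aid);

SELECT pg_stat_statements_reset();  -- clear the query statistics
SELECT pg_stat_reset();             -- clear the table/index statistics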

Once the missing indexes have been deployed, we can run the test again and see what happens:

What an improvement. The database speed has gone up 3000 times. No "better hardware" in the world could provide us with this type of improvement. The takeaway here is that a SINGLE missing PostgreSQL index in a relevant place can ruin the entire database and keep the entire system busy without yielding useful performance.

What is really important to remember is the way we have approached the problem. pg_stat_user_tables is an excellent indicator to help you figure out where to look for problems. Then you can inspect pg_stat_statements and look for the worst queries. Sorting by total_exec_time DESC is the key.

Finally...

If you want to learn more about PostgreSQL and database performance in general, I can highly recommend Laurenz Albe's post about "Count(*) made fast" or his post entitled Query Parameter Data Types and Performance.


In order to receive regular updates on important changes in PostgreSQL, subscribe to our newsletter, or follow us on Facebook or LinkedIn.

Understanding Multiranges

Range types have been around in PostgreSQL for quite some time and are successfully used by developers to store various kinds of intervals with upper and lower bounds. However, in PostgreSQL 14 a major new feature has been added to the database which makes this feature even more powerful: multiranges. To help you understand multiranges, I have compiled a brief introduction outlining the basic concepts and most important features.

Getting started with range types

Before we dive into multiranges, I want to quickly introduce basic range types so that you can compare what was previously available with what is available now. Here's a simple example:
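For instance:

SELECT int4range(10, 20);
-- [10,20)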

The important thing to note here is that ranges can be formed on the fly. In this case, I've created an int4 range. The range starts at 10 (which is included) and ends at 20 (which is not included). Basically, every data type that can be sorted can be turned into a range.

What's important to see is that PostgreSQL will ensure that the range is indeed valid, which means that the upper bound has to be higher than the lower bound, and so on:
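An invalid range is rejected right away:

SELECT int4range(20, 10);
-- ERROR:  range lower bound must be less than or equal to range upper bound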

What is the purpose of a range type in the first place?

Well, in addition to validation, PostgreSQL provides a rich set of operators. One of the basic operations you'll often need is a way to check whether a value is inside a range or not. Here's how it works:
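The @> ("contains") operator does the check:

SELECT int4range(10, 20) @> 17 AS contains_17;
-- t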

You can see from the above that 17 is indeed inside the range from 10 to 20. It's a simple example, but this check is also very useful in more complex cases.

It's also possible to use ranges as data types which can be part of a table:
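A small sketch of what that can look like (table and column names are made up for this example):

CREATE TABLE t_reservation (
    id     serial PRIMARY KEY,
    booked tstzrange
);

INSERT INTO t_reservation (booked)
VALUES (tstzrange('2022-05-01 10:00:00+00', '2022-05-01 12:00:00+00'));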

In this case, I've used the tstzrange type ("timestamp with time zone range"), and I've successfully inserted a value.

The beauty here is that the available operators are consistent and work the same way for all range types. This makes them relatively easy to use.

Multiranges: Range types on steroids

So what are multiranges? The idea is simple: a multirange is a compilation of ranges. Instead of storing just two values, you can store as many pairs as you want. Let's take a look at a practical example:
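For example:

SELECT nummultirange(numrange(1, 5), numrange(10, 20));
-- {[1,5),[10,20)}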

In this case, the multirange has been formed on the fly using two ranges. However, you can stuff in as many value pairs as you want:
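For example, four pairs at once:

SELECT nummultirange(numrange(1, 5), numrange(10, 20),
                     numrange(30, 40), numrange(50, 60));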

PostgreSQL has a really nice feature which can be seen when ranges are formed. Consider the following example:
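Here, three overlapping ranges are passed in:

SELECT nummultirange(numrange(1, 10), numrange(5, 20), numrange(18, 25));
-- {[1,25)}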

As you can see, multiple ranges are folded together to form a set of non-overlapping parts. This happens automatically and greatly reduces complexity.

A multirange can also be used as a data type and stored inside a table just like any other value:
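A minimal sketch (names made up for this example):

CREATE TABLE t_shift (
    id      serial PRIMARY KEY,
    on_duty int4multirange
);

INSERT INTO t_shift (on_duty) VALUES ('{[8,12), [13,17)}');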

The way you query this type of column is pretty straightforward and works as follows:
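Using the table just created:

SELECT * FROM t_shift WHERE on_duty @> 9;
-- returns the row above, because 9 falls into [8,12)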

You can simply apply the operator to the column, and you're good to go. In our example, one row is returned, which is exactly what we expect.

Handling "infinity" inside ranges

So far, you've seen that ranges have a beginning and a clearly defined end. However, there is more.

In PostgreSQL, a range is aware of the concept of "infinity".

Here is an example:
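A sketch of such a range (a missing bound is how PostgreSQL expresses an unbounded side):

SELECT numrange(NULL, 10);
-- (,10)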

This range starts with -INFINITY and ends at 10 (not included).

Again, you're able to fold single ranges into a bigger block:
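For instance, combining the open-ended range above with a range that is open towards +INFINITY:

SELECT numrange(NULL, 10) + numrange(5, NULL);
-- (,)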

The example shows a range that spans all numbers (-INFINITY all the way to +INFINITY). The following query is proof of this fact:
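For example:

SELECT (numrange(NULL, 10) + numrange(5, NULL)) @> 165.4 AS contained;
-- t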

The result is true: 165.4 is indeed within the range of valid numbers.

Performing basic calculations

That's not all: you can perform basic operations using these ranges. Now just imagine what it would take to do such calculations by hand in some application. It would be slow, error-prone and in general, pretty cumbersome:
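A sketch of such an operation - subtracting one multirange from another, so that the result may consist of more than one piece:

SELECT nummultirange(numrange(1, 20)) - nummultirange(numrange(5, 10));
-- {[1,5),[10,20)}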

You can subtract ranges from each other and form a multirange, which represents two ranges and a gap in between. The next example shows how intersections can be calculated:
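For example:

SELECT nummultirange(numrange(1, 10)) * nummultirange(numrange(5, 20));
-- {[5,10)}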

There are many more of these kinds of operations you can use to work with ranges. A complete list of all functions and operators is available in the PostgreSQL documentation.

Multiranges: aggregating ranges

The final topic I want to present is how to aggregate ranges into bigger blocks. Often, data is available in a standard form, and you want to do something fancy with it. Let's create a table:
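The table and sample data below are made up for this example; what matters is the classic from/to layout and the fact that two of the periods overlap:

CREATE TABLE t_activity (
    id     serial PRIMARY KEY,
    d_from date,
    d_to   date
);

INSERT INTO t_activity (d_from, d_to) VALUES
    ('2022-01-01', '2022-01-10'),
    ('2022-01-05', '2022-01-15'),   -- overlaps the first period
    ('2022-02-01', '2022-02-05');   -- a separate period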

I've created a table and added 3 rows. Note that data is stored in the traditional way ("from / to").

What you can do now is to form ranges on the fly and create a multirange using the range_agg function:
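With the sample table above:

SELECT range_agg(daterange(d_from, d_to)) FROM t_activity;
-- {[2022-01-01,2022-01-15),[2022-02-01,2022-02-05)}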

What's important here is that you can do more than create a multirange - you can also unnest such a data type and convert it into single entries:
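For instance:

SELECT unnest(active) AS active_period
FROM (
    SELECT range_agg(daterange(d_from, d_to)) AS active
    FROM   t_activity
) AS agg;
-- one row per continuous period: [2022-01-01,2022-01-15) and [2022-02-01,2022-02-05)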

Let's think about this query: basically, aggregation and unnesting allow you to answer questions such as: "How many continuous periods of activity did we see?" That can be really useful if you are, for example, running data warehouses.

Finally...

In general, multiranges are a really valuable extension on top of what PostgreSQL already has to offer. This new feature in PostgreSQL v14 allows us to offload many things to the database which would otherwise have been painful to implement on your own.

If you want to learn more about PostgreSQL data types, take a look at Laurenz Albe's recent blog on Query Parameter Data Types and Performance, and if you are interested in data type abstraction check out my blog about CREATE DOMAIN.

 

 

 

date_bin and timestamps in PostgreSQL

Date and time are relevant to pretty much every PostgreSQL application. A new function was added to PostgreSQL 14 to solve a problem which has caused challenges for many users out there over the years: How can we map timestamps to time bins? The function is called date_bin.

What people often do is round a timestamp to a full hour. That's commonly done using the date_trunc function. But what if you want to round data so that it fits into a 30-minute or a 15-minute grid?

Let's take a look at an example:
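For example (the timestamp itself is made up for this sketch):

SELECT date_bin('30 minutes',
                timestamp '2022-05-11 14:42:19',    -- value to put into a bin
                timestamp '2022-05-11 00:00:00');   -- origin: the full hour
-- 2022-05-11 14:30:00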

In this case, we want to map the timestamp to a half-hour bin: the date_bin function will do the job. The first parameter defines the size of the bins; in our case, it's 30 minutes. The second parameter is the value we want to round. What's interesting is the third parameter. Look at it this way: if we round to a precision of 30 min., what value do we want to start at? Is it 30 min after the full hour, or maybe 35 min? The third value is therefore a baseline telling date_bin where to start. In our case, we want to round to 30 min intervals relative to the full hour. Therefore the result is 14:30.

The following call sets the baseline to 14:31 and the interval to 20 min. That's why it comes up with a result of 14:31:
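Again with a made-up input value around 14:42:

SELECT date_bin('20 minutes',
                timestamp '2022-05-11 14:42:19',
                timestamp '2022-05-11 14:31:00');   -- baseline: 14:31
-- 2022-05-11 14:31:00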

If we use a slightly higher value, PostgreSQL will put it into the next time bin:
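For example:

SELECT date_bin('20 minutes',
                timestamp '2022-05-11 14:52:03',    -- slightly later value
                timestamp '2022-05-11 14:31:00');
-- 2022-05-11 14:51:00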

date_bin is super useful, because it gives us a lot of flexibility which can't be achieved using the date_trunc function alone. date_trunc can basically only round to full hours, full days, and so forth.

Finally...

The date_bin function is flexible and offers useful functionality on top of what PostgreSQL already has to offer.

 


In order to receive regular updates on important changes in PostgreSQL, subscribe to our newsletter, or follow us on Facebook or LinkedIn.
