
We all know and value SQL functions as a handy shortcut. PostgreSQL v14 has introduced a new, better way to write SQL functions. This article will show the advantages of the new syntax.

An example of an SQL function

Let's create a simple example of an SQL function with the “classical” syntax so that we have some material for demonstrations:
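A minimal sketch of such a function (the name mangle and the use of the unaccent extension are our assumptions, chosen to match the dependency and search_path demonstrations later in this article):

    CREATE EXTENSION unaccent;

    CREATE FUNCTION mangle(t text) RETURNS text
       LANGUAGE sql
       AS 'SELECT lower(unaccent(t))';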

You can use the new function like other database functions:
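For example, with the hypothetical mangle function from above:

    SELECT mangle('Schön');  -- returns 'schon'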

Why SQL functions?

You may ask what good an SQL function is. After all, the main purpose of a database function is to be able to run procedural code inside the database, something you cannot do with SQL. But SQL functions have their use:

Moreover, simple SQL functions can be inlined: the optimizer can replace the function call with the function definition at query planning time. This can make SQL functions particularly efficient.

We can see function inlining if we use EXPLAIN (VERBOSE) on our example function:
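A sketch of what that looks like, still using our hypothetical mangle:

    EXPLAIN (VERBOSE, COSTS OFF)
    SELECT mangle(relname::text) FROM pg_class;

The plan output contains an expression like lower(unaccent((relname)::text)) rather than a call to mangle, showing that the function was inlined.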

Shortcomings of PostgreSQL functions

PostgreSQL functions are great. One of the nice aspects is that you are not restricted to a single programming language. Out of the box, PostgreSQL supports functions written in SQL, C, PL/pgSQL (a clone of Oracle's PL/SQL), Perl, Python and Tcl. But that is not all: in PostgreSQL, you can write a plugin that allows you to use any language of your choice inside the database. To allow that flexibility, the function body of a PostgreSQL function is simply a string constant that the call handler of the procedural language interprets when PostgreSQL executes the function. This has some undesirable side effects:

Lack of dependency tracking

Usually, PostgreSQL tracks dependencies between database objects in the pg_depend and pg_shdepend catalog tables. That way, the database knows the relationships between objects: it will either prevent you from dropping objects on which other objects depend (like a table with a foreign key reference) or drop dependent objects automatically (like dropping a table drops all indexes on the table).

Since the body of a function is just a string constant that PostgreSQL cannot interpret, it won't track dependencies between a function and objects used in the function. A procedural language can provide a validator that checks the function body for syntactic correctness (if check_function_bodies = on). The validator can also test if the objects referenced in the function exist, but it cannot keep you from later dropping an object used by the function.

Let's demonstrate that with our example:
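With our hypothetical mangle function, the demonstration could look like this:

    DROP EXTENSION unaccent;   -- succeeds, since no dependency is tracked

    SELECT mangle('Schön');
    -- ERROR:  function unaccent(text) does not exist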

We will fix the problem by creating the extension again. However, it would be better to get an error message when we run DROP EXTENSION without using the CASCADE option.

search_path as a security problem

Since PostgreSQL parses the function body at query execution time, it uses the current setting of search_path to resolve all references to database objects that are not qualified with the schema name. That is not limited to tables and views, but also extends to functions and operators. We can use our example function to demonstrate the problem:
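For example (assuming the unaccent extension has been restored in the meantime):

    SET search_path = pg_catalog;

    SELECT mangle('Schön');
    -- ERROR:  function unaccent(text) does not exist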

In our example, it is a mere annoyance that we can avoid by using public.unaccent() in the function call. But it can be worse than that, particularly with SECURITY DEFINER functions. Since it is cumbersome to schema-qualify each function and operator, the recommended solution is to force a search_path on the function:
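With our example function, that would be something like:

    ALTER FUNCTION mangle(text) SET search_path = public;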

Note that the schemas on the search_path should allow CREATE only to privileged users, so the above is not a good idea on versions older than v15!

An unpleasant downside of setting a search_path is that it prevents the inlining of the SQL function.

The new SQL function syntax in PostgreSQL v14

From PostgreSQL v14 on, the body of SQL functions and procedures need no longer be a string constant. You can now use one of two forms for the function body: a single expression introduced with the keyword RETURN, or one or more SQL statements enclosed in BEGIN ATOMIC … END.

The first form requires the function body to be an expression. So if you want to perform a query, you have to wrap it in parentheses (turning it into a subquery, which is a valid expression). For example:
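A minimal illustration (the function names are our own):

    CREATE FUNCTION the_answer() RETURNS integer
       LANGUAGE sql
       RETURN 42;

    -- a query must be wrapped in parentheses to become an expression:
    CREATE FUNCTION user_count() RETURNS bigint
       LANGUAGE sql
       RETURN (SELECT count(*) FROM pg_user);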

The second form allows you to write a function with more than one SQL statement. As it used to be with multi-statement SQL functions, the result of the function will be the result of the final SQL statement. You can also use the second form of the new syntax to create SQL procedures. The first form is obviously not suitable for a procedure, since procedures don't have a return value.

We can easily rewrite our example function to use the new syntax:
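Sticking with our hypothetical mangle function:

    CREATE OR REPLACE FUNCTION mangle(t text) RETURNS text
       LANGUAGE sql
       RETURN lower(unaccent(t));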

Note that these new SQL functions can be inlined into SQL statements just like the old ones!

Advantages of the new SQL function syntax

The main difference is that the new-style SQL functions and procedures are parsed at function definition time and stored in parsed form in the prosqlbody column of the pg_proc system catalog. As a consequence, the two shortcomings noted above are gone:

Dependency tracking with new-style SQL functions

Because the function body is available in parsed form, PostgreSQL can track dependencies. Let's try that with our redefined example function:
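Again with our hypothetical mangle function, the attempt to drop the extension now fails with an error along these lines:

    DROP EXTENSION unaccent;
    -- ERROR:  cannot drop extension unaccent because other objects depend on it
    -- DETAIL:  function mangle(text) depends on function unaccent(text)
    -- HINT:  Use DROP ... CASCADE to drop the dependent objects too.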

Fixed search_path with new-style SQL functions

search_path is only relevant when SQL is parsed. Since this now happens when CREATE FUNCTION runs, we don't have to worry about the current setting of that parameter at function execution time:
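A quick sketch:

    SET search_path = pg_catalog;

    SELECT public.mangle('Schön');  -- still works: the body was parsed at CREATE time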

Problems with interactive clients

You may notice that the multi-statement form for defining SQL functions contains semicolons to terminate the SQL statements. That will not only confuse the usual suspects like HeidiSQL (which never learned dollar quoting), but it will be a problem for any client that recognizes semicolons as separator between SQL statements. Even older versions of psql have a problem with that syntax:
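A minimal stand-in for the original example:

    CREATE FUNCTION answer() RETURNS integer
       LANGUAGE sql
    BEGIN ATOMIC
       SELECT 42;
    END;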

psql thinks that the semicolon after “SELECT 42” terminates the CREATE FUNCTION statement. The truncated statement causes an error. The final END is treated as its own statement, which is a synonym for COMMIT and causes a warning.

In v14 and above, psql handles such statements correctly. pgAdmin 4 has learned the new syntax with version 6.3. But I am sure that there are many clients out there that have not got the message yet.

Conclusion

The new syntax for SQL functions introduced in PostgreSQL v14 has great advantages for usability and security. Get a client that supports the new syntax and start using it for your SQL functions. You should consider rewriting your existing functions to make use of these benefits.

Read another great post to increase your PostgreSQL syntax savvy: my post on Cross Join in PostgreSQL.

Find out more about how to get the most performance out of your PostgreSQL database with Hans' post on how to find and fix a missing index.

For PostgreSQL power users, automating repeated steps is becoming more and more necessary, and \gexec can help. This blog will show you how to use the || operator and the \gexec command to avoid unnecessary repetition in your workflow.

The CLI client that ships with PostgreSQL is called psql. Like many CLI clients, it is often overlooked and replaced with something with a GUI, or it is only used for the most basic tasks, while more complex operations are carried out elsewhere. However, psql is a very capable tool with lots of useful features.

One common pattern is the need to run the same command with different arguments. Often, users simply rewrite the command over and over, or sometimes they may opt to use a text editor to write the command once, then copy and paste and edit it to accommodate different arguments.

Sometimes it can be useful to automate such steps, not only in the interest of saving time, but also in the interest of avoiding errors due to typos or copy-pasting. PostgreSQL can take the results of queries and add text to create commands with those results as arguments.
For this purpose, we can prepend or append text to any query result using the || operator.

Exercise 1: using the || operator

Let's assume a new user needs access to some tables in a schema, e.g. all those tables that match a certain prefix.
Now, we could do this manually, or ask the database to automate the boring stuff.

1. Let's retrieve the relevant tables with names starting with pgbench:
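A query along these lines does the trick:

    SELECT tablename
    FROM pg_tables
    WHERE tablename LIKE 'pgbench%';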

2. Let's use || to prepend and append command fragments to create a valid command, with the tablename as a parameter:
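For example, to generate GRANT statements (the role name new_user is a placeholder):

    SELECT 'GRANT SELECT ON ' || tablename || ' TO new_user;'
    FROM pg_tables
    WHERE tablename LIKE 'pgbench%';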

Note that the strings end or begin with additional spaces, as the tablename itself does not contain the necessary spaces for argument separation. The semicolon ; was also added so these commands could be run straight away.

Please keep in mind that, while it is convenient to use || to concatenate things, it is not considered good practice, since it can be vulnerable to SQL injection attacks, as a helpful commenter pointed out:

Do NOT blindly concatenate table names with queries. Use quote_ident(), or format() with %I, instead. These apply correct escaping as necessary.

A safer approach to achieve the same results would be something like this:
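For instance, format() with %I quotes identifiers correctly where necessary:

    SELECT format('GRANT SELECT ON %I TO new_user;', tablename)
    FROM pg_tables
    WHERE tablename LIKE 'pgbench%';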

Now, these commands could be copied and then pasted straight into the prompt.
I've even seen people take such lines, store them into a file and then have psql execute all commands from the file.

But thankfully, a much easier way exists.

gexec

In psql, there are many shortcuts and helpers to quickly gather info about the database, schemas, tables, privileges and much more.
The psql shell allows for working on the input and output buffers, and this can be used together with \gexec to have psql execute each command from the output buffer.

Exercise 2: calling gexec

Reusing the query to generate the necessary commands, we can call \gexec to execute each line from the previous output.
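In psql, terminate the query with \gexec instead of a semicolon:

    SELECT format('GRANT SELECT ON %I TO new_user', tablename)
    FROM pg_tables
    WHERE tablename LIKE 'pgbench%'
    \gexec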

Exercise 3: a cross join with gexec

Assuming that you want to do something involving more arguments, you can always add more || to add more command fragments around the results from a query.
Suppose you need to grant privileges to insert, update, and delete from those tables as well.

A simple cross join gives us the desired action (constructed as a relation using the VALUES constructor) for each of the table names.
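A sketch of such a query (again with the placeholder role new_user):

    SELECT format('GRANT %s ON %I TO new_user', action, tablename)
    FROM pg_tables
    CROSS JOIN (VALUES ('INSERT'), ('UPDATE'), ('DELETE')) AS t(action)
    WHERE tablename LIKE 'pgbench%';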

Note that we explicitly assign the action column name using AS t(action) to the table generated using VALUES.

This output can then again be executed using \gexec.

Exercise 4: adding quotes

Depending on the circumstances, it may be required to add additional quotes to the output, for example when table names contain capitalization or spaces. In such cases, matching double quotes " can be added to the strings prepended and appended to arguments.
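With plain concatenation, that could look like the following; note that format() with %I, shown earlier, adds such quotes automatically when needed:

    SELECT 'GRANT SELECT ON "' || tablename || '" TO new_user;'
    FROM pg_tables
    WHERE tablename LIKE 'pgbench%';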

Now that you know how to use gexec, why not take the next step? Take a look at our blog on column order in PostgreSQL to see it used in another practical example.

If you would like to learn more about security in PostgreSQL, see my blog about Transport Layer Security.



A free tool to migrate Oracle databases to PostgreSQL

We at CYBERTEC are proud to release the CYBERTEC Migrator Standard Edition, the fastest tool to migrate a database to PostgreSQL. And the best part of it is: the CYBERTEC Migrator Standard Edition is available free of charge.

What? Another free (as in beer) tool to migrate databases to PostgreSQL? We already have free tools to migrate Oracle databases to PostgreSQL and they are even open source. So why should I care?

If you are not a curious person and don't care about the why, skip the next section and jump to the interesting part of this blog post, where we walk you through the migration of Oracle's HR schema using the CYBERTEC Migrator Standard Edition. In case you don't have the time to read the article, watch the screencast.

This article is going to be the first of a series of blog posts about how to migrate from Oracle to PostgreSQL. The articles will cover the challenges you may face, possible solutions and how the CYBERTEC Migrator reduces both your effort and the risks of botched migrations.



A History of the CYBERTEC Migrator

So you haven’t skipped this section and are curious. Why did we develop the CYBERTEC Migrator?

The simplest answer would be to have an additional option on how to migrate a database to PostgreSQL. And we all agree on one truth: “A world with one Oracle database less and one PostgreSQL database more is a better world”.

As you may know, we at CYBERTEC use open source tools and even contribute to the open source community. One of our most used software products - the Foreign Data Wrapper for Oracle - is used by many open source projects.

So why develop the Migrator?

At the beginning there was speed! Or rather, the lack of it. "Necessity is the mother of invention."

Improving our own lives

Here at CYBERTEC we often see customers with extreme requirements. We assume that’s probably why they come to us, because people out there tell them “If someone knows how to do this with PostgreSQL, and is crazy enough to try it out, try CYBERTEC!”. We are their last hope to solve their problems.

Long story short - some of our customers have databases in the terabyte range, and they ask for ways to reduce the service downtime imposed by a database migration.

Previously, we didn’t have the resources to implement an asynchronous data replication pipeline. (Such a pipeline captures data changes in the source database during the migration, and synchronizes those changes to PostgreSQL after the migration finishes).

Reducing Migration Time

This left us with the option to reduce migration time by improving the speed of the data transfer. The solution to this problem was ora_migrator, a plugin based on our Foreign Data Wrapper for Oracle. But this solution had some disadvantages:

Getting it Right

Looking back at what features we implemented, the why and the how, we noticed another motivation for the existence of our Migrator: convenience.

We do not like to do repetitive, boring stuff. We want to concentrate on the challenges rather than the kinds of mundane tasks we had to do when we used the existing tooling.

And since we are faced with migrating databases to PostgreSQL on a daily basis, we want to have tools which get the boring stuff out of the way. We want to be productive where it matters, so we need tools which provide a working solution.

Don’t get me wrong, using vim (as everyone knows, the only true editor) makes me productive. But switching between different terminals, exporting code to the file system, checking log files on another machine, and … “what state was the migration in before I left for coffee?” All wrapped up in a plethora of shell and SQL scripts, which at the end may fail because the production system differs from the test system! That’s not productive.

The raison d'être of the CYBERTEC Migrator is that customers do not have to know how to map Oracle data types to PostgreSQL or all the details explained in the technical guide on how to migrate from Oracle to PostgreSQL.

Easy, Fast Migration

If you want a tool which frees your brain's cognitive capacity from the boring stuff, so you can spend it on porting the database over to PostgreSQL, that's what the CYBERTEC Migrator is: the easiest way to migrate to PostgreSQL - in a professional way.

And by the way - it is also the fastest way to PostgreSQL!

So why would you give such a tool away for free?

We wanted to provide people and organisations with small budgets an easy way to migrate their databases from Oracle Express and/or Personal Edition to PostgreSQL. We even bundle an hour of free consulting, to provide them with the impetus to migrate to PostgreSQL.

The CYBERTEC Migrator Professional and Enterprise Editions are meant for customers with stringent database SLAs and high availability requirements. More details about the differences between the editions can be found on the CYBERTEC Migrator web page.

Easy Installation

First you'll see how to install the CYBERTEC Migrator Standard Edition. Then, we'll walk you through the migration of Oracle's HR demo database to PostgreSQL.

The CYBERTEC Migrator is a so-called web application, which means its main functionality, the Migrator core, runs on a server. The user configures and controls the database migration via a graphical user interface that runs on a web browser.

As we mentioned above, one of the hassles we had with existing tools was a complicated installation process. The CYBERTEC Migrator is deployed as a set of container images. CYBERTEC provides a public GitHub repository (cybertec_migrator) which facilitates the installation process. We use Docker Compose to configure and orchestrate the containers. The images for the Migrator Professional and Enterprise Edition are pulled from the Docker Hub container registry.

As a side note - we also have customers who run the Migrator on Kubernetes in their private cloud, mainly on OpenShift.

For an installation on an air-gapped system, we provide an archive file which contains the git repository mentioned above, including the container images. The CYBERTEC Migrator Standard Edition is available as an archive file; visit this CYBERTEC Migrator link to get it.

Requirements for running the CYBERTEC Migrator Standard Edition

Desktop systems like Docker Desktop for Mac and Windows include Docker Engine and Docker Compose as part of those desktop installs. If you use an MS Windows desktop system, install Docker for Desktop with the WSL 2 backend.

Git comes installed with most Windows Subsystem for Linux distributions. Should this not be the case, open a shell in Ubuntu/Debian and install it with the command sudo apt-get install git.

Assuming all preconditions are met, follow the instructions provided in the offline installation section on the Migrator's installation page:

  1. Extract the downloaded archive file, change into the extracted directory
  2. Generate the default configuration
  3. Import the container images from the archive file
  4. Start the Migrator

Note: the Migrator version and the hashes may differ from the file you downloaded.

Host name

orcus is the name of the host in our setup; it will differ in your installation. Open the web browser of your choice and visit the URL shown in the terminal, in this case http://orcus. If you can't reach the Migrator at the URL provided due to name resolution problems in your Docker setup, try http://localhost, which should work.

You probably are asking yourself, “What? HTTP is not secure!”

For the sake of simplicity we do not use HTTPS in this demo run. It would just complicate the setup and configuration process and not contribute to the goal of this article. In a production environment, you would configure the NGINX server providing the Web GUI to use your SSL certificates.

CYBERTEC Migrator Standard Edition Dashboard

If all went well, you should see the Migrator Dashboard, which shows the CPU and RAM status of the host where the Migrator is running. The dashboard also displays the most recently changed migrations. Since we haven't created one yet, the page is empty.

CYBERTEC Migrator empty dashboard

In case the Migrator fails to start with the error Bind for 0.0.0.0:80 failed: port is already allocated, you have to change the default port configuration, since the default port 80 is already in use. Change the value of the environment variable EXTERNAL_HTTP_PORT in the .env file to an unused port.

Migrator installed and running - check.

Set up Demo Environment for the CYBERTEC Migrator Standard Edition

For the demo, we need an Oracle database containing the HR sample schema and a PostgreSQL database. In case you have access to both, you are set up for the demo and you can skip to the next section, Database Users and Privileges.

If you don't have access to both database servers, we provide a git repository containing a helper script demo-env which facilitates the setup of the demo environment. Clone the git repository of the demo database environment:

The following command will start a PostgreSQL server demo_db providing a demo database owned by the demo user, having the password demo. Easy to remember, right?

In case you get an error because the port is already in use, change the environment variable EXTERNAL_DEMO_POSTGRES_PORT in the demo/.env file.

Source Database

All we are missing now is the Oracle source database with the HR schema. The following command pulls Oracle's Express Edition from Oracle's Container Registry. Be aware that you need around 5 GB for the Oracle container. And yes, Oracle is bigger and slower than an elephant.

We have to wait until Oracle is up and running, showing the status healthy.

Once again, in case the Oracle default port 1521 is already occupied, the command above will fail. You know the drill: adjust the environment variable EXTERNAL_DEMO_ORACLE_PORT in the demo/.env file.

The output shows the mapped ports for the various services:

Neither of the database services started by the demo environment uses volume containers. In case you're not familiar with containers: this means the data is not persisted. On restarting the demo environment, you end up with an empty PostgreSQL demo database.

After you finish the demo, you can shut down the demo environment with

Database Users and Privileges in the CYBERTEC Migrator Standard Edition

The Oracle database user we are going to employ needs read-only privileges. No changes to the source database are necessary when migrating the database:

The demo environment from the previous chapter provides a user "migrator" with the above privileges.

For the target database, the CYBERTEC Migrator expects a PostgreSQL database and a user with the CREATE privilege on that database. The demo user provided with the database demo environment was created with the following statements:
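Presumably something like this (reconstructed from the description of the demo environment above):

    CREATE USER demo PASSWORD 'demo';
    CREATE DATABASE demo OWNER demo;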

If you use the demo environment, we are ready to go.

Migrating Oracle HR

So, the Migrator is installed, the demo environment is set up. Let's migrate Oracle’s HR demo database to PostgreSQL. We start by creating a migration.

Use the CYBERTEC Migrator Standard Edition to Create a Migration

On the left side of the Dashboard, select MIGRATION to change to the migrations page. Select the “ADD NEW MIGRATION” button, which opens the wizard for creating a migration.

Provide the information for the Source Connection

In case you don't want to use the database demo environment, you will need to adjust the values to your environment.

Selecting CHECK performs a connectivity check from the Migrator core with the source connection provided. If the connection is successful, we can proceed to the Schema Selection with NEXT.

Create Migration: Source Connection - Migrator

In case the connectivity check fails, an error message is displayed. Check the connection string and your credentials. In case the database user is missing privileges, they are shown.

Select the Schemas

Let’s select the Oracle HR demo schema we want to migrate, and proceed with NEXT to provide the Target Connection.

Create Migration - Schema Selection

Provide the Target Connection

Now we have to provide the connection to the target database and check the connectivity:

Create Migration - Target Connection - CYBERTEC Migrator

On selecting NEXT, the Migrator reads the meta-data from the Oracle system catalogs. Assuming we don't encounter an unexpected error (yes, sometimes Oracle databases have a screwed-up system catalog), the Migrator persists the meta-data of the source database and creates a migration configuration based on it.

The migration configuration is later used to migrate the data structure to PostgreSQL. It takes care of mapping data types, table structure, constraints and default values, adjusting indexes and much more.

The creation of the migration configuration is the result of the cumulative knowledge gathered over years of migrating hundreds of databases to PostgreSQL.

Create Migration - Analyze - CYBERTEC Migrator

As you see in the screenshot above, one of the few features the Standard Edition does not contain is the migration of partitioned tables. The Standard Edition will migrate partitioned tables as normal tables.

Selecting NEXT creates a migration for the HR schema, and we end up on the migration overview page.

What is a Migration composed of?

When we create a migration, it is composed of a set of data:

Note that once a migration is created, you cannot change the source database, the target database, or the meta-data of the source database. This is an important concept which has an impact on how migrations are rolled out to production systems. The life cycle of a migration and how it is used merits its own article.

Migration Page of the CYBERTEC Migrator Standard Edition

The screenshot below shows the landing page of the newly created migration for the HR schema.

Migration - Landing Page HR Schema Overview - CYBERTEC Migrator

The migration page is divided into four areas:

For now, let's take a detailed look at the overview page.

The CYBERTEC Migrator's Overview Tab

The Overview tab shows analytical information about the source database. The left-hand side shows a table with the database objects grouped by object type, along with their cardinality, the stage in which they are going to be migrated (more about stages in the next sections), and the migration order. The right-hand side of the overview page provides information about the column data types and their cardinality. In both tables, we can drill down to get more specific information.

Our PostgreSQL experts are able to use the Overview page to make a rough assessment of how much effort it will take to migrate the database.

Wait, what? Do we need an expert to make the assessment?

Don’t worry - it is on our roadmap and we are currently working on it. The migration assessment will include our 20 years of knowledge of how to migrate databases to PostgreSQL, including an analysis of the PL/SQL code. Coming soon!

If you want to know what we are currently working on, check out the CYBERTEC Migrator Feature Roadmap at the end of this article. But enough about “what's going to be” - let’s start with the migration.

Just Migrate

Selecting the Stages tab will lead you to the heart of our Migrator. This is the place where you run, change, resume and replay a migration.

Migration Stages Tab - CYBERTEC Migrator

Before we start with the migration of the HR schema, a word about migration stages. The CYBERTEC Migrator divides the execution of a migration into so-called migration stages, natural synchronization points in the migration process:

  1. Structure: creates the database objects which hold data, such as schemas and tables
  2. Data: transfers the table data from Oracle to PostgreSQL
  3. Integrity: creates constraints and indexes
  4. Logic: creates views, procedures, functions and triggers

Now for a demonstration...

Select the START button, which - you guessed right - starts the migration. Starting a migration job will automatically open the Log view and follow the log messages showing what's currently happening. Migrating the HR schema, we end up with an error at the Logic stage.

Migration Logic Stage with Error - CYBERTEC Migrator

In the default settings, starting a migration executes all the stages until it either succeeds with a complete migration or hits the first error. In our example, the Migrator executed the Structure, Data and Integrity stages successfully. However, it hit an error in the Logic stage when migrating procedure:”HR”.”ADD_JOB_HISTORY”. This means our PostgreSQL demo database already contains the migrated tables, with their data, constraints and indexes.

Data Migration

The HR schema contains only a few rows of data. This means the Data and Integrity stages are fast.

The takeaways here are:

The CYBERTEC Migrator Standard Edition transfers at most three tables in parallel. The same goes for creating indexes and foreign keys. If you need a faster migration, you can upgrade to the Migrator Professional Edition.

Jump to Errors

Back to our demo and the Log view.

PostgreSQL Migration detailed error message - CYBERTEC Migrator

Reading the Log view from the bottom up, we are informed about

Note that all DBOs (database objects) in the Log view are hyperlinked. Selecting the hyperlink for the procedure:”HR”.”ADD_JOB_HISTORY” opens the code editor for the stored procedure. Hyperlinking error messages to the DBO that caused them saves me at least a couple of weeks of work (and frustration) each year.

Code Editors Showing Errors

The screenshot below shows the code editor of the erroneous procedure.

Code Editor Showing Error - Migrator

At the top of the editor, we see the DBO's fully qualified name as provided in the source database, in our case HR > ADD_JOB_HISTORY. The code editor contains the PL/SQL extracted from the Oracle source database and visually marks the same syntax error reported by the migration.

So why this error? Anyone with experience porting code from Oracle PL/SQL to PL/pgSQL knows that the procedural SQL languages of different database technologies are not 100% compatible. So Oracle's PL/SQL code has to be translated to PL/pgSQL, PostgreSQL's procedural SQL language.

But Max, why doesn't the CYBERTEC Migrator translate PL/SQL code to PL/pgSQL automatically?

The answer is simple: our in-house PostgreSQL experts are really skeptical about automatic code conversion tools. This probably stems from the fact that at CYBERTEC, we mostly deal with databases with complex PL/SQL code. To achieve the quality of code translation we would like to have, we would need an Oracle PL/SQL parser - which, you guessed right, is on our feature roadmap.

That’s why our experts use the Migrator Search and Replace feature.

What to do about it for the time being?

For now, let’s change the code in the editor by replacing “IS BEGIN” with “LANGUAGE SQL AS $$” and “END add_job_history;” with “$$” (without double quotes). Selecting SHOW DIFFERENCE shows the difference between the original Oracle code and the changed procedure code.

PostgreSQL Migration code error with diff - CYBERTEC Migrator

What you do not see is what happens behind the scenes: in the background, the CYBERTEC Migrator Standard Edition validates the content of the code editor against the target database. It drops and recreates the procedure, and in case of an error, shows it in the editor. Don't worry about unwanted changes: thanks to PostgreSQL's transactional DDL, the validation is performed in a transaction which is rolled back.

The background validation may not be of much use for a stored procedure, but it saves a lot of time when adapting views: it prevents unnecessary round trips due to typos in column or table names.

Save the changes by selecting the SAVE button, which persists the changes we just made in the Migrator's internal database. Then leave the editor and go back to the "Stages" tab using the control at the top of the page, near the migration name.

So how do we proceed?

The migration controls provide us with the following options:

PostgreSQL Migration - Resume migration with controls - CYBERTEC Migrator

We want to resume the migration where it stopped due to the first error. To avoid going through the “resume, show error, fix error” cycle over and over again, we disable “Abort stage on first error”. This will execute the stage to the end and report all detected errors.

Select RESUME. This will create the fixed procedure, but leave us with errors in another procedure, a view and a trigger.

PostgreSQL Migration - show all errors - CYBERTEC Migrator

Let's fix the trigger: select the link in the error log line of trigger:”HR”.”UPDATE_JOB_HISTORY”, which opens the code editor.

Search and Replace

An Oracle trigger is migrated to PostgreSQL by wrapping the trigger code into a PostgreSQL function which is called by the trigger. The function name matches the trigger name.

The screenshot below shows an error near the call to add_job_history.

PostgreSQL Migration - syntax error - CYBERTEC Migrator

PostgreSQL is picky when calling a function with no return value. So let’s change the code to PERFORM add_job_history. This will lead to the editor complaining about a syntax error at or near “:”.

PostgreSQL Migration - syntax error trigger - CYBERTEC Migrator

:old and :new are variables unknown to PostgreSQL triggers. Since they show up multiple times, we are going to use the Search and Replace functionality. At the bottom of the migration window, open the Search view.

We are going to use regular expressions to replace all occurrences of :old with old and of :new with new.

Select the "regex" mode by selecting .* and search for :(new|old). This uses regex capture groups to capture the parameter $1 which we use as a replacement, removing the colon.

PostgreSQL Migration - Search and Replace - CYBERTEC MIgrator

In the HR schema, we have only one trigger function, so there is not much to replace. Note that the scope of the Search and Replace operation can be tweaked on the right-hand side. The Search and Replace feature deserves its own article.

Selecting Replace All fixes the rest of the errors in the trigger function.

Fix Remaining Errors

Let’s fix the remaining errors. Switch back to the Log view and select the view:"HR"."EMP_DETAILS_VIEW" link, which opens the code editor with the erroneous view. Scroll to the bottom and remove the last line WITH READ ONLY, which has no equivalent in PostgreSQL.

I already mentioned the code validation performed by the migrator against the database. Try to change a table name in the view and see what happens.

PostgreSQL Migration - code validation - CYBERTEC Migrator

Undo the change to the table name (CTRL-z) so the view works again, and select the last erroneous DBO, procedure:"HR"."SECURE_DML", in the Log view.

Let's assume we are not interested in migrating this procedure, and exclude it from the migration by enabling Exclude in the editor's upper right corner. The sidebar now shows the procedure SECURE_DML as excluded.

PostgreSQL Migration - resume with floating control - CYBERTEC Migrator

Now we SAVE our changes and resume the execution. Try the floating migration control on the right side of the screen to resume the migration; it is handy when you don't want to return to the Stages tab.

Congratulations - you just migrated your first database using the CYBERTEC Migrator. If you go back to the migration page all your stages will show up green.

PostgreSQL Migration - Successful - CYBERTEC Migrator

If needed, you can download the content of the Log view for audit purposes. Yes, some customers need this.

Additional Features of the CYBERTEC Migrator Standard Edition

If you reached this part of the article and you followed all the steps - kudos.

One important part is left to demonstrate: the table editor. Selecting one of the tables in the sidebar opens the table editor. This is the place where you can change the database objects which end up in PostgreSQL.

PostgreSQL Migration - additional features table editor - CYBERTEC Migrator

There you can exclude individual columns or the whole table, and change the data type, the nullable attribute and default values. Constraints, indexes, triggers and partitions can be adapted as well.

PostgreSQL Migration - select from Table editor - CYBERTEC Migrator

There are a couple of features left to mention. These will be featured in future articles:

CYBERTEC Migrator Feature Roadmap

We at CYBERTEC are constantly improving the Migrator. The priority of the features included depends on customer feedback and the migration projects CYBERTEC is currently working on.

Our paying customers have access to the monthly feature releases of the Migrator.

Here's just a teaser of what we have planned for 2023:

How many of these features we can implement, and in which order, depends on the resources we can reserve.

Download the CYBERTEC Migrator Standard Edition FREE

If you want to help the PostgreSQL community, download the free Migrator, try it out, provide feedback and spread the word about a new tool to migrate databases to PostgreSQL.

Download the CYBERTEC Migrator Standard Edition: https://www.cybertec-postgresql.com/en/products/cybertec-migrator/#form



I've recently seen some really broad tables (hundreds of columns) in a somewhat inefficiently structured database. Our PostgreSQL support customer complained about strange runtime behavior which could not be easily explained. To help other PostgreSQL users in this same situation, I decided to reveal the secrets of a fairly common performance problem many people don’t understand: Column order and column access.

Creating a large table

The first question is: How can we create a table containing many columns? The easiest way is to simply generate the CREATE TABLE statement using generate_series:
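A sketch of the idea (the table and column names are our own):

    SELECT 'CREATE TABLE t_broad ('
           || string_agg('t_' || x || ' varchar(10) DEFAULT ''a''', ', ')
           || ')'
    FROM generate_series(1, 4) AS x;
    \gexec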

For the sake of simplicity I have only used 4 columns here. Once the command has been generated, we can use \gexec to execute the string we have just compiled. \gexec is a really powerful feature: it treats the previous result as SQL input, which is exactly what we want here. It leaves us with a table containing 4 columns.

However, let's drop the table and create a really large one.


Create an extremely wide table

The following statement creates a table containing 1500 columns. Mind that the upper limit is 1600 columns:
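Reusing the generator query from above:

    DROP TABLE t_broad;

    SELECT 'CREATE TABLE t_broad ('
           || string_agg('t_' || x || ' varchar(10) DEFAULT ''a''', ', ')
           || ')'
    FROM generate_series(1, 1500) AS x;
    \gexec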

In real life, such a table is far from efficient and should usually not be used to store data. It simply creates too much overhead, and in most cases it is not good modelling in the first place.

Let's populate the table and add 1 million rows:
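A sketch, relying on the assumed column names and on the default values for all the remaining columns:

    INSERT INTO t_broad (t_1)
    SELECT 'a' FROM generate_series(1, 1000000);

    VACUUM ANALYZE t_broad;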

Note that the table has default values, so we can be sure that those columns actually contain something. Finally, I have executed VACUUM to make sure that all hint bits and the like are set.

The table we have just created is roughly 4 GB in size which can easily be determined using the following line:
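For example:

    SELECT pg_size_pretty(pg_total_relation_size('t_broad'));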

Accessing various columns

PostgreSQL stores data in rows. As you might know, data can be stored column- or row-oriented. Depending on your use case, one or the other option might be beneficial. In the case of OLTP, a row-based approach is usually far more efficient.

Let's do a count(*) and see how long it takes:
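With timing enabled in psql:

    \timing on

    SELECT count(*) FROM t_broad;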

We can run the query in around 400 ms which is quite ok. As expected, the optimizer will go for a parallel sequential scan:
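The plan can be inspected as usual (output omitted here):

    EXPLAIN SELECT count(*) FROM t_broad;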

Let's compare this to a count on the first column. You'll see a small difference in performance. The reason is that count(*) has to check for the existence of the row, while count(column) has to check whether a NULL value is fed to the aggregate, since NULL values have to be ignored:
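Using the assumed column naming from our generator query:

    SELECT count(t_1) FROM t_broad;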

But what happens if we access column number 100? The time to do that differs significantly:
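Again with the assumed column names:

    SELECT count(t_100) FROM t_broad;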

The execution time has basically doubled. The performance is even worse if we do a count on column number 1000:
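And for column number 1000:

    SELECT count(t_1000) FROM t_broad;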

Wow, we are already 20 times slower than before. This is not a small difference but a major problem which has to be understood.

Debunking PostgreSQL performance issues: column order

To understand why the problem happens in the first place, we need to take a look at how PostgreSQL stores data: after the tuple header, which is present in every row, we have a couple of varchar columns. We used varchar here just to prove the point. The same issue happens with other data types - the problem is simply more apparent with varchar, as it is internally more complicated than, say, integer.

How does PostgreSQL access a column? It fetches the row and then dissects the tuple to calculate the position of the desired column inside the row. So if we want to access column #1000, we have to figure out how long the first 999 columns before our chosen one really are. This can be quite complex. For integer we simply have to add 4, but in the case of varchar, the operation turns into something really expensive. Let's inspect how PostgreSQL stores varchar (just to see why it is so expensive):

Now imagine what that means if we need to loop over 1000 columns? It does create some non-trivial overhead.

Finally …

The key insight here is that extremely wide tables are often not beneficial from a performance standpoint. It makes sense to use a sensible table layout which strikes a good compromise between performance and convenience.

If you are interested in other ways to improve performance, read my blog on CLUSTER.


Bonus cards, “Miles & more”, bonus points - don’t we all love and hate them at the same time? Recently we had an interesting use case which made me think about sharing some of the techniques we used in this area to reduce client code by writing some clever SQL. This post will show you how to efficiently code bonus programs in SQL.

Getting started coding bonus programs in SQL

Suppose we want to run a bonus program, and we want to know how many bonus points somebody had at any given point in time. This is how we might want to store the data:
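A minimal sketch of such a table (the column names are our assumptions):

    CREATE TABLE t_bonus_card (
       card_number   text NOT NULL,
       d             date,
       points        integer
    );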


For each bonus card, we want to store how many points were awarded when. So far this is relatively easy. Let's load some sample data:
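The following sample data is hypothetical; it is shaped so that one card has five entries within the six-month window ending on June 7th, matching the discussion below:

    INSERT INTO t_bonus_card (card_number, d, points)
    VALUES ('A', '2022-01-11', 10),
           ('A', '2022-02-07', 30),
           ('A', '2022-03-22', 20),
           ('A', '2022-05-15', 15),
           ('A', '2022-06-07', 20),
           ('A', '2022-09-03', 30),
           ('A', '2023-01-20', 10),
           ('B', '2022-04-04', 25),
           ('B', '2022-07-11', 40),
           ('B', '2023-02-02', 15);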


In my example, we have data for two bonus cards which receive some rewards from time to time. To run our bonus program using PostgreSQL, we might want to answer some basic questions:

Let's answer these questions using…

Windowing functions and advanced frame clauses for bonus programs in SQL

To answer all these questions we can use windowing functions along with some advanced, fancy frame clauses. Let's take a look at a basic query:
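A sketch of such a query, restricted to a single card for now:

    SELECT *,
           array_agg(points) OVER (ORDER BY d
                                   RANGE BETWEEN '6 months'::interval PRECEDING
                                             AND CURRENT ROW) AS points_in_window
    FROM t_bonus_card
    WHERE card_number = 'A';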

What this does is simple: It goes through our data set line by line (sorted by date). Then it checks if there are rows between our current row and a value 6 months earlier. For debugging purposes, we aggregate those values into an array. What we see is that on June 7th we have 5 entries. But keep in mind: The rules of our bonus program say that points awarded are taken away after 6 months. By using a sliding window, we can easily achieve this goal.

Using RANGE

Note that in SQL we have “ROWS”, “RANGE” and “GROUPS” as possible keywords in our frame clause. ROWS means that we want to see a specific number of older rows in our frame. However, this makes no sense here - what we need is an interval, and this is exactly what RANGE can do for us. Rewards might be granted at random points in time, so we certainly need to operate with intervals here.

The array_agg function is really useful to debug things. However, in a real world scenario, we need to add up those numbers using sum:
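The same query with sum() instead of array_agg():

    SELECT *,
           sum(points) OVER (ORDER BY d
                             RANGE BETWEEN '6 months'::interval PRECEDING
                                       AND CURRENT ROW) AS current_points
    FROM t_bonus_card
    WHERE card_number = 'A';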


We can see that the points drop again in 2023. That's exactly what we wanted.

Windowing for bonus programs in SQL: PARTITION BY

Maybe you have noticed that we did the entire calculation for just one card number. However, what has to be done to make this work for any number of cards? The answer is PARTITION BY:
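A sketch of the full query:

    SELECT *,
           sum(points) OVER (PARTITION BY card_number, date_trunc('year', d)
                             ORDER BY d
                             RANGE BETWEEN '6 months'::interval PRECEDING
                                       AND CURRENT ROW) AS current_points
    FROM t_bonus_card;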

PARTITION BY card_number ensures that our calculations are done for each card_number separately. In other words: user A's points can no longer be mixed with user B's points. But there is more to this query: we want the points to be reset to zero at the beginning of every year, with counting starting over. We can achieve that with PARTITION BY as well: by truncating dates to full years, we can use the year as an additional partitioning criterion.

As you can see, SQL is really powerful. A lot can be done without having to write a single line of client code. A handful of SQL statements can produce terrific results, and it makes sense to leverage this power in your application.

Finally …

If you want to know more about PostgreSQL 15 and if you are interested in merging data, check out my post about MERGE - which can be found here.
