UPDATED 21 March 2023: Tracking database changes and tracing users has always been a vitally important part of PostgreSQL database and application security. Especially when critical data is stored, you need to know who changed which data, when, and how.

Generic Changelog Triggers in PostgreSQL are the key to tracking changes

To track the changes made to tables in PostgreSQL, you can write a generic changelog trigger. The easiest way to do that is to write a simple PL/pgSQL function and use it for all tables in the system. As PostgreSQL provides good support for stored procedures, this is definitely not hard to do. In this post you will learn the easiest way to achieve that goal.

Create a table to store some history – tracking changes in PostgreSQL databases

First of all, we need a table to store the changes. For a quick prototype, we can use the following table structure:

CREATE SCHEMA logging;
 
CREATE TABLE logging.t_history (
        id             	serial,
        tstamp         	timestamp   	DEFAULT now(),
        schemaname     	text,
        tabname        	text,
        operation      	text,
        who            	text        	DEFAULT current_user,
        new_val        	jsonb,
        old_val        	jsonb
);

For the sake of simplicity, we didn't use enumerated types here (an enum would store the operation more efficiently than plain text). What is also important: the data we store should be in the most generic format possible, which is why we used the jsonb data type.
The point of this table is to keep track of all changes made to other tables. We want to know which operation has taken place. Whenever we insert, update or delete a row somewhere, we want to keep a record of that operation.
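If you do want the more compact representation, the operation column can use an enumerated type instead of text. Here is a minimal sketch of that variant (the type name and the alternative table name are our own, not part of the prototype above):

```sql
-- Hypothetical variant: store the operation as an enum instead of text.
CREATE TYPE logging.operation_type AS ENUM ('INSERT', 'UPDATE', 'DELETE');

CREATE TABLE logging.t_history_enum (
        id             serial,
        tstamp         timestamp   DEFAULT now(),
        schemaname     text,
        tabname        text,
        operation      logging.operation_type,  -- 4 bytes instead of variable-length text
        who            text        DEFAULT current_user,
        new_val        jsonb,
        old_val        jsonb
);
```

The enum also rejects any value other than the three listed operations, which adds a small safety net.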

The backbone of tracking changes infrastructure in PostgreSQL

Taking this into account, we come up with the following code:

CREATE FUNCTION change_trigger() RETURNS trigger AS $$
BEGIN
	IF      TG_OP = 'INSERT'
	THEN
		INSERT INTO logging.t_history (tabname, schemaname, operation, new_val)
			VALUES (TG_TABLE_NAME, TG_TABLE_SCHEMA, TG_OP, row_to_json(NEW));
		RETURN NEW;
	ELSIF   TG_OP = 'UPDATE'
	THEN
		INSERT INTO logging.t_history (tabname, schemaname, operation, new_val, old_val)
			VALUES (TG_TABLE_NAME, TG_TABLE_SCHEMA, TG_OP,
				row_to_json(NEW), row_to_json(OLD));
		RETURN NEW;
	ELSIF   TG_OP = 'DELETE'
	THEN
		INSERT INTO logging.t_history (tabname, schemaname, operation, old_val)
			VALUES (TG_TABLE_NAME, TG_TABLE_SCHEMA, TG_OP, row_to_json(OLD));
		RETURN OLD;
	END IF;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

The interesting part here is that the trigger function is totally generic: the row_to_json function can encode any table row into a JSON document, so we can use the very same trigger function for countless tables. NEW and OLD contain the row after and before the data modification, respectively.
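To get a feeling for what row_to_json produces, you can run it against any row. A quick sketch (the sample subquery is ours, any table works just as well):

```sql
-- Encode an arbitrary row as JSON; "t" is the row alias passed to row_to_json.
SELECT row_to_json(t) FROM (SELECT 1 AS id, 'hans' AS name) AS t;
--        row_to_json
-- {"id":1,"name":"hans"}
```

Because the columns in t_history are of type jsonb, the json value produced by row_to_json is cast automatically on insert.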
Once we have the backbone in place, we can test things using a table.

Here is an example:

CREATE TABLE t_trig (id int, name text);

Once we have created the table, we can deploy a trigger to do the real work:

CREATE TRIGGER t 
AFTER INSERT OR UPDATE OR DELETE ON t_trig
        FOR EACH ROW EXECUTE PROCEDURE change_trigger();

What is important here is that we are using an AFTER trigger. We have two choices: a BEFORE trigger fires before the row actually makes it into the table, while an AFTER trigger fires after the row has made it into the table.
The main risk here is that if multiple triggers exist on the same table, we have to make sure that the order of firing is correct: in PostgreSQL (since version 7.3), all BEFORE triggers fire in alphabetical order (of the trigger name), followed by all AFTER triggers, again in alphabetical order. This has to be taken into account when you have more than one trigger.
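A common way to control this ordering is to encode the desired position in the trigger name itself. A sketch of the idea (the trigger names and the second trigger function are our own convention, not part of the example above):

```sql
-- Both are AFTER triggers on the same table; "aa_audit" fires before
-- "zz_notify" because AFTER triggers fire in alphabetical order of name.
CREATE TRIGGER aa_audit
AFTER INSERT OR UPDATE OR DELETE ON t_trig
        FOR EACH ROW EXECUTE PROCEDURE change_trigger();

-- CREATE TRIGGER zz_notify
-- AFTER INSERT ON t_trig
--         FOR EACH ROW EXECUTE PROCEDURE some_other_trigger();
```

With a prefix scheme like this, the audit trigger reliably runs first, no matter what other triggers are added later.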

Let’s test the trigger and see what happens:

INSERT INTO t_trig VALUES (1, 'hans'); 
UPDATE t_trig SET id = 10 * id, name = 'paul';

Our history table will contain all changes we have made to the underlying tables:

-[ RECORD 1 ]--------------------------
id         | 1
tstamp     | 2023-02-27 10:25:37.530579
schemaname | public
tabname    | t_trig
operation  | INSERT
who        | hs
new_val    | {"id": 1, "name": "hans"}
old_val    | 
-[ RECORD 2 ]--------------------------
id         | 2
tstamp     | 2023-02-27 10:25:37.530579
schemaname | public
tabname    | t_trig
operation  | UPDATE
who        | hs
new_val    | {"id": 10, "name": "paul"}
old_val    | {"id": 1, "name": "hans"}
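A DELETE is logged in the same way, except that only old_val is filled in. A quick sketch of what you would see (the exact record is illustrative):

```sql
DELETE FROM t_trig WHERE id = 10;

-- The history table then gains a third record along the lines of:
-- operation  | DELETE
-- new_val    |
-- old_val    | {"id": 10, "name": "paul"}
```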

Security considerations when tracking changes in a PostgreSQL database

If we use this mechanism to track changes in order to protect our data, we have to make sure that somebody changing the data cannot tamper with the log as well. Therefore, we have to take precautions for this special case. One way to do this is to mark our trigger function as SECURITY DEFINER. This means that the function is not executed with the privileges of the user making the change to the table, but with the privileges of the user who owns the function. If the superuser deploys the trigger function, we can protect ourselves against evil action.
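In addition, it makes sense to lock down the logging schema itself, so that ordinary users can neither read nor rewrite the history directly. A minimal sketch (the role name app_user is hypothetical):

```sql
-- Only the function owner (e.g. a superuser) should touch the log directly.
REVOKE ALL ON SCHEMA logging FROM PUBLIC;
REVOKE ALL ON logging.t_history FROM PUBLIC;

-- If a dedicated application role exists, deny it direct access as well:
-- REVOKE ALL ON logging.t_history FROM app_user;
```

Since the SECURITY DEFINER trigger function writes to t_history with its owner's privileges, revoking access for everyone else does not break the logging itself.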

Finally …

Tracking changes to the database is an important issue for many people. We hope the code listed above is a blueprint which will help you get started quickly.

If you want to find out more about PostgreSQL, learn how to write SECURITY DEFINER functions securely – read this blog: https://www.cybertec-postgresql.com/en/abusing-security-definer-functions/

In order to receive regular updates on important changes in PostgreSQL, subscribe to our newsletter, or follow us on Twitter, Facebook, or LinkedIn.