Why migrate to pg_timetable from pg_cron?

There are a lot of reasons you may want to migrate from pg_cron:

  • maybe you need to run a scheduler on a platform pg_cron doesn’t support;
  • or you want to run several parallel schedulers simultaneously;
  • it’s possible you’ve hit bugs in pg_cron that haven’t been fixed for ages (#64, #96);
  • or you are limited by the debugging facilities pg_cron provides;
  • it might be that you think the pg_cron project is obsolete and deprecated after its acquisition by Microsoft;
  • or maybe you need to implement a complex chain of tasks instead of a simple SQL statement.

There are many reasons, actually. Most of them can be spotted in the PostgreSQL schedulers comparison table I introduced in my previous post, and in the supported-platforms table from the official readme.


Prerequisites

  • You have installed the pg_cron extension in your database.
  • You’ve scheduled pg_cron jobs to export.
  • You have run pg_timetable against this database at least once, so that the timetable schema has been created automatically. It doesn’t matter whether you have already added any chains.
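Before going further, it may be worth a quick sanity check that both sides are in place. The queries below only use standard PostgreSQL catalogs; the schema name timetable is pg_timetable's default:

```sql
-- Confirm the pg_cron extension is installed in this database
SELECT extname FROM pg_extension WHERE extname = 'pg_cron';

-- Confirm the pg_timetable schema has been created
SELECT nspname FROM pg_namespace WHERE nspname = 'timetable';
```

If either query returns no rows, revisit the corresponding prerequisite first.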

Straightforward solution

If you want to quickly export scheduled jobs from pg_cron to pg_timetable, you can use this SQL snippet:

SELECT timetable.add_job(
    job_name            => COALESCE(jobname, 'job: ' || command),
    job_schedule        => schedule,
    job_command         => command,
    job_kind            => 'SQL',
    job_live            => active
) FROM cron.job;

The timetable.add_job() function, however, has some limitations. First, it marks the created task as autonomous, meaning the scheduler executes the task outside the chain transaction. That is not an error, but many autonomous chains may cause extra connections to be used.

Second, the database connection parameters of the source pg_cron jobs are lost, so all migrated jobs become local.
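After running the snippet above, you can inspect what was created. A rough check might look like this (assuming the default timetable schema; the exact column set may differ between pg_timetable versions):

```sql
-- List the chains created by timetable.add_job() together with their tasks
SELECT c.chain_id, c.chain_name, c.run_at, c.live, t.command
FROM timetable.chain c
JOIN timetable.task t ON t.chain_id = c.chain_id
ORDER BY c.chain_id;
```

Every exported pg_cron job should appear as one chain with a single SQL task.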

The proper pg_cron to pg_timetable migration script

To export every piece of information available as precisely as possible, run this SQL snippet under the same role the jobs were scheduled with in pg_cron:

SET ROLE 'scheduler'; -- set the role used by pg_cron

WITH cron_chain AS (
    SELECT
        nextval('timetable.chain_chain_id_seq'::regclass) AS cron_id,
        jobname, schedule, command, active,
        CASE WHEN
            database != current_database()
            OR nodename != 'localhost'
            OR username != CURRENT_USER
            OR nodeport != inet_server_port()
        THEN
            format('host=%s port=%s dbname=%s user=%s', nodename, nodeport, database, username)
        END AS connstr
    FROM cron.job
), cte_chain AS (
    INSERT INTO timetable.chain (chain_id, chain_name, run_at, live)
    SELECT cron_id, COALESCE(jobname, 'cronjob' || cron_id), schedule, active
    FROM cron_chain
), cte_tasks AS (
    INSERT INTO timetable.task (chain_id, task_order, kind, command, database_connection)
    SELECT cron_id, 1, 'SQL', command, connstr
    FROM cron_chain
    RETURNING chain_id, task_id
)
SELECT * FROM cte_tasks;

Let’s go line by line through this script together:

  1. SET ROLE will execute the code under the pg_cron user, the owner of scheduled tasks. It’s important because we will check the CURRENT_USER later in the code.
  2. In the first CTE, cron_chain, we prepare everything we need from the cron.job system table. If any connection parameter differs from its default value, we assume the job is configured against a remote database, and we compose a standard PostgreSQL connection string for it.
  3. The second CTE, cte_chain, is simple enough. The only thing to watch here is that the chain name must not be an empty string: pg_cron allows job names to be omitted, but pg_timetable is stricter about that.
  4. The rest of the statement is absolutely straightforward. Just add information about the command to execute and the connection string to the remote database if needed.
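Once the migration has been verified, you will probably want to stop pg_cron from running the same commands in parallel. A cautious approach is to deactivate the old jobs first and delete them later; note that directly updating cron.job and calling cron.unschedule() are both shown in the pg_cron readme, but double-check against your pg_cron version:

```sql
-- Deactivate the migrated jobs in pg_cron without deleting them yet
UPDATE cron.job SET active = false;

-- Later, once you are confident, remove them entirely
SELECT cron.unschedule(jobid) FROM cron.job;
```

This way you keep a rollback path until pg_timetable has run the chains successfully for a while.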


pg_timetable v4.7

We have a new official v4.7 release available. The full changelog can be found on the v4.7 release page. We want to thank all contributors and users for their help.

If you want to contribute to pg_timetable and help make it better, check out the project on GitHub.

In conclusion, I wish you all the best! ♥️

One more thing: I want to thank everyone in the Cybertec and PostgreSQL communities for the support you have provided to Ukrainians, and to me personally! #StandWithUkraine 💙💛

I hope we can meet in person at one of the conferences, meetups, or training sessions!

Please leave your comments below. To receive regular updates on important changes in PostgreSQL, subscribe to the CYBERTEC newsletter, or follow us on Twitter, Facebook, or LinkedIn.