The importance of backups is universally acknowledged in our digital world. One of my esteemed former colleagues once told me that he can't even think of a database system without backups, and that perspective has stayed with me over the years. Today, in our CNPG series, we will take a look at the backup solution and at how to restore a database from a backup.
Backing up PostgreSQL in a Kubernetes-native way is easy with the CloudNativePG (CNPG) operator and the Barman Cloud plugin. For the backups we will use Barman, and for storage, MinIO's S3-compatible object store, giving us a fully cloud-native environment.
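This post assumes a MinIO instance is already running inside the cluster. For completeness, here is a minimal sketch matching the environment used below (a single minio pod plus a NodePort service in the cnpg-system namespace, which is where the endpointURL used later points); the image and the MINIO_ROOT_USER/MINIO_ROOT_PASSWORD values are assumptions for illustration, not part of the original setup:

apiVersion: v1
kind: Pod
metadata:
  name: minio
  namespace: cnpg-system
  labels:
    app: minio
spec:
  containers:
    - name: minio
      image: quay.io/minio/minio
      # single-node server with ephemeral storage; fine for a demo
      args: ["server", "/data"]
      env:
        - name: MINIO_ROOT_USER
          value: minio
        - name: MINIO_ROOT_PASSWORD
          value: minio123
      ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: cnpg-system
spec:
  type: NodePort
  selector:
    app: minio
  ports:
    - port: 9000
      targetPort: 9000
      nodePort: 30000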
First, let’s check the version of the CNPG operator to ensure it supports the Barman Cloud plugin:
kubectl get deployment -n cnpg-system cnpg-controller-manager -o yaml | grep ghcr.io/cloudnative-pg/cloudnative-pg
        value: ghcr.io/cloudnative-pg/cloudnative-pg:1.26.0
        image: ghcr.io/cloudnative-pg/cloudnative-pg:1.26.0
At the moment, CNPG officially supports Barman Cloud as a plugin. That is why we need to apply the official plugin manifest:
kubectl apply -f https://github.com/cloudnative-pg/plugin-barman-cloud/releases/download/v0.5.0/manifest.yaml
…barman-cloud-gt85cmh99d created
service/barman-cloud created
. . .
… in version "cert-manager.io/v1" ensure CRDs are installed first
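The last line is an error: the plugin manifest ships cert-manager resources, so cert-manager has to be installed in the cluster first. A sketch using the upstream release manifest (the version here is just an example; pick a current release):

# install cert-manager, which the plugin manifest depends on
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.17.2/cert-manager.yaml

Once cert-manager is up, re-apply the plugin manifest from above.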
You’ll see a number of resources created, including:
- the ObjectStore CRD
- the barman-cloud deployment

Check the rollout status:
kubectl rollout status deployment -n cnpg-system barman-cloud
deployment "barman-cloud" successfully rolled out
Here’s the full manifest used for a single-instance PostgreSQL cluster with plugin-based backups using MinIO:
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example-backup
spec:
  instances: 1
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: minio-store
  storage:
    storageClass: csi-hostpath-sc
    size: 1Gi
---
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: minio-store
spec:
  configuration:
    destinationPath: s3://backups/
    endpointURL: http://minio.cnpg-system.svc.cluster.local:9000
    s3Credentials:
      accessKeyId:
        name: minio
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: minio
        key: ACCESS_SECRET_KEY
    wal:
      compression: gzip
---
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: pg-backup-example
spec:
  cluster:
    name: cluster-example-backup
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io
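Note that the ObjectStore references a Secret named minio carrying the keys ACCESS_KEY_ID and ACCESS_SECRET_KEY, and creating it is not shown above. A minimal sketch, assuming the example credentials minio/minio123 and the same namespace as the cluster:

# create the credentials Secret the ObjectStore points at
kubectl create secret generic minio \
  --from-literal=ACCESS_KEY_ID=minio \
  --from-literal=ACCESS_SECRET_KEY=minio123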
It is worth noting that MinIO storage is defined and accessed in much the same way as S3. Also note that the barman-cloud plugin is responsible for WAL archiving as well as base backups (hence isWALArchiver: true). Let's apply it:
kubectl apply -f minio_backup.yaml
cluster.postgresql.cnpg.io/cluster-example-backup created
backup.postgresql.cnpg.io/pg-backup-example created
objectstore.barmancloud.cnpg.io/minio-store created
Note: only one plugin can be responsible for archiving at a time.
Verify all pods are running and confirm the backups:
kubectl get all
NAME                                          READY   STATUS    RESTARTS   AGE
pod/barman-cloud-fbf54687f-r45v9              1/1     Running   0          12m
pod/cluster-example-backup-1                  2/2     Running   0          55s
pod/cnpg-controller-manager-6848689f4-ct2t6   1/1     Running   0          13m
pod/minio                                     1/1     Running   0          10m

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/barman-cloud                ClusterIP   10.105.172.20    <none>        9090/TCP         12m
service/cluster-example-backup-r    ClusterIP   10.96.246.116    <none>        5432/TCP         73s
service/cluster-example-backup-ro   ClusterIP   10.102.74.15     <none>        5432/TCP         73s
service/cluster-example-backup-rw   ClusterIP   10.102.96.110    <none>        5432/TCP         73s
service/cnpg-webhook-service        ClusterIP   10.111.147.237   <none>        443/TCP          13m
service/minio                       NodePort    10.111.215.241   <none>        9000:30000/TCP   10m

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/barman-cloud              1/1     1            1           12m
deployment.apps/cnpg-controller-manager   1/1     1            1           13m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/barman-cloud-fbf54687f              1         1         1       12m
replicaset.apps/cnpg-controller-manager-6848689f4   1         1         1       13m

kubectl get backup
NAME                AGE   CLUSTER                  METHOD   PHASE     ERROR
pg-backup-example   67s   cluster-example-backup   plugin   started

kubectl get backup
NAME                AGE   CLUSTER                  METHOD   PHASE       ERROR
pg-backup-example   74s   cluster-example-backup   plugin   completed
During the verification, two things are worth emphasizing: the cluster pod reports 2/2 ready containers, because the plugin runs as a sidecar next to PostgreSQL, and the backup's PHASE moves from started to completed.
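Besides kubectl get backup, WAL archiving can be checked directly. A quick sketch, assuming the cnpg kubectl plugin is installed:

# the "Continuous Backup status" section should report a recently archived WAL
kubectl cnpg status cluster-example-backup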
Let's create a simple test database for recovery:
kubectl exec -it pod/cluster-example-backup-1 -- \
  psql -U postgres -c "CREATE DATABASE cybertec;"
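Optionally, to make the restore easier to verify later, we can also put a row of data into the new database. The restore_check table is a hypothetical helper, not part of the original walkthrough:

# create a marker table with a single row in the cybertec database
kubectl exec -it pod/cluster-example-backup-1 -- \
  psql -U postgres -d cybertec \
  -c "CREATE TABLE restore_check (id int); INSERT INTO restore_check VALUES (1);"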
We can create a backup schedule now. Here's how to run a backup every five seconds (for demo purposes only). Note that CNPG's schedule uses the Go cron format with six fields, where the leading field is seconds:
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: scheduled-backup-example
spec:
  schedule: "*/5 * * * * *"
  immediate: true
  backupOwnerReference: self
  cluster:
    name: cluster-example-backup
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io
Apply it:
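A sketch, assuming the ScheduledBackup manifest was saved as scheduled_backup.yaml (a hypothetical filename):

kubectl apply -f scheduled_backup.yaml

Backups now show up every five seconds: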
kubectl get backup
NAME                                      AGE   CLUSTER                  METHOD   PHASE       ERROR
pg-backup-example                         16m   cluster-example-backup   plugin   completed
scheduled-backup-example-20250611141840   19s   cluster-example-backup   plugin   completed
scheduled-backup-example-20250611141845   14s   cluster-example-backup   plugin   completed
scheduled-backup-example-20250611141850   9s    cluster-example-backup   plugin   completed
scheduled-backup-example-20250611141855   4s    cluster-example-backup   plugin   started
Here's the manifest for a new PostgreSQL cluster called cluster-restore (this is not an in-place recovery; CNPG always restores into a new cluster), which will be bootstrapped from the object store, minio-store in our case:
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-restore
spec:
  instances: 1
  storage:
    size: 1Gi
  bootstrap:
    recovery:
      source: origin
  externalClusters:
    - name: origin
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: minio-store
          serverName: cluster-example-backup
Key parts of this manifest:
- bootstrap.recovery.source: tells CNPG to recover from an external cluster called origin
- externalClusters: defines the recovery source; its serverName parameter points at the backups written to the object store by the original cluster, cluster-example-backup

Apply the restore configuration:
kubectl apply -f cluster_restore.yaml
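Recovery is not instantaneous: the base backup has to be pulled from the object store and the archived WALs replayed. A sketch of how to watch the new cluster come up:

# watch the cluster resource until STATUS reports "Cluster in healthy state"
kubectl get cluster cluster-restore -w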
To recover from a specific backup rather than the latest one, we could use:
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-restore
spec:
  instances: 1
  bootstrap:
    recovery:
      backup:
        name: scheduled-backup-example-20250611141850
  storage:
    size: 1Gi
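Since WALs are archived continuously, the same mechanism also allows point-in-time recovery. A sketch of the relevant bootstrap section with a made-up target timestamp, to be combined with the externalClusters definition from the earlier manifest:

  # replay WALs only up to the given point in time (timestamp is an example)
  bootstrap:
    recovery:
      source: origin
      recoveryTarget:
        targetTime: "2025-06-11 14:18:50.00000+00"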
Validate the cluster:
kubectl get all
NAME                                          READY   STATUS    RESTARTS      AGE
pod/barman-cloud-fbf54687f-r45v9              1/1     Running   1 (29m ago)   60m
pod/cluster-example-backup-1                  2/2     Running   1 (27m ago)   48m
pod/cluster-restore-1                         1/1     Running   0             48s
pod/cnpg-controller-manager-6848689f4-ct2t6   1/1     Running   3 (29m ago)   61m
pod/minio                                     1/1     Running   0             58m

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/barman-cloud                ClusterIP   10.105.172.20    <none>        9090/TCP         60m
service/cluster-example-backup-r    ClusterIP   10.96.246.116    <none>        5432/TCP         48m
service/cluster-example-backup-ro   ClusterIP   10.102.74.15     <none>        5432/TCP         48m
service/cluster-example-backup-rw   ClusterIP   10.102.96.110    <none>        5432/TCP         48m
service/cluster-restore-r           ClusterIP   10.109.128.167   <none>        5432/TCP         94s
service/cluster-restore-ro          ClusterIP   10.96.8.79       <none>        5432/TCP         94s
service/cluster-restore-rw          ClusterIP   10.97.103.166    <none>        5432/TCP         94s
service/cnpg-webhook-service        ClusterIP   10.111.147.237   <none>        443/TCP          61m
service/minio                       NodePort    10.111.215.241   <none>        9000:30000/TCP   58m

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/barman-cloud              1/1     1            1           60m
deployment.apps/cnpg-controller-manager   1/1     1            1           61m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/barman-cloud-fbf54687f              1         1         1       60m
replicaset.apps/cnpg-controller-manager-6848689f4   1         1         1       61m
The cluster "cluster-restore" is up and running with its own services (rw, ro and r).
kubectl exec -it pod/cluster-restore-1 -- /bin/bash
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
postgres@cluster-restore-1:/$ psql
psql (17.5 (Debian 17.5-1.pgdg110+1))
Type "help" for help.

postgres=# \l
                                                List of databases
   Name    |  Owner   | Encoding | Locale Provider | Collate | Ctype | Locale | ICU Rules |   Access privileges
-----------+----------+----------+-----------------+---------+-------+--------+-----------+-----------------------
 app       | app      | UTF8     | libc            | C       | C     |        |           |
 cybertec  | postgres | UTF8     | libc            | C       | C     |        |           |
 postgres  | postgres | UTF8     | libc            | C       | C     |        |           |
 template0 | postgres | UTF8     | libc            | C       | C     |        |           | =c/postgres          +
           |          |          |                 |         |       |        |           | postgres=CTc/postgres
 template1 | postgres | UTF8     | libc            | C       | C     |        |           | =c/postgres          +
           |          |          |                 |         |       |        |           | postgres=CTc/postgres
(5 rows)
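The cybertec database is there, so the restore from the MinIO object store worked. If you also created the hypothetical restore_check table earlier, one final query confirms the data came back with the cluster:

postgres=# \c cybertec
cybertec=# SELECT * FROM restore_check;
-- expect the single row inserted earlier, provided its WAL was archived before the backup used for recovery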