In our CNPG series we have explored how to create clusters, take backups, connect to the cluster, and run psql commands. However, one might feel overwhelmed by all of those day-to-day operations. That is why CNPG provides a kubectl plugin. CloudNativePG's plugin enriches kubectl with a set of PostgreSQL-focused commands, making it easier to inspect clusters, trigger backups, promote a new instance, run pgbench, and run psql commands without leaving your terminal. Even though it is a pretty simple and straightforward topic, it is important for the completeness of our CNPG series.
There are different ways to install the plugin, but I found the installation script the easiest:
curl -sSfL \
  https://github.com/cloudnative-pg/cloudnative-pg/raw/main/hack/install-cnpg-plugin.sh | \
  sudo sh -s -- -b /usr/local/bin
cloudnative-pg/cloudnative-pg info checking GitHub for latest tag
cloudnative-pg/cloudnative-pg info found version: 1.27.0 for v1.27.0/linux/x86_64
cloudnative-pg/cloudnative-pg info installed /usr/local/bin/kubectl-cnpg
The plugin provides a variety of commands. The "--help" flag is useful for exploring the available commands. For example:
kubectl cnpg --help
If help is needed for a specific command:
kubectl cnpg promote --help
The install generate command generates a YAML manifest that is used to install the operator. This way, we can modify the default settings of the operator, such as the number of replicas and the installation namespace.
kubectl cnpg install generate -n cnpg-system --replicas 3 --watch-namespace "$my_namespace_2_watch" > install_operator.yaml
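The generated manifest can then be applied like any other Kubernetes manifest. A minimal sketch, assuming the file name from the command above and the server-side apply that the CloudNativePG documentation recommends for the operator manifest:

kubectl apply --server-side -f install_operator.yaml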
The status command shows us the current state of the respective cluster:
kubectl cnpg status cluster-example-backup

Cluster Summary
Name                 cnpg-system/cluster-example-backup
System ID:           7545128324982542354
PostgreSQL Image:    ghcr.io/cloudnative-pg/postgresql:17.5
Primary instance:    cluster-example-backup-1
Primary start time:  2025-09-01 14:31:40 +0000 UTC (uptime 10m22s)
Status:              Cluster in healthy state
Instances:           3
Ready instances:     3
Size:                174M
Current Write LSN:   0/9000100 (Timeline: 1 - WAL File: 000000010000000000000009)

Continuous Backup status
Not configured

Streaming Replication status
Replication Slots Enabled
Name                      Sent LSN   Write LSN  Flush LSN  Replay LSN  Write Lag  Flush Lag  Replay Lag  State      Sync State  Sync Priority  Replication Slot
----                      --------   ---------  ---------  ----------  ---------  ---------  ----------  -----      ----------  -------------  ----------------
cluster-example-backup-2  0/9000100  0/9000100  0/9000100  0/9000100   00:00:00   00:00:00   00:00:00    streaming  async       0              active
cluster-example-backup-3  0/9000100  0/9000100  0/9000100  0/9000100   00:00:00   00:00:00   00:00:00    streaming  async       0              active

Instances status
Name                      Current LSN  Replication role  Status  QoS         Manager Version  Node
----                      -----------  ----------------  ------  ---         ---------------  ----
cluster-example-backup-1  0/9000100    Primary           OK      BestEffort  1.27.0           minikube
cluster-example-backup-2  0/9000100    Standby (async)   OK      BestEffort  1.27.0           minikube
cluster-example-backup-3  0/9000100    Standby (async)   OK      BestEffort  1.27.0           minikube

Plugins status
Name                            Version  Status  Reported Operator Capabilities
----                            -------  ------  ------------------------------
barman-cloud.cloudnative-pg.io  0.6.0    N/A     Reconciler Hooks, Lifecycle Service
In this output we see general information about the cluster, the backup status, the current WAL position, and the replication status. Using the verbose flag, "-v" or "--verbose", we can also display tablespaces, managed roles, and unmanaged replication slots.
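For example, to include those extra details in the same report:

kubectl cnpg status cluster-example-backup --verbose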
If you suddenly need to take a backup of your CNPG cluster, the backup
command provides a quick and straightforward way to do so:
kubectl cnpg backup cluster-example-backup
backup/cluster-example-backup-20250901172658 created
Let's check the backup status:
kubectl get backup
NAME                                    AGE   CLUSTER                 METHOD             PHASE      ERROR
cluster-example-backup-20250901172658  26s   cluster-example-backup  barmanObjectStore  failed     cannot proceed with the backup as the cluster has no backup section
pg-backup-example                       36m   cluster-example-backup  plugin             completed
Oops, we have an error! Once we check the details of the backup, we know how to deal with it:
kubectl describe backup cluster-example-backup-20250901172658
.
.
.
Status:
  Error:   cannot proceed with the backup as the cluster has no backup section
  Method:  barmanObjectStore
  Phase:   failed
Events:
  Type     Reason                      Age    From                   Message
  ----     ------                      ----   ----                   -------
  Warning  ClusterHasBackupConfigured  2m39s  cloudnative-pg-backup  cannot proceed with the backup as the cluster has no backup section
The default backup method of the CNPG plugin is "barmanObjectStore". However, our backup method is "plugin", as we explained in our previous post. So, we need to specify the method we want to use:
kubectl cnpg backup cluster-example-backup -m plugin --plugin-name barman-cloud.cloudnative-pg.io
backup/cluster-example-backup-20250901173529 created

kubectl get backup
NAME                                    AGE    CLUSTER                 METHOD             PHASE      ERROR
cluster-example-backup-20250901172658  8m47s  cluster-example-backup  barmanObjectStore  failed     cannot proceed with the backup as the cluster has no backup section
cluster-example-backup-20250901173529  16s    cluster-example-backup  plugin             completed
pg-backup-example                       45m    cluster-example-backup  plugin             completed
The psql command provides a shortcut: instead of manually entering a pod and running psql, you can open a psql terminal directly:
kubectl cnpg psql cluster-example-backup
psql (17.5 (Debian 17.5-1.pgdg110+1))
Type "help" for help.
It also accepts the same arguments as psql:
kubectl cnpg psql --help
This command will start an interactive psql session inside a PostgreSQL Pod created by CloudNativePG.

Usage:
  kubectl cnpg psql CLUSTER [-- PSQL_ARGS...] [flags]
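Building on the usage line above, a one-off SQL statement can be passed after the -- separator; the query here is only an illustration:

kubectl cnpg psql cluster-example-backup -- -c 'SELECT pg_is_in_recovery();'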
Yes, you read that correctly. You can run a pgAdmin4 pod to manage and query your database if you prefer a GUI over the terminal:
kubectl cnpg pgadmin4 cluster-example-backup
ConfigMap/cluster-example-backup-pgadmin4 created
Deployment/cluster-example-backup-pgadmin4 created
Service/cluster-example-backup-pgadmin4 created
Secret/cluster-example-backup-pgadmin4 created

To access this pgAdmin instance, use the following credentials:
username: user@pgadmin.com
password: xxxxxxxxxxxxxxxx

To establish a connection to the database server, you'll need the password for
the 'app' user. Retrieve it with the following command:

kubectl get secret cluster-example-backup-app -o 'jsonpath={.data.password}' | base64 -d; echo ""

Easily reach the new pgAdmin4 instance by forwarding your local 8080 port using:

kubectl rollout status deployment cluster-example-backup-pgadmin4

Then, navigate to http://localhost:8080 in your browser.

To remove this pgAdmin deployment, execute:

kubectl cnpg pgadmin4 cluster-example-backup --dry-run | kubectl delete -f -
As we can see, the plugin creates a ConfigMap, Deployment, Service, and Secret for pgAdmin4. All we need to do next is run a port-forwarding command and then open localhost:8080 in the browser:
kubectl port-forward deployment/cluster-example-backup-pgadmin4 8080:80
In order to perform a switchover, use the promote command:
kubectl cnpg promote cluster-example-backup cluster-example-backup-2
{"level":"info","ts":"2025-09-01T22:33:30.620145121+02:00","msg":"Cluster has become unhealthy"}
Node cluster-example-backup-2 in cluster cluster-example-backup will be promoted
As shown in the status command output above, the primary pod was the first instance. Let's check whether the promotion was successful:
kubectl cnpg status cluster-example-backup

Cluster Summary
Name                 cnpg-system/cluster-example-backup
System ID:           7545133281391910931
PostgreSQL Image:    ghcr.io/cloudnative-pg/postgresql:17.5
Primary instance:    cluster-example-backup-2
Primary start time:  2025-09-01 20:33:43 +0000 UTC (uptime 27s)
Status:              Cluster in healthy state
Instances:           3
Ready instances:     3
Size:                191M
Current Write LSN:   0/B0066A8 (Timeline: 2 - WAL File: 00000002000000000000000B)

Continuous Backup status
Not configured

Streaming Replication status
Replication Slots Enabled
Name                      Sent LSN   Write LSN  Flush LSN  Replay LSN  Write Lag  Flush Lag  Replay Lag  State      Sync State  Sync Priority  Replication Slot
----                      --------   ---------  ---------  ----------  ---------  ---------  ----------  -----      ----------  -------------  ----------------
cluster-example-backup-1  0/B0066A8  0/B0066A8  0/B0066A8  0/B0066A8   00:00:00   00:00:00   00:00:00    streaming  async       0              active
cluster-example-backup-3  0/B0066A8  0/B0066A8  0/B0066A8  0/B0066A8   00:00:00   00:00:00   00:00:00    streaming  async       0              active

Instances status
Name                      Current LSN  Replication role  Status  QoS         Manager Version  Node
----                      -----------  ----------------  ------  ---         ---------------  ----
cluster-example-backup-2  0/B0066A8    Primary           OK      BestEffort  1.27.0           minikube
cluster-example-backup-1  0/B0066A8    Standby (async)   OK      BestEffort  1.27.0           minikube
cluster-example-backup-3  0/B0066A8    Standby (async)   OK      BestEffort  1.27.0           minikube

Plugins status
Name                            Version  Status  Reported Operator Capabilities
----                            -------  ------  ------------------------------
barman-cloud.cloudnative-pg.io  0.6.0    N/A     Reconciler Hooks, Lifecycle Service
The logs of the pod also confirm the switchover:
kubectl logs cluster-example-backup-2 -n cnpg-system
{"level":"info","ts":"2025-09-01T20:33:30.646566287Z","msg":"Setting myself as primary","logger":"instance-manager","logging_pod":"cluster-example-backup-2","controller":"instance-cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"cluster-example-backup","namespace":"cnpg-system"},"namespace":"cnpg-system","name":"cluster-example-backup","reconcileID":"109d24eb-0e4b-490a-bd4e-72428697f0b8","logging_pod":"cluster-example-backup-2","phase":"Switchover in progress","currentTimestamp":"2025-09-01T20:33:30.646208Z","targetPrimaryTimestamp":"2025-09-01T22:33:30.600215+02:00","currentPrimaryTimestamp":"2025-09-01T14:50:54.530881Z","msPassedSinceTargetPrimaryTimestamp":45,"msPassedSinceCurrentPrimaryTimestamp":20556115,"msDifferenceBetweenCurrentAndTargetPrimary":-20556069}
.
.
.
{"level":"info","ts":"2025-09-01T20:33:42.803117663Z","logger":"pg_ctl","msg":"waiting for server to promote.............. done","pipe":"stdout","logging_pod":"cluster-example-backup-2"}
{"level":"info","ts":"2025-09-01T20:33:42.803139437Z","logger":"pg_ctl","msg":"server promoted","pipe":"stdout","logging_pod":"cluster-example-backup-2"}
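As a quicker alternative to reading the full status report or the logs, the current primary can also be read straight from the Cluster resource; this sketch assumes the status.currentPrimary field exposed in the operator's Cluster status:

kubectl get cluster cluster-example-backup -n cnpg-system -o jsonpath='{.status.currentPrimary}'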
The publication command is used to create or drop a logical replication publication. A key point to remember is that when creating a publication on a remote cluster, an external cluster must be defined in the cluster specification. For example:
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example-backup
spec:
  instances: 3
  imagePullPolicy: Always
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: minio-store
  storage:
    storageClass: csi-hostpath-sc
    size: 1Gi
  externalClusters:
    - name: source-cluster
      connectionParameters:
        host: xxx.xxx.xxx.xxx
        user: test
      password:
        name: source-db-test-user
        key: password
Because I defined source-cluster in my cluster-example-backup specification, I can now create a publication on source-cluster:
kubectl cnpg publication create cluster-example-backup --external-cluster=source-cluster --publication=test --all-tables --dbname postgres
CREATE PUBLICATION "test" FOR ALL TABLES
However, if we want to create a publication on the local cluster in Kubernetes, all we need to do is remove the --external-cluster flag:
kubectl cnpg publication create cluster-example-backup --publication=users --table=users --dbname app
CREATE PUBLICATION "users" FOR TABLE users
Dropping the publication is just as simple:
kubectl cnpg publication drop cluster-example-backup --publication=users --dbname app
DROP PUBLICATION "users"
Like the publication command, the subscription command is used to create or drop a logical replication subscription.
kubectl cnpg subscription create cluster-example-backup --external-cluster=source-cluster --publication=test --dbname postgres --subscription=test --publication-dbname postgres
CREATE SUBSCRIPTION "test" CONNECTION 'dbname=''postgres'' host=''xxx.xxx.xxx.xxx'' passfile=''/controller/external/source-cluster/pgpass'' user=''test''' PUBLICATION test
NOTICE: created replication slot "test" on publisher
With this command we created a subscription on the local cluster-example-backup cluster using the publication created on our remote cluster, source-cluster.
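To sanity-check that the subscription is active, one option (reusing the plugin's psql command shown earlier) is to query the pg_stat_subscription view on the subscribing cluster:

kubectl cnpg psql cluster-example-backup -- -d postgres -c 'SELECT subname, received_lsn, latest_end_lsn FROM pg_stat_subscription;'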
To drop the subscription:
kubectl cnpg subscription drop cluster-example-backup --dbname postgres --subscription=test
DROP SUBSCRIPTION "test"
NOTICE: dropped replication slot "test" on publisher
The sync-sequences command simplifies database migrations. If new rows have been inserted into the source database since the last schema export/import, sequence synchronization becomes necessary. Note that this command depends on an existing subscription.
kubectl cnpg subscription sync-sequences cluster-example-backup --dbname postgres --subscription=test
SELECT pg_catalog.setval('"public"."my_sequence"', 1003);
1003
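To verify the result, the current value of the synchronized sequence (the my_sequence name comes from the output above) can be read back on the subscriber:

kubectl cnpg psql cluster-example-backup -- -d postgres -c 'SELECT last_value FROM public.my_sequence;'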
The CloudNativePG kubectl plugin simplifies day-to-day PostgreSQL cluster operations by providing a unified command set in Kubernetes. Instead of switching contexts or writing complex manifests, you can quickly check cluster health, perform backups, manage switchover operations, access psql, deploy pgAdmin4, and configure logical replication with just a few commands. While each task could be performed manually, the plugin brings consistency, convenience, and reduced operational overhead.