This post walks through deploying n8n on a local k3s cluster with production-grade practices: a CloudNativePG-managed PostgreSQL cluster, continuous backups to MinIO via Barman, GitOps with Flux, secrets from Vault via External Secrets Operator, and a hardened pod security posture.
## Stack
- k3s — lightweight Kubernetes for bare-metal/homelab
- CloudNativePG (CNPG) — Postgres operator with built-in HA and backup
- Barman Cloud Plugin — WAL archiving and point-in-time recovery to S3/MinIO
- MinIO — self-hosted S3-compatible object storage
- Flux — GitOps continuous delivery
- External Secrets Operator + Vault — secret management
- Cloudflare Tunnel — secure ingress without exposing ports
## 1. Namespace

```sh
kubectl create namespace n8n
```
## 2. PostgreSQL Cluster with CloudNativePG
CNPG handles HA, replication, and automated failover. We use 2 instances (primary + async standby) spread across nodes.
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: n8n-db
spec:
  instances: 2
  bootstrap:
    initdb:
      database: app
      owner: app
      secret:
        name: n8n-db-creds  # basic-auth secret: username + password
  storage:
    size: 1Gi
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: n8n-objectstore
```
CNPG automatically creates `*-rw`, `*-ro`, and `*-r` Services: read-write (primary only), read-only (replicas only), and reads from any instance, respectively.
### Credentials via External Secrets
Never store credentials in Git. Use External Secrets Operator to pull them from Vault:
```yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: n8n-db-creds
spec:
  refreshInterval: 15s
  secretStoreRef:
    name: vault-backend-global
    kind: ClusterSecretStore
  target:
    name: n8n-db-creds
    template:
      type: kubernetes.io/basic-auth
  data:
    - secretKey: username
      remoteRef:
        key: secret/n8n/db
        property: username
    - secretKey: password
      remoteRef:
        key: secret/n8n/db
        property: password
```
⚠️ **Gotcha:** The credentials used in the CNPG `initdb` bootstrap and the n8n app secret must reference the same Vault path. A mismatch causes `password authentication failed` on startup.
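The `vault-backend-global` ClusterSecretStore referenced above is not shown in this post; as a minimal sketch, assuming Vault's KV v2 engine is mounted at `secret/` and Kubernetes auth is enabled with a role and service account named `external-secrets` (all illustrative names), it could look like:

```yaml
# Hypothetical ClusterSecretStore backing the secretStoreRef above.
# Server address, auth mount path, role, and service account are assumptions;
# adjust them to your Vault installation.
apiVersion: external-secrets.io/v1
kind: ClusterSecretStore
metadata:
  name: vault-backend-global
spec:
  provider:
    vault:
      server: http://vault.vault.svc.cluster.local:8200
      path: secret       # KV engine mount point
      version: v2        # KV v2: values live under data/ internally
      auth:
        kubernetes:
          mountPath: kubernetes
          role: external-secrets
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets
```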
## 3. Continuous Backups to MinIO

### ObjectStore
```yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: n8n-objectstore
spec:
  configuration:
    destinationPath: s3://backups/n8n-db/  # ⚠️ use a cluster-specific subfolder
    endpointURL: http://minio.example.net:9000
    s3Credentials:
      accessKeyId:
        key: ACCESS_KEY_ID
        name: minio-s3
      secretAccessKey:
        key: ACCESS_SECRET_KEY
        name: minio-s3
    wal:
      compression: gzip
  retentionPolicy: 3d
```
⚠️ **Critical:** Always include a cluster-specific subfolder in `destinationPath` (e.g. `s3://backups/n8n-db/`). Using the bucket root causes `barman-cloud-check-wal-archive` to fail with `Expected empty archive` once any backup data exists, blocking all WAL archiving.
### On-demand backup

```sh
kubectl cnpg backup n8n-db --method=plugin --plugin-name=barman-cloud.cloudnative-pg.io
```
### Verify

```sh
kubectl cnpg status n8n-db
```

Look for `Working WAL archiving: OK` and `WALs waiting to be archived: 0`.
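These backups are what make point-in-time recovery possible. As a hedged sketch (the restore cluster name is hypothetical, and the `externalClusters` plugin syntax should be checked against your CNPG and plugin versions), a new cluster could be bootstrapped from the object store like this:

```yaml
# Hypothetical recovery: bootstrap a fresh cluster from the backups
# in n8n-objectstore. "n8n-db-restore" is an illustrative name.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: n8n-db-restore
spec:
  instances: 2
  storage:
    size: 1Gi
  bootstrap:
    recovery:
      source: n8n-db   # must match an entry in externalClusters
  externalClusters:
    - name: n8n-db
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: n8n-objectstore
          serverName: n8n-db   # folder name inside the object store
```

Adding a `recoveryTarget` (e.g. a `targetTime`) under `bootstrap.recovery` would turn this into a point-in-time restore rather than a restore to the latest WAL.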
## 4. n8n Deployment

### ConfigMap
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: n8n-config
data:
  DB_TYPE: postgresdb
  DB_POSTGRESDB_HOST: n8n-db-rw.n8n.svc.cluster.local
  DB_POSTGRESDB_PORT: "5432"
  DB_POSTGRESDB_DATABASE: app
  N8N_PORT: "3008"
  N8N_SECURE_COOKIE: "true"
  N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS: "true"
```
### PersistentVolumeClaim

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: n8n-data
spec:
  accessModes:
    - ReadWriteOnce  # ⚠️ local-path only supports RWO, not RWX
  resources:
    requests:
      storage: 1Gi
```
⚠️ **Gotcha:** k3s’s default `local-path` StorageClass only supports `ReadWriteOnce` and uses `WaitForFirstConsumer` binding. If the PVC requests `ReadWriteMany`, it stays `Pending` indefinitely. Use `ReadWriteOnce` for single-replica workloads.
### Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
spec:
  replicas: 1
  selector:
    matchLabels:
      app: n8n
  strategy:
    type: Recreate  # required with RWO PVC
  template:
    metadata:
      labels:
        app: n8n
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: n8n
          image: docker.n8n.io/n8nio/n8n:1.123.3
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 3008
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /healthz
              port: 3008
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3
          envFrom:
            - configMapRef:
                name: n8n-config
            - secretRef:
                name: n8n-env-container-db
          ports:
            - containerPort: 3008
          volumeMounts:
            - mountPath: /home/node/.n8n
              name: n8n-data
            - mountPath: /tmp  # required with readOnlyRootFilesystem
              name: tmp
            - mountPath: /home/node/.cache
              name: cache
      volumes:
        - name: n8n-data
          persistentVolumeClaim:
            claimName: n8n-data
        - name: tmp
          emptyDir: {}
        - name: cache
          emptyDir: {}
```
Security highlights:
- `runAsNonRoot` + non-root UID/GID
- `readOnlyRootFilesystem: true` (writable paths mounted as `emptyDir`)
- All Linux capabilities dropped
- `seccompProfile: RuntimeDefault`
- Pod Security Standards (PSS) `restricted` compliant
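The `n8n-env-container-db` secret referenced in `envFrom` is, like the DB credentials, synced from Vault rather than stored in Git. A sketch, assuming it maps the same Vault path as `n8n-db-creds` onto n8n's `DB_POSTGRESDB_USER`/`DB_POSTGRESDB_PASSWORD` environment variables (the refresh interval and key names mirror the earlier example and your Vault layout):

```yaml
# Hypothetical ExternalSecret for the Deployment's secretRef.
# Pulling from the same Vault path as n8n-db-creds avoids the
# password-mismatch gotcha from section 2.
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: n8n-env-container-db
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend-global
    kind: ClusterSecretStore
  target:
    name: n8n-env-container-db
  data:
    - secretKey: DB_POSTGRESDB_USER
      remoteRef:
        key: secret/n8n/db
        property: username
    - secretKey: DB_POSTGRESDB_PASSWORD
      remoteRef:
        key: secret/n8n/db
        property: password
```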
## 5. Ingress via Cloudflare Tunnel
Rather than exposing NodePorts or LoadBalancer IPs, use a Cloudflare Tunnel for zero-trust ingress. The cloudflared deployment runs as 2 replicas for redundancy and connects your n8n service to a Cloudflare-managed hostname — no inbound firewall rules required.
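The setup described above can be sketched as a token-based `cloudflared` Deployment; the secret name is illustrative (e.g. synced from Vault via another ExternalSecret), and the public-hostname-to-service mapping (`http://n8n.n8n.svc.cluster.local:3008`) is configured on the Cloudflare side:

```yaml
# Minimal cloudflared sketch (secret name is an assumption).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared
spec:
  replicas: 2  # two replicas keep the tunnel up through node drains
  selector:
    matchLabels:
      app: cloudflared
  template:
    metadata:
      labels:
        app: cloudflared
    spec:
      containers:
        - name: cloudflared
          image: cloudflare/cloudflared:latest
          args: ["tunnel", "--no-autoupdate", "run"]
          env:
            - name: TUNNEL_TOKEN
              valueFrom:
                secretKeyRef:
                  name: cloudflared-token  # hypothetical, synced from Vault
                  key: token
```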
## 6. GitOps with Flux
All manifests live in Git. Flux watches the repo and reconciles state continuously. Secrets are never in Git — only ExternalSecret resources that reference Vault paths.
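As a sketch of the reconciliation wiring (repository name, path, and interval are assumptions, not taken from the actual repo):

```yaml
# Hypothetical Flux Kustomization reconciling the n8n manifests.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: n8n
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps/n8n
  prune: true            # delete resources removed from Git
  targetNamespace: n8n
```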
Flux cannot patch immutable fields (such as PVC `accessModes`) in place. The workflow for changing one is:
```sh
# 1. Suspend reconciliation
flux suspend kustomization <name>

# 2. Delete the immutable resource
kubectl delete pvc n8n-data

# 3. Update the YAML in Git, push

# 4. Resume: Flux recreates the resource correctly
flux resume kustomization <name>
```
## 7. Verify

```console
❯ k get pods
NAME                           READY   STATUS    RESTARTS   AGE
cloudflared-589c8f977c-fs88r   1/1     Running   0          20m
cloudflared-589c8f977c-kt7s4   1/1     Running   0          20m
n8n-db-1                       2/2     Running   0          27m
n8n-db-2                       2/2     Running   0          27m
n8n-df6b4587d-dkvcx            1/1     Running   0          20m
```
```console
❯ k cnpg status n8n-db
Cluster Summary
Name:                    n8n/n8n-db
System ID:               7611505246418595865
PostgreSQL Image:        ghcr.io/cloudnative-pg/postgresql:18.0-system-trixie
Primary instance:        n8n-db-1
Primary promotion time:  2026-02-27 11:28:01 +0000 UTC (27m47s)
Status:                  Cluster in healthy state
Instances:               2
Ready instances:         2
Size:                    131M
Current Write LSN:       0/7000000 (Timeline: 1 - WAL File: 000000010000000000000007)

Continuous Backup status (Barman Cloud Plugin)
ObjectStore / Server name:      n8n-objectstore/n8n-db
First Point of Recoverability:  2026-02-27 11:50:11 UTC
Last Successful Backup:         2026-02-27 11:50:11 UTC
Last Failed Backup:             -
Working WAL archiving:          OK
WALs waiting to be archived:    0
Last Archived WAL:              000000010000000000000006 @ 2026-02-27T11:50:12.067046Z
Last Failed WAL:                000000010000000000000001 @ 2026-02-27T11:48:57.432193Z

Streaming Replication status
Replication Slots Enabled
Name      Sent LSN   Write LSN  Flush LSN  Replay LSN  Write Lag  Flush Lag  Replay Lag  State      Sync State  Sync Priority  Replication Slot
----      --------   ---------  ---------  ----------  ---------  ---------  ----------  -----      ----------  -------------  ----------------
n8n-db-2  0/7000000  0/7000000  0/7000000  0/7000000   00:00:00   00:00:00   00:00:00    streaming  async       0              active

Instances status
Name      Current LSN  Replication role  Status  QoS         Manager Version  Node
----      -----------  ----------------  ------  ---         ---------------  ----
n8n-db-1  0/7000000    Primary           OK      BestEffort  1.27.1           nuc242
n8n-db-2  0/7000000    Standby (async)   OK      BestEffort  1.27.1           nuc243

Plugins status
Name                            Version  Status  Reported Operator Capabilities
----                            -------  ------  ------------------------------
barman-cloud.cloudnative-pg.io  0.9.0    N/A     Reconciler Hooks, Lifecycle Service
```
## Lessons Learned

| Problem | Root Cause | Fix |
|---|---|---|
| Pod stuck `Pending` | PVC used `ReadWriteMany` with `local-path` | Change to `ReadWriteOnce` |
| n8n `CrashLoopBackOff` | DB password mismatch between Vault paths | Align both secrets to the same Vault key |
| WAL archiving `Expected empty archive` | `destinationPath` pointed to bucket root | Add cluster-specific subfolder to path |
## Result
A fully operational n8n instance with:
- PostgreSQL HA (primary + standby, streaming replication)
- Continuous WAL archiving to MinIO (point-in-time recovery)
- Hardened pod security (PSS restricted)
- Secrets managed by Vault + External Secrets
- GitOps-driven deployments via Flux
- Zero-port-exposure ingress via Cloudflare Tunnel