Forgejo is a lightweight, self-hosted Git service — a community fork of Gitea. In this post I’ll walk through how I deployed it on my home k3s cluster backed by a CloudNativePG (CNPG) PostgreSQL database, MinIO S3-compatible object storage for backups, and exposed it via Cilium’s Gateway API with automatic TLS through cert-manager.
## Architecture Overview
The setup involves three main layers:
- App layer — the Forgejo deployment, services, and ingress (Gateway + HTTPRoute)
- Database layer — a CloudNativePG PostgreSQL cluster with WAL archiving to MinIO
- Secrets layer — External Secrets Operator pulling credentials from Vault
Here’s the directory structure I’m using in my GitOps repo:
```
apps/
  base/forgejo/       # Forgejo deployment, services, config
  athena/forgejo/     # Cluster-specific overlays (gateway, TLS, routes)
databases/
  base/forgejo/       # CNPG cluster, backup schedule, object store
```
## Prerequisites
You’ll need the following operators/controllers running in your cluster:
- CloudNativePG for the Postgres cluster
- Barman Cloud plugin for CNPG for WAL archiving and backups
- External Secrets Operator for pulling secrets from Vault (or adapt this to your secret management of choice)
- cert-manager for TLS certificate provisioning
- Cilium as the CNI with Gateway API support enabled
- A MinIO instance (or any S3-compatible store) for database backups
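A quick way to sanity-check that the operators are installed is to look for their CRDs (the CRD names below are assumptions based on current operator versions; adjust if yours differ):

```sh
kubectl get crd clusters.postgresql.cnpg.io \
  objectstores.barmancloud.cnpg.io \
  externalsecrets.external-secrets.io \
  certificates.cert-manager.io \
  gateways.gateway.networking.k8s.io
```

If any of these come back `NotFound`, install the corresponding operator before proceeding.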
## Step 1: Namespace

Everything lives in the `forgejo` namespace. Create it first with appropriate pod security settings:

```yaml
# databases/base/forgejo/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: forgejo
  labels:
    app.kubernetes.io/component: monitoring
    pod-security.kubernetes.io/enforce: baseline
```
## Step 2: Database Setup

### Secrets

The CNPG cluster needs database credentials. I store these in Vault and pull them using External Secrets Operator. The secret type must be `kubernetes.io/basic-auth` for CNPG to recognize it.

```yaml
# databases/base/forgejo/forgejo-db-credentials.yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: forgejo-db-credentials
  namespace: forgejo
spec:
  refreshInterval: "15s"
  secretStoreRef:
    name: vault-backend-global
    kind: ClusterSecretStore
  target:
    name: forgejo-db-credentials
    creationPolicy: Owner
    template:
      type: kubernetes.io/basic-auth
  data:
    - secretKey: username
      remoteRef:
        key: forgejo-secrets/forgejo-db-credentials
        property: username
    - secretKey: password
      remoteRef:
        key: forgejo-secrets/forgejo-db-credentials
        property: password
```
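For reference, seeding the Vault entry that this ExternalSecret reads could look something like the following (a sketch assuming a KV mount whose path matches the `remoteRef` keys above; adapt to your Vault layout):

```sh
# Hypothetical seeding command -- mount path and policy are assumptions
vault kv put forgejo-secrets/forgejo-db-credentials \
  username=forgejo \
  password="$(openssl rand -base64 24)"
```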
### MinIO Secret for Backups

```yaml
# databases/base/forgejo/minio-secret.yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: minio-s3
  namespace: forgejo
spec:
  refreshInterval: "15s"
  secretStoreRef:
    name: vault-backend-global
    kind: ClusterSecretStore
  target:
    name: minio-s3
    creationPolicy: Owner
  data:
    - secretKey: ACCESS_KEY_ID
      remoteRef:
        key: minio-s3
        property: ACCESS_KEY_ID
    - secretKey: ACCESS_SECRET_KEY
      remoteRef:
        key: minio-s3
        property: ACCESS_SECRET_KEY
```
### Object Store for WAL Archiving

This tells CNPG where to send WAL archives and base backups:

```yaml
# databases/base/forgejo/minio-s3-objectstore.yaml
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: forgejo-objectstore
  namespace: forgejo
spec:
  configuration:
    destinationPath: "s3://backups/"
    endpointURL: "http://atlas.uclab8.net:9000"
    s3Credentials:
      accessKeyId:
        name: minio-s3
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: minio-s3
        key: ACCESS_SECRET_KEY
    wal:
      compression: gzip
  retentionPolicy: 7d
```
### PostgreSQL Cluster

A two-instance CNPG cluster with WAL archiving via the barman-cloud plugin and a PodMonitor for Prometheus scraping:

```yaml
# databases/base/forgejo/db.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: forgejo-db
spec:
  instances: 2
  bootstrap:
    initdb:
      database: forgejodb
      owner: forgejo
      secret:
        name: forgejo-db-credentials
  storage:
    size: 1Gi
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: forgejo-objectstore
  monitoring:
    enablePodMonitor: true
```
Tip for recovery: a commented-out `bootstrap.recovery` block in this YAML is your escape hatch if you ever need to restore from a backup. Swap the `initdb` block for a `recovery` block pointing at your ObjectStore and you're good to go.
### Scheduled Backups

Take a full base backup every night at 3 AM:

```yaml
# databases/base/forgejo/backup.yaml
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: forgejo-db-backup
spec:
  schedule: "0 0 3 * * *"
  backupOwnerReference: cluster
  cluster:
    name: forgejo-db
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io
```

Note that CNPG's `ScheduledBackup` uses a six-field cron expression with a leading seconds field, so `"0 0 3 * * *"` really does mean 03:00 daily.
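Besides the nightly schedule, you can trigger a one-off base backup at any time with a `Backup` resource pointing at the same plugin (the resource name here is arbitrary):

```yaml
# A hypothetical on-demand backup -- apply, then watch it with `kubectl get backup -n forgejo`
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: forgejo-db-backup-manual
  namespace: forgejo
spec:
  cluster:
    name: forgejo-db
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io
```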
### Database Kustomization

```yaml
# databases/base/forgejo/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - minio-secret.yaml
  - minio-s3-objectstore.yaml
  - forgejo-db-credentials.yaml
  - db.yaml
  - backup.yaml
```
## Step 3: Forgejo App

### ConfigMap

Forgejo is configured entirely through environment variables using the `FORGEJO__section__KEY` convention. This maps directly to sections in `app.ini`:

```yaml
# apps/base/forgejo/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: forgejo-config
  namespace: forgejo
data:
  FORGEJO__actions__DEFAULT_ACTIONS_URL: https://data.forgejo.org
  FORGEJO__actions__ENABLED: "true"
  FORGEJO__attachment__PATH: /data/app/attachments
  FORGEJO__database__DB_TYPE: postgres
  FORGEJO__database__HOST: forgejo-db-rw:5432
  FORGEJO__database__NAME: forgejodb
  FORGEJO__database__SSL_MODE: disable
  FORGEJO__lfs__PATH: /data/lfs
  FORGEJO__log__ROOT_PATH: /data/app/log
  FORGEJO__picture__AVATAR_UPLOAD_PATH: /data/app/avatars
  FORGEJO__picture__REPOSITORY_AVATAR_UPLOAD_PATH: /data/app/repo-avatars
  FORGEJO__repository__ROOT: /data/git
  FORGEJO__security__INSTALL_LOCK: "true"
  FORGEJO__server__APP_DATA_PATH: /data/app
  FORGEJO__server__DOMAIN: forgejo.uclab.dev
  FORGEJO__server__HTTP_PORT: "3000"
  FORGEJO__server__PER_WRITE_PER_KB_TIMEOUT: "-1"
  FORGEJO__server__PER_WRITE_TIMEOUT: "-1"
  FORGEJO__server__ROOT_URL: https://forgejo.uclab.dev/
  FORGEJO__server__SSH_DOMAIN: git.uclab.dev
  FORGEJO__server__SSH_LISTEN_PORT: "2222"
  FORGEJO__server__SSH_PORT: "2222"
  FORGEJO__session__PROVIDER_CONFIG: /data/app/sessions
  FORGEJO__migrations__ALLOW_LOCALNETWORKS: "true"
  FORGEJO__migrations__ALLOWED_DOMAINS: "*.uclab8.net"
```
A few things worth calling out:

- `FORGEJO__database__HOST: forgejo-db-rw:5432` — CNPG creates a `forgejo-db-rw` service pointing to the primary instance automatically.
- `FORGEJO__security__INSTALL_LOCK: "true"` — skip the web installer on first boot.
- The write timeout settings of `-1` are useful if you're pushing large repos over slow connections.
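To make the env-var convention concrete, the `server` variables above translate into roughly this `app.ini` fragment (a sketch of what Forgejo assembles internally, not a file you need to write):

```ini
; Equivalent app.ini produced by FORGEJO__server__* variables
[server]
DOMAIN   = forgejo.uclab.dev
ROOT_URL = https://forgejo.uclab.dev/
HTTP_PORT = 3000
SSH_DOMAIN = git.uclab.dev
SSH_PORT = 2222
```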
### Database Secret (App Side)

The app needs the DB credentials as well, but in a different format (raw env vars rather than basic-auth):

```yaml
# apps/base/forgejo/secret.yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: forgejo-db-env
spec:
  refreshInterval: "15s"
  secretStoreRef:
    name: vault-backend-global
    kind: ClusterSecretStore
  target:
    name: forgejo-db-env
    creationPolicy: Owner
    template:
      type: Opaque
  data:
    - secretKey: FORGEJO__database__USER
      remoteRef:
        key: forgejo-secrets/forgejo-db-env
        property: FORGEJO__database__USER
    - secretKey: FORGEJO__database__PASSWD
      remoteRef:
        key: forgejo-secrets/forgejo-db-env
        property: FORGEJO__database__PASSWD
```
### Persistent Storage

Two PVCs — one for the Forgejo config volume, one for all data (repos, LFS, avatars, etc.):

```yaml
# apps/base/forgejo/storage-config.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: forgejo-config
  namespace: forgejo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

```yaml
# apps/base/forgejo/storage-data.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: forgejo-data
  namespace: forgejo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```
Adjust storage sizes to taste — 4Gi for data is a reasonable starting point for a small team, but bump it up if you’re storing large LFS assets.
### Deployment

The deployment uses the rootless image variant, which means no root privileges inside the container. An init container handles the `chown` to get filesystem permissions right before the main container starts.
```yaml
# apps/base/forgejo/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: forgejo
    app.kubernetes.io/name: forgejo
  name: forgejo
  namespace: forgejo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: forgejo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: forgejo
        app.kubernetes.io/component: server
        app.kubernetes.io/instance: forgejo
        app.kubernetes.io/name: forgejo
        policy-type: app
    spec:
      initContainers:
        - name: fix-permissions
          image: busybox:1.37
          command:
            - sh
            - -c
            - |
              chown -R 1000:1000 /etc/gitea
              chown -R 1000:1000 /data
          securityContext:
            runAsUser: 0
          volumeMounts:
            - mountPath: /etc/gitea
              name: forgejo-config
            - mountPath: /data
              name: forgejo-data
      containers:
        - name: forgejo
          image: data.forgejo.org/forgejo/forgejo:13.0.3-rootless
          env:
            - name: USER_UID
              value: "1000"
            - name: USER_GID
              value: "1000"
          envFrom:
            - configMapRef:
                name: forgejo-config
            - secretRef:
                name: forgejo-db-env
          ports:
            - containerPort: 3000
              name: http
            - containerPort: 2222
              name: ssh
          livenessProbe:
            httpGet:
              path: /api/healthz
              port: 3000
            initialDelaySeconds: 60
            periodSeconds: 30
            failureThreshold: 3
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /api/healthz
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3
            timeoutSeconds: 5
          resources:
            limits:
              memory: 2Gi
            requests:
              cpu: 500m
              memory: 512Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            runAsGroup: 1000
            runAsNonRoot: true
            runAsUser: 1000
            seccompProfile:
              type: RuntimeDefault
          volumeMounts:
            - mountPath: /etc/gitea
              name: forgejo-config
            - mountPath: /data
              name: forgejo-data
      restartPolicy: Always
      securityContext:
        fsGroup: 1000
      volumes:
        - name: forgejo-config
          persistentVolumeClaim:
            claimName: forgejo-config
        - name: forgejo-data
          persistentVolumeClaim:
            claimName: forgejo-data
```
Using `strategy: Recreate` is important here since both PVCs are `ReadWriteOnce` — you can't have two pods mounting them simultaneously.
### Services
An internal ClusterIP service for HTTP, and a LoadBalancer for SSH (so git clone over SSH works without going through the HTTP gateway):
```yaml
# apps/base/forgejo/service-http.yaml
apiVersion: v1
kind: Service
metadata:
  name: forgejo-server
  namespace: forgejo
spec:
  type: ClusterIP
  selector:
    app: forgejo
  ports:
    - name: http
      port: 3000
```

```yaml
# apps/base/forgejo/service-ssh.yaml
apiVersion: v1
kind: Service
metadata:
  name: forgejo-ssh
  namespace: forgejo
spec:
  type: LoadBalancer
  selector:
    app: forgejo
  ports:
    - name: ssh
      port: 2222
      targetPort: 2222
```
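Once DNS for `git.uclab.dev` points at the LoadBalancer's external IP, an SSH clone looks like this (`someuser/somerepo` is a hypothetical repository; note the non-standard port from `SSH_PORT`):

```sh
git clone ssh://git@git.uclab.dev:2222/someuser/somerepo.git
```

Alternatively, an entry in `~/.ssh/config` setting `Port 2222` for `git.uclab.dev` lets you use the shorter `git@git.uclab.dev:someuser/somerepo.git` form.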
### Base Kustomization

```yaml
# apps/base/forgejo/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - secret.yaml
  - configmap.yaml
  - storage-config.yaml
  - storage-data.yaml
  - deployment.yaml
  - service-http.yaml
  - service-ssh.yaml
```
## Step 4: Ingress with Gateway API and TLS
I’m using Cilium’s Gateway API implementation rather than a traditional Ingress controller. First, provision a TLS certificate with cert-manager:
```yaml
# apps/athena/forgejo/certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: forgejo-uclab-tls
  namespace: forgejo
spec:
  secretName: forgejo-uclab-tls
  issuerRef:
    name: letsencrypt-production
    kind: ClusterIssuer
  dnsNames:
    - forgejo.uclab.dev
```
Then create a Gateway that terminates TLS:
```yaml
# apps/athena/forgejo/gateway.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: forgejo
  namespace: forgejo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  gatewayClassName: cilium
  listeners:
    - hostname: forgejo.uclab.dev
      name: forgejo-uclab-dev-http
      port: 80
      protocol: HTTP
    - hostname: forgejo.uclab.dev
      name: forgejo-uclab-dev-https
      port: 443
      protocol: HTTPS
      tls:
        mode: Terminate
        certificateRefs:
          - kind: Secret
            name: forgejo-uclab-tls
      allowedRoutes:
        namespaces:
          from: All
```
And an HTTPRoute to forward traffic to the Forgejo service:
```yaml
# apps/athena/forgejo/httproute.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: forgejo
  namespace: forgejo
spec:
  hostnames:
    - forgejo.uclab.dev
  parentRefs:
    - name: forgejo
  rules:
    - backendRefs:
        - name: forgejo-server
          port: 3000
      matches:
        - path:
            type: PathPrefix
            value: /
```
### Overlay Kustomization
The cluster-specific overlay (I call mine athena) pulls in the base and adds the gateway resources:
```yaml
# apps/athena/forgejo/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: forgejo
resources:
  - ../../base/forgejo/
  - gateway.yaml
  - httproute.yaml
  - certificate.yaml
```
## Deployment Order
When applying this for the first time, order matters:
1. Apply the database layer first (`databases/base/forgejo/`) and wait for the CNPG cluster to be ready and the `forgejo-db-rw` service to exist.
2. Apply the app layer (`apps/athena/forgejo/`).
With FluxCD or ArgoCD you can express this as a dependency, but if you’re applying manually just give the DB a minute or two to initialize.
```sh
kubectl apply -k databases/base/forgejo/
kubectl wait --for=condition=Ready cluster/forgejo-db -n forgejo --timeout=120s
kubectl apply -k apps/athena/forgejo/
```
## Verifying the Deployment
```sh
# Check that all pods are running
kubectl get pods -n forgejo

# Check the CNPG cluster status
kubectl get cluster forgejo-db -n forgejo

# Tail Forgejo logs
kubectl logs -n forgejo -l app=forgejo -f

# Check that a backup ran
kubectl get backup -n forgejo
```
Once everything is up, navigate to `https://forgejo.uclab.dev` and you should land directly on the Forgejo dashboard — no installer, since we set `INSTALL_LOCK=true`.
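Because the installer is skipped, the first admin account has to be created from the CLI inside the pod. Something like the following should work (username, password, and email are placeholders — change them):

```sh
# Placeholder credentials -- replace before running
kubectl exec -n forgejo deploy/forgejo -- \
  forgejo admin user create --admin \
  --username admin \
  --password 'change-me-immediately' \
  --email admin@uclab.dev
```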
## Restoring from Backup

If you ever need to restore, swap the `bootstrap.initdb` block in `db.yaml` for a `recovery` block:
```yaml
bootstrap:
  recovery:
    database: forgejodb
    owner: forgejo
    source: source
    secret:
      name: forgejo-db-credentials

externalClusters:
  - name: source
    plugin:
      name: barman-cloud.cloudnative-pg.io
      parameters:
        barmanObjectName: forgejo-objectstore
        serverName: forgejo-db
```
CNPG will replay WAL archives from MinIO up to the latest available point, giving you near-zero RPO recovery.
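If you want to stop short of the latest state — say, to recover from a bad migration — CNPG also supports point-in-time recovery by adding a `recoveryTarget` to the block above (the timestamp here is an example):

```yaml
bootstrap:
  recovery:
    source: source
    recoveryTarget:
      targetTime: "2025-06-01 03:00:00+00"
```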
## Wrapping Up
This setup gives you a production-grade self-hosted Git service with:
- Automatic PostgreSQL failover via CNPG’s two-instance cluster
- Continuous WAL archiving and nightly base backups to S3
- Automatic TLS certificate rotation via cert-manager
- Proper rootless container security with a hardened securityContext
- GitOps-friendly Kustomize overlays for managing multiple clusters
The main moving parts to swap out if you’re adapting this to your own environment are the secret store (Vault + ESO), the S3 backend (MinIO), and the ingress layer (Cilium Gateway API). Everything else is pretty standard Kubernetes.