One of the things I wanted to get right in my Pi5 cluster was ensuring that only images I’ve actually built and signed can run — no surprises, no unsigned images sneaking into my workloads. This post walks through how I set up Sigstore’s Policy Controller with Flux to enforce Cosign image signatures in my uclab namespace.
The Goal
Every image I deploy to the uclab namespace is built in my own CI pipeline, pushed to my self-hosted Forgejo registry, and signed with Cosign using a key pair I control. The Policy Controller’s job is to sit as an admission webhook and reject any pod that tries to run an image without a valid signature.
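For context, the signing side is a single Cosign invocation in CI. A minimal sketch, assuming a key pair generated with `cosign generate-key-pair` and an image already pushed (the digest reference is a placeholder):

```shell
# Generate the key pair once (produces cosign.key and cosign.pub);
# the private key stays in CI secrets, the public key goes to the cluster.
cosign generate-key-pair

# Sign the freshly pushed image by digest (tags can move, digests cannot).
cosign sign --key cosign.key \
  forgejo.uclab.dev/affragak/uclab@sha256:<digest>
```

The signature lands in the registry as an OCI artifact next to the image, which is what the Policy Controller later looks up.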
Installing the Policy Controller with Flux
I manage everything in the cluster via FluxCD, so the Policy Controller goes in as a HelmRelease. The setup lives under
infrastructure/controllers/base/policy-controller/ with four files:
.
├── kustomization.yaml
├── namespace.yaml
├── release.yaml
└── repository.yaml
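The kustomization.yaml isn't shown below; it just ties the other three manifests together. Something along these lines:

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - repository.yaml
  - release.yaml
```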
First, the Helm repository pointing at Sigstore’s chart index:
# repository.yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: sigstore
  namespace: cosign-system
spec:
  interval: 24h
  url: https://sigstore.github.io/helm-charts
Then the release itself. The interesting bit here is that the webhook needs credentials to pull from my private Forgejo registry in order to fetch signatures — so I mount a dockerconfigjson secret into the webhook pod:
# release.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: policy-controller
  namespace: cosign-system
spec:
  interval: 30m
  chart:
    spec:
      chart: policy-controller
      version: "0.10.6"
      sourceRef:
        kind: HelmRepository
        name: sigstore
        namespace: cosign-system
      interval: 12h
  values:
    webhook:
      env:
        DOCKER_CONFIG: /var/registry-auth
      volumeMounts:
        - name: registry-auth
          mountPath: /var/registry-auth
          readOnly: true
      volumes:
        - name: registry-auth
          secret:
            secretName: forgejo-registry
            items:
              - key: .dockerconfigjson
                path: config.json
The forgejo-registry secret is a standard kubernetes.io/dockerconfigjson secret.
Without this, the webhook can’t reach the OCI registry to verify that a signature exists alongside the image.
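If you need to create that secret, a standard docker-registry secret works directly (the username and token below are placeholders for a Forgejo account with registry read access):

```shell
# Creates a kubernetes.io/dockerconfigjson secret in the webhook's namespace.
kubectl create secret docker-registry forgejo-registry \
  --namespace cosign-system \
  --docker-server=forgejo.uclab.dev \
  --docker-username=<registry-user> \
  --docker-password=<registry-token>
```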
Defining the Image Policy
The actual enforcement rule lives in
infrastructure/configs/base/policy-controller/uclab-image-policy.yaml as a ClusterImagePolicy:
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: uclab-image-policy
spec:
  images:
    - glob: "forgejo.uclab.dev/affragak/uclab**"
  authorities:
    - name: uclab
      key:
        secretRef:
          name: cosign-pub-key
          namespace: cosign-system
This tells the Policy Controller: for any image matching
forgejo.uclab.dev/affragak/uclab**, require a valid Cosign signature that verifies against the public key stored in the cosign-pub-key secret. The double ** glob covers both the image name and any tag or digest suffix.
The public key in cosign-pub-key is the counterpart to the private key used in my CI pipeline to sign images after they’re built and pushed.
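The secret itself is just the cosign.pub file wrapped in a generic secret. One way to create it, assuming the cosign.pub produced by `cosign generate-key-pair`:

```shell
# Stores the Cosign public key under the data key "cosign.pub",
# where the ClusterImagePolicy's secretRef can find it.
kubectl create secret generic cosign-pub-key \
  --namespace cosign-system \
  --from-file=cosign.pub
```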
Opting the Namespace In
Policy Controller uses an opt-in model — namespaces have to explicitly declare that they want signature enforcement. This is done with a label:
# blog/base/uclab/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: uclab
  labels:
    pod-security.kubernetes.io/enforce: restricted
    policy.sigstore.dev/include: "true"
The policy.sigstore.dev/include: "true" label is what activates the webhook for this namespace. I also have pod-security.kubernetes.io/enforce: restricted set here, so the namespace gets both Kubernetes Pod Security Standards enforcement and image signature verification — a nice layered approach.
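A quick sanity check: a label selector shows which namespaces are opted in to enforcement:

```shell
# Lists every namespace the Policy Controller webhook will act on.
kubectl get namespaces -l policy.sigstore.dev/include=true
```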
Seeing It Work
After deploying everything and triggering a rollout of the uclab deployment, the webhook logs confirm it’s actively processing admission requests for the namespace:
{"level":"info","msg":"Kind: \"/v1, Kind=Pod\" PatchBytes: null",
"knative.dev/namespace":"uclab",
"knative.dev/name":"uclab-6958bdf8c5-hxszc",
"knative.dev/operation":"UPDATE",
"admissionreview/allowed":true}
And the pod came up clean:
NAME READY STATUS RESTARTS AGE
uclab-77888bfff5-6pptn 1/1 Running 0 27s
If I were to try deploying an unsigned image — or one signed with a different key — the webhook would reject it at admission time before the pod ever schedules.
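To see the rejection path yourself, try running an image that matches the policy glob but was never signed (the image path here is a placeholder; the exact denial message varies by Policy Controller version):

```shell
# The admission webhook should deny this pod before it ever schedules,
# since no valid signature exists for the image in the registry.
kubectl run unsigned-test \
  --namespace uclab \
  --image=forgejo.uclab.dev/affragak/uclab/<unsigned-image>
```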
Summary
The setup is fairly straightforward once the pieces click together:
- Policy Controller runs as an admission webhook, deployed via Flux HelmRelease
- Registry credentials are mounted into the webhook so it can reach the private Forgejo registry to fetch signatures
- ClusterImagePolicy defines which image globs to enforce and which public key to verify against
- Namespace label opts the uclab namespace into enforcement
The result is that my CI pipeline is now the only path to a running pod in uclab — if it wasn’t built, signed, and pushed through my own registry, it doesn’t run.
For a home lab this might feel like overkill, but it’s good practice and honestly a fun rabbit hole to go down.