
Managing a homelab Kubernetes cluster with Flux GitOps means I rarely `kubectl apply` raw YAML anymore.
Everything lives in Git and Flux reconciles it.
But there’s still one pair of commands I reach for constantly when developing or debugging an app:
```shell
# (k is aliased to kubectl throughout this post)
k kustomize      # inspect the rendered output
k apply -k .     # apply it manually when I need to
```
This post walks through what these commands actually do, how they fit into a Flux-based workflow, and a real example from my n8n deployment on a Raspberry Pi 5 k3s cluster.
## What is Kustomize?
Kustomize is a template-free way to customize Kubernetes manifests.
Instead of Helm’s values.yaml and {{ .Values.foo }} placeholders, Kustomize works purely with overlays and patches on top of plain YAML.
It has been baked directly into kubectl since version 1.14, so there is nothing extra to install.
The entry point is always a kustomization.yaml file that declares which resources belong to a “layer” and what transformations should be applied.
## The kustomization.yaml
A minimal `kustomization.yaml` just lists the files that make up the app:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - configmap.yaml
  - deployment.yaml
  - secrets.yaml
  - service.yaml
  - storage.yaml
```
More advanced features you can add later:

- `patches:` – strategic merge or JSON6902 patches per environment
- `images:` – override the image tag without touching the deployment YAML
- `namePrefix:`/`nameSuffix:` – stamp every resource name
- `commonLabels:` – inject labels everywhere
- `configMapGenerator:`/`secretGenerator:` – generate ConfigMaps and Secrets from files or literals
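To make those features concrete, here is a sketch of what a production overlay could look like. This is a hypothetical example, not a file from my repo; the paths and values are made up to show each field in action:

```yaml
# apps/overlays/prod/kustomization.yaml (hypothetical)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/n8n            # reuse the base layer unchanged
namePrefix: prod-             # every resource becomes prod-<name>
commonLabels:
  env: prod                   # injected into every resource (and its selectors)
images:
  - name: docker.n8n.io/n8nio/n8n
    newTag: 2.11.1            # pin the tag without editing deployment.yaml
patches:
  - target:
      kind: Deployment
      name: n8n
    patch: |-                 # JSON6902 patch: raise the memory limit in prod only
      - op: replace
        path: /spec/template/spec/containers/0/resources/limits/memory
        value: 2Gi
```

The base layer stays generic; every environment-specific decision lives in the overlay, which is the whole point of the pattern.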
## My n8n App Structure
Here is the base layer for n8n inside my cluster repo:
```
apps/base/n8n/
├── configmap.yaml
├── deployment.yaml
├── kustomization.yaml
├── secrets.yaml      # ExternalSecret, not a plain Secret
├── service.yaml
└── storage.yaml
```
Each file is a single-resource YAML.
No templating, no {{ }}, just plain Kubernetes objects.
The kustomization.yaml ties them together.
## kubectl kustomize – Render Without Applying
```shell
k kustomize .
# or from a parent directory:
k kustomize apps/base/n8n
```
This command renders the final merged YAML to stdout without touching the cluster. It is the single most useful debugging tool in a Kustomize workflow.
Running it on my n8n base layer produces the complete, ready-to-apply manifest:
```yaml
apiVersion: v1
data:
  DB_POSTGRESDB_DATABASE: app
  DB_POSTGRESDB_HOST: n8n-db-v2-rw.n8n.svc.cluster.local
  DB_POSTGRESDB_PORT: "5432"
  DB_TYPE: postgresdb
  N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS: "true"
  N8N_PORT: "3008"
  N8N_SECURE_COOKIE: "true"
kind: ConfigMap
metadata:
  name: n8n-config
---
apiVersion: v1
kind: Service
metadata:
  name: n8n-service
spec:
  ports:
    - port: 3008
  selector:
    app: n8n
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: n8n-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
spec:
  replicas: 1
  selector:
    matchLabels:
      app: n8n
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: n8n
    spec:
      containers:
        - envFrom:
            - configMapRef:
                name: n8n-config
            - secretRef:
                name: n8n-env-container-db
          image: docker.n8n.io/n8nio/n8n:2.11.1
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 3008
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
          name: n8n
          ports:
            - containerPort: 3008
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 3008
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 3
          resources:
            limits:
              cpu: 1000m
              memory: 1Gi
            requests:
              cpu: 100m
              memory: 256Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
          volumeMounts:
            - mountPath: /home/node/.n8n
              name: n8n-data
            - mountPath: /tmp
              name: tmp
            - mountPath: /home/node/.cache
              name: cache
      restartPolicy: Always
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
        seccompProfile:
          type: RuntimeDefault
      volumes:
        - name: n8n-data
          persistentVolumeClaim:
            claimName: n8n-data
        - emptyDir: {}
          name: tmp
        - emptyDir: {}
          name: cache
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: n8n-env-container-db
spec:
  data:
    - remoteRef:
        key: n8n-env-container-db
        property: username
      secretKey: DB_POSTGRESDB_USER
    - remoteRef:
        key: n8n-env-container-db
        property: password
      secretKey: DB_POSTGRESDB_PASSWORD
  refreshInterval: 15s
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-backend-global
  target:
    creationPolicy: Owner
    name: n8n-env-container-db
```
A few things worth noting in this output:

- The `ExternalSecret` references a `ClusterSecretStore` backed by Vault, so secrets never live in Git as plain text.
- The Deployment uses `readOnlyRootFilesystem: true` with explicit `emptyDir` volumes for `/tmp` and `~/.cache`, a pattern that keeps the pod hardened without fighting n8n's runtime needs.
- `strategy: Recreate` is appropriate here because n8n with a local filesystem mount can't safely run two pods at once.
## kubectl apply -k – Apply the Rendered Output
```shell
k apply -k .
```

This is shorthand for:

```shell
k kustomize . | kubectl apply -f -
```
It renders the Kustomize layer and applies every resource to the cluster in one step. Useful when:

- You are bootstrapping a new namespace before Flux has reconciled
- You need to force-apply after a manual change to test something quickly
- Flux is suspended (`flux suspend kustomization n8n`) and you want to iterate locally
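When Flux is suspended, my iteration loop looks roughly like this. A sketch, assuming the Flux Kustomization object is named `n8n` and the app lives at `apps/base/n8n`, as in my setup:

```shell
# Pause reconciliation so Flux stops reverting manual changes
flux suspend kustomization n8n

# Iterate locally: render, apply, watch the rollout, repeat
kubectl apply -k apps/base/n8n
kubectl rollout status deployment/n8n -n n8n

# Hand control back; Flux re-applies whatever is in Git
flux resume kustomization n8n
flux reconcile kustomization n8n
```

Forgetting the `resume` step is the classic footgun here: a suspended Kustomization silently drifts from Git until you notice.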
## Where This Fits in a Flux GitOps Workflow
In a normal Flux workflow you never run apply -k in production — Flux does that for you on every Git push.
But during development the loop looks like this:
```
edit YAML → k kustomize . → looks good? → git push → Flux reconciles
                                ↓
                something wrong? → fix → repeat
```
`k kustomize` is essentially a local dry-run before you commit.
It catches:

- Missing resource references (a `ConfigMap` named in `envFrom` that doesn't exist yet)
- Patch targets that don't match any resource
- YAML syntax errors that would fail silently inside a `---` separator
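You can see the fail-fast behavior for yourself with a deliberately broken layer. A throwaway example in a scratch directory, not from my cluster repo:

```shell
# Create a kustomization that references a file that doesn't exist
mkdir -p /tmp/kustomize-broken && cd /tmp/kustomize-broken
cat > kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - missing.yaml
EOF

# Fails immediately with an "accumulating resources" error,
# before anything ever reaches the cluster
kubectl kustomize .
```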
And `k apply -k .` is the fast path for “I just want this running right now on my local k3s, I’ll clean it up before pushing”.
## Flux Kustomization vs. kustomize.config.k8s.io
One thing that trips people up: Flux has its own `Kustomization` CRD (`kustomize.toolkit.fluxcd.io/v1`) which is different from the `kustomization.yaml` file consumed by the Kustomize CLI.
| Thing | API / File |
|---|---|
| Kustomize layer definition | `kustomization.yaml` (file on disk) |
| Flux reconciliation object | `Kustomization` CRD from `kustomize.toolkit.fluxcd.io` |
The Flux Kustomization resource tells Flux where in the Git repo to find a kustomization.yaml and how to reconcile it:
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: n8n
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps/base/n8n
  prune: true
  sourceRef:
    kind: GitRepository
    name: pi5cluster
  targetNamespace: n8n
```
Under the hood, Flux runs exactly the same kustomize build that k kustomize does.
That is why k kustomize . is a reliable local preview of what Flux will apply.
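That equivalence is handy when debugging: you can check what Flux thinks, force a reconcile, and reproduce its build locally. These are real Flux CLI commands; the object name `n8n` assumes my setup:

```shell
# What Flux last applied for this Kustomization, and whether it's healthy
flux get kustomizations n8n

# Force an immediate reconcile after a push instead of waiting for the interval
flux reconcile kustomization n8n --with-source

# The same build Flux runs, done locally
kubectl kustomize apps/base/n8n | less
```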
## Quick Reference
| Command | What it does |
|---|---|
| `k kustomize .` | Render current directory’s kustomization to stdout |
| `k kustomize ./path/to/overlay` | Render a specific overlay |
| `k apply -k .` | Render and apply to the cluster |
| `k apply -k . --dry-run=client` | Render and validate without applying |
| `k apply -k . --dry-run=server` | Server-side dry-run (uses admission webhooks) |
| `k diff -k .` | Show diff between rendered output and live cluster state |
`k diff -k .` is particularly useful before a push: it shows exactly what will change in the cluster, not just what changed in Git.
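The dry-run and diff variants combine nicely into a small pre-push helper. This is my own sketch, not a standard tool; drop it in your shell profile and adapt the paths:

```shell
# precheck: validate a kustomize layer before committing.
# Usage: precheck apps/base/n8n
precheck() {
  kubectl kustomize "$1" > /dev/null || return 1       # catch build errors first
  kubectl apply -k "$1" --dry-run=server || return 1   # admission-level validation
  kubectl diff -k "$1"                                 # exit 1 just means "changes pending"
}
```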
## Summary
Kustomize’s strength is its simplicity: plain YAML in, plain YAML out, with a deterministic merge strategy that is easy to reason about and trivial to inspect.
In a Flux GitOps setup, `k kustomize` and `k apply -k` are the two commands that bridge local development and the automated reconciliation loop.
They give you a fast feedback cycle without needing to push to Git for every small change, and they make the “what will Flux actually apply?” question answerable in one command.
If you are running k3s at home and not already using Kustomize overlays, it is worth the 20 minutes to restructure even a single app into a base/overlay layout — the payoff in clarity and reusability is immediate.