NetBox is the source of truth for network and infrastructure documentation. If you run a homelab or a small datacenter and want to know what IP belongs where, what VLANs exist, and which rack holds which server — NetBox is the tool for the job. This post walks through deploying it on a k3s cluster using the official Helm chart, FluxCD for GitOps, and HashiCorp Vault via ExternalSecrets for secret management.

Prerequisites
- A working k3s cluster (this was tested on a Pi5/NUC mixed cluster)
- FluxCD bootstrapped and managing your cluster
- The external-secrets-operator installed and a `ClusterSecretStore` pointing at Vault
- Vault with a KV v2 secrets engine enabled
- Helm 3 (for inspection only — Flux handles the actual deployment)
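The steps below assume the `ClusterSecretStore` already exists. For orientation, a minimal Vault-backed store looks roughly like this; the store name matches what the later manifests reference, but the server URL, auth mount, role, and service account are assumptions about your particular Vault setup:

```yaml
apiVersion: external-secrets.io/v1
kind: ClusterSecretStore
metadata:
  name: vault-backend-global
spec:
  provider:
    vault:
      server: "http://vault.vault.svc.cluster.local:8200"  # assumed in-cluster Vault address
      path: "netbox"     # the KV v2 mount that will hold the netbox/* secrets
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"    # assumed Vault Kubernetes auth mount
          role: "external-secrets"   # assumed Vault role bound to the operator's service account
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets
```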
Repository Layout
Everything lives under `apps/base/netbox/`:

```
apps/base/netbox/
├── helmrelease.yaml
├── helmrepo.yaml
├── kustomization.yaml
├── kustomizeconfig.yaml
├── namespace.yaml
├── secrets.yaml
└── values.yaml
```
Step 1: Namespace
```yaml
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: netbox
```
Step 2: Helm Repository
The official NetBox chart lives at https://charts.netbox.oss.netboxlabs.com/ — not on Artifact Hub’s main index.
```yaml
# helmrepo.yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: netbox
  namespace: netbox
spec:
  url: https://charts.netbox.oss.netboxlabs.com/
  interval: 24h
```
Step 3: Helm Release
The chart is referenced by version and values are injected via a ConfigMap that kustomize generates from values.yaml. The kustomizeconfig.yaml wires the generated ConfigMap name hash into the valuesFrom reference automatically.
```yaml
# helmrelease.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: netbox
  namespace: netbox
spec:
  interval: 1h
  chart:
    spec:
      chart: netbox
      version: "8.0.23"
      sourceRef:
        kind: HelmRepository
        name: netbox
        namespace: netbox
      interval: 12h
  valuesFrom:
    - kind: ConfigMap
      name: netbox-values
```
```yaml
# kustomizeconfig.yaml
nameReference:
  - kind: ConfigMap
    version: v1
    fieldSpecs:
      - path: spec/valuesFrom/name
        kind: HelmRelease
```
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: netbox
resources:
  - namespace.yaml
  - secrets.yaml
  - helmrepo.yaml
  - helmrelease.yaml
configMapGenerator:
  - name: netbox-values
    files:
      - values.yaml=values.yaml
configurations:
  - kustomizeconfig.yaml
```
Step 4: Vault Secrets
Store the following paths in Vault before anything else. The structure matters — the chart’s configuration script reads each secret by key name.
```shell
# Django secret key
vault kv put netbox/secretkey \
  secretKey="$(openssl rand -hex 50)"

# Superuser account
vault kv put netbox/superuser \
  username="admin" \
  email="[email protected]" \
  password="your-admin-password" \
  api_token="$(openssl rand -hex 20)"

# PostgreSQL
vault kv put netbox/postgresql \
  password="netbox-db-pass" \
  postgresPassword="postgres-superuser-pass"

# Redis (empty password is fine for internal-only)
vault kv put netbox/redis \
  password="redis-pass"

# Email/SMTP (empty string if not using)
vault kv put netbox/email \
  password="email-pass"
```
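The two `openssl rand -hex` calls deserve a quick sanity check: NetBox requires a `SECRET_KEY` of at least 50 characters, and its API tokens are 40 hexadecimal characters. You can confirm the generated lengths locally before writing anything to Vault:

```shell
# Generate the values locally and confirm their lengths before `vault kv put`.
SECRET_KEY="$(openssl rand -hex 50)"  # 100 hex chars; comfortably over NetBox's 50-char minimum
API_TOKEN="$(openssl rand -hex 20)"   # 40 hex chars, matching NetBox's token format
echo "${#SECRET_KEY} ${#API_TOKEN}"   # prints: 100 40
```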
Step 5: ExternalSecrets
This is where most people hit a wall. The chart projects multiple Kubernetes secrets into a single volume, and each secret must have exactly the right key names. The way to find them is:
```shell
helm template netbox netbox/netbox --version 8.0.23 \
  | grep -A5 -B5 "superuser_password\|email_password\|db_password"
```
From that output, the chart maps secrets like this:
| Kubernetes Secret | Key | Mounted/Used as |
|---|---|---|
| `netbox-secret` | `secret_key` | Django SECRET_KEY |
| `netbox-secret` | `email_password` | SMTP password |
| `netbox-superuser-secret` | `username` | env SUPERUSER_NAME |
| `netbox-superuser-secret` | `email` | env SUPERUSER_EMAIL |
| `netbox-superuser-secret` | `password` | `/run/secrets/superuser_password` |
| `netbox-superuser-secret` | `api_token` | `/run/secrets/superuser_api_token` |
| `netbox-postgresql-secret` | `password` | DB user password |
| `netbox-postgresql-secret` | `postgres-password` | Bitnami PG superuser password |
Note that email_password lives in netbox-secret, not in the superuser secret. This is counterintuitive and not documented clearly — it caught me out multiple times.
```yaml
# secrets.yaml
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: netbox-secret
spec:
  refreshInterval: "15s"
  secretStoreRef:
    name: vault-backend-global
    kind: ClusterSecretStore
  target:
    name: netbox-secret
    creationPolicy: Owner
  data:
    - secretKey: secret_key
      remoteRef:
        key: netbox/secretkey
        property: secretKey
    - secretKey: email_password
      remoteRef:
        key: netbox/email
        property: password
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: netbox-superuser-secret
spec:
  refreshInterval: "15s"
  secretStoreRef:
    name: vault-backend-global
    kind: ClusterSecretStore
  target:
    name: netbox-superuser-secret
    creationPolicy: Owner
  data:
    - secretKey: username
      remoteRef:
        key: netbox/superuser
        property: username
    - secretKey: email
      remoteRef:
        key: netbox/superuser
        property: email
    - secretKey: password
      remoteRef:
        key: netbox/superuser
        property: password
    - secretKey: api_token
      remoteRef:
        key: netbox/superuser
        property: api_token
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: netbox-postgresql-secret
spec:
  refreshInterval: "15s"
  secretStoreRef:
    name: vault-backend-global
    kind: ClusterSecretStore
  target:
    name: netbox-postgresql-secret
    creationPolicy: Owner
  data:
    - secretKey: password
      remoteRef:
        key: netbox/postgresql
        property: password
    - secretKey: postgres-password
      remoteRef:
        key: netbox/postgresql
        property: postgresPassword
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: netbox-redis-secret
spec:
  refreshInterval: "15s"
  secretStoreRef:
    name: vault-backend-global
    kind: ClusterSecretStore
  target:
    name: netbox-redis-secret
    creationPolicy: Owner
  data:
    - secretKey: redis-password
      remoteRef:
        key: netbox/redis
        property: password
```
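If an `ExternalSecret` stays `READY=False` with permission-denied errors from Vault, the auth role behind `vault-backend-global` likely lacks read access. Remember that KV v2 inserts a `data/` segment into the API path even though `vault kv put` hides it. A minimal read-policy sketch (the policy name is hypothetical):

```hcl
# netbox-read.hcl; apply with: vault policy write netbox-read netbox-read.hcl
path "netbox/data/*" {
  capabilities = ["read"]
}
```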
Step 6: Values
A few k3s-specific decisions here:
- `persistence.enabled: false` — `local-path` is `ReadWriteOnce` and doesn't support media file persistence across replicas. Re-enable this if you add Longhorn or NFS.
- `allowedHosts: ["*"]` — fine for a homelab; restrict it to your actual hostname in production.
```yaml
# values.yaml
superuser:
  existingSecret: netbox-superuser-secret

existingSecret: netbox-secret
existingSecretKey: secret_key

allowedHosts:
  - "netbox.local"
  - "netbox"
  - "localhost"
  - "*"

timeZone: Europe/Berlin
loginRequired: false
defaultLanguage: en-us

persistence:
  enabled: false

postgresql:
  enabled: true
  auth:
    username: netbox
    database: netbox
    existingSecret: netbox-postgresql-secret
    secretKeys:
      adminPasswordKey: postgres-password
      userPasswordKey: password
  primary:
    persistence:
      enabled: true
      storageClass: "local-path"
      size: 5Gi
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi

redis:
  enabled: true
  auth:
    existingSecret: netbox-redis-secret
    existingSecretPasswordKey: redis-password
  master:
    persistence:
      enabled: true
      storageClass: "local-path"
      size: 1Gi
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
      limits:
        cpu: 200m
        memory: 128Mi

resources:
  requests:
    cpu: 100m
    memory: 512Mi
  limits:
    cpu: 1000m
    memory: 1Gi

worker:
  resources:
    requests:
      cpu: 50m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi

housekeeping:
  resources:
    requests:
      cpu: 50m
      memory: 128Mi
    limits:
      cpu: 200m
      memory: 256Mi

replicaCount: 1
```
Step 7: Deploy
Commit everything and push:
```shell
git add apps/base/netbox/
git commit -m "feat(netbox): initial deployment"
git push
```
Watch Flux reconcile:
```shell
flux reconcile kustomization apps --with-source
kubectl get pods -n netbox -w
```
First boot takes 2–5 minutes. PostgreSQL initializes, then NetBox runs migrations, then the superuser is created. You’ll see the worker pod restart a few times while waiting for migrations — that’s normal.
Verifying the Deployment
```shell
kubectl get pods -n netbox
# All should be Running/Ready

kubectl get externalsecret -n netbox
# All should show READY=True

kubectl get helmrelease -n netbox
# Should show READY=True and the chart version
```
Accessing the UI (without Ingress)
Before you wire up your ingress or DNS, port-forward directly:
```shell
kubectl port-forward -n netbox svc/netbox 8000:80
```
Open http://localhost:8000 and log in with the credentials from netbox/superuser in Vault.
Note: If you get a Django `Bad Request (400)` error, add `"localhost"` or `"*"` to `allowedHosts` in `values.yaml`. Django rejects requests from hosts not explicitly in the allowlist.
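When you're ready to move past port-forwarding, the chart ships an ingress template. A sketch of the corresponding `values.yaml` addition, assuming Traefik (the k3s default) and a `netbox.local` hostname; verify the exact field names against `helm show values netbox/netbox --version 8.0.23` before committing:

```yaml
ingress:
  enabled: true
  className: traefik      # k3s ships Traefik as its default ingress controller
  hosts:
    - host: netbox.local  # must also appear in allowedHosts
      paths:
        - /
```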