I wanted to add visitor analytics to this blog without relying on Google Analytics or any third-party cloud service. Umami is a great fit — it’s open source, privacy-friendly, GDPR-compliant, and self-hostable. This post covers how I deployed it on my k3s cluster using Cilium Gateway API and Cloudflare Tunnels, and the one gotcha that had me debugging for longer than I’d like to admit.
Stack
- k3s — lightweight Kubernetes running on a Raspberry Pi 5 cluster
- CloudNative PG (CNPG) — Postgres operator for the Umami database
- Cloudflare Tunnels — for exposing services to the internet without opening ports
- Umami 3.0.3 — the analytics server itself
Database
I use CloudNative PG to manage the Postgres cluster. Two instances for HA, with WAL archiving to an object store via the barman-cloud plugin.
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: umami-db
spec:
  instances: 2
  bootstrap:
    initdb:
      database: umami
      owner: umami
      secret:
        name: umami-db-creds
  storage:
    size: 5Gi
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: umami-objectstore
  monitoring:
    enablePodMonitor: true
```
CNPG automatically creates three services: umami-db-rw (read-write, always the primary), umami-db-ro (read-only, replicas only), and umami-db-r (reads from any instance, including the primary). Umami connects to the primary via umami-db-rw.umami.svc.cluster.local.
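The umami-db-creds Secret referenced under initdb follows CNPG's convention of a basic-auth Secret carrying the database owner's credentials. A minimal sketch (the password is a placeholder):

```yaml
# Placeholder credentials; CNPG expects a kubernetes.io/basic-auth Secret
# with username and password keys for bootstrap.initdb.secret.
apiVersion: v1
kind: Secret
metadata:
  name: umami-db-creds
  namespace: umami
type: kubernetes.io/basic-auth
stringData:
  username: umami
  password: change-me
```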
Umami Deployment
The ConfigMap holds non-sensitive config. Note the two env vars at the bottom — TRACKER_SCRIPT_NAME and COLLECT_API_ENDPOINT — more on why these are critical in a moment.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: umami-config
data:
  DB_TYPE: "postgresdb"
  DB_HOST: "umami-db-rw.umami.svc.cluster.local"
  DB_PORT: "5432"
  DB_DATABASE: "umami"
  TRACKER_SCRIPT_NAME: "x.js"
  COLLECT_API_ENDPOINT: "/api/x"
```
The Deployment references both the ConfigMap and a Secret for database credentials. One neat trick: Kubernetes expands `$(VAR)` references to environment variables defined earlier in the same `env` list, so DATABASE_URL is built dynamically from the individual DB_* vars:
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: umami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: umami
  template:
    metadata:
      labels:
        app: umami
    spec:
      containers:
        - name: umami
          image: ghcr.io/umami-software/umami:3.0.3
          ports:
            - containerPort: 3000
          env:
            - name: DB_TYPE
              valueFrom:
                configMapKeyRef:
                  name: umami-config
                  key: DB_TYPE
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: umami-config
                  key: DB_HOST
            - name: DB_PORT
              valueFrom:
                configMapKeyRef:
                  name: umami-config
                  key: DB_PORT
            - name: DB_DATABASE
              valueFrom:
                configMapKeyRef:
                  name: umami-config
                  key: DB_DATABASE
            - name: TRACKER_SCRIPT_NAME
              valueFrom:
                configMapKeyRef:
                  name: umami-config
                  key: TRACKER_SCRIPT_NAME
            - name: COLLECT_API_ENDPOINT
              valueFrom:
                configMapKeyRef:
                  name: umami-config
                  key: COLLECT_API_ENDPOINT
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: umami-env-container-db
                  key: DB_USERNAME
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: umami-env-container-db
                  key: DB_PASSWORD
            # $(VAR) references resolve against the env entries defined above
            - name: DATABASE_URL
              value: "postgres://$(DB_USERNAME):$(DB_PASSWORD)@$(DB_HOST):$(DB_PORT)/$(DB_DATABASE)"
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
          livenessProbe:
            httpGet:
              path: /api/heartbeat
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 15
          readinessProbe:
            httpGet:
              path: /api/heartbeat
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 10
```
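The `$(VAR)` expansion behaves much like ordinary shell interpolation. A quick illustration with placeholder credentials (not my real ones):

```shell
# Illustrative only: mirrors how Kubernetes builds DATABASE_URL from the
# env entries defined earlier in the pod spec. Values are placeholders.
DB_USERNAME="umami"
DB_PASSWORD="s3cret"
DB_HOST="umami-db-rw.umami.svc.cluster.local"
DB_PORT="5432"
DB_DATABASE="umami"
DATABASE_URL="postgres://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_DATABASE}"
echo "$DATABASE_URL"
```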
And the Service:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: umami-server
  namespace: umami
spec:
  selector:
    app: umami
  ports:
    - port: 3000
```
Cloudflare Tunnel Config
The blog runs behind a Cloudflare Tunnel. The tunnel config is stored as a ConfigMap and consumed by a cloudflared Deployment. Traffic for umami.uclab.dev is routed straight to the Umami service via its cluster DNS name.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloudflared
data:
  config.yaml: |
    tunnel: uclab
    credentials-file: /etc/cloudflared/creds/credentials.json
    metrics: 0.0.0.0:2000
    no-autoupdate: true
    ingress:
      - hostname: uclab.dev
        service: http://uclab:80
      - hostname: umami.uclab.dev
        service: http://umami-server.umami.svc.cluster.local:3000
      # cloudflared requires the last ingress rule to be a catch-all
      - service: http_status:404
```
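For completeness, a minimal sketch of the cloudflared Deployment that consumes this ConfigMap. The mount paths match the config above; the credentials Secret name ("tunnel-credentials") is my assumption, so adjust it to your tunnel setup:

```yaml
# Hedged sketch of a cloudflared Deployment mounting the ConfigMap above.
# The "tunnel-credentials" Secret name is an assumption.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudflared
  template:
    metadata:
      labels:
        app: cloudflared
    spec:
      containers:
        - name: cloudflared
          image: cloudflare/cloudflared:latest
          args: ["tunnel", "--config", "/etc/cloudflared/config/config.yaml", "run"]
          volumeMounts:
            - name: config
              mountPath: /etc/cloudflared/config
              readOnly: true
            - name: creds
              mountPath: /etc/cloudflared/creds
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: cloudflared
        - name: creds
          secret:
            secretName: tunnel-credentials
```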
Tracking Script in Hugo
In `layouts/partials/head.html`:

```html
<script
  defer
  src="https://umami.uclab.dev/x.js"
  data-website-id="YOUR_WEBSITE_ID">
</script>
```
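To avoid counting my own visits during local development, the tag can be wrapped so Hugo only renders it in production builds. A sketch using Hugo's built-in `hugo.IsProduction`:

```html
{{ if hugo.IsProduction }}
<script
  defer
  src="https://umami.uclab.dev/x.js"
  data-website-id="YOUR_WEBSITE_ID">
</script>
{{ end }}
```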
The Gotcha: Ad Blockers
After deploying everything and verifying Umami was reachable, the dashboard still showed zero visitors. The culprit was Ghostery (and by extension any privacy-focused browser extension like uBlock Origin or Privacy Badger). These tools maintain blocklists that match common analytics script patterns — /script.js from a known analytics domain is a dead giveaway.
Umami’s documentation covers this cleanly. You can rename both the tracker script and the collection endpoint using environment variables:
- `TRACKER_SCRIPT_NAME` — renames `script.js` to anything you want (e.g. `x.js`)
- `COLLECT_API_ENDPOINT` — renames `/api/send` to anything you want (e.g. `/api/x`)
The tracker script automatically picks up the custom endpoint, so no additional data-api attribute is needed in the script tag. Setting these two env vars and redeploying was all it took. Ghostery no longer blocks the script because x.js from a custom domain doesn’t match any known analytics pattern.
This is also the approach recommended over proxying the script through your own domain — less infrastructure, same result.
Verification
With Ghostery enabled, open DevTools → Network tab and reload your blog. You should see:
- `x.js` loading with status `200`
- A POST to `/api/x` firing immediately after with status `200`
And within a minute, your Umami dashboard should start showing live visitors.