A self-hosted view counter for a static Hugo blog — built from scratch in Python, containerized, signed, deployed to k3s via GitOps, and integrated into the Hugo frontend. No third-party services. Every piece runs in the cluster.

## Architecture

```text
Visitor loads post
        ↓
Hugo frontend JS → POST /views/{slug} → view-counter API
                 → GET  /views/{slug} → display count
        ↓
FastAPI (Python)
        ↓
PostgreSQL (CNPG)
2 instances, WAL archiving to MinIO
```

CI/CD flow:

```text
git push → Forgejo CI
         → multi-stage Docker build
         → Cosign image signing
         → image digest update in the pi5cluster repo
         → Flux GitOps reconciles
         → Kubernetes rolling deployment
```
## Python Application

The service is built with FastAPI and asyncpg, managed with uv, and structured as a proper Python package.
Project structure:

```text
view-counter/
├── src/
│   └── view_counter/
│       ├── __init__.py
│       ├── app.py
│       ├── database.py
│       └── routes.py
├── main.py
├── pyproject.toml
├── uv.lock
├── Dockerfile
└── .forgejo/
    └── workflows/
        └── build.yml
```
`pyproject.toml`:

```toml
[project]
name = "view-counter"
version = "0.1.0"
description = "View counter microservice for uclab.dev"
requires-python = ">=3.12"
dependencies = [
    "asyncpg>=0.31.0",
    "fastapi>=0.135.1",
    "python-dotenv>=1.2.2",
    "uvicorn>=0.42.0",
]

[dependency-groups]
dev = [
    "httpx>=0.28.1",
    "pytest>=9.0.2",
    "pytest-asyncio>=1.3.0",
]

[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
```
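With the lockfile committed, the day-to-day workflow is just uv commands (all of these are standard uv subcommands; `uv run` resolves the project environment automatically):

```sh
# create or update .venv from uv.lock
uv sync

# run the test suite from the dev dependency group
uv run pytest

# run the app locally
uv run python main.py
```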
`src/view_counter/database.py`:

```python
import os

import asyncpg

_pool: asyncpg.Pool | None = None


async def get_pool() -> asyncpg.Pool:
    global _pool
    if _pool is None:
        _pool = await asyncpg.create_pool(
            host=os.getenv("DB_HOST", "localhost"),
            port=int(os.getenv("DB_PORT", "5432")),
            database=os.getenv("DB_NAME", "view-counter"),
            user=os.getenv("DB_USER", "view-counter"),
            password=os.getenv("DB_PASSWORD", ""),
            min_size=2,
            max_size=10,
        )
    return _pool


async def init_db() -> None:
    try:
        pool = await get_pool()
        async with pool.acquire() as conn:
            await conn.execute("""
                CREATE TABLE IF NOT EXISTS views (
                    slug TEXT PRIMARY KEY,
                    count BIGINT NOT NULL DEFAULT 0,
                    last_seen TIMESTAMPTZ NOT NULL DEFAULT now()
                )
            """)
    except Exception as e:
        print(f"WARNING: DB not available: {e}")
        print("App starting without DB — endpoints will fail until DB is reachable")


async def close_pool() -> None:
    global _pool
    if _pool is not None:
        await _pool.close()
        _pool = None
```
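`init_db` deliberately swallows connection errors so the pod can start before the CNPG cluster is ready. If you preferred to keep retrying instead of giving up, a small backoff helper could wrap it — this is a hypothetical sketch (`retry_async` and its parameters are not part of the service), demonstrated here with a stub instead of a real database call:

```python
import asyncio


async def retry_async(func, *, attempts: int = 5, base_delay: float = 0.5):
    """Call an async function, retrying with exponential backoff.

    Hypothetical helper: the real service just logs a warning and moves on.
    """
    for attempt in range(attempts):
        try:
            return await func()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the error
            await asyncio.sleep(base_delay * 2 ** attempt)


# Demo: a stub that fails twice before succeeding, standing in for init_db.
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("db not ready")
    return "connected"

result = asyncio.run(retry_async(flaky, base_delay=0.01))
print(result, calls["n"])  # connected 3
```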
`src/view_counter/routes.py`:

```python
from fastapi import APIRouter

from .database import get_pool

router = APIRouter()


@router.get("/health")
async def health():
    return {"status": "ok"}


@router.get("/views")
async def get_all_views():
    pool = await get_pool()
    async with pool.acquire() as conn:
        rows = await conn.fetch(
            "SELECT slug, count FROM views ORDER BY count DESC"
        )
    return [{"slug": r["slug"], "count": r["count"]} for r in rows]


@router.get("/views/{slug:path}")
async def get_views(slug: str):
    pool = await get_pool()
    async with pool.acquire() as conn:
        row = await conn.fetchrow(
            "SELECT count FROM views WHERE slug = $1", slug
        )
    return {"slug": slug, "count": row["count"] if row else 0}


@router.post("/views/{slug:path}")
async def increment_views(slug: str):
    pool = await get_pool()
    async with pool.acquire() as conn:
        row = await conn.fetchrow("""
            INSERT INTO views (slug, count, last_seen)
            VALUES ($1, 1, now())
            ON CONFLICT (slug) DO UPDATE
            SET count = views.count + 1,
                last_seen = now()
            RETURNING count
        """, slug)
    return {"slug": slug, "count": row["count"]}
```

The `{slug:path}` parameter type tells FastAPI to accept forward slashes in the slug — necessary because Hugo post URLs include the full path, like `posts/my-post-title/`.
`src/view_counter/app.py`:

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from .database import init_db, close_pool
from .routes import router


@asynccontextmanager
async def lifespan(app: FastAPI):
    await init_db()
    yield
    await close_pool()


def create_app() -> FastAPI:
    app = FastAPI(
        title="View Counter",
        version="0.1.0",
        lifespan=lifespan,
    )
    app.add_middleware(
        CORSMiddleware,
        allow_origins=[
            "https://uclab.dev",
            "http://localhost:1313",
        ],
        allow_methods=["GET", "POST"],
        allow_headers=["*"],
    )
    app.include_router(router)
    return app
```
`main.py`:

```python
import uvicorn

from src.view_counter.app import create_app

app = create_app()

if __name__ == "__main__":
    uvicorn.run(
        "main:app",
        host="0.0.0.0",
        port=9313,
        reload=True,
    )
```
## Dockerfile

Multi-stage build using the official uv image as the builder. The final image runs as a non-root user with a read-only root filesystem.

```dockerfile
FROM python:3.12-slim AS builder
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
WORKDIR /app
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-dev --no-cache
COPY src/ ./src/
COPY main.py ./

FROM python:3.12-slim
RUN useradd --no-create-home --shell /bin/false appuser
WORKDIR /app
COPY --from=builder /app ./
COPY --from=builder /app/.venv ./.venv
ENV PATH="/app/.venv/bin:$PATH"
ENV PYTHONUNBUFFERED=1
USER appuser
EXPOSE 9313
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "9313"]
```
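The image can be sanity-checked locally before CI gets involved (the `view-counter:dev` tag and the `DB_HOST` value are arbitrary; `/health` answers even without a reachable database thanks to the lenient `init_db`):

```sh
docker build -t view-counter:dev .
docker run --rm -p 9313:9313 -e DB_HOST=host.docker.internal view-counter:dev

# in another terminal
curl -s http://localhost:9313/health
# {"status":"ok"}
```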
Key decisions:

- uv resolves dependencies deterministically from `uv.lock`
- `--frozen` ensures the lockfile is always honoured in CI
- `--no-dev` excludes test dependencies from the production image
- The non-root `appuser` satisfies the cluster pod security policy
- `readOnlyRootFilesystem: true` is enforced at the Kubernetes level
## Forgejo CI Pipeline

The pipeline builds the image, signs it with Cosign, and updates the image digest in the GitOps repo to trigger a Flux reconciliation.

`.forgejo/workflows/build.yml`:

```yaml
name: Build and Deploy

on:
  push:
    branches: [main]

env:
  REGISTRY: forgejo.uclab.dev
  IMAGE: forgejo.uclab.dev/affragak/view-counter

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v4
        with:
          submodules: recursive
          fetch-depth: 0

      - name: Set image tag
        run: |
          SHORT_SHA=$(git rev-parse --short HEAD)
          echo "TAG=${SHORT_SHA}" >> $GITHUB_ENV

      - name: Login to Forgejo registry
        run: echo "${{ secrets.REGISTRY_TOKEN }}" | docker login ${{ env.REGISTRY }} -u ${{ secrets.REGISTRY_USER }} --password-stdin

      - name: Build and push image
        run: |
          docker build -t ${{ env.IMAGE }}:${{ env.TAG }} .
          docker push ${{ env.IMAGE }}:${{ env.TAG }}
          DIGEST=$(docker inspect --format='{{index .RepoDigests 0}}' ${{ env.IMAGE }}:${{ env.TAG }})
          echo "IMAGE_WITH_DIGEST=${DIGEST}" >> $GITHUB_ENV

      - name: Install Cosign
        uses: sigstore/[email protected]

      - name: Sign image with Cosign
        env:
          COSIGN_PASSWORD: ${{ secrets.COSIGN_PASSWORD }}
        run: |
          echo "${{ secrets.COSIGN_PRIVATE_KEY }}" > /tmp/cosign.key
          cosign sign --yes --key /tmp/cosign.key \
            --registry-referrers-mode=legacy \
            --registry-username ${{ secrets.REGISTRY_USER }} \
            --registry-password ${{ secrets.REGISTRY_TOKEN }} \
            ${{ env.IMAGE_WITH_DIGEST }}
          rm /tmp/cosign.key

      - name: Setup SSH for GitHub
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.GH_DEPLOY_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan github.com >> ~/.ssh/known_hosts

      - name: Clone k3s repo and update image tag
        run: |
          git clone [email protected]:affragak/pi5cluster.git /tmp/pi5cluster
          cd /tmp/pi5cluster
          git config user.name "forgejo-ci"
          git config user.email "[email protected]"
          DEPLOY_FILE="apps/base/view-counter/deployment.yaml"
          sed -i "s|image: .*|image: ${{ env.IMAGE_WITH_DIGEST }}|" $DEPLOY_FILE
          git add $DEPLOY_FILE
          git diff --staged --quiet && echo "No changes to commit" && exit 0
          git commit -m "chore: deploy view-counter ${{ env.TAG }}"
          git push
```
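Anyone with the matching public key can then verify a signature out of band (the `cosign.pub` filename and the digest placeholder are assumptions; substitute the digest printed by the pipeline):

```sh
cosign verify --key cosign.pub \
  forgejo.uclab.dev/affragak/view-counter@sha256:<digest>
```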
Required Forgejo repository secrets:

| Secret | Purpose |
|---|---|
| `REGISTRY_USER` | Forgejo registry username |
| `REGISTRY_TOKEN` | Forgejo token with `write:packages` scope |
| `COSIGN_PRIVATE_KEY` | Cosign private key stored in Vault |
| `COSIGN_PASSWORD` | Cosign key password |
| `GH_DEPLOY_KEY` | SSH deploy key for the pi5cluster repo |
## Kubernetes Manifests

All manifests live in `apps/base/view-counter/` in the pi5cluster Flux repo and are reconciled by Flux automatically on every push.

`namespace.yaml`:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: view-counter
```
`secret.yaml` — an ExternalSecret synced from Vault via the External Secrets Operator:

```yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: view-counter-container-env
  namespace: view-counter
spec:
  refreshInterval: "15s"
  secretStoreRef:
    name: vault-backend-global
    kind: ClusterSecretStore
  target:
    name: view-counter-container-env
    creationPolicy: Owner
  data:
    - secretKey: DB_USER
      remoteRef:
        key: view-counter-container-env
        property: username
    - secretKey: DB_PASSWORD
      remoteRef:
        key: view-counter-container-env
        property: password
```

Store credentials in Vault before deploying:

```sh
vault kv put -mount=apps view-counter-container-env \
  username=view-counter \
  password=$(openssl rand -base64 24)
```
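A quick read-back confirms the secret landed (this assumes a KV engine mounted at `apps`, as in the put command above):

```sh
vault kv get -mount=apps view-counter-container-env
```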
`configmap.yaml` (note the explicit namespace, matching the other manifests — without it the ConfigMap would land in `default` and the Deployment's `envFrom` would fail):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: viewcounter-configmap
  namespace: view-counter
data:
  DB_NAME: view-counter
  DB_PORT: "5432"
  DB_HOST: view-counter-db-rw.view-counter.svc.cluster.local
```
`database.yaml` — CNPG Cluster with two instances and WAL archiving:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: view-counter-db
  namespace: view-counter
spec:
  instances: 2
  bootstrap:
    initdb:
      database: view-counter
      owner: view-counter
      secret:
        name: view-counter-container-env
  storage:
    size: 1Gi
  monitoring:
    enablePodMonitor: true
```
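The WAL-archiving side is not shown in the manifest above; with CNPG's `barmanObjectStore` it would look roughly like this (the MinIO endpoint, bucket path, retention, and `minio-creds` secret name are all assumptions for illustration):

```yaml
# hypothetical addition to spec: in database.yaml
spec:
  backup:
    retentionPolicy: "7d"
    barmanObjectStore:
      destinationPath: s3://cnpg-backups/view-counter-db
      endpointURL: http://minio.minio.svc.cluster.local:9000
      s3Credentials:
        accessKeyId:
          name: minio-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: minio-creds
          key: ACCESS_SECRET_KEY
```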
`deployment.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: view-counter
  namespace: view-counter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: view-counter
  template:
    metadata:
      labels:
        app: view-counter
    spec:
      automountServiceAccountToken: false
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 1000
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: view-counter
          image: forgejo.uclab.dev/affragak/view-counter:placeholder
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: [ALL]
          ports:
            - containerPort: 9313
          envFrom:
            - configMapRef:
                name: viewcounter-configmap
            - secretRef:
                name: view-counter-container-env
          resources:
            requests:
              cpu: 20m
              memory: 64Mi
            limits:
              cpu: 200m
              memory: 128Mi
          livenessProbe:
            httpGet:
              path: /health
              port: 9313
            initialDelaySeconds: 10
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /health
              port: 9313
            initialDelaySeconds: 5
            periodSeconds: 10
      imagePullSecrets:
        - name: forgejo-registry
```
`service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: viewcounter-server
  namespace: view-counter
spec:
  selector:
    app: view-counter
  ports:
    - port: 9313
      targetPort: 9313
  type: ClusterIP
```
`kustomization.yaml`:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - secret.yaml
  - configmap.yaml
  - database.yaml
  - deployment.yaml
  - service.yaml
```
## Cloudflare Tunnel

The service has no public port. Traffic reaches it exclusively through the Cloudflare Tunnel. Add the route to the cloudflared ConfigMap in the Flux repo:

```yaml
ingress:
  - hostname: viewcounter.uclab.dev
    service: http://viewcounter-server.view-counter.svc.cluster.local:9313
```

Add a CNAME DNS record in Cloudflare pointing `viewcounter` at your tunnel's `.cfargotunnel.com` address, then restart cloudflared:

```sh
kubectl rollout restart deployment cloudflared -n cloudflared
curl -s https://viewcounter.uclab.dev/health
# {"status":"ok"}
```
## Hugo Integration

Reading time comes from Hugo's native `.ReadingTime` — zero extra infrastructure. The view counter is a small JavaScript snippet that calls the API on every post load.

Add to `config.toml`:

```toml
[params]
viewCounterHost = "https://viewcounter.uclab.dev"
```

Add to the aside block in `layouts/_default/single.html`, above the date:

```html
<p>
  <span>{{ .ReadingTime }} min read</span>
  ·
  <span id="view-count"></span>
</p>
<script>
  (async () => {
    const slug = '{{ .RelPermalink }}'.replace(/^\//, '');
    const base = '{{ .Site.Params.viewCounterHost }}';
    try {
      await fetch(base + '/views/' + slug, { method: 'POST' });
      const res = await fetch(base + '/views/' + slug);
      const data = await res.json();
      document.getElementById('view-count').textContent = data.count + ' views';
    } catch (e) {
      document.getElementById('view-count').textContent = '';
    }
  })();
</script>
```
The script POSTs to increment, then GETs to display. (Since the POST endpoint already returns the new count, the second request is strictly optional.) The try/catch silently hides any failure — a microservice hiccup never affects the reading experience. The leading slash is stripped from `.RelPermalink` since FastAPI's `{slug:path}` parameter handles the remaining path segments.
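For reference, the slug normalization the snippet performs can be sketched as a pure Python function (`permalink_to_slug` is a hypothetical helper for illustration, not part of the service):

```python
def permalink_to_slug(rel_permalink: str) -> str:
    """Map Hugo's .RelPermalink (e.g. "/posts/my-post/") to the API slug
    by dropping the leading slash; FastAPI's {slug:path} keeps the rest."""
    return rel_permalink.removeprefix("/")


print(permalink_to_slug("/posts/my-post-title/"))  # posts/my-post-title/
```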
## Verification

```sh
# Health check
curl https://viewcounter.uclab.dev/health
# {"status":"ok"}

# Increment a counter
curl -X POST "https://viewcounter.uclab.dev/views/posts/my-post/"
# {"slug":"posts/my-post/","count":1}

# Read a counter
curl "https://viewcounter.uclab.dev/views/posts/my-post/"
# {"slug":"posts/my-post/","count":1}

# All counters ranked by views
curl "https://viewcounter.uclab.dev/views"
# [{"slug":"posts/my-post/","count":1}]

# Database cluster health
kubectl cnpg status view-counter-db -n view-counter
```
## What This Demonstrates

This project covers the full DevOps lifecycle for a greenfield microservice:

**Code** — Python FastAPI structured as a proper package, with uv for dependency management. Async database access via asyncpg with connection pooling. CORS configured for the exact origins that need access. Graceful startup without a database connection, so the container passes readiness probes even before the CNPG cluster is ready.

**Container** — Multi-stage Dockerfile with uv as the build tool. The final image is about 50 MB and runs as non-root with a read-only root filesystem. The CI runner builds natively for the target architecture.

**Supply chain** — Every image is signed with Cosign by digest as part of the pipeline. The signing key lives in Vault and is injected into the CI runner's ephemeral /tmp for the duration of the signing step, then deleted. The ClusterImagePolicy enforces signature verification at admission time for namespaces that carry the `policy.sigstore.dev/include: "true"` label.

**GitOps** — The CI pipeline never deploys directly to the cluster. It updates the image digest in the pi5cluster repo, and Flux reconciles the change on its next sync cycle. Git is always the source of truth for cluster state. A rolling update replaces the old pod only after the new one passes its readiness probe.

**Database** — PostgreSQL managed by CloudNative-PG with two instances, streaming replication across two NUC nodes, and WAL archiving to MinIO for point-in-time recovery. The schema is created at application startup via asyncpg — no migration tooling needed for a single table.

**Secrets** — Credentials stored in Vault, synced into the cluster via the External Secrets Operator with a 15-second refresh interval. The ExternalSecret manifest is safe to commit to Git — it contains only a reference to the Vault path, never the value.

**Networking** — ClusterIP service, no NodePort or LoadBalancer. The only ingress path is through the Cloudflare Tunnel, which terminates TLS and enforces zero-trust access policies before traffic reaches the pod.

**Observability** — CNPG's `enablePodMonitor: true` wires up Prometheus scraping automatically. Liveness and readiness probes on `/health` give Kubernetes an accurate signal for traffic routing and restart decisions.