Overview
In the previous post, I walked through the foundational Terraform config for an AKS cluster with Azure Key Vault integration. That was a minimal dev setup — one node, no GitOps, no application secrets.
This post covers the staging evolution of that config. The new additions are:
- A 2-node AKS cluster with a proper staging DNS prefix
- Flux v2 installed as a cluster extension and wired to a private GitHub repo via SSH
- A structured Kustomization hierarchy — controllers → configs → apps — with garbage collection
- Auto-generated DB credentials stored directly in Key Vault at provision time
- Terraform outputs that expose Key Vault details and the secrets provider client ID for use in SecretProviderClass manifests
What Changed from Dev
The resource group and cluster names now reflect the staging environment (rg-n8n-uclabdev-aks, n8n-uclabdev-staging), the node count is bumped to 2, and the DNS prefix is staging. Everything else — Cilium networking, system-assigned identity, Key Vault secrets provider — carries over unchanged.
The random provider is also added alongside azurerm:
terraform {
  required_version = ">= 1.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}
The random provider is used later to generate the database password — more on that below.

The cluster resource itself:
resource "azurerm_kubernetes_cluster" "main" {
  name                = "n8n-uclabdev-staging"
  location            = azurerm_resource_group.aks.location
  resource_group_name = azurerm_resource_group.aks.name
  dns_prefix          = "staging"
  kubernetes_version  = "1.33.6"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }

  network_profile {
    network_plugin     = "azure"
    network_policy     = "cilium"
    network_data_plane = "cilium"
  }

  key_vault_secrets_provider {
    secret_rotation_enabled = false
  }
}
Flux GitOps
Prerequisites
Before Terraform can create the Flux extension, the Microsoft.KubernetesConfiguration resource provider must be registered in your subscription. This is a one-time step:
az provider register --namespace Microsoft.KubernetesConfiguration
Registration is async and can take a few minutes. Check status with:
az provider show --namespace Microsoft.KubernetesConfiguration --query registrationState
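If you'd rather keep this step inside Terraform, the azurerm provider can manage the registration as a resource. A sketch, with the caveat that the provider block must then opt out of automatic registration (in azurerm 4.x, via the resource_provider_registrations setting) to avoid conflicts:

```hcl
# Sketch: register the resource provider from Terraform instead of the az CLI.
# Assumes the azurerm provider is configured to skip automatic registration,
# e.g. resource_provider_registrations = "none" in the provider block.
resource "azurerm_resource_provider_registration" "kubernetes_configuration" {
  name = "Microsoft.KubernetesConfiguration"
}
```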
Cluster Extension
Flux is installed as a first-class AKS cluster extension rather than via the flux CLI or a Helm release. This means Azure manages the Flux system components and their lifecycle:
resource "azurerm_kubernetes_cluster_extension" "flux" {
  name           = "n8n-uclabdev-flux"
  cluster_id     = azurerm_kubernetes_cluster.main.id
  extension_type = "microsoft.flux"
}
Using the managed extension has a practical benefit: it avoids a chicken-and-egg problem where you’d otherwise need to bootstrap Flux separately before Terraform can reconcile anything against the cluster.
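One optional knob worth knowing: the extension resource accepts a release_train argument if you want to be explicit about which Flux build Azure rolls out. A hedged sketch of the same resource with it set:

```hcl
resource "azurerm_kubernetes_cluster_extension" "flux" {
  name           = "n8n-uclabdev-flux"
  cluster_id     = azurerm_kubernetes_cluster.main.id
  extension_type = "microsoft.flux"

  # Optional: pin to the Stable release train (Preview also exists).
  # Omitting it lets Azure pick the default.
  release_train = "Stable"
}
```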
Flux Configuration
With the extension in place, the azurerm_kubernetes_flux_configuration resource points Flux at the GitOps repository:
resource "azurerm_kubernetes_flux_configuration" "main" {
  name       = "n8n-uclabdev-system"
  cluster_id = azurerm_kubernetes_cluster.main.id
  namespace  = "flux-system"

  git_repository {
    url                    = "ssh://[email protected]/yourorg/n8n-uclabdev-gitops"
    reference_type         = "branch"
    reference_value        = "main"
    ssh_private_key_base64 = base64encode(file("~/.ssh/n8n-gitops-deploy-key"))
  }

  kustomizations {
    name                       = "infra-controllers"
    path                       = "./infrastructure/controllers/staging"
    sync_interval_in_seconds   = 300
    garbage_collection_enabled = true
  }

  kustomizations {
    name                       = "infra-configs"
    path                       = "./infrastructure/configs/staging"
    sync_interval_in_seconds   = 300
    depends_on                 = ["infra-controllers"]
    garbage_collection_enabled = true
  }

  kustomizations {
    name                       = "apps"
    path                       = "./apps/staging"
    sync_interval_in_seconds   = 300
    depends_on                 = ["infra-configs"]
    garbage_collection_enabled = true
  }

  scope = "cluster"

  depends_on = [azurerm_kubernetes_cluster_extension.flux]
}
A few things to call out:
SSH authentication — The deploy key is read from disk at plan time and base64-encoded inline. The private key file (~/.ssh/n8n-gitops-deploy-key) must exist on the machine running Terraform. The corresponding public key needs to be added as a deploy key on the GitHub repository with read access.
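If you haven't created the deploy key yet, a dedicated ed25519 keypair works well. A sketch using the path the config expects — the commented gh command is an optional convenience and assumes the GitHub CLI is installed:

```shell
# Generate a dedicated deploy key for Flux. No passphrase, since Terraform
# reads the private key file non-interactively at plan time.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -N "" -C "flux-staging-deploy-key" \
  -f ~/.ssh/n8n-gitops-deploy-key

# Print the public key to paste into GitHub (repo Settings -> Deploy keys),
# or add it with the GitHub CLI:
cat ~/.ssh/n8n-gitops-deploy-key.pub
# gh repo deploy-key add ~/.ssh/n8n-gitops-deploy-key.pub \
#   --repo yourorg/n8n-uclabdev-gitops --title "flux-staging"
```

Read access is sufficient — Flux only pulls from the repo; it never pushes.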
Kustomization order — The three kustomizations model the classic Flux layered approach:
- infra-controllers — installs cluster-wide controllers (cert-manager, external-secrets, ingress-nginx, etc.)
- infra-configs — applies configuration that depends on those controllers being ready (issuers, ingress classes, etc.)
- apps — deploys application workloads (n8n and its dependencies), which depend on config being in place
Each layer depends on the previous via depends_on, so Flux won’t attempt to reconcile apps before the controllers that serve them are healthy.
garbage_collection_enabled = true — Resources removed from the Git repo will be pruned from the cluster automatically. This keeps the cluster state honest and avoids orphaned resources accumulating over time.
Sync interval — 5 minutes (300s) is a reasonable default for staging. Tighten this for faster feedback during active development, or loosen it for stable environments.
Key Vault and DB Credentials
Key Vault
The vault config is largely the same as before, with the name updated for staging:
resource "azurerm_key_vault" "n8n-uclabdev_vault" {
  name                       = "kv-n8n-uclabdev-staging"
  location                   = azurerm_resource_group.aks.location
  resource_group_name        = azurerm_resource_group.aks.name
  tenant_id                  = data.azurerm_client_config.current.tenant_id
  sku_name                   = "standard"
  soft_delete_retention_days = 7
  purge_protection_enabled   = false
  rbac_authorization_enabled = true

  depends_on = [azurerm_kubernetes_cluster.main]
}
RBAC assignments are identical to the dev config — Key Vault Administrator for the Terraform operator, Key Vault Secrets User for the AKS secrets provider managed identity.
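For reference, those two assignments look roughly like this — a sketch reconstructed from the description above (the kv_admin name matches the depends_on reference later in this post; the second resource name is an assumption):

```hcl
# Terraform operator: full data-plane control over the vault.
resource "azurerm_role_assignment" "kv_admin" {
  scope                = azurerm_key_vault.n8n-uclabdev_vault.id
  role_definition_name = "Key Vault Administrator"
  principal_id         = data.azurerm_client_config.current.object_id
}

# AKS secrets provider managed identity: read-only access to secret values.
resource "azurerm_role_assignment" "kv_secrets_user" {
  scope                = azurerm_key_vault.n8n-uclabdev_vault.id
  role_definition_name = "Key Vault Secrets User"
  principal_id         = azurerm_kubernetes_cluster.main.key_vault_secrets_provider[0].secret_identity[0].object_id
}
```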
Auto-Generated DB Password
Rather than hardcoding credentials or managing them out-of-band, the DB password is generated by Terraform and stored directly in Key Vault at provision time:
resource "random_password" "n8n-uclabdev_db_password" {
  length  = 24
  special = false

  lifecycle {
    ignore_changes = all
  }
}

resource "azurerm_key_vault_secret" "n8n-uclabdev_db_user" {
  name         = "n8n-uclabdev-db-user"
  value        = "app"
  key_vault_id = azurerm_key_vault.n8n-uclabdev_vault.id

  depends_on = [azurerm_role_assignment.kv_admin]
}

resource "azurerm_key_vault_secret" "n8n-uclabdev_db_password" {
  name         = "n8n-uclabdev-db-password"
  value        = random_password.n8n-uclabdev_db_password.result
  key_vault_id = azurerm_key_vault.n8n-uclabdev_vault.id

  depends_on = [azurerm_role_assignment.kv_admin]
}
The lifecycle { ignore_changes = all } block on the password resource is important — it prevents Terraform from regenerating the password on subsequent apply runs, which would rotate the credential and break any running workload. The password is generated once, stored in Key Vault, and left alone from that point forward.
special = false avoids characters that can cause issues in PostgreSQL connection strings. Adjust to true if your application handles escaping correctly.
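A middle ground, if your app does handle escaping, is to enable special characters but restrict the set via override_special. An illustrative variant of the resource above — the character set shown is an assumption, not a recommendation:

```hcl
resource "random_password" "n8n-uclabdev_db_password" {
  length  = 24
  special = true
  # Only allow specials that survive a PostgreSQL connection URI without
  # percent-encoding. Illustrative set — adjust to your app's needs.
  override_special = "-_."

  lifecycle {
    ignore_changes = all
  }
}
```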
The depends_on = [azurerm_role_assignment.kv_admin] on both secrets ensures the Key Vault Administrator role assignment has propagated before Terraform attempts to write to the vault. Azure RBAC propagation can lag by a few seconds — without this, secret writes occasionally fail with a 403 even though the role technically exists.
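If the race still bites, a common workaround is an explicit delay via the hashicorp/time provider (which would also need adding to required_providers). A sketch:

```hcl
# Give Azure RBAC time to propagate after the role assignment is created.
resource "time_sleep" "wait_for_rbac" {
  depends_on      = [azurerm_role_assignment.kv_admin]
  create_duration = "30s"
}

# The secrets would then depend on the sleep instead of the role assignment:
#   depends_on = [time_sleep.wait_for_rbac]
```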
Terraform Outputs
Three outputs are exposed for use downstream — either by other Terraform configs or when configuring SecretProviderClass manifests manually:
output "key_vault_name" {
  value = azurerm_key_vault.n8n-uclabdev_vault.name
}

output "key_vault_uri" {
  value = azurerm_key_vault.n8n-uclabdev_vault.vault_uri
}

output "aks_keyvault_secrets_provider_client_id" {
  value       = azurerm_kubernetes_cluster.main.key_vault_secrets_provider[0].secret_identity[0].client_id
  description = "AKS Key Vault Secrets Provider Client ID for use in SecretProviderClass"
}
The most useful of these is aks_keyvault_secrets_provider_client_id — it’s the client ID of the managed identity that pods use to authenticate to Key Vault when requesting secrets via the CSI driver. You’ll need this value when authoring SecretProviderClass resources:
terraform output aks_keyvault_secrets_provider_client_id
Reference it in your SecretProviderClass like so:
parameters:
  clientID: "<value from terraform output>"
  keyvaultName: "<value from terraform output key_vault_name>"
  tenantId: "<your tenant ID>"
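Put together, a minimal SecretProviderClass for the two DB secrets might look like the sketch below. The metadata name and namespace are assumptions, and the objects list mirrors the two Key Vault secret names created earlier:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: n8n-db-credentials   # assumed name
  namespace: n8n             # assumed namespace
spec:
  provider: azure
  parameters:
    clientID: "<value from terraform output aks_keyvault_secrets_provider_client_id>"
    keyvaultName: "kv-n8n-uclabdev-staging"
    tenantId: "<your tenant ID>"
    objects: |
      array:
        - |
          objectName: n8n-uclabdev-db-user
          objectType: secret
        - |
          objectName: n8n-uclabdev-db-password
          objectType: secret
```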
Applying It
terraform init
terraform plan
terraform apply
After apply, verify Flux is reconciling correctly:
az aks get-credentials \
  --resource-group rg-n8n-uclabdev-aks \
  --name n8n-uclabdev-staging \
  --overwrite-existing

flux get kustomizations
You should see all three kustomizations (infra-controllers, infra-configs, apps) in a Ready state.