AKS with Azure Key Vault Using Terraform

Overview

In this post I’ll walk through the Terraform configuration I used to spin up an AKS cluster for my uclabdev environment as part of an n8n self-hosted setup on Azure. The config covers:

  • An AKS cluster with Cilium as both the CNI and network policy engine
  • The Azure Key Vault Secrets Provider add-on enabled on the cluster
  • An Azure Key Vault instance with RBAC-based access control
  • Proper role assignments for both the operator and the AKS managed identity

Authenticating with Azure CLI

Before running Terraform, you need an active Azure CLI session. Since this setup runs inside a dev container, browser-based login isn’t always available — device code flow is the reliable fallback:

az login --use-device-code

You’ll get output like this:

To sign in, use a web browser to open the page https://microsoft.com/devicelogin
and enter the code ABCXYZ123 to authenticate.

Open the URL on any device, enter the code, and authenticate with your Microsoft account. Once complete, the CLI confirms the tenant and subscription selection:

[Tenant and subscription selection]

No     Subscription name     Subscription ID                       Tenant
-----  --------------------  ------------------------------------  -----------------
[1] *  Azure subscription 1  xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  Default Directory

Tenant: Default Directory
Subscription: Azure subscription 1 (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)

Just press Enter to confirm the default, and you’re in.

To verify the active session and confirm which subscription Terraform will target (the entry with "isDefault": true):

az account list

Expected output (with sensitive values redacted):

[
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "isDefault": true,
    "name": "Azure subscription 1",
    "state": "Enabled",
    "tenantDefaultDomain": "myorg.onmicrosoft.com",
    "tenantDisplayName": "Default Directory",
    "user": {
      "name": "[email protected]",
      "type": "user"
    }
  }
]

Prerequisites

  • Terraform >= 1.0
  • Azure CLI authenticated (az login)
  • An Azure subscription

The provider block targets hashicorp/azurerm ~> 4.0.

terraform {
  required_version = ">= 1.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
  }
}

provider "azurerm" {
  features {}
  subscription_id = "azure-subscription-id" # replace with your subscription ID
}
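Hardcoding the subscription ID works for a throwaway lab, but pulling it from a variable keeps it out of version control. A minimal sketch (the variable name is my choice, not anything the azurerm provider requires):

```hcl
variable "subscription_id" {
  description = "Target Azure subscription ID"
  type        = string
}

provider "azurerm" {
  features {}
  subscription_id = var.subscription_id
}
```

You can then feed it from the CLI session without ever writing it down: export TF_VAR_subscription_id=$(az account show --query id -o tsv).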

Resource Group

Everything lives in a single resource group in Germany West Central:

resource "azurerm_resource_group" "aks" {
  name     = "rg-cloud-uclabdev-aks"
  location = "Germany West Central"
}

Keeping AKS and Key Vault in the same resource group simplifies RBAC scoping and makes teardown clean.


AKS Cluster

resource "azurerm_kubernetes_cluster" "main" {
  name                = "uclabdev-cluster"
  location            = azurerm_resource_group.aks.location
  resource_group_name = azurerm_resource_group.aks.name
  dns_prefix          = "uclabdev"
  kubernetes_version  = "1.33.6"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }

  network_profile {
    network_plugin     = "azure"
    network_policy     = "cilium"
    network_data_plane = "cilium"
  }

  key_vault_secrets_provider {
    secret_rotation_enabled = false
  }
}

A few things worth calling out here:

Cilium as network policy and data plane — Using network_policy = "cilium" alongside network_data_plane = "cilium" enables eBPF-based networking. This replaces iptables for packet processing and gives you richer observability and policy enforcement compared to the default azure or calico options. For a dev environment it’s a great way to get familiar with Cilium before running it in production.

System-assigned managed identity — Rather than service principals, the cluster uses a system-assigned identity. This keeps credential management inside Azure and makes the Key Vault integration much simpler (no secrets to rotate for the cluster itself).

Key Vault Secrets Provider — Enabling key_vault_secrets_provider installs the Secrets Store CSI Driver on the cluster and creates a user-assigned managed identity (secret_identity) that we’ll use for RBAC later. Secret rotation is disabled here since this is a dev cluster.


Azure Key Vault

data "azurerm_client_config" "current" {}

resource "azurerm_key_vault" "uclabdev_vault" {
  name                = "kv-n8n-uclabdev"
  location            = azurerm_resource_group.aks.location
  resource_group_name = azurerm_resource_group.aks.name
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"

  soft_delete_retention_days = 7
  purge_protection_enabled   = false
  rbac_authorization_enabled = true

  depends_on = [azurerm_kubernetes_cluster.main]
}

Key decisions:

  • rbac_authorization_enabled = true — This opts into the modern RBAC-based access model instead of legacy vault access policies. It means all permissions are managed via Azure role assignments, which is consistent with how the rest of the Azure platform works and much easier to audit.
  • purge_protection_enabled = false — Combined with a short soft_delete_retention_days = 7, this makes iterative dev/destroy cycles painless. You wouldn’t want this in production.
  • depends_on — Explicit dependency on the AKS cluster ensures the secrets provider identity exists before we try to assign roles against it.
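With RBAC enabled, Terraform itself needs a role assignment (covered in the next section) before it can write to the vault's data plane. As an illustration, seeding a secret looks like this; the secret name and variable are hypothetical:

```hcl
# Hypothetical example: store an n8n encryption key in the vault.
resource "azurerm_key_vault_secret" "n8n_encryption_key" {
  name         = "n8n-encryption-key"   # hypothetical secret name
  value        = var.n8n_encryption_key # hypothetical variable
  key_vault_id = azurerm_key_vault.uclabdev_vault.id

  # Without this, the first apply can fail with a 403: the data-plane
  # write races the Key Vault Administrator role assignment.
  depends_on = [azurerm_role_assignment.kv_admin]
}
```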

RBAC Role Assignments

Two role assignments wire everything together:

# Operator / Terraform runner gets full admin rights
resource "azurerm_role_assignment" "kv_admin" {
  scope                = azurerm_key_vault.uclabdev_vault.id
  role_definition_name = "Key Vault Administrator"
  principal_id         = data.azurerm_client_config.current.object_id
}

# AKS secrets provider identity gets read-only access
resource "azurerm_role_assignment" "aks_keyvault_secrets_provider" {
  scope                = azurerm_key_vault.uclabdev_vault.id
  role_definition_name = "Key Vault Secrets User"
  principal_id         = azurerm_kubernetes_cluster.main.key_vault_secrets_provider[0].secret_identity[0].object_id
}

Key Vault Administrator — Assigned to the identity running Terraform (your own user or a service principal). This lets Terraform create and manage secrets in the vault without needing to toggle access policies.

Key Vault Secrets User — Assigned to the managed identity that the Secrets Store CSI Driver uses when pods request secrets. Secrets User is the least-privilege role for consuming secrets: it can read secret values and metadata, but it cannot create, modify, or delete them.

The secret_identity[0].object_id is the object ID of the user-assigned managed identity that AKS creates automatically when you enable key_vault_secrets_provider. Terraform surfaces it through the cluster resource output.
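If you need the identity's client ID later (for example, when writing a SecretProviderClass), an output saves a trip to the portal. A small sketch:

```hcl
output "secrets_provider_client_id" {
  description = "Client ID of the Key Vault secrets provider managed identity"
  value       = azurerm_kubernetes_cluster.main.key_vault_secrets_provider[0].secret_identity[0].client_id
}
```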


How It Fits Together

Once this is applied, the flow for a pod consuming a secret looks like this:

  1. Pod references a SecretProviderClass resource that points to the Key Vault and a specific secret name.
  2. The Secrets Store CSI Driver uses the secret_identity managed identity to authenticate to Key Vault.
  3. Azure RBAC confirms Key Vault Secrets User is assigned → secret value is returned.
  4. The CSI Driver mounts the secret as a volume (or syncs it to a Kubernetes Secret).
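To make step 1 concrete, here is a sketch of such a SecretProviderClass. The secret name, namespace, and the client ID and tenant ID placeholders are mine, and I'm assuming the standard fields of the Azure provider for the Secrets Store CSI Driver:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: kv-n8n-uclabdev
  namespace: default
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    # Client ID of the secrets provider identity created by the add-on
    userAssignedIdentityID: "<secrets-provider-client-id>"
    keyvaultName: kv-n8n-uclabdev
    tenantId: "<tenant-id>"
    objects: |
      array:
        - |
          objectName: n8n-encryption-key   # hypothetical secret name
          objectType: secret
```

A pod then mounts it via a CSI volume with driver secrets-store.csi.k8s.io and secretProviderClass: kv-n8n-uclabdev.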

Applying It

terraform init
terraform plan
terraform apply

After apply, grab the kubeconfig:

az aks get-credentials \
  --resource-group rg-cloud-uclabdev-aks \
  --name uclabdev-cluster \
  --overwrite-existing




A walkthrough of provisioning an AKS cluster with Cilium networking and Azure Key Vault secrets provider using Terraform — from resource group to RBAC assignments.

2026-03-10

Series: lab

Categories: Kubernetes

Tags: #aks, #azure, #vault, #terraform

