Vault on a Synology NAS and Migrating Secrets

I recently needed to migrate my HashiCorp Vault instance from a dedicated Ubuntu VM to a Docker container running on my Synology DS918+ NAS. This post covers the entire process: setting up Vault on Synology, configuring it behind the built-in reverse proxy, creating a proper policy and user structure, and finally migrating all secrets from the old instance — including re-connecting everything to an External Secrets Operator running in a k3s cluster.

Why Vault on Synology?

My Synology NAS runs 24/7 anyway, has Docker support via Container Manager, and I wanted to reduce the number of always-on VMs in my homelab. Vault is a perfect candidate — it’s lightweight when idle and doesn’t need dedicated compute.


Part 1: Preparing the Synology NAS

1.1 Folder Setup

Before touching Container Manager, create the directory structure for Vault’s data and configuration.

  1. Open File Station
  2. Navigate to your docker shared folder
  3. Create a folder named vault
  4. Inside vault, create two sub-folders:
    • config
    • file — this is where Vault’s encrypted storage lives

Your structure should look like:

docker/
└── vault/
    ├── config/
    └── file/

1.2 Download the Vault Image

  1. Open Container Manager → go to the Registry tab
  2. Search for vault
  3. Select the official hashicorp/vault image and click Download
  4. Choose the latest tag (or pin a specific version if you prefer)

Note: If the download button does nothing, HashiCorp may have moved the image. Try adding https://registry.hashicorp.com as a custom registry under Registry → Settings → Add.


Part 2: Configuring the Container

Once the image is downloaded, go to Image, select hashicorp/vault, and click Run.

General Settings

Setting          Value
Container Name   vault
Auto-Restart     ✅ Enabled

Volumes

Map your local folders into the container so data persists across restarts:

Local Folder          Container Path   Type
docker/vault/config   /vault/config    Read/Write
docker/vault/file     /vault/file      Read/Write

Ports

Local Port   Container Port   Protocol
8200         8200             TCP

Environment Variables

This is the most important part. Add the following variables:

Variable             Value                        Description
VAULT_ADDR           http://0.0.0.0:8200          Address the CLI inside the container uses to reach Vault
VAULT_API_ADDR       http://[YOUR_NAS_IP]:8200    Advertised API address; replace with your NAS IP
SKIP_SETCAP          true                         Skips setcap, which fails in Synology's Docker environment
VAULT_LOCAL_CONFIG   (see below)                  Inline server configuration: storage, listener, and UI

For VAULT_LOCAL_CONFIG, paste this JSON as the value:

{
  "storage": {
    "file": {
      "path": "/vault/file"
    }
  },
  "listener": [
    {
      "tcp": {
        "address": "0.0.0.0:8200",
        "tls_disable": 1
      }
    }
  ],
  "ui": true,
  "disable_mlock": true
}

tls_disable: 1 is intentional here — TLS will be handled by the Synology reverse proxy in the next step.
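If you find JSON awkward for this, VAULT_LOCAL_CONFIG also accepts HCL, HashiCorp's native configuration syntax. The following is the same configuration as the JSON above, just rewritten in HCL:

```hcl
# Equivalent of the JSON configuration above, in HCL
storage "file" {
  path = "/vault/file"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}

ui            = true
disable_mlock = true
```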

Execution Command

In the command/capabilities section, set the execution command to:

server

Start the container.
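If you prefer managing the container as code, Container Manager's Projects feature accepts docker-compose files. A sketch of the settings above as compose (the NAS IP placeholder and volume1 paths are assumptions; adjust for your system):

```yaml
# Hypothetical docker-compose equivalent of the Container Manager settings above
services:
  vault:
    image: hashicorp/vault:latest
    container_name: vault
    restart: unless-stopped
    command: server
    ports:
      - "8200:8200"
    volumes:
      - /volume1/docker/vault/config:/vault/config
      - /volume1/docker/vault/file:/vault/file
    environment:
      VAULT_ADDR: "http://0.0.0.0:8200"
      VAULT_API_ADDR: "http://[YOUR_NAS_IP]:8200"   # replace with your NAS IP
      SKIP_SETCAP: "true"
      VAULT_LOCAL_CONFIG: >-
        {"storage":{"file":{"path":"/vault/file"}},
         "listener":[{"tcp":{"address":"0.0.0.0:8200","tls_disable":1}}],
         "ui":true,"disable_mlock":true}
```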


Part 3: HTTPS via Synology Reverse Proxy

Your Vault container speaks plain HTTP internally. The Synology reverse proxy will terminate TLS and forward traffic over HTTP to the container.

3.1 Create the Reverse Proxy Rule

Go to Control Panel → Login Portal → Advanced → Reverse Proxy → Create:

Source (public-facing):

Field         Value
Protocol      HTTPS
Hostname      vault.yourdomain.com
Port          443
Enable HSTS   ✅ (optional)

Destination (Vault container):

Field      Value
Protocol   HTTP
Hostname   localhost
Port       8200

3.2 Assign the Certificate

  1. Go to Control Panel → Security → Certificate
  2. Click Settings
  3. Find your vault.yourdomain.com reverse proxy entry
  4. Assign your Let’s Encrypt certificate to it
  5. Click OK

3.3 Update the API Address

Now that Vault is behind HTTPS, update the VAULT_API_ADDR variable so Vault generates correct links in its UI:

  1. Stop the container
  2. Go to Settings → Environment
  3. Change VAULT_API_ADDR from the IP address to https://vault.yourdomain.com
  4. Start the container again

Part 4: Initialization and Unsealing

Open your browser and navigate to https://vault.yourdomain.com.

You’ll see the Initialize Vault screen.

  1. Set your Key Shares and Key Threshold (e.g. 5 shares, 3 required)
  2. Click Initialize
  3. Download the keys file immediately — if you lose these unseal keys and root token, there is no recovery. Your data is gone forever.
  4. Use 3 of the 5 keys to Unseal the vault

Important: Vault re-seals itself every time it restarts (i.e. every time your NAS reboots). You must manually unseal it each time by entering 3 keys at https://vault.yourdomain.com/ui.


Part 5: Vault Configuration (Secrets Engine, Auth, Policy)

Log in with your root token to complete the initial setup.

5.1 Enable the KV v2 Secrets Engine

  1. Go to Secrets Engines → Enable new engine
  2. Select KV → click Next
  3. Set Path to apps
  4. Ensure Version 2 is selected
  5. Click Enable Engine

5.2 Enable Userpass Authentication

  1. Go to Access → Auth Methods → Enable new method
  2. Select Userpass → click Next
  3. Leave the path as userpass
  4. Click Enable Method

5.3 Create an ACL Policy

  1. Go to Policies → Create ACL policy
  2. Name it apps-manager
  3. Paste the following HCL:
# Full access to KV v2 secrets under apps/
path "apps/data/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

path "apps/metadata/*" {
  capabilities = ["list", "read", "delete"]
}

path "apps/*" {
  capabilities = ["list"]
}
  4. Click Create policy
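For machine consumers such as the External Secrets Operator (Part 7), a narrower read-only policy is safer than handing out apps-manager. A sketch, assuming you name it apps-read:

```hcl
# Hypothetical read-only policy for consumers that only fetch secrets
path "apps/data/*" {
  capabilities = ["read"]
}

path "apps/metadata/*" {
  capabilities = ["read", "list"]
}
```

Create it the same way as apps-manager, then attach it to tokens you hand to automation.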

5.4 Create a User

  1. Go to Access → Auth Methods → userpass → Create user
  2. Set a username and password
  3. In the Policies field, assign apps-manager
  4. Click Create user

5.5 Test the User

Log out and log back in using the Userpass method. Verify:

  • ✅ You can see and create secrets under apps/
  • ❌ You cannot access Access or Policies (permission denied — expected)

Part 6: Migrating Secrets from the Old Vault

6.1 Export from the Old Vault

SSH into your old Vault server and run:

# List all secret paths
vault kv list -format=json apps/ | jq -r '.[]' > paths.txt

# Export all secrets into a single file
echo "{}" > all_secrets.json

while read p; do
  vault kv get -format=json "apps/$p" \
    | jq --arg path "$p" '.data.data | {($path): .}' >> all_secrets.json
done < paths.txt

You may see errors like No value found at apps/data/forgejo-secrets — these are sub-folders, not secrets. Handle them separately (see below).

Copy all_secrets.json and paths.txt to your local machine.

6.2 Handle Sub-Folders

For any paths ending in / (sub-folders), run this on the old Vault:

for folder in forgejo-secrets netbox uclab-secrets; do
  vault kv list -format=json "apps/$folder/" | jq -r '.[]' | while read p; do
    vault kv get -format=json "apps/$folder/$p" \
      | jq --arg path "$folder/$p" '.data.data | {($path): .}' >> all_secrets.json
  done
done

6.3 Import into the New Vault

On your local machine, set up a Python virtual environment and import everything:

python3 -m venv venv
source venv/bin/activate
pip install hvac

Save the following as import_secrets.py:

import hvac
import json

# --- Edit these ---
VAULT_ADDR  = "https://vault.yourdomain.com"
VAULT_TOKEN = "<YOUR-ROOT-TOKEN>"
JSON_FILE   = "all_secrets.json"
MOUNT_POINT = "apps"
# ------------------

# Parse the concatenated JSON objects produced by the export script
secrets = {}
decoder = json.JSONDecoder()

with open(JSON_FILE, "r") as f:
    content = f.read().strip()

pos = 0
while pos < len(content):
    try:
        obj, idx = decoder.raw_decode(content, pos)
        secrets.update(obj)
        content = content[pos + idx:].strip()
        pos = 0
    except json.JSONDecodeError:
        break

# Remove any None values (sub-folders or failed exports)
secrets = {k: v for k, v in secrets.items() if v is not None}
print(f"Parsed {len(secrets)} secrets\n")

client = hvac.Client(url=VAULT_ADDR, token=VAULT_TOKEN)

if not client.is_authenticated():
    print("Authentication failed — check your token and Vault address")
    exit(1)

for path, data in secrets.items():
    try:
        client.secrets.kv.v2.create_or_update_secret(
            path=path,
            secret=data,
            mount_point=MOUNT_POINT
        )
        print(f"✅  {MOUNT_POINT}/{path}")
    except Exception as e:
        print(f"❌  {MOUNT_POINT}/{path}: {e}")

print("\nDone!")

Run it:

python import_secrets.py

Part 7: Reconnecting External Secrets Operator

If you use the External Secrets Operator in Kubernetes, you’ll need to update your ClusterSecretStore to point to the new Vault and create a new token secret.

7.1 Create a New Vault Token

In the Vault UI, generate a token with the apps-manager policy (or use the root token temporarily), then create the Kubernetes secret:

kubectl create secret generic vault-token \
  -n external-secrets \
  --from-literal=token=<YOUR-VAULT-TOKEN>

7.2 Update the ClusterSecretStore

Update your ClusterSecretStore manifest with the new Vault address:

apiVersion: external-secrets.io/v1
kind: ClusterSecretStore
metadata:
  name: vault-backend-global
spec:
  provider:
    vault:
      server: "https://vault.yourdomain.com"   # Updated URL
      path: "apps"
      version: "v2"
      auth:
        tokenSecretRef:
          name: vault-token
          namespace: external-secrets
          key: token

Critical: Make sure the URL uses https:// (no port). If Vault is behind a reverse proxy handling TLS, do not include :8200. Using https://vault.yourdomain.com:8200 will fail because the reverse proxy listens on 443, not 8200.

Commit and push — Flux will apply the change automatically.

7.3 Verify

kubectl get clustersecretstores.external-secrets.io
kubectl get externalsecrets.external-secrets.io -A

You should see STATUS: Valid on the store and SecretSynced on all external secrets.
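As an end-to-end check, a minimal ExternalSecret against the store might look like this (the names, namespace, and the my-app path are placeholders, not from my actual setup):

```yaml
# Hypothetical ExternalSecret pulling one key from apps/my-app via the store above
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: my-app-secrets
  namespace: default
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-backend-global
  target:
    name: my-app-secrets          # Kubernetes Secret that gets created
  data:
    - secretKey: password
      remoteRef:
        key: my-app               # KV path under the apps mount
        property: password
```

Once it reconciles, `kubectl get secret my-app-secrets -n default` should show the synced Secret.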

Running HashiCorp Vault on a Synology NAS with Docker and Migrating Secrets

2026-04-04 · Series: lab · Categories: Linux · Tags: #synology, #nas, #vault, #docker

