I recently needed to migrate my HashiCorp Vault instance from a dedicated Ubuntu VM to a Docker container running on my Synology DS918+ NAS. This post covers the entire process: setting up Vault on Synology, configuring it behind the built-in reverse proxy, creating a proper policy and user structure, and finally migrating all secrets from the old instance — including re-connecting everything to an External Secrets Operator running in a k3s cluster.
Why Vault on Synology?
My Synology NAS runs 24/7 anyway, has Docker support via Container Manager, and I wanted to reduce the number of always-on VMs in my homelab. Vault is a perfect candidate — it’s lightweight when idle and doesn’t need dedicated compute.
Part 1: Preparing the Synology NAS
1.1 Folder Setup
Before touching Container Manager, create the directory structure for Vault’s data and configuration.
- Open File Station
- Navigate to your `docker` shared folder
- Create a folder named `vault`
- Inside `vault`, create two sub-folders: `config` and `file` (the `file` folder is where Vault’s encrypted storage lives)
Your structure should look like:
```
docker/
└── vault/
    ├── config/
    └── file/
```
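If you prefer the shell, the same layout can be created over SSH instead of File Station. This is a sketch assuming the `docker` share sits on the default `/volume1` volume; adjust the path to match your NAS.

```shell
# Assumes the docker shared folder is on /volume1; adjust to your volume.
mkdir -p /volume1/docker/vault/config /volume1/docker/vault/file
```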
1.2 Download the Vault Image
- Open Container Manager → go to the Registry tab
- Search for `vault`
- Select the official `hashicorp/vault` image and click Download
- Choose the `latest` tag (or pin a specific version if you prefer)

Note: If the download button does nothing, HashiCorp may have moved the image. Try adding `https://registry.hashicorp.com` as a custom registry under Registry → Settings → Add.
Part 2: Configuring the Container
Once the image is downloaded, go to Image, select hashicorp/vault, and click Run.
General Settings
| Setting | Value |
|---|---|
| Container Name | vault |
| Auto-Restart | ✅ Enabled |
Volumes
Map your local folders into the container so data persists across restarts:
| Local Folder | Container Path | Type |
|---|---|---|
| `docker/vault/config` | `/vault/config` | Read/Write |
| `docker/vault/file` | `/vault/file` | Read/Write |
Ports
| Local Port | Container Port | Protocol |
|---|---|---|
| 8200 | 8200 | TCP |
Environment Variables
This is the most important part. Add the following variables:
| Variable | Value | Description |
|---|---|---|
| `VAULT_ADDR` | `http://0.0.0.0:8200` | Vault’s listen address |
| `VAULT_API_ADDR` | `http://[YOUR_NAS_IP]:8200` | Replace with your NAS IP |
| `SKIP_SETCAP` | `true` | Required for Synology’s Docker environment |
| `VAULT_LOCAL_CONFIG` | (see below) | Vault’s server configuration, including where to store data |
For `VAULT_LOCAL_CONFIG`, paste this JSON as the value:

```json
{
  "storage": {
    "file": {
      "path": "/vault/file"
    }
  },
  "listener": [
    {
      "tcp": {
        "address": "0.0.0.0:8200",
        "tls_disable": 1
      }
    }
  ],
  "ui": true,
  "disable_mlock": true
}
```
`tls_disable: 1` is intentional here — TLS will be handled by the Synology reverse proxy in the next step.
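If the JSON is malformed, Vault exits immediately on start, so it is worth validating the value locally before pasting it into the environment-variable field. Any machine with Python 3 will do:

```shell
# Validate and pretty-print the config; a non-zero exit means broken JSON.
python3 -m json.tool <<'EOF'
{"storage":{"file":{"path":"/vault/file"}},"listener":[{"tcp":{"address":"0.0.0.0:8200","tls_disable":1}}],"ui":true,"disable_mlock":true}
EOF
```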
Execution Command
In the command/capabilities section, set the execution command to `server`.
Start the container.
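For reference, the whole Container Manager setup above corresponds to a single `docker run` from the Synology SSH shell. This is a sketch, not a drop-in command: the host paths, NAS IP, and image tag are placeholders for your own values.

```shell
docker run -d --name vault \
  --restart unless-stopped \
  -p 8200:8200 \
  -v /volume1/docker/vault/config:/vault/config \
  -v /volume1/docker/vault/file:/vault/file \
  -e SKIP_SETCAP=true \
  -e VAULT_ADDR=http://0.0.0.0:8200 \
  -e VAULT_API_ADDR=http://YOUR_NAS_IP:8200 \
  -e VAULT_LOCAL_CONFIG='{"storage":{"file":{"path":"/vault/file"}},"listener":[{"tcp":{"address":"0.0.0.0:8200","tls_disable":1}}],"ui":true,"disable_mlock":true}' \
  hashicorp/vault server
```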
Part 3: HTTPS via Synology Reverse Proxy
Your Vault container speaks plain HTTP internally. The Synology reverse proxy will terminate TLS and forward traffic over HTTP to the container.
3.1 Create the Reverse Proxy Rule
Go to Control Panel → Login Portal → Advanced → Reverse Proxy → Create:
Source (public-facing):
| Field | Value |
|---|---|
| Protocol | HTTPS |
| Hostname | vault.yourdomain.com |
| Port | 443 |
| Enable HSTS | ✅ |
Destination (Vault container):
| Field | Value |
|---|---|
| Protocol | HTTP |
| Hostname | localhost |
| Port | 8200 |
3.2 Assign the Certificate
- Go to Control Panel → Security → Certificate
- Click Settings
- Find your `vault.yourdomain.com` reverse proxy entry
- Assign your Let’s Encrypt certificate to it
- Click OK
3.3 Update the API Address
Now that Vault is behind HTTPS, update the `VAULT_API_ADDR` variable so Vault generates correct links in its UI:

- Stop the container
- Go to Settings → Environment
- Change `VAULT_API_ADDR` from the IP address to `https://vault.yourdomain.com`
- Start the container again
Part 4: Initialization and Unsealing
Open your browser and navigate to https://vault.yourdomain.com.
You’ll see the Initialize Vault screen.
- Set your Key Shares and Key Threshold (e.g. 5 shares, 3 required)
- Click Initialize
- Download the keys file immediately — if you lose these unseal keys and root token, there is no recovery. Your data is gone forever.
- Use 3 of the 5 keys to Unseal the vault
Important: Vault re-seals itself every time it restarts (i.e. every time your NAS reboots). You must manually unseal it each time by entering your threshold number of keys (3 in this example) at `https://vault.yourdomain.com/ui`.
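Because a reboot leaves Vault sealed, it can be handy to check the state from a script. The seal-status endpoint requires no token; a sketch, with the hostname as a placeholder:

```shell
# "sealed": true in the output means the unseal keys must be re-entered.
curl -s https://vault.yourdomain.com/v1/sys/seal-status
```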
Part 5: Vault Configuration (Secrets Engine, Auth, Policy)
Log in with your root token to complete the initial setup.
5.1 Enable the KV v2 Secrets Engine
- Go to Secrets Engines → Enable new engine
- Select KV → click Next
- Set Path to `apps`
- Ensure Version 2 is selected
- Click Enable Engine
5.2 Enable Userpass Authentication
- Go to Access → Auth Methods → Enable new method
- Select Userpass → click Next
- Leave the path as `userpass`
- Click Enable Method
5.3 Create an ACL Policy
- Go to Policies → Create ACL policy
- Name it `apps-manager`
- Paste the following HCL:

```hcl
# Full access to KV v2 secrets under apps/
path "apps/data/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

path "apps/metadata/*" {
  capabilities = ["list", "read", "delete"]
}

path "apps/*" {
  capabilities = ["list"]
}
```
- Click Create policy
5.4 Create a User
- Go to Access → Auth Methods → userpass → Create user
- Set a username and password
- In the Policies field, assign `apps-manager`
- Click Create user
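If you prefer the CLI over the UI, sections 5.3 and 5.4 can be done from any machine with the `vault` binary. A sketch that assumes `VAULT_ADDR` and `VAULT_TOKEN` are exported, the policy is saved as `apps-manager.hcl`, and the username `myuser` is a placeholder:

```shell
# Create the policy from the HCL file, then a userpass user bound to it.
vault policy write apps-manager apps-manager.hcl
vault write auth/userpass/users/myuser \
  password='choose-a-strong-password' \
  policies=apps-manager
```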
5.5 Test the User
Log out and log back in via the Userpass method. Verify:

- ✅ You can see and create secrets under `apps/`
- ❌ You cannot access Access or Policies (permission denied — expected)
Part 6: Migrating Secrets from the Old Vault
6.1 Export from the Old Vault
SSH into your old Vault server and run:
```shell
# List all secret paths
vault kv list -format=json apps/ | jq -r '.[]' > paths.txt

# Export all secrets into a single file
echo "{}" > all_secrets.json
while read -r p; do
  vault kv get -format=json "apps/$p" \
    | jq --arg path "$p" '.data.data | {($path): .}' >> all_secrets.json
done < paths.txt
```
You may see errors like `No value found at apps/data/forgejo-secrets` — these are sub-folders, not secrets. Handle them separately (see below).
Copy `all_secrets.json` and `paths.txt` to your local machine.
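Note that the export loop produces concatenated JSON objects rather than one valid document, which is why the import script below walks the file with a raw decoder. A self-contained demo of the format, using sample data rather than real secrets:

```shell
# Two secrets in the concatenated format the export loop writes.
printf '%s\n' '{}' '{"netbox": {"token": "abc"}}' '{"grafana": {"pw": "xyz"}}' > sample_secrets.json

# Decode object after object, merging them into one dict.
python3 - <<'EOF'
import json
decoder = json.JSONDecoder()
content = open("sample_secrets.json").read().strip()
secrets, pos = {}, 0
while pos < len(content):
    obj, pos = decoder.raw_decode(content, pos)
    secrets.update(obj)
    while pos < len(content) and content[pos].isspace():
        pos += 1
print(sorted(secrets))  # → ['grafana', 'netbox']
EOF
```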
6.2 Handle Sub-Folders
For any paths ending in / (sub-folders), run this on the old Vault:
```shell
for folder in forgejo-secrets netbox uclab-secrets; do
  vault kv list -format=json "apps/$folder/" | jq -r '.[]' | while read -r p; do
    vault kv get -format=json "apps/$folder/$p" \
      | jq --arg path "$folder/$p" '.data.data | {($path): .}' >> all_secrets.json
  done
done
```
6.3 Import into the New Vault
On your local machine, set up a Python virtual environment and import everything:
```shell
python3 -m venv venv
source venv/bin/activate
pip install hvac
```
Save the following as `import_secrets.py`:

```python
import hvac
import json

# --- Edit these ---
VAULT_ADDR = "https://vault.yourdomain.com"
VAULT_TOKEN = "<YOUR-ROOT-TOKEN>"
JSON_FILE = "all_secrets.json"
MOUNT_POINT = "apps"
# ------------------

# Parse the concatenated JSON objects produced by the export script
secrets = {}
decoder = json.JSONDecoder()
with open(JSON_FILE, "r") as f:
    content = f.read().strip()

pos = 0
while pos < len(content):
    try:
        obj, idx = decoder.raw_decode(content, pos)
        secrets.update(obj)
        content = content[idx:].strip()
        pos = 0
    except json.JSONDecodeError:
        break

# Remove any None values (sub-folders or failed exports)
secrets = {k: v for k, v in secrets.items() if v is not None}
print(f"Parsed {len(secrets)} secrets\n")

client = hvac.Client(url=VAULT_ADDR, token=VAULT_TOKEN)
if not client.is_authenticated():
    print("Authentication failed — check your token and Vault address")
    exit(1)

for path, data in secrets.items():
    try:
        client.secrets.kv.v2.create_or_update_secret(
            path=path,
            secret=data,
            mount_point=MOUNT_POINT
        )
        print(f"✅ {MOUNT_POINT}/{path}")
    except Exception as e:
        print(f"❌ {MOUNT_POINT}/{path} → {e}")

print("\nDone!")
```
Run it:

```shell
python import_secrets.py
```
Part 7: Reconnecting External Secrets Operator
If you use the External Secrets Operator in Kubernetes, you’ll need to update your ClusterSecretStore to point to the new Vault and create a new token secret.
7.1 Create a New Vault Token
In the Vault UI, generate a token with the apps-manager policy (or use the root token temporarily), then create the Kubernetes secret:
```shell
kubectl create secret generic vault-token \
  -n external-secrets \
  --from-literal=token=<YOUR-VAULT-TOKEN>
```
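Rather than leaving the root token in the cluster, you can mint a dedicated token bound to the `apps-manager` policy from the CLI. A sketch, assuming `VAULT_ADDR` and an admin `VAULT_TOKEN` are exported; the TTL is an arbitrary example:

```shell
# Create a token restricted to the apps-manager policy, valid for 32 days.
vault token create -policy=apps-manager -ttl=768h
```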
7.2 Update the ClusterSecretStore
Update your ClusterSecretStore manifest with the new Vault address:
```yaml
apiVersion: external-secrets.io/v1
kind: ClusterSecretStore
metadata:
  name: vault-backend-global
spec:
  provider:
    vault:
      server: "https://vault.yourdomain.com" # Updated URL
      path: "apps"
      version: "v2"
      auth:
        tokenSecretRef:
          name: vault-token
          namespace: external-secrets
          key: token
```
Critical: Make sure the URL uses `https://` and no port. If Vault is behind a reverse proxy handling TLS, do not include `:8200`. Using `https://vault.yourdomain.com:8200` will fail because the reverse proxy listens on 443, not 8200.
Commit and push — Flux will apply the change automatically.
7.3 Verify
```shell
kubectl get clustersecretstores.external-secrets.io
kubectl get externalsecrets.external-secrets.io -A
```
You should see `STATUS: Valid` on the store and `SecretSynced` on all external secrets.