Introduction
Hardcoded passwords in config files, API keys committed to Git, shared service account credentials passed around in Slack — these are the secrets management anti-patterns that eventually end careers and companies. HashiCorp Vault is built to eliminate this entire class of problem.
Vault is an identity-based secrets and encryption management system. It provides a unified interface to every secret — database passwords, API keys, TLS certificates, SSH credentials — while maintaining tight access control and a complete audit log. It can generate dynamic credentials that expire automatically, eliminating the concept of a long-lived secret altogether.
This guide walks through a production-ready Vault deployment using Docker Compose with TLS, configures the AppRole authentication method for application workloads, sets up KV v2 and dynamic PostgreSQL secrets engines, and enforces least-privilege access via Vault policies.
Prerequisites
Before starting, ensure the following are in place:
- Docker Engine 24+ and Docker Compose v2
- A Linux host with at least 2 GB RAM and 10 GB free disk
- jq installed (apt install jq / dnf install jq)
- An internal CA or self-signed certificate (covered in Step 1)
- PostgreSQL instance accessible from the Vault host (for dynamic secrets)
- Ports 8200 (Vault API) and 8201 (cluster) available
Step 1: Generate a Self-Signed TLS Certificate
Vault must never run without TLS in a real environment. Generate a self-signed cert for the server:
mkdir -p ~/vault/{config,data,tls,logs}
# Generate private key and self-signed cert valid for 10 years
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 \
-nodes \
-keyout ~/vault/tls/vault.key \
-out ~/vault/tls/vault.crt \
-subj "/CN=vault.internal" \
-addext "subjectAltName=DNS:vault.internal,DNS:localhost,IP:127.0.0.1"
chmod 600 ~/vault/tls/vault.key
For production, replace this with a cert signed by your internal CA (e.g., Smallstep step-ca or AD CS) or a public CA if Vault is internet-exposed.
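To confirm the SAN entries actually made it into the certificate, inspect it with openssl. A quick sketch that generates a throwaway cert in a temp dir (so it does not touch ~/vault) and prints its subjectAltName:

```shell
# Generate a short-lived throwaway cert with the same SAN extension,
# then print the subjectAltName to verify all three entries are present.
TLSDIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -sha256 -days 1 -nodes \
  -keyout "$TLSDIR/vault.key" -out "$TLSDIR/vault.crt" \
  -subj "/CN=vault.internal" \
  -addext "subjectAltName=DNS:vault.internal,DNS:localhost,IP:127.0.0.1" 2>/dev/null
SAN=$(openssl x509 -in "$TLSDIR/vault.crt" -noout -ext subjectAltName)
echo "$SAN"
rm -rf "$TLSDIR"
```

If a DNS name or IP is missing here, Vault clients will reject the connection with a hostname-mismatch error later.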
Step 2: Write the Vault Configuration File
Create the main Vault configuration. This enables the file storage backend, TLS, and the UI:
cat > ~/vault/config/vault.hcl << 'EOF'
ui = true
log_level = "info"
log_file = "/vault/logs/vault.log"
log_rotate_max_files = 7
storage "file" {
path = "/vault/data"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_cert_file = "/vault/tls/vault.crt"
tls_key_file = "/vault/tls/vault.key"
}
api_addr = "https://vault.internal:8200"
cluster_addr = "https://vault.internal:8201"
EOF
Production note: for HA deployments, replace the file backend with raft (integrated storage) or an external backend like Consul. The file backend does not support high availability.
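For reference, a raft stanza that would replace the file block above might look like this — a sketch only; the node_id and retry_join address are illustrative placeholders for your own topology:

```hcl
# Integrated storage (raft) — replaces storage "file".
# node_id and leader_api_addr are illustrative.
storage "raft" {
  path    = "/vault/data"
  node_id = "vault-1"

  retry_join {
    leader_api_addr = "https://vault-2.internal:8200"
  }
}
```

With raft, each node keeps its own replicated copy of the data and the cluster elects an active node, which is what makes HA possible.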
Step 3: Deploy Vault with Docker Compose
Create the Compose file:
# ~/vault/docker-compose.yml
services:
vault:
image: hashicorp/vault:1.17
container_name: vault
restart: unless-stopped
cap_add:
- IPC_LOCK # Prevents secrets from being swapped to disk
ports:
- "8200:8200"
- "8201:8201"
volumes:
- ./config:/vault/config:ro
- ./data:/vault/data
- ./tls:/vault/tls:ro
- ./logs:/vault/logs
environment:
VAULT_ADDR: "https://127.0.0.1:8200"
VAULT_CACERT: "/vault/tls/vault.crt"
command: vault server -config=/vault/config/vault.hcl
networks:
- vault-net
networks:
vault-net:
    driver: bridge
Start Vault:
cd ~/vault
docker compose up -d
docker compose logs -f vault
The server starts sealed and uninitialized; wait for the banner line Vault server started! (the core: vault is unsealed log line only appears after Step 4).
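The sys/health endpoint is also a convenient readiness probe: its HTTP status code encodes the server state. A small helper sketch (the commented curl line assumes the setup above):

```shell
# Interpret Vault's sys/health HTTP status codes.
# In practice, fetch the code with:
#   CODE=$(curl -sk -o /dev/null -w '%{http_code}' https://127.0.0.1:8200/v1/sys/health)
vault_health() {
  case "$1" in
    200) echo "initialized, unsealed, active" ;;
    429) echo "unsealed, standby" ;;
    501) echo "not initialized" ;;
    503) echo "sealed" ;;
    *)   echo "unknown status: $1" ;;
  esac
}

vault_health 501   # what a fresh container reports before Step 4
```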
Step 4: Initialize and Unseal Vault
Vault starts in a sealed state. Initialization generates the master key (split via Shamir's Secret Sharing) and the initial root token:
# Export these so the CLI works
export VAULT_ADDR="https://127.0.0.1:8200"
export VAULT_CACERT=~/vault/tls/vault.crt
# Initialize with 5 key shares, requiring 3 to unseal
docker exec vault vault operator init \
-key-shares=5 \
-key-threshold=3 \
-format=json | tee ~/vault-init.json
chmod 600 ~/vault-init.json
The output contains 5 unseal keys and the root token. Store these securely — in a password manager or an HSM. Never leave them on disk in plaintext.
Unseal Vault using 3 of the 5 keys:
# Extract keys for scripting (in practice, enter these manually)
KEY1=$(jq -r '.unseal_keys_b64[0]' ~/vault-init.json)
KEY2=$(jq -r '.unseal_keys_b64[1]' ~/vault-init.json)
KEY3=$(jq -r '.unseal_keys_b64[2]' ~/vault-init.json)
docker exec vault vault operator unseal "$KEY1"
docker exec vault vault operator unseal "$KEY2"
docker exec vault vault operator unseal "$KEY3"
# Verify sealed status = false
docker exec -e VAULT_ADDR=https://127.0.0.1:8200 \
-e VAULT_CACERT=/vault/tls/vault.crt \
vault vault status
Authenticate with the root token for initial setup (the vault commands from here on assume the Vault CLI is installed on the host; otherwise wrap each one in docker exec as above):
ROOT_TOKEN=$(jq -r '.root_token' ~/vault-init.json)
export VAULT_TOKEN="$ROOT_TOKEN"
Step 5: Enable the KV v2 Secrets Engine
KV (Key-Value) v2 provides versioned secret storage — essential for tracking secret history and enabling rollback:
# Enable KV v2 at the path "secret/"
vault secrets enable -path=secret kv-v2
# Write a test secret
vault kv put secret/myapp/config \
db_password="S3cur3P@ss!" \
api_key="sk-abc123" \
environment="production"
# Read it back
vault kv get secret/myapp/config
# Read a specific field only
vault kv get -field=db_password secret/myapp/config
Update a secret (creates a new version; the old version is preserved):
vault kv patch secret/myapp/config api_key="sk-newkey456"
# List all versions
vault kv metadata get secret/myapp/config
# Roll back to version 1
vault kv rollback -version=1 secret/myapp/config
Step 6: Configure Dynamic PostgreSQL Credentials
Dynamic secrets are Vault's killer feature: instead of sharing a static password, Vault creates a unique, time-limited credential for each consumer on demand. When the lease expires, Vault automatically revokes it.
First, prepare PostgreSQL with a Vault role:
-- Run on your PostgreSQL instance
-- Option A: a superuser role (simplest, broadest privileges)
CREATE ROLE vault WITH SUPERUSER LOGIN PASSWORD 'vault-admin-pass';
-- Option B: a least-privilege role that can create and drop users
-- (CREATEROLE is a role attribute; there is no grantable "CREATE ROLE" privilege)
CREATE ROLE vault WITH CREATEROLE NOINHERIT LOGIN PASSWORD 'vault-admin-pass';
Now configure the Vault database secrets engine:
# Enable the database secrets engine
vault secrets enable database
# Configure the PostgreSQL connection
vault write database/config/myapp-db \
plugin_name=postgresql-database-plugin \
connection_url="postgresql://{{username}}:{{password}}@postgres-host:5432/myapp?sslmode=require" \
allowed_roles="myapp-readonly,myapp-readwrite" \
username="vault" \
password="vault-admin-pass"
# Create a read-only role (TTL: 1 hour, max 24 hours)
vault write database/roles/myapp-readonly \
db_name=myapp-db \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
revocation_statements="DROP ROLE IF EXISTS \"{{name}}\";" \
default_ttl="1h" \
max_ttl="24h"
# Generate a dynamic credential on demand
vault read database/creds/myapp-readonly
The output provides a unique username and password. When the TTL expires or vault lease revoke is called, the PostgreSQL user is automatically dropped.
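For scripting, the JSON output is easier to consume. A sketch of extracting the pieces with jq; the CREDS value below is a hard-coded sample in the shape vault read -format=json returns, with purely illustrative values — in practice you would capture it with CREDS=$(vault read -format=json database/creds/myapp-readonly):

```shell
# Sample of what `vault read -format=json database/creds/myapp-readonly`
# returns (username, password, and lease_id values are illustrative).
CREDS='{"lease_id":"database/creds/myapp-readonly/AbC123","lease_duration":3600,"data":{"username":"v-approle-myapp-re-x7q2","password":"example-only"}}'

DB_USER=$(echo "$CREDS" | jq -r '.data.username')
LEASE_ID=$(echo "$CREDS" | jq -r '.lease_id')
echo "user=$DB_USER ttl=$(echo "$CREDS" | jq -r '.lease_duration')s"

# Revoke early once the work is done (requires a live Vault):
#   vault lease revoke "$LEASE_ID"
```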
Step 7: Set Up AppRole Authentication
AppRole is the standard auth method for machine-to-machine (application) authentication. An application authenticates with a Role ID (semi-public identifier) and a Secret ID (short-lived credential):
# Enable AppRole
vault auth enable approle
# Create a named role for your application
vault write auth/approle/role/myapp \
secret_id_ttl="30m" \
token_ttl="1h" \
token_max_ttl="4h" \
secret_id_num_uses=1 \
token_policies="myapp-policy"
# Retrieve the Role ID (deploy this with your app)
vault read auth/approle/role/myapp/role-id
# Generate a Secret ID (fetch this at deploy time, never store statically)
vault write -f auth/approle/role/myapp/secret-id
In your application startup logic:
#!/usr/bin/env bash
# Example: application bootstrap script
VAULT_ADDR="https://vault.internal:8200"
ROLE_ID="<role-id-from-vault>"
SECRET_ID="<secret-id-fetched-at-deploy>"
# Exchange credentials for a Vault token
# (-k skips TLS verification; with the Step 1 cert, prefer --cacert vault.crt)
TOKEN=$(curl -sk \
--request POST \
--data "{\"role_id\":\"${ROLE_ID}\",\"secret_id\":\"${SECRET_ID}\"}" \
"${VAULT_ADDR}/v1/auth/approle/login" | jq -r '.auth.client_token')
# Read the secret using that token
DB_PASS=$(curl -sk \
--header "X-Vault-Token: ${TOKEN}" \
"${VAULT_ADDR}/v1/secret/data/myapp/config" | jq -r '.data.data.db_password')
export DB_PASSWORD="$DB_PASS"
Step 8: Write Vault Policies
Vault policies are written in HCL and follow a deny-by-default model: only explicitly granted capabilities are allowed:
cat > /tmp/myapp-policy.hcl << 'EOF'
# Read-only access to app secrets
path "secret/data/myapp/*" {
capabilities = ["read"]
}
# Allow listing secret names (not values)
path "secret/metadata/myapp/*" {
capabilities = ["list"]
}
# Allow requesting dynamic DB credentials
path "database/creds/myapp-readonly" {
capabilities = ["read"]
}
# Allow renewing its own token and leases
path "auth/token/renew-self" {
capabilities = ["update"]
}
path "sys/leases/renew" {
capabilities = ["update"]
}
EOF
# Write the policy to Vault
vault policy write myapp-policy /tmp/myapp-policy.hcl
# Verify
vault policy read myapp-policy
Create a broader admin policy for operators:
cat > /tmp/admin-policy.hcl << 'EOF'
# Full access to secrets engine management
path "secret/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
path "database/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
# Auth method management
path "auth/*" {
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
# Policy management
path "sys/policies/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
# Audit device management (sys/audit endpoints require sudo)
path "sys/audit" {
capabilities = ["read", "sudo"]
}
EOF
vault policy write admin-policy /tmp/admin-policy.hcl
Step 9: Enable Audit Logging
Audit logs are critical for compliance — every request and response is logged (with secrets redacted):
# Enable file audit backend
vault audit enable file file_path=/vault/logs/audit.log
# Verify audit is active
vault audit list -detailed
# Tail the audit log from the host
tail -f ~/vault/logs/audit.log | jq .
A typical audit log entry looks like:
{
"time": "2026-03-26T14:22:01.234Z",
"type": "response",
"auth": {
"client_token": "hmac-sha256:...",
"accessor": "hmac-sha256:...",
"display_name": "approle-myapp",
"policies": ["myapp-policy"]
},
"request": {
"operation": "read",
"path": "secret/data/myapp/config"
},
"response": {
"data": {
"db_password": "hmac-sha256:..."
}
}
}
Note that secret values are HMAC-hashed in the audit log, not stored in plaintext.
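Because the log is newline-delimited JSON, jq can answer questions like "who read this secret". A sketch against a single sample entry in the shape shown above (values are illustrative; in practice, pipe the real audit.log through the same filter):

```shell
# One abridged audit entry matching the structure shown above.
ENTRY='{"time":"2026-03-26T14:22:01.234Z","type":"response","auth":{"display_name":"approle-myapp"},"request":{"operation":"read","path":"secret/data/myapp/config"}}'

# Who read the app's secret, and when?
echo "$ENTRY" | jq -r 'select(.type == "response"
    and .request.operation == "read"
    and .request.path == "secret/data/myapp/config")
  | "\(.time) \(.auth.display_name)"'
```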
Step 10: Configure Automatic Unsealing (Optional)
Vault must be manually unsealed after every restart, which is operationally painful. Auto-unseal integrates with an external KMS to perform unsealing automatically. With AWS KMS:
# Add to vault.hcl
seal "awskms" {
region = "us-east-1"
kms_key_id = "arn:aws:kms:us-east-1:123456789:key/your-key-id"
}
For self-hosted environments, HashiCorp's Transit Auto-Unseal uses a separate Vault instance (a "transit Vault") as the KMS — useful in homelabs or air-gapped environments.
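A transit auto-unseal stanza would look roughly like this — a sketch only; the address, token, and key name are illustrative values for your own transit Vault:

```hcl
# Auto-unseal via the Transit secrets engine on a second Vault.
# All values below are illustrative.
seal "transit" {
  address    = "https://transit-vault.internal:8200"
  token      = "s.xxxxxxxx"
  key_name   = "autounseal"
  mount_path = "transit/"
}
```

Note that the transit Vault itself must be unsealed and reachable whenever this Vault restarts, so it becomes a dependency to protect and monitor.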
Verification and Testing
Run through these checks to confirm everything is working:
# 1. Check overall Vault status
vault status
# 2. Confirm KV engine is mounted
vault secrets list
# 3. Confirm auth methods
vault auth list
# 4. Test reading a secret (as root, then as an AppRole token)
vault kv get secret/myapp/config
# 5. Generate a dynamic DB credential and verify it in PostgreSQL
CREDS=$(vault read -format=json database/creds/myapp-readonly)
DB_USER=$(echo "$CREDS" | jq -r '.data.username')
DB_PASS=$(echo "$CREDS" | jq -r '.data.password')
echo "Generated user: $DB_USER"
# Connect to PostgreSQL with the generated credential
psql -h postgres-host -U "$DB_USER" -d myapp -c "\du $DB_USER"
# 6. Test AppRole login flow
ROLE_ID=$(vault read -field=role_id auth/approle/role/myapp/role-id)
SECRET_ID=$(vault write -field=secret_id -f auth/approle/role/myapp/secret-id)
APP_TOKEN=$(vault write -field=token auth/approle/login \
role_id="$ROLE_ID" secret_id="$SECRET_ID")
# 7. Confirm the AppRole token can read the secret but not write
VAULT_TOKEN="$APP_TOKEN" vault kv get secret/myapp/config # should succeed
VAULT_TOKEN="$APP_TOKEN" vault kv put secret/myapp/config foo=bar # should fail with 403
Troubleshooting
Vault is sealed after restart
Vault always starts sealed. Either run vault operator unseal with 3 key shares or configure auto-unseal. Add the unseal commands to a startup script that reads keys from a secure location (an HSM or another Vault instance).
"permission denied" errors
Check the token's policies with vault token lookup and compare against vault policy read <policy-name>. Remember that the root token bypasses all policies — never use it for application workloads. Create scoped policies and tokens instead.
Database dynamic secrets fail to generate
Verify the Vault container can reach the PostgreSQL host: docker exec vault nc -zv postgres-host 5432. Also check that the Vault database user has CREATE ROLE privileges on PostgreSQL. Run vault read database/config/myapp-db to confirm the connection config was written correctly.
TLS certificate errors
If clients cannot connect, ensure VAULT_CACERT points to the Vault server's CA certificate. For Docker-internal clients, mount the cert into the container. For curl testing, use -k or --cacert vault.crt.
Audit log is blocking Vault
By design, if an audit backend becomes unavailable, Vault blocks all requests (fail-secure). If the disk is full or the audit file is unreachable, Vault will stop responding. Monitor disk usage on /vault/logs and rotate logs regularly.
Secret ID was consumed
AppRole Secret IDs with secret_id_num_uses=1 are single-use. If your application restarts before the token expires, you need a fresh Secret ID. Implement a Vault Agent sidecar to handle token renewal and Secret ID lifecycle automatically:
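A minimal agent.hcl sketch — the file paths and address are illustrative; auto_auth logs in with the AppRole credentials and writes a kept-fresh token to a sink file your app can read:

```hcl
# Vault Agent config sketch (paths and address are illustrative).
vault {
  address = "https://vault.internal:8200"
}

auto_auth {
  method "approle" {
    config = {
      role_id_file_path   = "/etc/vault/role-id"
      secret_id_file_path = "/etc/vault/secret-id"
    }
  }

  sink "file" {
    config = {
      path = "/run/vault/token"
    }
  }
}
```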
# Run Vault Agent alongside your app (in practice, in the app's own container or as a sidecar)
docker exec vault vault agent -config=/path/to/agent.hcl
Summary
You now have a production-grade HashiCorp Vault deployment with:
| Capability | Implementation |
|---|---|
| Encrypted at rest | Vault's storage barrier encrypts all data; IPC_LOCK keeps keys out of swap |
| TLS everywhere | Self-signed cert (replace with CA-signed in production) |
| Static secrets | KV v2 with versioning and rollback |
| Dynamic credentials | PostgreSQL dynamic secrets with auto-revocation |
| App authentication | AppRole with short-lived Secret IDs |
| Least-privilege | HCL policies with deny-by-default |
| Full audit trail | File audit backend with HMAC-redacted secrets |
The pattern to internalize: nothing in your infrastructure should hold a long-lived secret. Applications authenticate with short-lived tokens, receive ephemeral credentials, and Vault revokes everything automatically. If a credential leaks, it's already expired.
Next steps to consider:
- Deploy Vault Agent as a sidecar to handle token renewal transparently
- Integrate the Kubernetes auth method (vault auth enable kubernetes) for pod-level secret injection via the Vault Secrets Operator
- Enable the PKI secrets engine to replace static TLS certificates with Vault-issued short-lived certs across your infrastructure
- Set up Vault Enterprise replication or Raft HA for fault tolerance in critical environments