Production Deployment
Scaling, queue workers, database-backed storage, backups, and high availability — make your n8n instance bulletproof.
Prerequisites: Completed Part 5, running n8n instance
Time: ~50 minutes
Outcome: Production-grade n8n with scaling, backups, and HA
From Side Project to Production Infrastructure
Over the first five parts of this series, you've built an n8n instance that handles monitoring, AI processing, DevOps automation, and multi-service integrations. If your business depends on these workflows running reliably, it's time to harden the deployment.
This final guide covers everything you need to run n8n in production with confidence: moving from SQLite to PostgreSQL, configuring queue mode for parallel processing, automated backups, health monitoring, and planning for high availability.
Step 1: Move to PostgreSQL for Execution Storage
By default, n8n stores workflow executions in SQLite. This works fine for testing, but SQLite doesn't handle concurrent writes well, is harder to back up while running, and doesn't support execution analytics queries.
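The compose file below ships with placeholder secrets. Generate strong random values for them first; a quick sketch using openssl, which is present on nearly every VPS image:

```shell
# 32 random bytes, base64-encoded: one run per secret
openssl rand -base64 32   # POSTGRES_PASSWORD
openssl rand -base64 32   # N8N_ENCRYPTION_KEY
```

Changing N8N_ENCRYPTION_KEY later makes previously stored credentials unreadable, so generate it once and keep a copy with your backups.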
Add PostgreSQL to Your Stack
version: "3.8"

services:
  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: your-strong-postgres-password
    volumes:
      - ./postgres_data:/var/lib/postgresql/data
    networks:
      - n8n-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 10s
      timeout: 5s
      retries: 5

  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./caddy_data:/data
      - ./caddy_config:/config
    networks:
      - n8n-network

  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - N8N_HOST=n8n.yourdomain.com
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://n8n.yourdomain.com/
      - N8N_ENCRYPTION_KEY=your-strong-encryption-key
      - GENERIC_TIMEZONE=America/New_York
      - N8N_SECURE_COOKIE=true
      # PostgreSQL configuration
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=your-strong-postgres-password
      # Execution settings
      - EXECUTIONS_DATA_SAVE_ON_ERROR=all
      - EXECUTIONS_DATA_SAVE_ON_SUCCESS=all
      - EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=true
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=168
    volumes:
      - ./n8n_data:/home/node/.n8n
    networks:
      - n8n-network

networks:
  n8n-network:
    driver: bridge

Key configuration:
- EXECUTIONS_DATA_PRUNE / EXECUTIONS_DATA_MAX_AGE=168 — Auto-delete execution data older than 168 hours (7 days). Without pruning, the database grows indefinitely.
- EXECUTIONS_DATA_SAVE_ON_SUCCESS=all — Stores successful execution data. Set to none for high-volume workflows to save disk space.
Migrate Existing Data
n8n handles migration automatically when it starts with the new database configuration:
mkdir -p postgres_data
docker compose down
docker compose up -d
docker compose logs -f n8n

Step 2: Enable Queue Mode
By default, n8n runs everything in the main process. Queue mode separates the UI/webhook handling from workflow execution using Redis as a message broker and separate worker processes.
Add Redis and Workers
  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --requirepass your-redis-password
    volumes:
      - ./redis_data:/data
    networks:
      - n8n-network
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "your-redis-password", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  n8n:
    # ... all previous config, plus:
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_BULL_REDIS_PASSWORD=your-redis-password
      - QUEUE_HEALTH_CHECK_ACTIVE=true

  n8n-worker:
    image: docker.n8n.io/n8nio/n8n
    restart: unless-stopped
    depends_on:
      - n8n
    extra_hosts:
      - "host.docker.internal:host-gateway"
    command: worker
    environment:
      - N8N_ENCRYPTION_KEY=your-strong-encryption-key
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=your-strong-postgres-password
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_BULL_REDIS_PASSWORD=your-redis-password
      - GENERIC_TIMEZONE=America/New_York
    volumes:
      - ./n8n_data:/home/node/.n8n
    networks:
      - n8n-network

Scaling Workers
docker compose up -d --scale n8n-worker=3

This spins up 3 worker processes, each pulling jobs from Redis independently. On a 4 GB VPS, 2–3 workers is a good starting point. If you run Ollama alongside n8n, keep workers at 1–2.
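If you'd rather pin the worker count in the file than remember the --scale flag, recent Docker Compose releases also honor deploy.replicas outside Swarm mode. A sketch; verify against your Compose version:

```yaml
  n8n-worker:
    # ... worker config as above, plus:
    deploy:
      replicas: 3
```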
Step 3: Automated Backups
Your n8n instance now has data across PostgreSQL, Redis, and the n8n data directory. All three need to be backed up.
Create the Backup Script
#!/bin/bash
set -euo pipefail

BACKUP_DIR="/backups/n8n"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30
COMPOSE_DIR="$HOME/n8n-docker"

mkdir -p "$BACKUP_DIR"
echo "[$(date)] Starting n8n backup..."

# 1. PostgreSQL dump
docker compose -f "$COMPOSE_DIR/docker-compose.yml" exec -T postgres \
    pg_dump -U n8n -Fc n8n > "$BACKUP_DIR/postgres_$TIMESTAMP.dump"

# 2. n8n data directory
tar -czf "$BACKUP_DIR/n8n_data_$TIMESTAMP.tar.gz" \
    -C "$COMPOSE_DIR" n8n_data/

# 3. Docker Compose configuration
tar -czf "$BACKUP_DIR/config_$TIMESTAMP.tar.gz" \
    -C "$COMPOSE_DIR" docker-compose.yml Caddyfile

# 4. Combined archive
tar -czf "$BACKUP_DIR/n8n_full_$TIMESTAMP.tar.gz" \
    "$BACKUP_DIR/postgres_$TIMESTAMP.dump" \
    "$BACKUP_DIR/n8n_data_$TIMESTAMP.tar.gz" \
    "$BACKUP_DIR/config_$TIMESTAMP.tar.gz"

# 5. Clean up individual files
rm -f "$BACKUP_DIR/postgres_$TIMESTAMP.dump" \
    "$BACKUP_DIR/n8n_data_$TIMESTAMP.tar.gz" \
    "$BACKUP_DIR/config_$TIMESTAMP.tar.gz"

# 6. Prune old backups
find "$BACKUP_DIR" -name "n8n_full_*.tar.gz" -mtime +$RETENTION_DAYS -delete

# 7. Verify the archive is readable (tested in the if itself, since
# set -e would abort before a separate $? check could run)
if tar -tzf "$BACKUP_DIR/n8n_full_$TIMESTAMP.tar.gz" > /dev/null 2>&1; then
    SIZE=$(du -h "$BACKUP_DIR/n8n_full_$TIMESTAMP.tar.gz" | cut -f1)
    echo "[$(date)] Backup complete: n8n_full_$TIMESTAMP.tar.gz ($SIZE)"
else
    echo "[$(date)] ERROR: Backup verification failed!"
    exit 1
fi

Make the script executable:

chmod +x ~/n8n-docker/backup.sh

Schedule with Cron
crontab -e

Add a daily 3 AM run:

0 3 * * * /root/n8n-docker/backup.sh >> /var/log/n8n-backup.log 2>&1

Off-Site Backup
# Add to the end of backup.sh (assumes an rclone remote named "remote" is configured)
rclone copy "$BACKUP_DIR/n8n_full_$TIMESTAMP.tar.gz" \
    remote:n8n-backups/ --progress

Restore Procedure
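A backup you have never restored is only a hope. Before you need the steps below in anger, rehearse the tar round trip on scratch data. This self-contained sketch (all paths invented) mirrors what backup.sh produces:

```shell
#!/bin/bash
set -euo pipefail

# Create scratch data, archive it, verify it, and restore it elsewhere
WORK=$(mktemp -d)
mkdir -p "$WORK/n8n_data"
echo "scratch-config" > "$WORK/n8n_data/config"

# Archive the directory (mirrors step 2 of backup.sh)
tar -czf "$WORK/n8n_data.tar.gz" -C "$WORK" n8n_data/

# Verify readability, then extract to a fresh directory
tar -tzf "$WORK/n8n_data.tar.gz" > /dev/null
mkdir -p "$WORK/restore"
tar -xzf "$WORK/n8n_data.tar.gz" -C "$WORK/restore"

# Restored file must match the original byte-for-byte
diff "$WORK/n8n_data/config" "$WORK/restore/n8n_data/config"
echo "restore drill OK"
rm -rf "$WORK"
```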
# Extract the backup
mkdir -p /tmp/restore
tar -xzf n8n_full_20250615_030000.tar.gz -C /tmp/restore/

# Restore PostgreSQL
docker compose exec -T postgres pg_restore -U n8n -d n8n --clean \
    < /tmp/restore/backups/n8n/postgres_20250615_030000.dump

# Restore n8n data
docker compose down
tar -xzf /tmp/restore/backups/n8n/n8n_data_20250615_030000.tar.gz -C ~/n8n-docker/
docker compose up -d

Step 4: Health Monitoring
n8n's Built-in Health Check
curl -s https://n8n.yourdomain.com/healthz

Returns {"status":"ok"} when healthy.
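A status-code check can miss a body that is not ok. If you want to assert on the JSON itself, a small helper (the function name is ours, not n8n's) can wrap the curl output:

```shell
# healthy_body: succeed only when the healthz JSON reports status ok
healthy_body() {
  grep -q '"status":"ok"' <<< "$1"
}

# Canned examples; a live check would pass in "$(curl -s .../healthz)"
healthy_body '{"status":"ok"}' && echo "up"
healthy_body '{"status":"error"}' || echo "down"
```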
External Monitoring
Don't rely solely on n8n to monitor itself. Use an external check from a different server:
*/5 * * * * curl -sf https://n8n.yourdomain.com/healthz > /dev/null || \
curl -X POST https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \
-d '{"text":"🔴 n8n is DOWN!"}'

Queue Monitoring
Monitor queue depth to know when to scale workers:
docker exec $(docker ps -qf "name=redis") redis-cli -a your-redis-password llen bull:jobs:wait

If the queue depth grows consistently, it's time to scale workers or upgrade your VPS.
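To turn that depth number into an actionable signal, wrap it in a threshold check. A sketch; the threshold of 50 is an arbitrary starting point, not an n8n recommendation:

```shell
# queue_status: print SCALE when waiting jobs exceed the threshold
queue_status() {
  local depth=$1 threshold=${2:-50}
  if [ "$depth" -gt "$threshold" ]; then
    echo "SCALE: $depth jobs waiting"
  else
    echo "OK: $depth jobs waiting"
  fi
}

# Feed it the llen output from the redis-cli command above, e.g.:
# queue_status "$(docker exec ... redis-cli ... llen bull:jobs:wait)"
queue_status 12   # → OK: 12 jobs waiting
queue_status 75   # → SCALE: 75 jobs waiting
```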
Step 5: Security Hardening
IP Restrictions
n8n.yourdomain.com {
    @blocked not remote_ip 203.0.113.10 198.51.100.0/24
    respond @blocked 403
    reverse_proxy n8n:5678
}

HTTP Basic Auth
n8n.yourdomain.com {
    basicauth {
        admin $2a$14$hashed_password_here
    }
    reverse_proxy n8n:5678
}

Generate the hashed password:
docker run --rm caddy:2-alpine caddy hash-password --plaintext 'your-password'

Webhook Path Isolation
environment:
  - N8N_PATH=/app/
  - WEBHOOK_URL=https://n8n.yourdomain.com/

This serves the UI at /app/ while webhooks remain at the root, allowing different security rules per path.
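With the UI under /app/, the two Caddy patterns above can be combined: basic auth on the UI path while webhook routes stay open for external services. A sketch; test the matcher behavior against your own Caddy version:

```
n8n.yourdomain.com {
    @ui path /app/*
    basicauth @ui {
        admin $2a$14$hashed_password_here
    }
    reverse_proxy n8n:5678
}
```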
Environment Variable Security
Never put secrets directly in docker-compose.yml. Use an .env file:
N8N_ENCRYPTION_KEY=generated-key-here
POSTGRES_PASSWORD=generated-password-here
REDIS_PASSWORD=generated-password-here

Restrict its permissions:

chmod 600 ~/n8n-docker/.env

Then reference the variables in docker-compose.yml:

environment:
  - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
  - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}

Step 6: Update Strategy
cd ~/n8n-docker
# 1. Backup first
./backup.sh
# 2. Pull the new image
docker compose pull n8n n8n-worker
# 3. Restart with the new version
docker compose up -d
# 4. Watch logs for migration errors
docker compose logs -f n8n
# 5. Verify health
curl -s https://n8n.yourdomain.com/healthz

If anything goes wrong, roll back from the backup. Pin to a specific version in production:
n8n:
  image: docker.n8n.io/n8nio/n8n:1.70.3

Step 7: Performance Tuning
Execution Data Pruning
For high-volume workflows (like monitoring every 5 min), set Save Successful Executions: None and Save Failed Executions: All in each workflow's settings.
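If you keep exported workflow JSON under version control, the same toggles live in each workflow's settings object. The field names below match recent n8n exports, but confirm against one of your own:

```json
{
  "settings": {
    "saveDataSuccessExecution": "none",
    "saveDataErrorExecution": "all"
  }
}
```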
PostgreSQL Tuning
For a 4 GB VPS running PostgreSQL alongside n8n:
shared_buffers = 256MB
effective_cache_size = 1GB
work_mem = 16MB
maintenance_work_mem = 128MB
max_connections = 50

Mount the file and point PostgreSQL at it:

  postgres:
    image: postgres:16-alpine
    volumes:
      - ./postgres_data:/var/lib/postgresql/data
      - ./postgresql.conf:/etc/postgresql/postgresql.conf
    command: postgres -c config_file=/etc/postgresql/postgresql.conf

Resource Limits
  n8n:
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: '1.0'

  n8n-worker:
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: '1.0'

  postgres:
    deploy:
      resources:
        limits:
          memory: 512M

The Complete Production Stack
Internet
→ Caddy (SSL termination, reverse proxy, IP filtering)
→ n8n main (UI, webhook reception, scheduling)
→ Redis (job queue)
→ n8n worker(s) (workflow execution)
→ PostgreSQL (workflows, credentials, executions)
→ Ollama (local LLM — from Part 3)
External services:
← GitHub webhooks
← Email (IMAP polling)
→ Slack notifications
→ Database writes
→ API integrations

All of this runs on a single RamNode VPS. A 4 GB plan handles this comfortably for moderate workloads. Step up to 8 GB for heavy AI processing or high-volume integrations.
Series Complete
Over six parts, you've gone from zero to a production-grade automation platform:
- 1. Installation — Docker, Caddy, SSL, first workflow
- 2. Core Patterns — Webhooks, scheduling, conditionals, loops, error handling
- 3. AI Workflows — Local LLM processing with Ollama at zero API cost
- 4. DevOps — Server monitoring, deployment triggers, log analysis
- 5. Integrations — GitHub, Slack, email, databases, REST APIs
- 6. Production — PostgreSQL, queue workers, backups, security, scaling
The total cost: a $4–8/month RamNode VPS. Compare that to Zapier Team ($103.50/month with 2,000 task limits). You own the platform. You own the data. You pay a flat rate for unlimited automation.
