Databases are the foundation of nearly every application you'll host. Get them wrong and you'll face data loss, performance bottlenecks, and 3 AM emergencies. Get them right and they'll run for years without drama.
This guide uses Claude Code to generate production-ready database configurations, management scripts, and monitoring setups. We'll cover PostgreSQL, MySQL/MariaDB, Redis, connection pooling, migrations, and the operational scripts that keep everything running smoothly.
Prerequisites
- Claude Code installed (see Part 1)
- Docker and Docker Compose
- Basic understanding of SQL databases
PostgreSQL — The Workhorse
PostgreSQL handles most workloads exceptionally well. Let's deploy it properly:
"Create a production PostgreSQL deployment with Docker Compose setup with proper volumes, tuned postgresql.conf for a 4-core, 8GB RAM server, pg_hba.conf with secure authentication, automated backup script with retention, replication setup for a standby server, monitoring queries for performance analysis, and connection pooling with PgBouncer."
```yaml
services:
  postgres:
    image: postgres:16-alpine
    container_name: postgres
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?Set POSTGRES_PASSWORD in .env}
      POSTGRES_DB: ${POSTGRES_DB:-app}
      POSTGRES_INITDB_ARGS: "--encoding=UTF8 --locale=en_US.UTF-8"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./config/postgresql.conf:/etc/postgresql/postgresql.conf:ro
      - ./config/pg_hba.conf:/etc/postgresql/pg_hba.conf:ro
      - ./backups:/backups
    command: postgres -c config_file=/etc/postgresql/postgresql.conf -c hba_file=/etc/postgresql/pg_hba.conf
    ports:
      - "127.0.0.1:5432:5432"  # Only localhost by default
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres}"]
      interval: 10s
      timeout: 5s
      retries: 5
    shm_size: 256mb  # Needed for parallel queries
    networks:
      - database

  pgbouncer:
    image: edoburu/pgbouncer:1.21.0
    container_name: pgbouncer
    restart: unless-stopped
    environment:
      DATABASE_URL: postgres://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB:-app}
      POOL_MODE: transaction
      MAX_CLIENT_CONN: 1000
      DEFAULT_POOL_SIZE: 20
      MIN_POOL_SIZE: 5
      RESERVE_POOL_SIZE: 5
      RESERVE_POOL_TIMEOUT: 3
      SERVER_RESET_QUERY: DISCARD ALL
      SERVER_CHECK_QUERY: SELECT 1
      SERVER_CHECK_DELAY: 30
      AUTH_TYPE: scram-sha-256
    ports:
      - "127.0.0.1:6432:5432"
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - database

  postgres-exporter:
    image: prometheuscommunity/postgres-exporter:v0.15.0
    container_name: postgres-exporter
    restart: unless-stopped
    environment:
      DATA_SOURCE_NAME: postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB:-app}?sslmode=disable
    ports:
      - "127.0.0.1:9187:9187"
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - database

volumes:
  postgres_data:

networks:
  database:
    driver: bridge
```
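Compose reads these credentials from a `.env` file beside the compose file. A minimal sketch; the variable names come from the compose file above, and the values are placeholders to replace:

```bash
# .env -- keep this file out of version control
POSTGRES_USER=postgres
POSTGRES_DB=app
# Generate a strong password, e.g.: openssl rand -base64 24
POSTGRES_PASSWORD=replace-me
```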
PostgreSQL Configuration

```ini
# =============================================================================
# PostgreSQL Configuration - Tuned for 4 cores, 8GB RAM
# =============================================================================

# Connection Settings
listen_addresses = '*'
port = 5432
max_connections = 200
unix_socket_directories = '/var/run/postgresql'

# Memory Settings
shared_buffers = 2GB                  # 25% of RAM
work_mem = 64MB                       # Per-operation memory
maintenance_work_mem = 512MB          # For VACUUM, CREATE INDEX
effective_cache_size = 6GB            # 50-75% of RAM

# Write-Ahead Log (WAL)
wal_level = replica                   # Needed for replication
max_wal_size = 2GB
min_wal_size = 512MB
wal_buffers = 64MB
checkpoint_completion_target = 0.9
checkpoint_timeout = 10min

# Replication
max_wal_senders = 5
max_replication_slots = 5
hot_standby = on
hot_standby_feedback = on

# Query Planner
random_page_cost = 1.1                # Lower for SSD
effective_io_concurrency = 200        # Higher for SSD
default_statistics_target = 100

# Parallel query settings
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_parallel_maintenance_workers = 2
parallel_leader_participation = on

# Logging
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d.log'
log_rotation_age = 1d
log_rotation_size = 100MB
log_min_duration_statement = 1000     # Log queries slower than 1 second
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
log_autovacuum_min_duration = 0
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '

# Autovacuum
autovacuum = on
autovacuum_max_workers = 3
autovacuum_naptime = 1min
autovacuum_vacuum_threshold = 50
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_threshold = 50
autovacuum_analyze_scale_factor = 0.05
autovacuum_vacuum_cost_delay = 2ms
autovacuum_vacuum_cost_limit = 1000

# Security
password_encryption = scram-sha-256
ssl = off                             # Enable if exposing externally

# Extensions
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000
pg_stat_statements.track = all
```
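A note on applying changes: most of these settings take effect on a configuration reload, but shared_buffers and shared_preload_libraries only change after a restart. A quick sketch, assuming the container name from the compose file above:

```bash
# Reload settings that don't require a restart
docker compose exec postgres psql -U postgres -c "SELECT pg_reload_conf();"

# Restart for shared_buffers, shared_preload_libraries, and friends
docker compose restart postgres
```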
Backup Script

```bash
#!/bin/bash
set -euo pipefail

# PostgreSQL Backup Script
BACKUP_DIR="${BACKUP_DIR:-/backups}"
RETENTION_DAYS="${RETENTION_DAYS:-7}"
PGHOST="${PGHOST:-localhost}"
PGPORT="${PGPORT:-5432}"
PGUSER="${PGUSER:-postgres}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

log() { echo "[$(date '+%H:%M:%S')] $*"; }

mkdir -p "$BACKUP_DIR"/{daily,weekly,monthly}

# Backup all databases
backup_all() {
    log "Starting full backup..."
    local backup_file="$BACKUP_DIR/daily/all_databases_$TIMESTAMP.sql.gz"
    pg_dumpall -h "$PGHOST" -p "$PGPORT" -U "$PGUSER" --clean --if-exists | \
        gzip -9 > "$backup_file"
    local size=$(du -h "$backup_file" | cut -f1)
    log "Full backup complete: $backup_file ($size)"
}

# Backup individual database with custom format
backup_database() {
    local db="$1"
    local backup_file="$BACKUP_DIR/daily/${db}_$TIMESTAMP.dump"
    log "Backing up database: $db"
    pg_dump -h "$PGHOST" -p "$PGPORT" -U "$PGUSER" \
        --format=custom --compress=9 --verbose \
        --file="$backup_file" "$db"
    log "Database backup complete: $backup_file"
}

# Rotate backups
rotate_backups() {
    log "Rotating backups..."
    find "$BACKUP_DIR/daily" -type f -mtime +"$RETENTION_DAYS" -delete
    # Weekly backup on Sundays
    if [[ $(date +%u) -eq 7 ]]; then
        local latest=$(ls -t "$BACKUP_DIR/daily"/*.dump 2>/dev/null | head -1)
        [[ -n "$latest" ]] && cp "$latest" "$BACKUP_DIR/weekly/"
    fi
    find "$BACKUP_DIR/weekly" -type f -mtime +28 -delete
    # Monthly backup on the 1st (string compare avoids octal errors from %d)
    if [[ $(date +%d) == "01" ]]; then
        local latest=$(ls -t "$BACKUP_DIR/daily"/*.dump 2>/dev/null | head -1)
        [[ -n "$latest" ]] && cp "$latest" "$BACKUP_DIR/monthly/"
    fi
    find "$BACKUP_DIR/monthly" -type f -mtime +365 -delete
}

case "${1:-all}" in
    all) backup_all; rotate_backups ;;
    database) backup_database "${2:?Database name required}"; rotate_backups ;;
    *) echo "Usage: $0 {all|database <name>}" ;;
esac
```
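Test restores regularly: the custom-format dumps restore with pg_restore, the pg_dumpall output with psql. A hedged sketch; the file names are illustrative:

```bash
# Restore one database from a custom-format dump
pg_restore -h localhost -U postgres --clean --if-exists \
    -d app /backups/daily/app_20250101_020000.dump

# Restore everything from a pg_dumpall archive
gunzip -c /backups/daily/all_databases_20250101_020000.sql.gz | \
    psql -h localhost -U postgres -d postgres
```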
PostgreSQL Replication

Set up streaming replication for high availability:
```yaml
services:
  postgres-standby:
    image: postgres:16-alpine
    container_name: postgres-standby
    restart: unless-stopped
    environment:
      PGUSER: replicator
      PGPASSWORD: ${REPLICATION_PASSWORD:?Set REPLICATION_PASSWORD}
    volumes:
      - postgres_standby_data:/var/lib/postgresql/data
      - ./config/postgresql.standby.conf:/etc/postgresql/postgresql.conf:ro
    command: |
      bash -c '
      if [ ! -f /var/lib/postgresql/data/PG_VERSION ]; then
        echo "Initializing standby from primary..."
        pg_basebackup -h ${PRIMARY_HOST:-postgres} -p 5432 -U replicator -D /var/lib/postgresql/data -Fp -Xs -P -R
      fi
      exec postgres -c config_file=/etc/postgresql/postgresql.conf
      '
    ports:
      - "127.0.0.1:5433:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - database

volumes:
  postgres_standby_data:

networks:
  database:
    external: true
```
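The primary will reject the standby's connection until its pg_hba.conf allows the replication user. A sketch, assuming the replicator role created by the setup script below; adjust the CIDR to your Docker network:

```bash
# On the primary: allow streaming replication from the standby's subnet
echo 'host replication replicator 172.16.0.0/12 scram-sha-256' >> ./config/pg_hba.conf
docker compose exec postgres psql -U postgres -c "SELECT pg_reload_conf();"
```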
Replication Setup Script

```bash
#!/bin/bash
set -euo pipefail

PRIMARY_HOST="${PRIMARY_HOST:-localhost}"
PRIMARY_PORT="${PRIMARY_PORT:-5432}"
PGUSER="${PGUSER:-postgres}"
REPLICATION_USER="${REPLICATION_USER:-replicator}"
REPLICATION_PASSWORD="${REPLICATION_PASSWORD:?Set REPLICATION_PASSWORD}"

log() { echo "[$(date '+%H:%M:%S')] $*"; }

# Create replication user on primary
setup_primary() {
    log "Setting up primary for replication..."
    psql -h "$PRIMARY_HOST" -p "$PRIMARY_PORT" -U "$PGUSER" -d postgres << EOF
DO \$\$
BEGIN
    IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '$REPLICATION_USER') THEN
        CREATE ROLE $REPLICATION_USER WITH REPLICATION LOGIN PASSWORD '$REPLICATION_PASSWORD';
    END IF;
END
\$\$;
SELECT pg_create_physical_replication_slot('standby_slot', true)
WHERE NOT EXISTS (
    SELECT FROM pg_replication_slots WHERE slot_name = 'standby_slot'
);
EOF
    log "Primary configured for replication"
}

# Check replication status
check_status() {
    log "Checking replication status..."
    echo "=== Primary Status ==="
    psql -h "$PRIMARY_HOST" -p "$PRIMARY_PORT" -U "$PGUSER" -d postgres -c "
        SELECT client_addr, state, sent_lsn, replay_lsn,
               pg_wal_lsn_diff(sent_lsn, replay_lsn) AS lag_bytes
        FROM pg_stat_replication;
    "
}

case "${1:-status}" in
    setup) setup_primary ;;
    status) check_status ;;
    *) echo "Usage: $0 {setup|status}" ;;
esac
```
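When the primary fails, promote the standby so it starts accepting writes. A minimal sketch against the standby's port from the compose file above (pg_promote() requires PostgreSQL 12+):

```bash
# Promote the standby: it stops replaying WAL and becomes writable
psql -h localhost -p 5433 -U postgres -d postgres -c "SELECT pg_promote();"
```

Afterward, repoint applications (or PgBouncer's DATABASE_URL) at the promoted server.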
PostgreSQL Monitoring Queries

Essential queries for understanding database health:
```bash
#!/bin/bash
set -uo pipefail

PGHOST="${PGHOST:-localhost}"
PGPORT="${PGPORT:-5432}"
PGUSER="${PGUSER:-postgres}"
PGDATABASE="${PGDATABASE:-postgres}"

section() { echo -e "\n\033[1m=== $* ===\033[0m\n"; }

run_query() {
    psql -h "$PGHOST" -p "$PGPORT" -U "$PGUSER" -d "$PGDATABASE" -c "$1"
}

# Connection Statistics
connections() {
    section "Connection Statistics"
    run_query "
        SELECT datname as database, count(*) as total,
               count(*) FILTER (WHERE state = 'active') as active,
               count(*) FILTER (WHERE state = 'idle') as idle
        FROM pg_stat_activity
        WHERE pid <> pg_backend_pid()
        GROUP BY datname ORDER BY total DESC;
    "
}

# Active Queries
active_queries() {
    section "Active Queries (Running > 5 seconds)"
    run_query "
        SELECT pid, now() - query_start as duration, usename as user,
               state, left(query, 80) as query
        FROM pg_stat_activity
        WHERE state = 'active' AND pid <> pg_backend_pid()
          AND query_start < now() - interval '5 seconds'
        ORDER BY duration DESC LIMIT 10;
    "
}

# Table Statistics
table_stats() {
    section "Table Statistics (Largest Tables)"
    run_query "
        SELECT schemaname || '.' || relname as table,
               pg_size_pretty(pg_total_relation_size(relid)) as total_size,
               n_live_tup as rows, n_dead_tup as dead_rows,
               CASE WHEN n_live_tup > 0
                    THEN round(100.0 * n_dead_tup / n_live_tup, 2) ELSE 0
               END as dead_ratio
        FROM pg_stat_user_tables
        ORDER BY pg_total_relation_size(relid) DESC LIMIT 15;
    "
}

# Cache Hit Ratio
cache_stats() {
    section "Cache Statistics"
    run_query "
        SELECT 'Table Cache' as type, sum(heap_blks_hit) as hits,
               sum(heap_blks_read) as reads,
               CASE WHEN sum(heap_blks_hit) + sum(heap_blks_read) > 0
                    THEN round(100.0 * sum(heap_blks_hit) /
                               (sum(heap_blks_hit) + sum(heap_blks_read)), 2)
                    ELSE 100 END as hit_ratio
        FROM pg_statio_user_tables;
    "
    echo "Target: > 99% hit ratio"
}

case "${1:-all}" in
    connections) connections ;;
    active) active_queries ;;
    tables) table_stats ;;
    cache) cache_stats ;;
    all) connections; cache_stats; active_queries; table_stats ;;
    *) echo "Usage: $0 {all|connections|active|tables|cache}" ;;
esac
```
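The postgresql.conf above preloads pg_stat_statements, which this script doesn't query yet. A hedged addition that lists the slowest queries by total time, using the PostgreSQL 13+ column names (run CREATE EXTENSION pg_stat_statements once per database first):

```bash
psql -h localhost -U postgres -d app -c "
    SELECT round(total_exec_time::numeric, 1) AS total_ms, calls,
           round(mean_exec_time::numeric, 2) AS mean_ms,
           left(query, 80) AS query
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC LIMIT 10;
"
```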
MySQL/MariaDB Deployment

For applications that require MySQL compatibility:
```yaml
services:
  mysql:
    image: mariadb:11
    container_name: mysql
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD:?Set MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE:-app}
      MYSQL_USER: ${MYSQL_USER:-app}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD:?Set MYSQL_PASSWORD}
    volumes:
      - mysql_data:/var/lib/mysql
      - ./config/mysql.cnf:/etc/mysql/conf.d/custom.cnf:ro
      - ./backups:/backups
    ports:
      - "127.0.0.1:3306:3306"
    command: >
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_unicode_ci
      --innodb-buffer-pool-size=2G
      --innodb-log-file-size=256M
      --innodb-flush-log-at-trx-commit=2
      --innodb-flush-method=O_DIRECT
      --max-connections=200
      --slow-query-log=ON
      --slow-query-log-file=/var/lib/mysql/slow.log
      --long-query-time=1
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - database

  mysql-exporter:
    image: prom/mysqld-exporter:v0.15.1
    container_name: mysql-exporter
    restart: unless-stopped
    environment:
      MYSQLD_EXPORTER_PASSWORD: ${MYSQL_ROOT_PASSWORD}
    command:
      - --mysqld.address=mysql:3306
      - --mysqld.username=root
    ports:
      - "127.0.0.1:9104:9104"
    depends_on:
      mysql:
        condition: service_healthy
    networks:
      - database

volumes:
  mysql_data:

networks:
  database:
    driver: bridge
```
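The exporter above connects as root for brevity. A least-privilege alternative is a dedicated monitoring user; a sketch with the privileges mysqld-exporter typically needs (swap it into --mysqld.username and MYSQLD_EXPORTER_PASSWORD afterward):

```bash
mysql -h 127.0.0.1 -P 3306 -u root -p <<'EOF'
CREATE USER IF NOT EXISTS 'exporter'@'%' IDENTIFIED BY 'replace-me';
GRANT PROCESS, REPLICATION CLIENT ON *.* TO 'exporter'@'%';
GRANT SELECT ON performance_schema.* TO 'exporter'@'%';
EOF
```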
MySQL Backup Script

```bash
#!/bin/bash
set -euo pipefail

BACKUP_DIR="${BACKUP_DIR:-/backups}"
RETENTION_DAYS="${RETENTION_DAYS:-7}"
MYSQL_HOST="${MYSQL_HOST:-localhost}"
MYSQL_PORT="${MYSQL_PORT:-3306}"
MYSQL_USER="${MYSQL_USER:-root}"
MYSQL_PASSWORD="${MYSQL_PASSWORD:?Set MYSQL_PASSWORD}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

log() { echo "[$(date '+%H:%M:%S')] $*"; }

mkdir -p "$BACKUP_DIR/daily"

backup_all() {
    local backup_file="$BACKUP_DIR/daily/all_databases_$TIMESTAMP.sql.gz"
    log "Starting full backup..."
    mysqldump -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" \
        --all-databases --single-transaction --routines --triggers --events \
        --quick --lock-tables=false | gzip -9 > "$backup_file"
    local size=$(du -h "$backup_file" | cut -f1)
    log "Backup complete: $backup_file ($size)"
}

rotate() {
    find "$BACKUP_DIR/daily" -type f -mtime +"$RETENTION_DAYS" -delete
    log "Rotated backups older than $RETENTION_DAYS days"
}

case "${1:-all}" in
    all) backup_all; rotate ;;
    *) echo "Usage: $0 {all}" ;;
esac
```
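Restoring is the mirror image of the dump; a hedged sketch with an illustrative file name:

```bash
gunzip -c /backups/daily/all_databases_20250101_020000.sql.gz | \
    mysql -h 127.0.0.1 -P 3306 -u root -p"$MYSQL_ROOT_PASSWORD"
```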
Redis for Caching & Sessions

Redis deployment with persistence and monitoring:
```yaml
services:
  redis:
    image: redis:7-alpine
    container_name: redis
    restart: unless-stopped
    command: >
      redis-server
      --appendonly yes
      --appendfsync everysec
      --maxmemory 1gb
      --maxmemory-policy allkeys-lru
      --tcp-backlog 511
      --timeout 0
      --tcp-keepalive 300
      --databases 16
      --save 900 1
      --save 300 10
      --save 60 10000
      --stop-writes-on-bgsave-error yes
      --rdbcompression yes
      --requirepass ${REDIS_PASSWORD:?Set REDIS_PASSWORD}
    volumes:
      - redis_data:/data
    ports:
      - "127.0.0.1:6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    sysctls:
      net.core.somaxconn: 1024
    networks:
      - database

  redis-exporter:
    image: oliver006/redis_exporter:v1.55.0
    container_name: redis-exporter
    restart: unless-stopped
    environment:
      REDIS_ADDR: redis://redis:6379
      REDIS_PASSWORD: ${REDIS_PASSWORD}
    ports:
      - "127.0.0.1:9121:9121"
    depends_on:
      redis:
        condition: service_healthy
    networks:
      - database

volumes:
  redis_data:

networks:
  database:
    driver: bridge
```
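The flags above give you both AOF and RDB persistence in the redis_data volume, which also makes backups simple: force a snapshot, wait for it, then copy the RDB file out. A hedged sketch using the container name from the compose file:

```bash
# Snapshot, wait for completion (LASTSAVE changes), then copy the RDB out
last=$(docker exec redis redis-cli -a "$REDIS_PASSWORD" LASTSAVE)
docker exec redis redis-cli -a "$REDIS_PASSWORD" BGSAVE
until [[ "$(docker exec redis redis-cli -a "$REDIS_PASSWORD" LASTSAVE)" != "$last" ]]; do
    sleep 1
done
docker cp redis:/data/dump.rdb "./backups/redis_$(date +%Y%m%d_%H%M%S).rdb"
```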
Redis Monitoring Script

```bash
#!/bin/bash
set -uo pipefail

REDIS_HOST="${REDIS_HOST:-localhost}"
REDIS_PORT="${REDIS_PORT:-6379}"
REDIS_PASSWORD="${REDIS_PASSWORD:-}"

redis_cmd() {
    if [[ -n "$REDIS_PASSWORD" ]]; then
        redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" -a "$REDIS_PASSWORD" "$@" 2>/dev/null
    else
        redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" "$@"
    fi
}

section() { echo -e "\n\033[1m=== $* ===\033[0m\n"; }

memory_info() {
    section "Memory Usage"
    redis_cmd INFO memory | grep -E "used_memory_human|used_memory_peak_human|maxmemory_human|mem_fragmentation_ratio"
}

stats() {
    section "Statistics"
    redis_cmd INFO stats | grep -E "total_connections_received|total_commands_processed|instantaneous_ops_per_sec|keyspace_hits|keyspace_misses"
    local hits=$(redis_cmd INFO stats | grep "keyspace_hits:" | cut -d: -f2 | tr -d '\r')
    local misses=$(redis_cmd INFO stats | grep "keyspace_misses:" | cut -d: -f2 | tr -d '\r')
    if [[ -n "$hits" && -n "$misses" && $((hits + misses)) -gt 0 ]]; then
        local ratio=$(echo "scale=2; $hits * 100 / ($hits + $misses)" | bc)
        echo "keyspace_hit_ratio:${ratio}%"
    fi
}

case "${1:-all}" in
    memory) memory_info ;;
    stats) stats ;;
    all) memory_info; stats ;;
    *) echo "Usage: $0 {all|memory|stats}" ;;
esac
```
Database Migration Workflows

Safe database migrations with rollback support (a matching rollback sketch follows the runner below):
```bash
#!/bin/bash
set -euo pipefail

MIGRATIONS_DIR="${MIGRATIONS_DIR:-./migrations}"
PGHOST="${PGHOST:-localhost}"
PGPORT="${PGPORT:-5432}"
PGUSER="${PGUSER:-postgres}"
PGDATABASE="${PGDATABASE:-app}"

log() { echo "[$(date '+%H:%M:%S')] $*"; }
error() { echo "[$(date '+%H:%M:%S')] ERROR: $*" >&2; }

# Initialize migrations table
init() {
    log "Initializing migrations table..."
    psql -h "$PGHOST" -p "$PGPORT" -U "$PGUSER" -d "$PGDATABASE" << 'EOF'
CREATE TABLE IF NOT EXISTS schema_migrations (
    id SERIAL PRIMARY KEY,
    version VARCHAR(255) NOT NULL UNIQUE,
    name VARCHAR(255),
    applied_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    checksum VARCHAR(64)
);
CREATE INDEX IF NOT EXISTS idx_migrations_version ON schema_migrations(version);
EOF
    log "Migrations table ready"
}

# Get pending migrations
get_pending() {
    local applied=$(psql -h "$PGHOST" -p "$PGPORT" -U "$PGUSER" -d "$PGDATABASE" -t -c \
        "SELECT version FROM schema_migrations ORDER BY version;" | tr -d ' ')
    for file in "$MIGRATIONS_DIR"/*.up.sql; do
        [[ -f "$file" ]] || continue
        local version=$(basename "$file" | cut -d_ -f1)
        if ! echo "$applied" | grep -q "^${version}$"; then
            echo "$file"
        fi
    done | sort
}

# Run single migration
run_migration() {
    local file="$1"
    local version=$(basename "$file" | cut -d_ -f1)
    local name=$(basename "$file" .up.sql | sed "s/^${version}_//")
    local checksum=$(sha256sum "$file" | cut -d' ' -f1)
    log "Applying: $version - $name"
    psql -h "$PGHOST" -p "$PGPORT" -U "$PGUSER" -d "$PGDATABASE" -v ON_ERROR_STOP=1 << EOF
BEGIN;
\i $file
INSERT INTO schema_migrations (version, name, checksum)
VALUES ('$version', '$name', '$checksum');
COMMIT;
EOF
    log "Applied: $version"
}

# Run all pending migrations
migrate() {
    local pending=$(get_pending)
    if [[ -z "$pending" ]]; then
        log "No pending migrations"
        return 0
    fi
    echo "$pending" | while read -r file; do
        run_migration "$file"
    done
    log "All migrations applied"
}

# Create new migration
create() {
    local name="${1:?Migration name required}"
    local timestamp=$(date +%Y%m%d%H%M%S)
    local safe_name=$(echo "$name" | tr ' ' '_' | tr '[:upper:]' '[:lower:]')
    mkdir -p "$MIGRATIONS_DIR"
    echo "-- Migration: $name" > "$MIGRATIONS_DIR/${timestamp}_${safe_name}.up.sql"
    echo "-- Rollback: $name" > "$MIGRATIONS_DIR/${timestamp}_${safe_name}.down.sql"
    log "Created migration files for: $name"
}

case "${1:-status}" in
    init) init ;;
    migrate|up) migrate ;;
    create) create "${2:-}" ;;
    *) echo "Usage: $0 {init|migrate|create <name>}" ;;
esac
```
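The runner above only walks forward. A matching rollback sketch that replays the newest migration's .down.sql twin and removes its row from schema_migrations, assuming the file layout the create command produces:

```bash
# Roll back the most recently applied migration via its .down.sql file
rollback() {
    local version=$(psql -h "$PGHOST" -p "$PGPORT" -U "$PGUSER" -d "$PGDATABASE" \
        -t -A -c "SELECT version FROM schema_migrations ORDER BY version DESC LIMIT 1;")
    [[ -n "$version" ]] || { log "Nothing to roll back"; return 0; }
    local file=$(ls "$MIGRATIONS_DIR/${version}"_*.down.sql 2>/dev/null | head -1)
    [[ -f "$file" ]] || { error "No down file found for version $version"; return 1; }
    log "Rolling back: $version"
    psql -h "$PGHOST" -p "$PGPORT" -U "$PGUSER" -d "$PGDATABASE" -v ON_ERROR_STOP=1 << EOF
BEGIN;
\i $file
DELETE FROM schema_migrations WHERE version = '$version';
COMMIT;
EOF
    log "Rolled back: $version"
}
```

Wire it into the case statement with a line like `rollback|down) rollback ;;`.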
Tips & Quick Reference

- Always test backups. A backup you haven't restored is not a backup.
- Monitor slow queries. Enable slow query logging from day one.
- Use connection pooling. PgBouncer or application-level pooling saves connections.
- Automate maintenance. Vacuuming, analyzing, and log rotation should be automatic (see the cron sketch after this list).
- Separate read traffic. Use replicas for reporting and analytics queries.
- Plan for growth. Partitioning and archival strategies are easier to implement early.
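For the automation bullet above, a crontab sketch; the script paths are illustrative, so point them at wherever you keep the scripts from this guide:

```bash
# /etc/cron.d/db-maintenance (illustrative paths and schedule)
# Nightly PostgreSQL backup at 02:00
0 2 * * * root /opt/db/backup-postgres.sh all >> /var/log/db-backup.log 2>&1
# Weekly VACUUM ANALYZE on Sundays at 03:00
0 3 * * 0 root psql -h localhost -U postgres -d app -c "VACUUM ANALYZE;" >> /var/log/db-maint.log 2>&1
```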
PostgreSQL Tuning by Server Size
| RAM | shared_buffers | effective_cache_size | work_mem |
|---|---|---|---|
| 2GB | 512MB | 1.5GB | 16MB |
| 4GB | 1GB | 3GB | 32MB |
| 8GB | 2GB | 6GB | 64MB |
| 16GB | 4GB | 12GB | 128MB |
| 32GB | 8GB | 24GB | 256MB |
Quick Reference: Database Prompts
| Need | Prompt Pattern |
|---|---|
| Setup | "Create Docker Compose for [database] tuned for [RAM] with [features]" |
| Tuning | "Generate [database] config for [workload] on [hardware specs]" |
| Backup | "Create backup script for [database] with [retention] to [destination]" |
| Monitoring | "Generate monitoring queries for [database] checking [metrics]" |
| Migration | "Create migration runner supporting [features] for [database]" |
| Replication | "Set up [sync/async] replication for [database] with [topology]" |
What's Next
With databases properly deployed and monitored, you have the foundation for any application. Your data is now backed up, replicated, and observable.
In Part 7, we'll cover Backup & Disaster Recovery Automation—ensuring you can recover from any failure scenario with encrypted backups, S3 pipelines, and verified restore testing.
