Podman vs Docker
| Feature | Podman | Docker |
|---|---|---|
| Architecture | Daemonless (fork/exec) | Client-server (dockerd daemon) |
| Root requirement | Rootless by default | Root daemon by default; rootless mode needs extra setup |
| Systemd integration | Native | Requires additional config |
| Docker CLI compatibility | Full (alias docker=podman) | Native |
| Pods | Native support | Not supported (Compose groups services, but without shared namespaces) |
| Resource overhead | Lower (no daemon) | Higher (persistent daemon) |
Why RamNode for Podman
- KVM virtualization provides full kernel access for rootless containers
- Starting at $4/month with generous bandwidth allocations
- $500 annual credit for new accounts
- Multiple datacenter locations (US, NL) for low-latency deployments
Prerequisites
- A RamNode KVM VPS with at least 2 GB RAM (4 GB recommended)
- Ubuntu 24.04 LTS installed
- SSH access with a non-root sudo user configured
- Basic Linux command-line familiarity
Initial Server Setup
sudo apt update && sudo apt upgrade -y
sudo reboot
Create a Dedicated User
If logged in as root, create a non-root user for rootless Podman:
adduser poduser
usermod -aG sudo poduser
su - poduser
Configure Firewall
sudo ufw allow OpenSSH
sudo ufw enable
sudo ufw status
Installing Podman
Ubuntu 24.04 includes Podman in its default repositories:
sudo apt install -y podman podman-compose slirp4netns uidmap fuse-overlayfs
podman --version
podman info --format '{{.Host.Security.Rootless}}'
The second command should return true, confirming rootless mode.
Configure Subuid/Subgid Ranges
grep poduser /etc/subuid
grep poduser /etc/subgid
# If no entries appear, add them:
sudo usermod --add-subuids 100000-165535 poduser
sudo usermod --add-subgids 100000-165535 poduser
Configure Registries
mkdir -p ~/.config/containers
cat > ~/.config/containers/registries.conf << 'EOF'
unqualified-search-registries = ['docker.io', 'quay.io', 'ghcr.io']
EOF
Docker CLI Compatibility (Optional)
echo 'alias docker=podman' >> ~/.bashrc
source ~/.bashrc
Container Management Basics
# Pull the latest Nginx image
podman pull docker.io/library/nginx:latest
# Run in detached mode with port mapping
podman run -d --name webserver -p 8080:80 nginx:latest
# Verify it's running
podman ps
curl http://localhost:8080
Volume Management
# Create a named volume
podman volume create app_data
# Mount the volume in a container
podman run -d --name postgres \
-v app_data:/var/lib/postgresql/data \
-e POSTGRES_PASSWORD=securepassword \
-p 5432:5432 \
docker.io/library/postgres:16
# List and inspect volumes
podman volume ls
podman volume inspect app_data
💡 Tip
Named volumes persist data independently of container lifecycle. Use bind mounts (-v /host/path:/container/path) when you need direct access to files on the host filesystem.
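As a concrete illustration, here is a sketch of serving static files through a bind mount; the ~/site directory, port 8081, and the hello page are hypothetical, and the commands assume a running Podman host:

```shell
# Create a host directory with a test page (hypothetical path)
mkdir -p ~/site && echo '<h1>hello</h1>' > ~/site/index.html

# :ro mounts the host directory read-only inside the container
podman run -d --name static-site \
  -v ~/site:/usr/share/nginx/html:ro \
  -p 8081:80 docker.io/library/nginx:latest

# Nginx should now serve the page directly from the host filesystem
curl http://localhost:8081
```

Edits to ~/site/index.html on the host are visible to the container immediately, which is exactly the behavior named volumes do not give you.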
Networking
Rootless Networking with slirp4netns
In rootless mode, ports above 1024 work without special configuration. For ports below 1024 (80, 443), use a reverse proxy or allow unprivileged port binding.
Option 1: Reverse Proxy (Recommended)
sudo apt install -y nginx
sudo tee /etc/nginx/sites-available/podman-proxy << 'EOF'
server {
listen 80;
server_name yourdomain.com;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
EOF
sudo ln -s /etc/nginx/sites-available/podman-proxy /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
Option 2: Allow Unprivileged Port Binding
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee -a /etc/sysctl.conf
Container-to-Container Networking
# Create a custom network
podman network create app_network
# Run containers on the same network
podman run -d --name db --network app_network \
-e POSTGRES_PASSWORD=secret postgres:16
podman run -d --name app --network app_network \
-e DATABASE_URL=postgresql://postgres:secret@db:5432/postgres \
-p 8080:3000 myapp:latest
Containers on the same Podman network can resolve each other by container name.
Working with Pods
Pods group multiple containers sharing the same network namespace — ideal for tightly coupled services.
# Create a pod with port mappings
podman pod create --name webapp \
-p 8080:80 \
-p 5432:5432
# Add containers to the pod
podman run -d --pod webapp --name webapp-nginx nginx:latest
podman run -d --pod webapp --name webapp-db \
-e POSTGRES_PASSWORD=secret postgres:16
# View pod status
podman pod ps
podman pod inspect webapp
# Stop/start entire pod
podman pod stop webapp
podman pod start webapp
📌 Note
All containers in a pod share localhost. Nginx can reach Postgres at localhost:5432 without any network configuration.
Using Podman Compose
Example deploying a WordPress stack:
version: '3.8'
services:
db:
image: docker.io/library/mariadb:11
restart: always
environment:
MYSQL_ROOT_PASSWORD: rootpassword
MYSQL_DATABASE: wordpress
MYSQL_USER: wpuser
MYSQL_PASSWORD: wppassword
volumes:
- db_data:/var/lib/mysql
wordpress:
image: docker.io/library/wordpress:latest
restart: always
ports:
- '8080:80'
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: wpuser
WORDPRESS_DB_PASSWORD: wppassword
WORDPRESS_DB_NAME: wordpress
volumes:
- wp_data:/var/www/html
depends_on:
- db
volumes:
db_data:
wp_data:
mkdir -p ~/wordpress && cd ~/wordpress
# (save the above as docker-compose.yml)
podman-compose up -d
podman-compose ps
Systemd Integration
Podman integrates natively with systemd for automatic container startup on boot — one of its strongest advantages over Docker.
# Generate a user-level systemd service
mkdir -p ~/.config/systemd/user
podman generate systemd --name webserver --new --files
mv container-webserver.service ~/.config/systemd/user/
# Reload and enable
systemctl --user daemon-reload
systemctl --user enable container-webserver.service
systemctl --user start container-webserver.service
# Check status
systemctl --user status container-webserver.service
Enable Linger for Boot Persistence
User-level systemd services only run when the user is logged in. Enable linger for boot persistence:
sudo loginctl enable-linger poduser
Generate Systemd for Pods
podman generate systemd --name webapp --new --files
mv pod-webapp.service container-webapp-*.service \
~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable pod-webapp.service
💡 Tip
Use the --new flag when generating systemd units. This creates fresh containers on each start rather than restarting stopped ones, ensuring clean state.
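Podman 4.4 and later (Ubuntu 24.04 ships Podman 4.9) also support Quadlet, a declarative alternative to podman generate systemd. A minimal sketch of an equivalent rootless unit; the file name and port mapping are assumptions:

```ini
# ~/.config/containers/systemd/webserver.container
[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After systemctl --user daemon-reload, Quadlet generates a webserver.service unit that behaves like a --new unit: a fresh container on every start.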
Automatic Image Updates
# Run a container with the auto-update label
podman run -d --name webserver \
--label io.containers.autoupdate=registry \
-p 8080:80 docker.io/library/nginx:latest
# Generate systemd unit (required for auto-update)
podman generate systemd --name webserver --new --files
mv container-webserver.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable container-webserver.service
# Enable the auto-update timer
systemctl --user enable podman-auto-update.timer
systemctl --user start podman-auto-update.timer
# Check what would be updated (dry run)
podman auto-update --dry-run
Building Custom Images
Podman uses Buildah under the hood and is fully compatible with standard Dockerfiles.
FROM docker.io/library/node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
USER node
CMD ["node", "server.js"]
# Build the image
podman build -t myapp:latest .
# Run the custom image
podman run -d --name myapp -p 3000:3000 myapp:latest
📌 Note
Podman supports both Containerfile and Dockerfile naming; Containerfile is the Podman/Buildah-native convention.
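The Node image above can also be split into a multi-stage build so the final layer ships only what the app needs at runtime. A sketch under the same assumptions (a hypothetical server.js app):

```dockerfile
# Containerfile — build dependencies in one stage, copy results into a clean one
FROM docker.io/library/node:20-slim AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

FROM docker.io/library/node:20-slim
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
USER node
CMD ["node", "server.js"]
```

podman build picks this up the same way; only the final stage ends up in the tagged image.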
Security Hardening
Rootless Security Best Practices
- Always run containers as a non-root user (rootless mode is the default)
- Use read-only root filesystems where possible: --read-only
- Drop all capabilities and add only what's needed: --cap-drop=ALL --cap-add=NET_BIND_SERVICE
- Set memory and CPU limits to prevent resource exhaustion
- Use --security-opt=no-new-privileges to prevent privilege escalation
podman run -d --name secure-nginx \
--read-only \
--cap-drop=ALL \
--cap-add=NET_BIND_SERVICE \
--security-opt=no-new-privileges \
--memory=256m \
--cpus=0.5 \
--tmpfs /var/cache/nginx:rw,size=64m \
--tmpfs /var/run:rw,size=1m \
-p 8080:80 \
nginx:latest
Firewall Rules for Containers
sudo ufw allow 8080/tcp comment 'Podman web service'
sudo ufw allow 443/tcp comment 'HTTPS via reverse proxy'
sudo ufw reload
Resource Optimization
Recommended Resource Limits by Plan
| RamNode Plan | RAM | Max Containers | Per-Container Limit |
|---|---|---|---|
| 2 GB VPS | 2 GB | 3–4 lightweight | 256–512 MB each |
| 4 GB VPS | 4 GB | 6–8 mixed | 512 MB–1 GB each |
| 8 GB VPS | 8 GB | 12–16 | 512 MB–2 GB each |
| 16 GB+ VPS | 16+ GB | 20+ | As needed |
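The per-container limits in the table follow from quick arithmetic; a sketch for the 4 GB plan, where the 512 MB OS reserve and the container count are assumptions:

```shell
# Split usable RAM evenly across containers (illustrative values)
TOTAL_MB=4096       # 4 GB plan
RESERVED_MB=512     # headroom for the OS and Podman itself
CONTAINERS=6

PER_CONTAINER_MB=$(( (TOTAL_MB - RESERVED_MB) / CONTAINERS ))
echo "--memory=${PER_CONTAINER_MB}m"   # pass this flag to podman run
```

Adjust the reserve upward if you run a reverse proxy or other services directly on the host.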
Storage Cleanup
# Remove stopped containers, unused images, unused volumes, and build cache
# (--volumes also deletes data in volumes no container references, so use with care)
podman system prune -a --volumes
# Check disk usage
podman system df
# Set up automatic cleanup via cron
(crontab -l 2>/dev/null; echo '0 3 * * 0 podman system prune -a -f --volumes') | crontab -
Monitoring & Logging
# Live resource usage for all containers
podman stats --all
# View container logs with timestamps
podman logs -t --since 1h webserver
# Follow logs in real time
podman logs -f webserver
Health Checks
podman run -d --name webserver \
--health-cmd='curl -f http://localhost/ || exit 1' \
--health-interval=30s \
--health-retries=3 \
--health-timeout=5s \
-p 8080:80 nginx:latest
# Check health status
podman healthcheck run webserver
podman inspect --format='{{.State.Health.Status}}' webserver
Backup & Recovery
# Export a container filesystem
podman export webserver > webserver-backup.tar
# Commit a running container to an image
podman commit webserver webserver-snapshot:$(date +%Y%m%d)
# Save an image to a tar archive
podman save -o myapp-backup.tar myapp:latest
# Restore from archive
podman load -i myapp-backup.tar
Volume Backups
# Backup a named volume
podman run --rm \
-v app_data:/source:ro \
-v $(pwd):/backup \
docker.io/library/alpine \
tar czf /backup/app_data-$(date +%Y%m%d).tar.gz -C /source .
# Restore a volume
podman run --rm \
-v app_data:/target \
-v $(pwd):/backup:ro \
docker.io/library/alpine \
tar xzf /backup/app_data-20250210.tar.gz -C /target
💡 Tip
Automate volume backups with cron jobs and consider syncing backup archives to offsite storage using tools like rclone or restic.
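The cron half of that tip can be sketched as a small wrapper script; the podman invocation is the one shown above, while the backup directory, retention window, and script name are assumptions:

```shell
#!/usr/bin/env bash
# backup-volumes.sh — hypothetical nightly volume backup with retention
set -euo pipefail

BACKUP_DIR="$HOME/backups"
RETENTION_DAYS=30
mkdir -p "$BACKUP_DIR"

# Archive the volume (same tar pattern as the example above)
podman run --rm \
  -v app_data:/source:ro \
  -v "$BACKUP_DIR":/backup \
  docker.io/library/alpine \
  tar czf "/backup/app_data-$(date +%Y%m%d).tar.gz" -C /source .

# Prune archives older than the retention window
find "$BACKUP_DIR" -name 'app_data-*.tar.gz' -mtime +"$RETENTION_DAYS" -delete
```

Schedule it with crontab -e, for example 0 2 * * * ~/backup-volumes.sh, and point rclone or restic at $HOME/backups for the offsite copy.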
Troubleshooting
| Issue | Solution |
|---|---|
| CNI config validation error | sudo apt install containernetworking-plugins |
| Permission denied on ports < 1024 | sudo sysctl net.ipv4.ip_unprivileged_port_start=80 |
| Subuid/subgid mapping errors | sudo usermod --add-subuids 100000-165535 $USER |
| Container DNS resolution fails | Check /etc/resolv.conf and ensure slirp4netns is installed |
| Docker Hub rate limits | podman login docker.io or use alternative registries |
| Containers not starting after reboot | sudo loginctl enable-linger $USER |
| Storage driver errors | podman system reset (deletes all containers/images) |
🎉 Next Steps
- Set up TLS termination with Caddy or Nginx for HTTPS
- Explore Podman Quadlet for declarative container management with systemd
- Deploy a container registry on your RamNode VPS for private image hosting
- Implement centralized logging with Loki or a self-hosted Grafana stack
- Scale to multi-node deployments using Podman with Kubernetes YAML support
