## Prerequisites

### Recommended VPS Specs

Automatic1111 is GPU-accelerated but can also run in CPU-only mode. Here is what to plan for:
| Use Case | RAM | vCPUs | Storage | Notes |
|---|---|---|---|---|
| CPU-only (testing) | 8 GB | 4 | 40 GB | Slow — 60–300s per image |
| CPU-only (workable) | 16 GB | 8 | 80 GB | Better for batch jobs |
| GPU-accelerated | 16 GB+ | 4+ | 80 GB+ | Requires GPU VPS |
**Storage note:** The base Stable Diffusion v1.5 checkpoint is ~4 GB, and SDXL models are 6–7 GB each. Budget at least 40 GB for a single-model setup, more if you plan to collect LoRAs, VAEs, and multiple checkpoints.
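Before committing to a model set, it can help to confirm how much space is actually free on the VPS. A minimal check using GNU coreutils `df` (standard on Ubuntu):

```shell
# Show free space (in GB) on the filesystem holding the current directory
avail_kb=$(df --output=avail -k . | tail -1)
echo "free space: $((avail_kb / 1024 / 1024)) GB"
```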
### What You Need Before Starting
- A RamNode KVM VPS running Ubuntu 22.04 LTS
- Root or sudo SSH access
- A domain or your VPS IP for remote access
- Basic familiarity with the Linux command line
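Before going further, a quick preflight on the VPS confirms the release and whether you will need `sudo` for the commands that follow. This is a sketch relying only on the standard `/etc/os-release` file:

```shell
# Confirm the distribution release and the current privilege level
. /etc/os-release
echo "OS: ${NAME} ${VERSION_ID}"
if [ "$(id -u)" -eq 0 ]; then
  echo "running as root"
else
  echo "not root: prefix the apt commands below with sudo"
fi
```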
## Update System & Install Dependencies

```bash
apt update && apt upgrade -y
apt install -y \
    wget curl git python3 python3-pip python3-venv \
    libgl1 libglib2.0-0 libsm6 libxext6 libxrender-dev \
    build-essential libssl-dev libffi-dev python3-dev \
    ffmpeg libopencv-dev bc
```

Ubuntu 22.04 ships with Python 3.10, which satisfies Automatic1111's requirements. Verify:

```bash
python3 --version
```

## Create a Dedicated User
Running Automatic1111 as root is not recommended. Create a dedicated user:
```bash
adduser --disabled-password --gecos "" sduser
usermod -aG sudo sduser
su - sduser
```

All remaining steps run as `sduser` unless noted otherwise.
## Install Automatic1111

```bash
cd ~
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
```

### GPU Acceleration (NVIDIA, if applicable)
If your RamNode VPS has an NVIDIA GPU, install CUDA Toolkit 11.8:
```bash
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt update
sudo apt install -y cuda-toolkit-11-8
```

Confirm the driver can see the GPU:

```bash
nvidia-smi
```

### CPU-Only Mode
No additional setup is needed for CPU-only mode. The launch script handles the rest with the --skip-torch-cuda-test flag covered in the Configure the Launch Script section below.
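If you are unsure which mode applies to your VPS, a quick check reports whether an NVIDIA GPU is actually visible. This sketch assumes nothing beyond a POSIX shell:

```shell
# Report whether an NVIDIA GPU is visible to the system
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
  echo "NVIDIA GPU detected: use the GPU launch flags"
else
  echo "no usable NVIDIA GPU: use the CPU-only launch flags"
fi
```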
## Download a Stable Diffusion Model

Automatic1111 requires at least one checkpoint model to start. Place models in `~/stable-diffusion-webui/models/Stable-diffusion/`.
### Option A — Stable Diffusion v1.5 (~4 GB)
```bash
cd ~/stable-diffusion-webui/models/Stable-diffusion/
wget -O v1-5-pruned-emaonly.safetensors \
    "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors"
```

### Option B — SDXL Base 1.0 (~6.5 GB, higher quality)
```bash
cd ~/stable-diffusion-webui/models/Stable-diffusion/
wget -O sd_xl_base_1.0.safetensors \
    "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors"
```

**Tip:** Prefer the `.safetensors` format over `.ckpt` whenever available; it loads faster and carries no arbitrary-code-execution risk.
## Configure the Launch Script

Create a webui-user.sh file to customize startup flags:

```bash
cd ~/stable-diffusion-webui
cp webui-user.sh.example webui-user.sh 2>/dev/null || touch webui-user.sh
nano webui-user.sh
```

### GPU Setup (NVIDIA)
```bash
#!/bin/bash
export COMMANDLINE_ARGS="--listen --port 7860 --enable-insecure-extension-access"
```

### CPU-Only Setup
```bash
#!/bin/bash
export COMMANDLINE_ARGS="--listen --port 7860 --skip-torch-cuda-test --precision full --no-half --enable-insecure-extension-access"
```

Make the script executable:

```bash
chmod +x webui-user.sh
```

**Note:** `--listen` binds the UI to 0.0.0.0 so it is accessible remotely. You will secure this with SSH tunneling or a reverse proxy in the Secure Access section below.
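Before launching, you can confirm that nothing else is already bound to the chosen port. This sketch assumes iproute2's `ss`, installed by default on Ubuntu 22.04:

```shell
# Check whether anything is already listening on port 7860
if ss -tln | grep -q ':7860 '; then
  echo "port 7860 is already in use; pick a different --port value"
else
  echo "port 7860 is free"
fi
```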
## First Launch

Run the web UI for the first time. This will take 5–15 minutes as it sets up a Python virtual environment and downloads dependencies (~5 GB):

```bash
cd ~/stable-diffusion-webui
./webui.sh
```

Watch the output for the line:

```
Running on local URL: http://0.0.0.0:7860
```

Once you see this, the web UI is running. Leave this terminal session open (or use screen/tmux to background it).
## Secure Access
Port 7860 should not be exposed publicly without authentication. Use one of these methods:
### Option A — SSH Tunnel (Simplest)
From your local machine:
```bash
ssh -L 7860:localhost:7860 sduser@YOUR_VPS_IP
```

Then open http://localhost:7860 in your browser. The UI is only accessible through your SSH session.
### Option B — Nginx Reverse Proxy with Password Auth

```bash
sudo apt install -y nginx apache2-utils
sudo htpasswd -c /etc/nginx/.htpasswd yourusername
```

Create /etc/nginx/sites-available/automatic1111 with the following server block:

```nginx
server {
    listen 80;
    server_name YOUR_DOMAIN_OR_IP;

    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:7860;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 300;
    }
}
```

Enable the site and reload nginx:

```bash
sudo ln -s /etc/nginx/sites-available/automatic1111 /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
```

### Add HTTPS with Let's Encrypt
```bash
sudo apt install -y certbot python3-certbot-nginx
sudo certbot --nginx -d yourdomain.com
```

## Run as a Persistent Service
Keep Automatic1111 running after SSH disconnects by creating a systemd unit at /etc/systemd/system/automatic1111.service:
```ini
[Unit]
Description=Automatic1111 Stable Diffusion Web UI
After=network.target

[Service]
Type=simple
User=sduser
WorkingDirectory=/home/sduser/stable-diffusion-webui
ExecStart=/home/sduser/stable-diffusion-webui/webui.sh
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable and start the service:

```bash
sudo systemctl daemon-reload
sudo systemctl enable automatic1111
sudo systemctl start automatic1111
```

Check its status and follow the logs:

```bash
sudo systemctl status automatic1111
journalctl -u automatic1111 -f
```

## Managing Models and Storage
Check how much disk space your models are using:

```bash
du -sh ~/stable-diffusion-webui/models/*
df -h
```

### Model Directory Structure
```
models/
├── Stable-diffusion/   # Main checkpoint files (.safetensors, .ckpt)
├── Lora/               # LoRA weights
├── VAE/                # VAE files
├── ControlNet/         # ControlNet models
└── embeddings/         # Textual inversion embeddings
```

### Reload Models Without Restarting
In the web UI, go to Settings → Actions → Reload UI or click the refresh button next to the model dropdown to pick up new files dropped into the models directory.
## Performance Tuning

### Reduce VRAM/RAM Usage

Add these flags to COMMANDLINE_ARGS in webui-user.sh for lower-memory systems:
```
--medvram               # Use medium VRAM mode (GPU setups)
--opt-split-attention   # Reduce peak VRAM usage
--no-half-vae           # Fix black image issues with some VAEs
```

For CPU-only on a RAM-constrained server:

```
--lowvram --always-batch-cond-uncond
```

### Xformers (GPU only, significant speed boost)
Install xformers into the web UI's Python virtual environment (created at `~/stable-diffusion-webui/venv` on first launch) rather than the system Python, then add `--xformers` to `COMMANDLINE_ARGS`:

```bash
~/stable-diffusion-webui/venv/bin/pip install xformers
```

Recent Automatic1111 versions can also install xformers automatically on launch when `--xformers` is set.
## Troubleshooting
| Issue | Solution |
|---|---|
| Black/corrupted images | Add --no-half-vae to launch args |
| CUDA out of memory | Add --medvram or --lowvram to launch args |
| Very slow on CPU | Expected — reduce resolution to 512×512 and lower steps to 20 |
| Port 7860 unreachable | Verify --listen is set and check sudo ufw status |
| webui.sh exits immediately | Check journalctl -u automatic1111 --since "5 minutes ago" |
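When the UI is unreachable, it helps to separate "service not running" from "firewall or proxy blocking". A minimal local probe (assumes `curl` is installed; the PORT variable is just an illustration):

```shell
# Probe the web UI on localhost; if this succeeds but remote access fails,
# the problem is the firewall or reverse proxy, not the service itself
port="${PORT:-7860}"
if curl -fsS --max-time 3 "http://127.0.0.1:${port}/" >/dev/null 2>&1; then
  echo "web UI answering on port ${port}"
else
  echo "nothing answering on port ${port}"
fi
```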
## Automatic1111 Deployed Successfully!
Your self-hosted Stable Diffusion Web UI is now running on a RamNode VPS. Explore these extensions to expand functionality:
- ControlNet — Precise image composition control
- ADetailer — Automatic face/hand touch-up on generated images
- Civitai Helper — Browse and download models from Civitai directly in the UI
- sd-webui-regional-prompter — Multi-subject compositions with region-specific prompts
Install extensions via Extensions → Install from URL inside the web UI.
