AI Image Generation Guide

    Deploy Automatic1111

    Automatic1111 is one of the most popular open-source frontends for running Stable Diffusion. Self-host your own AI image generation pipeline on RamNode's reliable VPS hosting — running 24/7 without tying up your local machine.

    Stable Diffusion · CPU & GPU Support · Model Management · Extensions
    Step 1: Prerequisites

    Recommended VPS Specs

    Automatic1111 is GPU-accelerated but can run in CPU-only mode. Here is what to plan for:

    Use Case            | RAM    | vCPUs | Storage | Notes
    CPU-only (testing)  | 8 GB   | 4     | 40 GB   | Slow: 60–300 s per image
    CPU-only (workable) | 16 GB  | 8     | 80 GB   | Better for batch jobs
    GPU-accelerated     | 16 GB+ | 4+    | 80 GB+  | Requires a GPU VPS

    Storage note: The base Stable Diffusion v1.5 checkpoint is ~4 GB; SDXL models are 6–7 GB each. Budget at least 40 GB for a single-model setup, and more if you plan to collect LoRAs, VAEs, and multiple checkpoints.
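
Before pulling multi-gigabyte checkpoints, it can help to script a quick free-space check so a download never fills the disk. A minimal sketch (the function name and the 10 GB threshold are illustrative, not part of Automatic1111):

```shell
# check_space DIR REQUIRED_GB: succeed only if the filesystem holding
# DIR has at least REQUIRED_GB gigabytes available.
check_space() {
  dir=$1
  need_gb=$2
  # df -Pk prints available space in 1 KB blocks on the second line
  avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
  [ "$avail_kb" -ge $((need_gb * 1024 * 1024)) ]
}

# Example: require 10 GB free under $HOME before downloading a model
check_space "$HOME" 10 && echo "enough space" || echo "free up disk first"
```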

    What You Need Before Starting

    • A RamNode KVM VPS running Ubuntu 22.04 LTS
    • Root or sudo SSH access
    • A domain or your VPS IP for remote access
    • Basic familiarity with the Linux command line
    Step 2: Update System & Install Dependencies

    Update system packages
    apt update && apt upgrade -y
    Install required system packages
    apt install -y \
      wget curl git python3 python3-pip python3-venv \
      libgl1 libglib2.0-0 libsm6 libxext6 libxrender-dev \
      build-essential libssl-dev libffi-dev python3-dev \
      ffmpeg libopencv-dev bc

    Ubuntu 22.04 ships with Python 3.10, which satisfies Automatic1111's requirements. Verify:

    Verify Python version
    python3 --version
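
If you are scripting the install, the version check can be made explicit rather than eyeballed. A small sketch (the helper name is made up; 3.10 is the minimum this guide assumes):

```shell
# py_at_least MAJOR MINOR: exit zero only if python3 meets the minimum.
py_at_least() {
  major=$1
  minor=$2
  python3 -c "import sys; sys.exit(0 if sys.version_info >= ($major, $minor) else 1)"
}

if py_at_least 3 10; then
  echo "Python is new enough"
else
  echo "Install Python 3.10 or newer first" >&2
fi
```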
    Step 3: Create a Dedicated User

    Running Automatic1111 as root is not recommended. Create a dedicated user:

    Create sduser
    adduser --disabled-password --gecos "" sduser
    usermod -aG sudo sduser
    su - sduser

    All remaining steps run as sduser unless noted otherwise.
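
Any helper scripts you write for the remaining steps can guard against accidentally being run as root; a tiny sketch (function name is illustrative):

```shell
# Abort early when invoked as root; the rest of this guide assumes sduser.
require_non_root() {
  if [ "$(id -u)" -eq 0 ]; then
    echo "Run this as sduser, not root" >&2
    return 1
  fi
}
```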

    Step 4: Install Automatic1111

    Clone the repository
    cd ~
    git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
    cd stable-diffusion-webui

    GPU Acceleration (NVIDIA, if applicable)

    If your RamNode VPS has an NVIDIA GPU, install CUDA Toolkit 11.8:

    Install CUDA Toolkit
    wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
    sudo dpkg -i cuda-keyring_1.0-1_all.deb
    sudo apt update
    sudo apt install -y cuda-toolkit-11-8
    Verify GPU detection
    nvidia-smi

    CPU-Only Mode

    No additional setup is needed for CPU-only. The launch script handles the rest with the --skip-torch-cuda-test flag covered in Step 6.

    Step 5: Download a Stable Diffusion Model

    Automatic1111 requires at least one checkpoint model to start. Place models in the ~/stable-diffusion-webui/models/Stable-diffusion/ directory.

    Option A — Stable Diffusion v1.5 (~4 GB)

    Download SD v1.5
    cd ~/stable-diffusion-webui/models/Stable-diffusion/
    wget -O v1-5-pruned-emaonly.safetensors \
      "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors"

    Option B — SDXL Base 1.0 (~6.5 GB, higher quality)

    Download SDXL Base
    cd ~/stable-diffusion-webui/models/Stable-diffusion/
    wget -O sd_xl_base_1.0.safetensors \
      "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors"

    Tip: Use safetensors format over .ckpt whenever available — it loads faster and has no arbitrary code execution risk.
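
Because a safetensors file begins with an 8-byte little-endian header length followed by a JSON header, a truncated or mislabeled download can be caught with a quick byte check. A rough sketch (the function name is made up, and this is a heuristic, not a full parser):

```shell
# Heuristic: a valid .safetensors file has an 8-byte length prefix,
# then a JSON header whose first byte is '{' (hex 7b).
looks_like_safetensors() {
  f=$1
  [ -f "$f" ] || return 1
  [ "$(od -An -tx1 -j8 -N1 "$f" | tr -d ' \n')" = "7b" ]
}
```

Interrupted downloads commonly fail this check; re-run wget with -c to resume.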

    Step 6: Configure the Launch Script

    Create a webui-user.sh file to customize startup flags:

    Prepare launch script
    cd ~/stable-diffusion-webui
    cp webui-user.sh.example webui-user.sh 2>/dev/null || touch webui-user.sh
    nano webui-user.sh

    GPU Setup (NVIDIA)

    webui-user.sh (GPU)
    #!/bin/bash
    export COMMANDLINE_ARGS="--listen --port 7860 --enable-insecure-extension-access"

    CPU-Only Setup

    webui-user.sh (CPU-only)
    #!/bin/bash
    export COMMANDLINE_ARGS="--listen --port 7860 --skip-torch-cuda-test --precision full --no-half --enable-insecure-extension-access"
    Make executable
    chmod +x webui-user.sh

    Note: --listen binds to 0.0.0.0 so the UI is accessible remotely. You will secure this with SSH tunneling or a reverse proxy in Step 8.

    Step 7: First Launch

    Run the web UI for the first time. This will take 5–15 minutes as it sets up a Python virtual environment and downloads dependencies (~5 GB):

    Launch the web UI
    cd ~/stable-diffusion-webui
    ./webui.sh

    Watch the output for the line:

    Expected output
    Running on local URL:  http://0.0.0.0:7860

    Once you see this, the web UI is running. Leave this terminal session open (or use screen/tmux to background it).

    Step 8: Secure Access

    Port 7860 should not be exposed publicly without authentication. Use one of these methods:

    Option A — SSH Tunnel (Simplest)

    From your local machine:

    Create SSH tunnel
    ssh -L 7860:localhost:7860 sduser@YOUR_VPS_IP

    Then open http://localhost:7860 in your browser. The UI is only accessible through your SSH session.

    Option B — Nginx Reverse Proxy with Password Auth

    Install Nginx and create password
    sudo apt install -y nginx apache2-utils
    sudo htpasswd -c /etc/nginx/.htpasswd yourusername
    /etc/nginx/sites-available/automatic1111
    server {
        listen 80;
        server_name YOUR_DOMAIN_OR_IP;
    
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
    
        location / {
            proxy_pass http://127.0.0.1:7860;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_read_timeout 300;
        }
    }
    Enable and reload
    sudo ln -s /etc/nginx/sites-available/automatic1111 /etc/nginx/sites-enabled/
    sudo nginx -t && sudo systemctl reload nginx

    Add HTTPS with Let's Encrypt

    Obtain TLS certificate
    sudo apt install -y certbot python3-certbot-nginx
    sudo certbot --nginx -d yourdomain.com

    Step 9: Run as a Persistent Service

    Keep Automatic1111 running after SSH disconnect using systemd:

    /etc/systemd/system/automatic1111.service
    [Unit]
    Description=Automatic1111 Stable Diffusion Web UI
    After=network.target
    
    [Service]
    Type=simple
    User=sduser
    WorkingDirectory=/home/sduser/stable-diffusion-webui
    ExecStart=/home/sduser/stable-diffusion-webui/webui.sh
    Restart=on-failure
    RestartSec=10
    
    [Install]
    WantedBy=multi-user.target
    Enable and start the service
    sudo systemctl daemon-reload
    sudo systemctl enable automatic1111
    sudo systemctl start automatic1111
    Check status
    sudo systemctl status automatic1111
    journalctl -u automatic1111 -f
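
Once the service is up, a small health probe rounds out monitoring; pair it with a cron job or an alert. A sketch assuming the default port (adapt the port and URL to your setup):

```shell
# Return success only if the web UI answers HTTP on the given port
# (7860 by default).
webui_healthy() {
  port=${1:-7860}
  curl -fsS -m 5 "http://127.0.0.1:${port}/" >/dev/null 2>&1
}

if ! webui_healthy; then
  echo "web UI not responding; check: sudo systemctl status automatic1111" >&2
fi
```
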
    Step 10: Managing Models and Storage

    Check disk usage
    du -sh ~/stable-diffusion-webui/models/*
    df -h

    Model Directory Structure

    Directory layout
    models/
    ├── Stable-diffusion/   # Main checkpoint files (.safetensors, .ckpt)
    ├── Lora/               # LoRA weights
    ├── VAE/                # VAE files
    ├── ControlNet/         # ControlNet models
    └── embeddings/         # Textual inversion embeddings

    Reload Models Without Restarting

    In the web UI, go to Settings → Actions → Reload UI or click the refresh button next to the model dropdown to pick up new files dropped into the models directory.

    Step 11: Performance Tuning

    Reduce VRAM/RAM Usage

    Add these flags to COMMANDLINE_ARGS in webui-user.sh for lower-memory systems:

    GPU memory optimization flags
    --medvram          # Use medium VRAM mode (GPU setups)
    --opt-split-attention  # Reduce peak VRAM usage
    --no-half-vae      # Fix black image issues with some VAEs

    For CPU-only on a RAM-constrained server:

    CPU memory optimization
    --lowvram --always-batch-cond-uncond

    Xformers (GPU only, significant speed boost)

    Install xformers into the web UI's venv (created on first launch)
    ~/stable-diffusion-webui/venv/bin/pip install xformers

    Add --xformers to COMMANDLINE_ARGS. Recent versions of the launcher can also install xformers automatically when that flag is present.
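
Flag edits like these can be applied from a script rather than by hand in nano. A sketch of an idempotent helper (the function name is illustrative; it assumes a single double-quoted COMMANDLINE_ARGS line as shown in Step 6):

```shell
# Append a flag to the COMMANDLINE_ARGS line in a webui-user.sh-style
# file, skipping it if the flag is already present.
add_flag() {
  file=$1
  flag=$2
  grep -q -- "$flag" "$file" && return 0
  sed -i "s|^export COMMANDLINE_ARGS=\"\(.*\)\"|export COMMANDLINE_ARGS=\"\1 $flag\"|" "$file"
}

# Example against a scratch copy of the launch script
printf 'export COMMANDLINE_ARGS="--listen --port 7860"\n' > /tmp/webui-user.sh
add_flag /tmp/webui-user.sh --medvram
add_flag /tmp/webui-user.sh --medvram   # second call is a no-op
grep COMMANDLINE_ARGS /tmp/webui-user.sh
```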

    Troubleshooting

    Issue                      | Solution
    Black/corrupted images     | Add --no-half-vae to launch args
    CUDA out of memory         | Add --medvram or --lowvram to launch args
    Very slow on CPU           | Expected: reduce resolution to 512×512 and lower steps to 20
    Port 7860 unreachable      | Verify --listen is set and check sudo ufw status
    webui.sh exits immediately | Check journalctl -u automatic1111 --since "5 minutes ago"

    Automatic1111 Deployed Successfully!

    Your self-hosted Stable Diffusion Web UI is now running on a RamNode VPS. Explore these extensions to expand functionality:

    • ControlNet — Precise image composition control
    • ADetailer — Automatic face/hand touch-up on generated images
    • Civitai Helper — Browse and download models from Civitai directly in the UI
    • sd-webui-regional-prompter — Multi-subject compositions with region-specific prompts

    Install extensions via Extensions → Install from URL inside the web UI.