AI Development Guide

    Deploying AnythingLLM

    AnythingLLM is an open-source, all-in-one AI application that combines LLM chat, Retrieval-Augmented Generation (RAG), and AI agent workflows into a single, self-hosted platform. Deploy your private AI knowledge base on RamNode's reliable VPS hosting.

    RAG · AI Agents · Multi-User · Multi-LLM · Document Intelligence

    Key Features

    Multi-LLM Support — OpenAI, Anthropic, Gemini, Ollama, Groq, and more
    RAG — Upload documents and chat with your data via vector search
    AI Agents — Web browsing, file ops, SQL queries, chart generation
    Agent Flows — No-code automation pipelines for recurring tasks
    Multi-User & Workspaces — Role-based access with isolated environments
    Embeddable Widgets — Deploy chat on your website
    Privacy by Default — All data stays on your server
    Developer API — Full programmatic access for custom integrations

    Step 1: Prerequisites

    Requirement     | Details
    RamNode Account | Sign up at ramnode.com
    SSH Client      | Terminal (macOS/Linux) or PuTTY (Windows)
    Domain Name     | Optional but recommended for SSL
    LLM API Key     | From OpenAI, Anthropic, or preferred provider (or use Ollama for local models)
    DNS Access      | If using a domain, ability to create A records

    💡 Tip: If you plan to use cloud-based LLM providers (OpenAI, Anthropic, etc.), you do NOT need a GPU. The VPS handles the application and document processing while the LLM provider handles inference. This keeps your hosting costs low.

    Step 2: RamNode VPS Plan Selection

    AnythingLLM is remarkably lightweight when paired with external LLM providers. The right plan depends on your expected usage:

    Use Case            | RAM    | vCPU     | Storage | Notes
    Personal / Testing  | 2 GB   | 1 core   | 20 GB   | Single user with cloud LLM
    Small Team (2–5)    | 4 GB   | 2 cores  | 40 GB   | Recommended starting point
    Medium Team (5–15)  | 8 GB   | 4 cores  | 80 GB   | Large document libraries
    Large / Heavy RAG   | 16 GB+ | 6+ cores | 160 GB+ | Extensive ingestion, concurrent users

    Important: If you plan to run a local LLM (e.g., Ollama) on the same server, you need significantly more RAM — at least 16 GB for small models (7B parameters) and 32 GB+ for larger models.
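If you already have a server and want to see where it falls in the table above, standard Linux utilities report the relevant numbers:

```shell
# Report RAM, CPU cores, and root-disk capacity to compare against the
# sizing table above (uses only standard Linux utilities)
free -h | awk '/^Mem:/ {print "RAM:  " $2}'
echo "vCPU: $(nproc)"
df -h / | awk 'NR==2 {print "Disk: " $2 " total, " $5 " used"}'
```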

    Step 3: Initial Server Setup

    Connect and update system
    ssh root@YOUR_SERVER_IP
    apt update && apt upgrade -y

    Create a Non-Root User

    Create dedicated user
    adduser anythingllm
    usermod -aG sudo anythingllm
    su - anythingllm
    Configure firewall
    sudo ufw allow OpenSSH
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp
    sudo ufw enable

    🔒 Security Note: Do not expose port 3001 directly to the internet. We will configure Nginx as a reverse proxy later to handle SSL termination.

    Step 4: Install Docker

    Install prerequisites and Docker GPG key
    # Install prerequisites
    sudo apt install -y ca-certificates curl gnupg lsb-release
    
    # Add Docker's official GPG key
    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
      sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    sudo chmod a+r /etc/apt/keyrings/docker.gpg
    
    # Add the Docker repository
    echo "deb [arch=$(dpkg --print-architecture) \
      signed-by=/etc/apt/keyrings/docker.gpg] \
      https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    Install Docker Engine
    sudo apt update
    sudo apt install -y docker-ce docker-ce-cli containerd.io \
      docker-buildx-plugin docker-compose-plugin
    
    # Configure for non-root user
    sudo usermod -aG docker anythingllm
    newgrp docker
    
    # Verify installation
    docker --version

    Step 5: Deploy AnythingLLM with Docker

    Create storage directory
    export STORAGE_LOCATION=$HOME/anythingllm
    mkdir -p $STORAGE_LOCATION
    touch "$STORAGE_LOCATION/.env"
    Pull and run AnythingLLM
    # Pull the latest image
    docker pull mintplexlabs/anythingllm
    
    # Run the container
    docker run -d \
      --name anythingllm \
      --restart unless-stopped \
      -p 3001:3001 \
      --cap-add SYS_ADMIN \
      -v ${STORAGE_LOCATION}:/app/server/storage \
      -v ${STORAGE_LOCATION}/.env:/app/server/.env \
      -e STORAGE_DIR="/app/server/storage" \
      mintplexlabs/anythingllm
    Verify the container
    docker ps
    docker logs anythingllm --tail 50

    Docker Run Flags Explained

    Flag                     | Purpose
    --restart unless-stopped | Auto-restart on reboot or crash
    -p 3001:3001             | Map container port 3001 to the host
    --cap-add SYS_ADMIN      | Required for the web-scraping browser tool
    -v (storage)             | Persist data outside the container
    -v (.env)                | Mount the environment config file

    Alternative: Docker Compose

    ~/anythingllm/docker-compose.yml
    version: "3.8"
    services:
      anythingllm:
        image: mintplexlabs/anythingllm
        container_name: anythingllm
        restart: unless-stopped
        ports:
          - "3001:3001"
        cap_add:
          - SYS_ADMIN
        volumes:
          - ./storage:/app/server/storage
          - ./.env:/app/server/.env
        environment:
          - STORAGE_DIR=/app/server/storage
    Start with Docker Compose
    cd ~/anythingllm
    docker compose up -d

    Step 6: Initial Configuration & Setup Wizard

    Access http://YOUR_SERVER_IP:3001 in your browser. Note that the firewall configured earlier blocks port 3001 from the outside; either open it temporarily (sudo ufw allow 3001/tcp, and remove the rule once SSL is in place) or tunnel over SSH with ssh -L 3001:localhost:3001 anythingllm@YOUR_SERVER_IP and browse to http://localhost:3001. The setup wizard walks you through:

    1. LLM Provider Selection — Choose your preferred language model provider
    2. Embedding Provider — The built-in AnythingLLM embedder works great for most users
    3. Vector Database — LanceDB is the default and requires no additional setup
    4. Appearance — Customize the look and feel (optional)
    5. Admin Account — Create your administrator username and password

    Creating Your First Workspace

    1. Click the "+ New Workspace" button in the sidebar
    2. Give your workspace a descriptive name (e.g., "Company Knowledge Base")
    3. Optionally set a custom system prompt
    4. Choose chat mode: Chat for conversational AI or Query for strict document-based answers

    Step 7: Configuring Your LLM Provider

    Cloud-Based Providers (Recommended for VPS)

    Provider      | Setup                        | Best For
    OpenAI        | Enter API key; select model  | General purpose, strong reasoning
    Anthropic     | Enter API key; select model  | Long context, nuanced analysis
    Google Gemini | Enter API key; select model  | Multimodal, large context windows
    Groq          | Enter API key; select model  | Ultra-fast inference speeds
    OpenRouter    | Single API key, 100+ models  | Maximum flexibility

    Local LLM with Ollama (Optional)

    Install Ollama on the host
    # Install Ollama
    curl -fsSL https://ollama.com/install.sh | sh
    
    # Pull a model (example: Llama 3.1 8B)
    ollama pull llama3.1:8b

    In AnythingLLM settings, select "Ollama" as the LLM provider and set the URL to http://host.docker.internal:11434.

    💡 Docker Networking Note: On Linux, add --add-host=host.docker.internal:host-gateway to your docker run command when connecting to host services like Ollama.
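If you deployed with Docker Compose instead of docker run, the equivalent setting is extra_hosts. A fragment to merge into the service definition shown earlier (the service name matches that file):

```yaml
services:
  anythingllm:
    extra_hosts:
      # Maps host.docker.internal to the host's gateway IP on Linux,
      # so the container can reach Ollama at http://host.docker.internal:11434
      - "host.docker.internal:host-gateway"
```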

    Step 8: Working with Documents & RAG

    Supported Document Types

    PDF (.pdf)
    Word (.docx)
    Text (.txt)
    Markdown (.md)
    Web pages (URL)
    YouTube transcripts
    Confluence
    Notion
    Raw text

    Uploading Documents

    1. Navigate to your workspace
    2. Click the upload icon or drag and drop files
    3. AnythingLLM processes and chunks the documents automatically
    4. Move documents into the workspace to make them available for chat

    Chat Modes

    Chat Mode

    Documents serve as supplementary context alongside the LLM's own knowledge. The model will answer even when the documents contain no relevant information.

    Query Mode

    Only answers based on document content. Ideal for strict knowledge base applications where accuracy is critical.

    Step 9: Setting Up AI Agents

    Built-in Agent Skills

    • Web Browsing — Browse the internet and extract information
    • Web Scraping — Collect and analyze data from websites
    • Document Summarization — Automatically summarize uploaded documents
    • Chart Generation — Create visual charts from data
    • SQL Queries — Query databases through natural language
    • File Operations — Read and write files within the workspace

    Agent Flows (No-Code Automation)

    Build multi-step automation pipelines without code. Chain together web scrapers, API calls, LLM instructions, and file operations. For example, create a flow that scrapes industry news, summarizes it, and saves a daily briefing.

    Custom Agent Skills

    Developers can extend agent capabilities with custom JavaScript skills. Each skill includes a plugin.json manifest and a handler.js file for calling external APIs and integrating with existing tools.

    Step 10: Multi-User Configuration

    User Roles

    Role    | Permissions
    Admin   | Full system access; manage users, workspaces, settings, and LLM configuration
    Manager | Create and manage workspaces; upload documents; manage users within assigned workspaces
    Default | Chat within assigned workspaces; cannot modify settings or manage documents

    Inviting Users

    1. Go to Settings → User Management in the admin panel
    2. Click "Add User" and enter the new user's username
    3. Assign a role (Admin, Manager, or Default)
    4. Set workspace permissions to control access
    5. Share the login URL and credentials with the user

    Step 11: Secure with Nginx & SSL

    Install Nginx and Certbot
    sudo apt install -y nginx certbot python3-certbot-nginx
    /etc/nginx/sites-available/anythingllm
    server {
        listen 80;
        server_name your-domain.com;
    
        location / {
            proxy_pass http://localhost:3001;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
    
            # Important for large document uploads
            client_max_body_size 100M;
        }
    }
    Enable site and obtain SSL
    sudo ln -s /etc/nginx/sites-available/anythingllm \
      /etc/nginx/sites-enabled/
    sudo nginx -t
    sudo systemctl reload nginx
    
    # Obtain and install SSL certificate
    sudo certbot --nginx -d your-domain.com

    SSL Auto-Renewal: Certbot automatically sets up a systemd timer. Verify with: sudo certbot renew --dry-run

    Step 12: Backup & Maintenance
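The directory mounted at /app/server/storage holds everything worth keeping (documents, vector data, and the .env file), so a backup is just an archive of that directory. A minimal sketch, assuming the $HOME/anythingllm path used earlier (adjust if yours differs):

```shell
# Archive AnythingLLM's persistent storage with a timestamped filename.
# For a fully consistent snapshot, stop the container first:
#   docker stop anythingllm   (and docker start anythingllm afterwards)
STORAGE_LOCATION="${STORAGE_LOCATION:-$HOME/anythingllm}"
BACKUP_DIR="${BACKUP_DIR:-$HOME/backups}"
mkdir -p "$BACKUP_DIR" "$STORAGE_LOCATION"
STAMP="$(date +%Y%m%d-%H%M%S)"
tar -czf "$BACKUP_DIR/anythingllm-$STAMP.tar.gz" -C "$STORAGE_LOCATION" .
echo "Backup written to: $BACKUP_DIR/anythingllm-$STAMP.tar.gz"
```

To update the application itself, pull the new image and recreate the container (docker pull mintplexlabs/anythingllm, then docker compose up -d or re-running your docker run command); because the storage is mounted from the host, your data survives upgrades.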

    Step 13: Environment Variable Reference

    Variable          | Description
    STORAGE_DIR       | Path to persistent storage inside the container
    SERVER_PORT       | Port AnythingLLM listens on (default: 3001)
    DISABLE_TELEMETRY | Set to "true" to opt out of anonymous usage telemetry
    JWT_SECRET        | Custom secret for JWT token generation (auto-generated if not set)
    AUTH_TOKEN        | Password-protect your instance with a single shared token
    PASSWORDMINCHAR   | Minimum password length for user accounts (default: 8)

    🔒 Privacy Tip: To disable all telemetry, add DISABLE_TELEMETRY="true" to your .env file before starting the container. You can also disable telemetry through in-app settings under Privacy.
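Putting a few of these together, a minimal .env might look like the following (the JWT_SECRET value is a placeholder; generate your own long random string, e.g. with openssl rand -hex 32):

```
# $STORAGE_LOCATION/.env — example values
SERVER_PORT=3001
DISABLE_TELEMETRY="true"
JWT_SECRET="replace-with-a-long-random-string"
PASSWORDMINCHAR=10
```

Restart the container after editing (docker restart anythingllm) for changes to take effect.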

    Step 14: Troubleshooting
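If the interface is unreachable, a quick probe narrows things down. A sketch (the /api/ping health endpoint is assumed here; if your version does not expose it, any HTTP response from port 3001 is an equally usable signal):

```shell
# Quick health probe: is the container up, and does the app answer on 3001?
check_anythingllm() {
  port="${1:-3001}"
  # Show container status if Docker is available on this machine
  if command -v docker >/dev/null 2>&1; then
    docker ps --filter name=anythingllm --format '{{.Names}}: {{.Status}}'
  fi
  # Probe the app port; curl exits non-zero if nothing answers
  if curl -fsS --max-time 5 "http://localhost:${port}/api/ping" >/dev/null 2>&1; then
    echo "healthy"
  else
    echo "unreachable"
  fi
}
check_anythingllm
```

If the probe reports unreachable, docker logs anythingllm --tail 100 usually shows the cause; common culprits include a missing or mis-mounted .env file and running out of RAM during document embedding.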

    AnythingLLM Deployed Successfully!

    Your private AI knowledge base is now running. Here are some next steps:

    • Explore Agent Flows to automate research and reporting tasks
    • Set up embeddable chat widgets for your website
    • Connect additional LLM providers to compare responses
    • Build custom agent skills for your team's tools and APIs
    • Configure workspace-specific system prompts
    • Join the AnythingLLM Discord community