Key Features
Prerequisites
| Requirement | Details |
|---|---|
| RamNode Account | Sign up at ramnode.com |
| SSH Client | Terminal (macOS/Linux) or PuTTY (Windows) |
| Domain Name | Optional but recommended for SSL |
| LLM API Key | From OpenAI, Anthropic, or preferred provider (or use Ollama for local models) |
| DNS Access | If using a domain, ability to create A records |
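If you don't already have an SSH key pair for connecting to the VPS, you can generate one locally before signing up. A minimal sketch — the key filename here is just an example, not something RamNode requires:

```shell
# Run on your local machine, not the VPS.
# The key filename is an arbitrary example; pick any name you like.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -C "anythingllm-vps" -f ~/.ssh/ramnode_anythingllm -N ""

# Print the public key so you can paste it into the RamNode control panel
cat ~/.ssh/ramnode_anythingllm.pub
```

Using a key (and later disabling password login) is considerably safer than SSH passwords on a public VPS.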
💡 Tip: If you plan to use cloud-based LLM providers (OpenAI, Anthropic, etc.), you do NOT need a GPU. The VPS handles the application and document processing while the LLM provider handles inference. This keeps your hosting costs low.
RamNode VPS Plan Selection
AnythingLLM is remarkably lightweight when paired with external LLM providers. The right plan depends on your expected usage:
| Use Case | RAM | vCPU | Storage | Notes |
|---|---|---|---|---|
| Personal / Testing | 2 GB | 1 core | 20 GB | Single user with cloud LLM |
| Small Team (2–5) | 4 GB | 2 cores | 40 GB | Recommended starting point |
| Medium Team (5–15) | 8 GB | 4 cores | 80 GB | Large document libraries |
| Large / Heavy RAG | 16 GB+ | 6+ cores | 160 GB+ | Extensive ingestion, concurrent users |
Important: If you plan to run a local LLM (e.g., Ollama) on the same server, you need significantly more RAM — at least 16 GB for small models (7B parameters) and 32 GB+ for larger models.
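Once you're logged in, you can check what a given VPS actually provides against the table above:

```shell
# CPU cores available to the VPS
nproc

# Total and free memory
free -h

# Disk space on the root filesystem
df -h /
```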
Initial Server Setup
```shell
ssh root@YOUR_SERVER_IP
apt update && apt upgrade -y
```
Create a Non-Root User
```shell
adduser anythingllm
usermod -aG sudo anythingllm
su - anythingllm
```
Configure the Firewall
```shell
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
```
🔒 Security Note: Do not expose port 3001 directly to the internet. We will configure Nginx as a reverse proxy later to handle SSL termination.
Install Docker
```shell
# Install prerequisites
sudo apt install -y ca-certificates curl gnupg lsb-release

# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the Docker repository
echo "deb [arch=$(dpkg --print-architecture) \
  signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io \
  docker-buildx-plugin docker-compose-plugin
```
```shell
# Configure for non-root user
sudo usermod -aG docker anythingllm
newgrp docker

# Verify installation
docker --version
```
Deploy AnythingLLM with Docker
```shell
export STORAGE_LOCATION=$HOME/anythingllm
mkdir -p $STORAGE_LOCATION
touch "$STORAGE_LOCATION/.env"

# Pull the latest image
docker pull mintplexlabs/anythingllm

# Run the container
docker run -d \
  --name anythingllm \
  --restart unless-stopped \
  -p 3001:3001 \
  --cap-add SYS_ADMIN \
  -v ${STORAGE_LOCATION}:/app/server/storage \
  -v ${STORAGE_LOCATION}/.env:/app/server/.env \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm

# Verify the container is running
docker ps
docker logs anythingllm --tail 50
```
Docker Run Flags Explained
| Flag | Purpose |
|---|---|
| --restart unless-stopped | Auto-restart on reboot or crash |
| -p 3001:3001 | Map container port to host |
| --cap-add SYS_ADMIN | Required for web scraping browser tool |
| -v (storage) | Persist data outside container |
| -v (.env) | Mount environment config file |
Alternative: Docker Compose
```yaml
version: "3.8"
services:
  anythingllm:
    image: mintplexlabs/anythingllm
    container_name: anythingllm
    restart: unless-stopped
    ports:
      - "3001:3001"
    cap_add:
      - SYS_ADMIN
    volumes:
      - ./storage:/app/server/storage
      - ./.env:/app/server/.env
    environment:
      - STORAGE_DIR=/app/server/storage
```
Save this as docker-compose.yml in ~/anythingllm, then start the stack:
```shell
cd ~/anythingllm
docker compose up -d
```
Initial Configuration & Setup Wizard
Access http://YOUR_SERVER_IP:3001 in your browser. The setup wizard walks you through:
- LLM Provider Selection — Choose your preferred language model provider
- Embedding Provider — The built-in AnythingLLM embedder works great for most users
- Vector Database — LanceDB is the default and requires no additional setup
- Appearance — Customize the look and feel (optional)
- Admin Account — Create your administrator username and password
Creating Your First Workspace
- Click the "+ New Workspace" button in the sidebar
- Give your workspace a descriptive name (e.g., "Company Knowledge Base")
- Optionally set a custom system prompt
- Choose chat mode: Chat for conversational AI or Query for strict document-based answers
Configuring Your LLM Provider
Cloud-Based Providers (Recommended for VPS)
| Provider | Setup | Best For |
|---|---|---|
| OpenAI | Enter API key; select model | General purpose, strong reasoning |
| Anthropic | Enter API key; select model | Long context, nuanced analysis |
| Google Gemini | Enter API key; select model | Multimodal, large context windows |
| Groq | Enter API key; select model | Ultra-fast inference speeds |
| OpenRouter | Single API key, 100+ models | Maximum flexibility |
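Providers can also be configured via environment variables in the mounted .env file instead of the UI. A minimal sketch for OpenAI — the variable names below are taken from AnythingLLM's .env.example and should be verified against your version:

```shell
# ~/anythingllm/.env — restart the container after editing
LLM_PROVIDER='openai'
OPEN_AI_KEY=YOUR_API_KEY
OPEN_MODEL_PREF='gpt-4o'
```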
Local LLM with Ollama (Optional)
```shell
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model (example: Llama 3.1 8B)
ollama pull llama3.1:8b
```
In AnythingLLM settings, select "Ollama" as the LLM provider and set the URL to http://host.docker.internal:11434.
💡 Docker Networking Note: On Linux, add --add-host=host.docker.internal:host-gateway to your docker run command when connecting to host services like Ollama.
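Putting that together, the run command from the deployment step would look like this on Linux — only the --add-host line is new:

```shell
# Recreate the container with access to host services such as Ollama
# (first: docker stop anythingllm && docker rm anythingllm)
docker run -d \
  --name anythingllm \
  --restart unless-stopped \
  -p 3001:3001 \
  --cap-add SYS_ADMIN \
  --add-host=host.docker.internal:host-gateway \
  -v ${STORAGE_LOCATION}:/app/server/storage \
  -v ${STORAGE_LOCATION}/.env:/app/server/.env \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm
```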
Working with Documents & RAG
Supported Document Types
Uploading Documents
- Navigate to your workspace
- Click the upload icon or drag and drop files
- AnythingLLM processes and chunks the documents automatically
- Move documents into the workspace to make them available for chat
Chat Modes
Chat Mode
Documents serve as supplementary context alongside the LLM's own knowledge, so the model answers even when the documents contain no relevant information.
Query Mode
Only answers based on document content. Ideal for strict knowledge base applications where accuracy is critical.
Setting Up AI Agents
Built-in Agent Skills
- Web Browsing — Browse the internet and extract information
- Web Scraping — Collect and analyze data from websites
- Document Summarization — Automatically summarize uploaded documents
- Chart Generation — Create visual charts from data
- SQL Queries — Query databases through natural language
- File Operations — Read and write files within the workspace
Agent Flows (No-Code Automation)
Build multi-step automation pipelines without code. Chain together web scrapers, API calls, LLM instructions, and file operations. For example, create a flow that scrapes industry news, summarizes it, and saves a daily briefing.
Custom Agent Skills
Developers can extend agent capabilities with custom JavaScript skills. Each skill includes a plugin.json manifest and a handler.js file for calling external APIs and integrating with existing tools.
Multi-User Configuration
User Roles
| Role | Permissions |
|---|---|
| Admin | Full system access; manage users, workspaces, settings, LLM configuration |
| Manager | Create/manage workspaces; upload documents; manage users within assigned workspaces |
| Default | Chat within assigned workspaces; cannot modify settings or manage documents |
Inviting Users
- Go to Settings → User Management in the admin panel
- Click "Add User" and enter the new user's username
- Assign a role (Admin, Manager, or Default)
- Set workspace permissions to control access
- Share the login URL and credentials with the user
Secure with Nginx & SSL
```shell
sudo apt install -y nginx certbot python3-certbot-nginx
```
Create a site configuration at /etc/nginx/sites-available/anythingllm:
```nginx
server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Important for large document uploads
        client_max_body_size 100M;
    }
}
```
```shell
sudo ln -s /etc/nginx/sites-available/anythingllm \
  /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx

# Obtain and install SSL certificate
sudo certbot --nginx -d your-domain.com
```
✅ SSL Auto-Renewal: Certbot automatically sets up a systemd timer. Verify with: sudo certbot renew --dry-run
Backup & Maintenance
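All persistent state lives in the storage directory mounted into the container, so a basic backup is just a timestamped archive of that directory. A minimal sketch, assuming the $HOME/anythingllm location from the Docker setup (the backup destination is an arbitrary example):

```shell
# Minimal backup sketch; STORAGE_LOCATION and BACKUP_DIR defaults are assumptions.
STORAGE_LOCATION="${STORAGE_LOCATION:-$HOME/anythingllm}"
BACKUP_DIR="${BACKUP_DIR:-$HOME/backups}"
mkdir -p "$STORAGE_LOCATION" "$BACKUP_DIR"

# Stop the container for a consistent snapshot (skipped if Docker is absent)
if command -v docker >/dev/null 2>&1; then docker stop anythingllm; fi

# Timestamped archive of the whole storage directory
tar -czf "$BACKUP_DIR/anythingllm-$(date +%Y%m%d-%H%M%S).tar.gz" \
  -C "$(dirname "$STORAGE_LOCATION")" "$(basename "$STORAGE_LOCATION")"

if command -v docker >/dev/null 2>&1; then docker start anythingllm; fi

ls -lh "$BACKUP_DIR"
```

Saved as a script, this can be scheduled with cron (e.g. a nightly entry via crontab -e) and the resulting archives copied off-server.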
Environment Variable Reference
| Variable | Description |
|---|---|
| STORAGE_DIR | Path to persistent storage inside the container |
| SERVER_PORT | Port AnythingLLM listens on (default: 3001) |
| DISABLE_TELEMETRY | Set to "true" to opt out of anonymous usage telemetry |
| JWT_SECRET | Custom secret for JWT token generation (auto-generated if not set) |
| AUTH_TOKEN | Password protect your instance with a single shared token |
| PASSWORDMINCHAR | Minimum password length for user accounts (default: 8) |
🔒 Privacy Tip: To disable all telemetry, add DISABLE_TELEMETRY="true" to your .env file before starting the container. You can also disable telemetry through in-app settings under Privacy.
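The tip above can be scripted so the opt-out is in place before the container ever starts. A sketch that appends the setting to the .env file created during setup, idempotent so it is safe to re-run:

```shell
# Write the telemetry opt-out into the mounted .env file.
STORAGE_LOCATION="${STORAGE_LOCATION:-$HOME/anythingllm}"
ENV_FILE="$STORAGE_LOCATION/.env"
mkdir -p "$STORAGE_LOCATION"
touch "$ENV_FILE"

# Add the opt-out only if it isn't already present
grep -q '^DISABLE_TELEMETRY=' "$ENV_FILE" || \
  echo 'DISABLE_TELEMETRY="true"' >> "$ENV_FILE"

# Then restart the container so the change takes effect:
# docker restart anythingllm
```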
Troubleshooting
AnythingLLM Deployed Successfully!
Your private AI knowledge base is now running. Here are some next steps:
- Explore Agent Flows to automate research and reporting tasks
- Set up embeddable chat widgets for your website
- Connect additional LLM providers to compare responses
- Build custom agent skills for your team's tools and APIs
- Configure workspace-specific system prompts
- Join the AnythingLLM Discord community
