AnythingLLM — No-Code AI App Builder
Build custom AI workspaces and chatbots without writing code.
- **Prerequisites:** Completed Parts 1–3 (Ollama + Open WebUI + RAG pipeline)
- **Time:** 25–35 minutes
- **RAM:** 4GB ($20/mo) minimum; 8GB ($40/mo) for multiple concurrent workspaces
Introduction
Not everyone on your team writes code, but everyone needs AI. AnythingLLM provides a workspace-based interface where non-developers can create custom AI applications: HR chatbots, customer support agents, document Q&A systems, and more — all running on your private infrastructure.
AnythingLLM vs Open WebUI
| Feature | Open WebUI | AnythingLLM |
|---|---|---|
| Primary Use | ChatGPT replacement | Custom AI app builder |
| Target User | Everyone (general chat) | Teams building purpose-built tools |
| RAG | Per-conversation uploads | Persistent per-workspace knowledge |
| Workspaces | Conversation-based | Dedicated workspace isolation |
| Embeddable | No | Yes — embed chatbots in other sites |
They complement each other: Open WebUI for daily AI chat, AnythingLLM for purpose-built AI tools. Both share the same Ollama backend.
Deploying AnythingLLM
Create a directory for the service:

```bash
mkdir -p ~/ai-stack/anythingllm && cd ~/ai-stack/anythingllm
```

Create a `docker-compose.yml`:

```yaml
version: "3.8"

services:
  anythingllm:
    image: mintplexlabs/anythingllm:latest
    container_name: anythingllm
    restart: unless-stopped
    ports:
      - "3001:3001"
    environment:
      - LLM_PROVIDER=ollama
      - OLLAMA_BASE_PATH=http://host.docker.internal:11434
      - EMBEDDING_ENGINE=ollama
      - EMBEDDING_MODEL_PREF=nomic-embed-text
      - VECTOR_DB=qdrant
      - QDRANT_ENDPOINT=http://host.docker.internal:6333
      - QDRANT_API_KEY=your-qdrant-api-key-change-this
    volumes:
      - anythingllm-data:/app/server/storage
    extra_hosts:
      - "host.docker.internal:host-gateway"

volumes:
  anythingllm-data:
```

Start the container:

```bash
docker compose up -d
```

Access AnythingLLM at `http://your-server-ip:3001`.
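Port 3001 is published on the host, so anyone who can reach your server's IP can reach the UI before you complete first-run setup. The Docker image has supported a simple password gate via environment variables; both variable names below are assumptions worth verifying against the current AnythingLLM docs before relying on them:

```yaml
    environment:
      # ...settings from the compose file above, plus:
      - AUTH_TOKEN=pick-a-strong-password   # assumed variable; gates the UI behind this password
      - JWT_SECRET=a-long-random-string     # assumed variable; used to sign session tokens
```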
Creating Your First Workspace
Let's build a "Company FAQ Bot" as an example:
- Click New Workspace and name it "Company FAQ"
- Select your Ollama model (e.g., `mistral`)
- Set a system prompt: "You are a helpful company assistant. Answer questions based only on the provided company documents. If unsure, say so."
- Set the temperature to 0.3 for more factual responses
- Set the context window to match your model's capability
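If you'd rather script this configuration than click through the UI, the same settings can be pushed through AnythingLLM's developer API (API keys are covered under API & Embedding below). This is a minimal sketch: the `/api/v1/workspace/{slug}/update` path and the `openAiPrompt`/`openAiTemp` field names are assumptions to confirm against the Swagger docs your instance serves, not a guaranteed contract:

```python
import json

BASE_URL = "http://your-server-ip:3001"   # your AnythingLLM instance
API_KEY = "YOUR-API-KEY"                  # from Settings -> API Keys

def build_workspace_update(slug: str, prompt: str, temperature: float) -> tuple[str, dict, bytes]:
    """Build the (url, headers, body) for a workspace-settings update.

    Endpoint path and field names are assumptions based on the AnythingLLM
    developer API -- verify them in your instance's Swagger docs.
    """
    url = f"{BASE_URL}/api/v1/workspace/{slug}/update"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    payload = {
        "openAiPrompt": prompt,      # system prompt shown above
        "openAiTemp": temperature,   # sampling temperature
    }
    return url, headers, json.dumps(payload).encode()

url, headers, body = build_workspace_update(
    "company-faq",
    "You are a helpful company assistant. Answer questions based only on "
    "the provided company documents. If unsure, say so.",
    0.3,
)
# To actually send it (requires a reachable instance and a valid key):
# import urllib.request
# req = urllib.request.Request(url, data=body, headers=headers, method="POST")
# urllib.request.urlopen(req)
```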
Document Workspaces
Upload documents directly into workspaces. AnythingLLM handles chunking and embedding internally using your configured Ollama embedding model and Qdrant vector store.
When to use each approach:
- AnythingLLM built-in: Non-technical users uploading docs through the UI. Quick, no coding required.
- Part 3 RAG pipeline: Large-scale batch ingestion, programmatic document processing, custom chunking strategies.
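To see why the built-in path is usually enough, here is the gist of what happens to every upload: the file's text is split into overlapping chunks, each chunk is embedded with nomic-embed-text, and the vectors land in Qdrant. A simplified illustration of the chunking step (not AnythingLLM's actual splitter, which is token-aware and configurable):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows.

    Illustrative only: the real splitter works on tokens, but the idea is
    the same -- overlapping windows so content at a chunk boundary is never
    separated from its surrounding context.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

doc = "a" * 1200           # stand-in for an uploaded document's text
pieces = chunk_text(doc)   # each piece would then be embedded and stored
```

The custom chunking strategies mentioned for the Part 3 pipeline amount to tuning these two knobs (and smarter boundaries, e.g. splitting on headings) for your document types.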
Multi-Workspace Strategy
Organize workspaces by team function:
- **By Department:** HR (policies, onboarding), Engineering (technical docs, runbooks), Sales (product info, competitive analysis)
- **By Function:** Code Review Bot, Doc Writer, Data Analyst, Meeting Summarizer
- **By Project:** Each project gets its own workspace with relevant docs, specs, and context
API & Embedding
Embed AnythingLLM chatbots into internal tools or documentation sites:
```html
<script
  data-embed-id="your-workspace-id"
  data-base-api-url="http://your-server-ip:3001/api/embed"
  src="http://your-server-ip:3001/embed/anythingllm-chat-widget.min.js">
</script>
```

Generate API keys from Settings → API Keys for programmatic access to any workspace.
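For programmatic access, here is a hedged sketch of querying a workspace with one of those keys. The `/api/v1/workspace/{slug}/chat` endpoint and the `message`/`mode` payload shape are assumptions based on the v1 developer API; confirm them in your instance's Swagger docs before building on them:

```python
import json
import urllib.request

BASE_URL = "http://your-server-ip:3001"
API_KEY = "YOUR-API-KEY"  # generated under Settings -> API Keys

def build_chat_request(slug: str, message: str) -> urllib.request.Request:
    """Build a chat request against a workspace's developer API endpoint.

    Path and payload shape are assumptions -- "query" mode (answer only from
    the workspace's documents) vs. "chat" mode is how the API is documented
    at time of writing, but verify against your instance.
    """
    return urllib.request.Request(
        f"{BASE_URL}/api/v1/workspace/{slug}/chat",
        data=json.dumps({"message": message, "mode": "query"}).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("company-faq", "What is the PTO policy?")
# To send it against a live instance:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp).get("textResponse"))
```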
What's Next?
Your non-technical team members can now build their own AI tools — no API keys, no per-user fees, no data leaving your VPS. In Part 5: Tabby, we bring AI directly into your IDE with self-hosted code completion.
