OpenClaw on Your VPS Series
    Part 1 of 6

    What It Is and What You Need

    OpenClaw's architecture, why a VPS is the right deployment target, hardware requirements, and a roadmap of this series.

    15 minutes
    Conceptual overview

    Want a condensed, single-page walkthrough? See the OpenClaw Quick-Start Guide for a streamlined setup.

    OpenClaw is an open-source, self-hosted AI agent that connects a large language model to the messaging apps you already use — Telegram, Discord, Slack, WhatsApp, Signal, and over a dozen others — and then gives that agent persistent memory, browser control, file access, shell execution, and a scheduling engine. You run it on your own infrastructure, you keep your own data, and you only pay for the API calls you make to your chosen LLM provider.

    The project launched in late 2025 under the name Clawdbot, went through a brief rename to Moltbot after a trademark dispute, and settled on OpenClaw in January 2026. It passed 250,000 GitHub stars by March 2026, making it one of the fastest-growing open-source repositories in the platform's history.

    Understanding the Architecture

    OpenClaw has one central component: the Gateway. Everything else connects to it.

    The Gateway is a Node.js daemon that runs continuously on your server. It listens for inbound messages on connected channels, routes them to an agent session, calls out to your configured LLM, optionally invokes tools or skills, and delivers the response back through whatever channel the message came in on.

    The Gateway also serves the Control UI — a web dashboard accessible on port 18789 by default that lets you manage channels, install skills, review agent memory, and monitor live sessions.
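Once the Gateway is installed and running (covered in Part 2), a quick sanity check from the server confirms the Control UI is answering. This is a hedged sketch that assumes only the default port mentioned above:

```shell
# Confirm something is listening on the Control UI's default port (18789)
# and that it responds to HTTP. Assumes the Gateway is already running.
ss -tln | grep -q ':18789' && echo "Gateway port is open"

# The first response line should be an HTTP status line.
curl -sI http://localhost:18789 | head -n 1
```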

    Connected to the Gateway are:

    • Channels — the messaging platforms where you interact with the agent (Telegram, Discord, Slack, WhatsApp, etc.)
    • Agents — individual assistant configurations, each with its own system prompt, memory, model settings, and skill access
    • Skills — capability packs that extend what the agent can do, from sending emails to querying databases to controlling a browser
    • Nodes — optional device connections (iOS, Android, macOS) that give the agent access to camera, location, and notifications on those devices

    On a VPS deployment, you will not be using the mobile node features. What you get instead is a stable, always-on Gateway that answers your messages whether your laptop is open or not.

    Why a VPS Is the Right Choice

    Running OpenClaw on a laptop works for testing. It does not work for anything you actually want to rely on. Three problems kill a laptop deployment:

    Availability. The laptop goes to sleep, the Gateway goes offline, and your 3 AM cron job that was supposed to draft a briefing from overnight emails simply does not run.

    Security isolation. OpenClaw has shell access, browser control, and reads files on the machine it runs on. Running it on your daily driver means a misconfigured skill has access to your entire home directory, SSH keys, and browser sessions. A VPS is a contained blast radius.

    Stable networking. Webhooks and channel connections expect a consistent public IP and open inbound ports. A laptop behind a home router requires tunneling or dynamic DNS. A VPS just works.

    VPS Requirements

    OpenClaw itself is not resource-heavy. The minimum requirements are 2 GB RAM and any modern CPU. In practice, you want more headroom because the agent may spawn browser instances for web research tasks.

    Recommended Minimum (Personal Setup)

    • 2 vCPU
    • 4 GB RAM
    • 40 GB SSD storage
    • Ubuntu 22.04 LTS or 24.04 LTS
    • A static IPv4 address
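Before installing anything, it is worth confirming the box actually meets this list. A minimal pre-flight sketch for Ubuntu; the thresholds mirror the recommended minimum, with the RAM cutoff set slightly under 4096 MB to allow for kernel-reserved memory:

```shell
#!/usr/bin/env sh
# Compare this server against the recommended minimum above.
cpus=$(nproc)
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
disk_gb=$(df -BG --output=size / | tail -n 1 | tr -dc '0-9')

[ "$cpus" -ge 2 ]      && echo "CPU:  ${cpus} vCPU (ok)"    || echo "CPU:  ${cpus} vCPU (below recommended)"
[ "$mem_mb" -ge 3800 ] && echo "RAM:  ${mem_mb} MB (ok)"    || echo "RAM:  ${mem_mb} MB (below recommended)"
[ "$disk_gb" -ge 40 ]  && echo "Disk: ${disk_gb} GB (ok)"   || echo "Disk: ${disk_gb} GB (below recommended)"
```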

    🧠 Running Local LLMs via Ollama?

    If you plan to run local LLMs alongside OpenClaw, you'll need 8+ GB RAM (16 GB recommended for 7B-13B models) and 80+ GB storage for model files. This is covered in Part 4 of this series.

    For most personal and small-team deployments, a 4 GB RAM plan is the right starting point.

    Required Software Dependencies

    OpenClaw's primary dependency is Node.js 24 (recommended) or Node.js 22.16 at minimum. Earlier Node.js versions produce silent failures that are difficult to debug.

    You will also need:

    • npm or pnpm (for the global install)
    • Git (for workspace initialization and skill management)
    • UFW (firewall — already available on Ubuntu)
    • Nginx (reverse proxy for the Control UI — covered in Part 2)
    • Certbot (SSL certificate management)
    • fail2ban (SSH and service brute-force protection)

    The Docker-based install path bundles most of these dependencies into containers, which is an alternative covered in Part 2.

    LLM Provider Options

    OpenClaw supports direct connections to Anthropic, OpenAI, Google Gemini, and a range of other providers. It also supports OpenRouter, an API gateway that routes requests to 300+ models through a single API key — the recommended path for beginners.
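Before pointing OpenClaw at OpenRouter, you can verify a key works with a direct call to OpenRouter's OpenAI-compatible chat endpoint. A hedged sketch: the model slug is illustrative, and OPENROUTER_API_KEY stands in for your own key:

```shell
# Minimal smoke test of an OpenRouter API key.
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "anthropic/claude-sonnet-4",
        "messages": [{"role": "user", "content": "Reply with one word: ready"}]
      }'
```

A JSON response containing a choices array means the key is live; an error object usually means the key or the model slug is wrong.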

    For the best agent behavior, the official documentation and community consensus both point to Claude (Anthropic) as the strongest option for following complex instructions and resisting prompt injection from malicious skill content.

    If you want full data privacy with no API calls leaving your network, Part 4 covers running Ollama locally on the same VPS.

    What This Series Covers

    • Part 1 (this article): Architecture overview and VPS planning
    • Part 2: Installation and security hardening
    • Part 3: Messaging channel integrations (Telegram, Discord, Slack, WhatsApp)
    • Part 4: LLM configuration and model strategy
    • Part 5: Skills, cron jobs, and automation workflows
    • Part 6: Advanced use cases and integrations

    Every part assumes you are starting from a fresh Ubuntu VPS with root access and a domain name pointed at the server's IP. Next up is Part 2: installation and security hardening.