AI Framework Guide

    Deploying CrewAI

    CrewAI is a leading open-source Python framework for orchestrating autonomous, role-playing AI agents. Build multi-agent systems in which specialized agents collaborate on complex tasks, such as research automation and content generation, all on RamNode's reliable VPS hosting.

    Multi-Agent
    Task Orchestration
    Custom Tools
    LLM Integration

    1. Prerequisites

    Recommended Server Specifications

    Use Case                      CPU       RAM     Storage
    Development / Testing         2 vCPUs   4 GB    30 GB SSD
    Small Production Crew         4 vCPUs   8 GB    50 GB SSD
    Large Production Crew         6+ vCPUs  16 GB   80 GB SSD
    Enterprise / High-throughput  8+ vCPUs  32 GB   100+ GB SSD

    Software Requirements

    • Ubuntu 22.04 LTS or 24.04 LTS (recommended)
    • Python 3.10, 3.11, 3.12, or 3.13
    • uv package manager (recommended) or pip
    • An LLM API key (OpenAI, Anthropic, Google, Ollama, etc.)
    • Git (for project management)

    Note: CrewAI is a Python-based orchestration framework. AI processing is handled by external LLM APIs (or local models via Ollama). Your VPS does not need a GPU unless running local models.

    2. Initial Server Setup

    Connect and secure your server
    ssh root@YOUR_SERVER_IP
    
    # Update system packages
    apt update && apt upgrade -y
    
    # Create a non-root user
    adduser crewai
    usermod -aG sudo crewai
    
    # Set up SSH key authentication
    mkdir -p /home/crewai/.ssh
    cp ~/.ssh/authorized_keys /home/crewai/.ssh/
    chown -R crewai:crewai /home/crewai/.ssh
    chmod 700 /home/crewai/.ssh
    chmod 600 /home/crewai/.ssh/authorized_keys
    Configure firewall
    ufw allow OpenSSH
    ufw enable
    
    # Optionally allow HTTP/HTTPS if serving a web interface
    ufw allow 80/tcp
    ufw allow 443/tcp
    Install Python
    # Switch to the crewai user
    su - crewai
    
    # Check Python version
    python3 --version
    
    # If Python 3.10+ is not installed:
    sudo apt install -y python3 python3-pip python3-venv
    
    # Verify version (must be 3.10 - 3.13)
    python3 --version

    3. Install CrewAI

    Option A — Install with uv (Recommended)

    Install uv and CrewAI CLI
    # Install uv
    curl -LsSf https://astral.sh/uv/install.sh | sh
    
    # Restart your shell or source the env
    source $HOME/.local/bin/env
    
    # Verify uv installation
    uv --version
    
    # Install CrewAI CLI
    uv tool install crewai
    
    # Update PATH if prompted
    uv tool update-shell
    
    # Verify CrewAI installation
    crewai version

    Option B — Install with pip

    Install with pip in virtual environment
    # Create a virtual environment
    python3 -m venv ~/crewai-env
    source ~/crewai-env/bin/activate
    
    # Install CrewAI and tools
    pip install crewai
    pip install 'crewai[tools]'
    
    # Verify installation
    crewai version

    Tip: CrewAI supports numerous extras: crewai[tools] for built-in tools, crewai[anthropic] for Claude, crewai[google-genai] for Gemini, and more. Install only what you need.

    4. Create Your First CrewAI Project

    Scaffold a new project
    # Create a new CrewAI project
    crewai create crew my_research_crew
    
    # You'll be prompted to:
    #   - Select an LLM provider (OpenAI, Anthropic, Ollama, etc.)
    #   - Enter your API key
    
    # Navigate to the project
    cd my_research_crew

    Project Structure

    Directory layout
    my_research_crew/
    ├── .env                   # API keys and environment variables
    ├── pyproject.toml         # Project dependencies
    ├── src/
    │   └── my_research_crew/
    │       ├── config/
    │       │   ├── agents.yaml     # Agent definitions
    │       │   └── tasks.yaml      # Task definitions
    │       ├── crew.py             # Crew orchestration logic
    │       ├── main.py             # Entry point
    │       └── tools/              # Custom tool definitions
    └── tests/                 # Test files
    Configure environment variables (.env)
    # For OpenAI:
    OPENAI_API_KEY=sk-your-openai-api-key-here
    
    # Or for Anthropic Claude:
    ANTHROPIC_API_KEY=sk-ant-your-key-here
    
    # Or for local Ollama (no key needed):
    OPENAI_API_BASE=http://localhost:11434/v1
    OPENAI_MODEL_NAME=ollama/llama3

    Security Warning: Never commit .env files to version control. Add .env to your .gitignore. For production, use a secrets manager or environment variables injected at runtime.
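    As an extra safeguard, you can fail fast at startup when a required key is missing instead of letting a crew die mid-run with an API error. A minimal standard-library sketch; the variable name matches the OpenAI example above, so adjust the list for your provider:

```python
import os

# Required variable names are assumptions based on the .env examples above;
# adjust this list for the provider you actually configured.
REQUIRED_VARS = ["OPENAI_API_KEY"]

def check_env(required=REQUIRED_VARS):
    """Return the names of required variables that are missing or empty."""
    return [name for name in required if not os.environ.get(name)]

missing = check_env()
if missing:
    print("Missing required environment variables:", ", ".join(missing))
```

    Call this at the top of your entry point so misconfiguration is reported before any agent starts spending tokens.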

    5. Configure Agents and Tasks

    Define Your Agents

    config/agents.yaml
    researcher:
      role: Senior Research Analyst
      goal: >
        Discover and analyze the latest developments
        in {topic} with thorough, factual research
      backstory: >
        You are a seasoned research analyst with a keen
        eye for detail. You excel at finding and synthesizing
        information from multiple sources.
      verbose: true
    
    writer:
      role: Technical Content Writer
      goal: >
        Transform research findings into clear, engaging
        content that is accurate and well-structured
      backstory: >
        You are an experienced technical writer known for
        making complex topics accessible. You always cite
        your sources and maintain factual accuracy.
      verbose: true

    Define Your Tasks

    config/tasks.yaml
    research_task:
      description: >
        Conduct comprehensive research on {topic}.
        Identify key trends, major players, and recent
        developments. Provide detailed analysis with sources.
      expected_output: >
        A detailed research report with key findings,
        trends, and supporting data points.
      agent: researcher
    
    writing_task:
      description: >
        Using the research findings, write a comprehensive
        article on {topic}. Ensure the content is engaging,
        well-structured, and factually accurate.
      expected_output: >
        A polished article ready for publication with
        proper formatting and citations.
      agent: writer
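    These YAML entries are bound to code in crew.py. A sketch of the scaffold that crewai create generates; the class name is assumed from the project name, and the decorator pattern follows CrewAI's project scaffolding:

```python
# src/my_research_crew/crew.py (sketch of the generated scaffold)
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task

@CrewBase
class MyResearchCrew:
    """Wires the YAML-defined agents and tasks into a runnable crew."""
    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def researcher(self) -> Agent:
        return Agent(config=self.agents_config["researcher"])

    @agent
    def writer(self) -> Agent:
        return Agent(config=self.agents_config["writer"])

    @task
    def research_task(self) -> Task:
        return Task(config=self.tasks_config["research_task"])

    @task
    def writing_task(self) -> Task:
        return Task(config=self.tasks_config["writing_task"])

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,   # collected by the @agent decorators
            tasks=self.tasks,     # collected by the @task decorators
            process=Process.sequential,
            verbose=True,
        )
```

    Method names must match the keys in agents.yaml and tasks.yaml, which is how the config= lookups above resolve.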

    6. Run Your Crew

    Install dependencies and execute
    # Install project dependencies
    crewai install
    
    # Run the crew
    crewai run
    
    # Inputs for placeholders like {topic} are defined in the
    # scaffolded main.py, not passed as CLI flags

    When execution completes, your crew's output is displayed in the console and, if a task defines an output_file, written to that file in your project directory.
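    The {topic} placeholder in your YAML files is filled from the scaffolded entry point. A sketch of the standard main.py, with the class name assumed from the project name:

```python
# src/my_research_crew/main.py (sketch; class name assumed from project name)
from my_research_crew.crew import MyResearchCrew

def run():
    """Entry point used by `crewai run`: set placeholder inputs here."""
    inputs = {"topic": "AI agents"}
    MyResearchCrew().crew().kickoff(inputs=inputs)

if __name__ == "__main__":
    run()
```

    Edit the inputs dict here to change the topic for each run.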

    Tip: CrewAI supports assigning different LLM models to different agents. Use a faster, cheaper model for simple research and a more capable model for writing. Configure this in crew.py with the llm parameter.
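    A sketch of that per-agent model split; the model names are illustrative assumptions, so substitute models available to your provider and key:

```python
from crewai import Agent, LLM

# Model names below are examples, not recommendations.
fast_llm = LLM(model="gpt-4o-mini")   # cheaper model for research legwork
strong_llm = LLM(model="gpt-4o")      # more capable model for final writing

researcher = Agent(
    role="Senior Research Analyst",
    goal="Discover the latest developments",
    backstory="Seasoned research analyst",
    llm=fast_llm,
)

writer = Agent(
    role="Technical Content Writer",
    goal="Turn research findings into clear content",
    backstory="Experienced technical writer",
    llm=strong_llm,
)
```

    Splitting models this way can cut API costs substantially, since research steps usually generate far more calls than the final write-up.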

    7. Production Configuration

    Running as a Systemd Service

    Create systemd service file
    # /etc/systemd/system/crewai.service
    [Unit]
    Description=CrewAI Agent Service
    After=network.target
    
    [Service]
    Type=simple
    User=crewai
    WorkingDirectory=/home/crewai/my_research_crew
    Environment=PATH=/home/crewai/.local/bin:/usr/bin
    EnvironmentFile=/home/crewai/my_research_crew/.env
    ExecStart=/home/crewai/.local/bin/crewai run
    Restart=on-failure
    RestartSec=30
    StandardOutput=journal
    StandardError=journal
    
    [Install]
    WantedBy=multi-user.target
    Enable and start the service
    sudo systemctl daemon-reload
    sudo systemctl enable crewai
    sudo systemctl start crewai
    
    # Check status
    sudo systemctl status crewai
    
    # View logs
    journalctl -u crewai -f

    Scheduling with Cron

    Schedule periodic crew execution
    # Edit crontab
    crontab -e
    
    # Create the log directory first: mkdir -p /home/crewai/logs
    # Run crew daily at 6 AM UTC
    0 6 * * * cd /home/crewai/my_research_crew && /home/crewai/.local/bin/crewai run >> /home/crewai/logs/crew.log 2>&1

    Environment Variable Security

    Secure your secrets
    # Restrict .env file permissions
    sudo chmod 600 /home/crewai/my_research_crew/.env
    sudo chown crewai:crewai /home/crewai/my_research_crew/.env

    8. Using Local LLMs with Ollama (Optional)

    Running models locally instead of calling cloud APIs eliminates per-token API costs and keeps data on your server, but it requires substantially more server resources.

    Install Ollama and pull a model
    # Install Ollama
    curl -fsSL https://ollama.com/install.sh | sh
    
    # Pull a model (llama3, mistral, etc.)
    ollama pull llama3
    
    # Verify it's running
    ollama list
    Configure CrewAI for Ollama (.env)
    OPENAI_API_BASE=http://localhost:11434/v1
    OPENAI_API_KEY=NA
    OPENAI_MODEL_NAME=ollama/llama3
    Or configure directly in crew.py
    from crewai import Agent, LLM
    
    llm = LLM(
        model="ollama/llama3",
        base_url="http://localhost:11434"
    )
    
    researcher = Agent(
        role="Research Analyst",
        goal="Find the latest information",
        backstory="Expert researcher",
        llm=llm
    )

    Resource Requirements: A 7B parameter model needs ~8 GB RAM minimum. For 13B+ models, you'll need 16–32 GB RAM. Consider RamNode's higher-tier VPS plans if running local models.

    9. Adding Custom Tools

    Built-in Tools

    Install the tools package
    pip install 'crewai[tools]'
    Tool               Purpose
    SerperDevTool      Web search via the Serper API
    ScrapeWebsiteTool  Extract content from web pages
    FileReadTool       Read and parse local files
    PDFSearchTool      RAG-based search over PDF documents
    GithubSearchTool   Search GitHub repositories

    Creating a Custom Tool

    tools/custom_tool.py
    from crewai.tools import tool
    
    @tool("Database Query Tool")
    def query_database(query: str) -> str:
        """Execute a SQL query against the local SQLite database
        and return the results as a string."""
        import sqlite3
        # Caution: the query string comes from the LLM. Point this tool at a
        # read-only database or validate queries before executing them.
        with sqlite3.connect('data.db') as conn:
            results = conn.execute(query).fetchall()
        return str(results)
    Assign tool to agent in crew.py
    from tools.custom_tool import query_database
    
    data_agent = Agent(
        role="Data Analyst",
        goal="Analyze data from the database",
        tools=[query_database],
        verbose=True
    )

    10. Monitoring and Logging

    Enable verbose logging
    # In agents.yaml, set verbose: true
    # Or in crew.py:
    agent = Agent(
        role="Analyst",
        goal="Analyze findings",           # goal and backstory are required
        backstory="Detail-oriented analyst",
        verbose=True                       # Enables detailed logging
    )
    Log to file
    # Create logs directory
    mkdir -p /home/crewai/logs
    
    # Run with logging
    crewai run >> /home/crewai/logs/crew-$(date +%Y%m%d).log 2>&1
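    If you launch crews from your own Python entry point rather than the CLI, the standard library's rotating handler keeps log files bounded; a minimal sketch, where the path and size limits are arbitrary choices:

```python
import logging
from logging.handlers import RotatingFileHandler

def make_logger(path="crew.log"):
    """Create a logger that rotates at ~5 MB, keeping 3 old files."""
    logger = logging.getLogger("crew")
    logger.setLevel(logging.INFO)
    handler = RotatingFileHandler(path, maxBytes=5_000_000, backupCount=3)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```

    Rotation matters for long-running or cron-driven crews, where append-only redirection can fill the disk over time.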
    Monitor resource usage
    # Real-time resource monitoring
    htop
    
    # Check memory usage
    free -h
    
    # Check disk usage
    df -h

    API Cost Monitoring: Track your LLM API usage carefully. CrewAI agents can make many API calls during a single crew execution. Set spending limits on your API provider's dashboard and use CrewAI's built-in callbacks to log token usage per task.
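    A sketch of checking aggregate token usage after a run; this assumes the usage_metrics attribute that crewai.Crew exposes after kickoff() and the scaffolded class name from earlier steps:

```python
# Assumes the MyResearchCrew scaffold generated by `crewai create` and the
# crewai.Crew.usage_metrics attribute; verify both against your installed version.
from my_research_crew.crew import MyResearchCrew

crew = MyResearchCrew().crew()
crew.kickoff(inputs={"topic": "AI agents"})
print(crew.usage_metrics)  # aggregate prompt/completion token counts
```

    Logging this figure per run makes it easy to spot a crew that has started looping and burning tokens.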

    11. Troubleshooting

    • Python version mismatch: CrewAI requires Python 3.10–3.13. Use pyenv to manage multiple versions.
    • ModuleNotFoundError for crewai: Ensure your virtual environment is activated or the uv tool path is in your shell PATH.
    • API key errors: Verify .env file syntax (no quotes around values). Ensure the key is valid and has sufficient credits.
    • Out of memory during execution: Upgrade your VPS plan, reduce concurrent agents, or use smaller LLM models.
    • Connection timeouts to LLM APIs: Check firewall rules (UFW). Ensure outbound HTTPS (port 443) is not blocked.
    • Crew hangs or loops indefinitely: Set max_iter on agents to limit iteration loops. Use guardrails and timeouts.
    • YAML syntax errors: Validate YAML files with a linter. Ensure proper indentation (2 spaces, no tabs).

    12. Next Steps

    • Add more agents: Introduce specialized agents for editing, fact-checking, data visualization, or code generation.
    • Implement Flows: Use CrewAI Flows for event-driven orchestration with conditional logic, branching, and state management.
    • Build a web interface: Pair CrewAI with a FastAPI or Flask backend to trigger crews via HTTP endpoints.
    • Set up CI/CD: Use Git-based deployment workflows to test and deploy crew updates automatically.
    • Scale with Docker: Containerize your CrewAI project for reproducible deployments across multiple instances.
    • Explore MCP integration: Connect agents to external services via the Model Context Protocol.

    CrewAI Deployed Successfully!

    Your multi-agent AI framework is now running on RamNode. Configure your agents, define tasks, and start building collaborative AI workflows.