Prerequisites
Recommended Server Specifications
| Use Case | CPU | RAM | Storage |
|---|---|---|---|
| Development / Testing | 2 vCPUs | 4 GB | 30 GB SSD |
| Small Production Crew | 4 vCPUs | 8 GB | 50 GB SSD |
| Large Production Crew | 6+ vCPUs | 16 GB | 80 GB SSD |
| Enterprise / High-throughput | 8+ vCPUs | 32 GB | 100+ GB SSD |
Software Requirements
- Ubuntu 22.04 LTS or 24.04 LTS (recommended)
- Python 3.10, 3.11, 3.12, or 3.13
- uv package manager (recommended) or pip
- An LLM API key (OpenAI, Anthropic, Google, Ollama, etc.)
- Git (for project management)
Note: CrewAI is a Python-based orchestration framework. AI processing is handled by external LLM APIs (or local models via Ollama). Your VPS does not need a GPU unless running local models.
Initial Server Setup
ssh root@YOUR_SERVER_IP
# Update system packages
apt update && apt upgrade -y
# Create a non-root user
adduser crewai
usermod -aG sudo crewai
# Set up SSH key authentication
mkdir -p /home/crewai/.ssh
cp ~/.ssh/authorized_keys /home/crewai/.ssh/
chown -R crewai:crewai /home/crewai/.ssh
chmod 700 /home/crewai/.ssh
chmod 600 /home/crewai/.ssh/authorized_keys
# Configure the firewall
ufw allow OpenSSH
ufw enable
# Optionally allow HTTP/HTTPS if serving a web interface
ufw allow 80/tcp
ufw allow 443/tcp
# Switch to the crewai user
su - crewai
# Check Python version
python3 --version
# If Python 3.10+ is not installed:
sudo apt install -y python3 python3-pip python3-venv
# Verify version (must be 3.10 - 3.13)
python3 --version
Install CrewAI
Option A — Install with uv (Recommended)
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
# Restart your shell or source the env
source $HOME/.local/bin/env
# Verify uv installation
uv --version
# Install CrewAI CLI
uv tool install crewai
# Update PATH if prompted
uv tool update-shell
# Verify CrewAI installation
crewai version
Option B — Install with pip
# Create a virtual environment
python3 -m venv ~/crewai-env
source ~/crewai-env/bin/activate
# Install CrewAI and tools
pip install crewai
pip install 'crewai[tools]'
# Verify installation
crewai version
Tip: CrewAI supports numerous extras: crewai[tools] for built-in tools, crewai[anthropic] for Claude, crewai[google-genai] for Gemini, and more. Install only what you need.
Create Your First CrewAI Project
# Create a new CrewAI project
crewai create crew my_research_crew
# You'll be prompted to:
# - Select an LLM provider (OpenAI, Anthropic, Ollama, etc.)
# - Enter your API key
# Navigate to the project
cd my_research_crew
Project Structure
my_research_crew/
├── .env # API keys and environment variables
├── pyproject.toml # Project dependencies
├── src/
│ └── my_research_crew/
│ ├── config/
│ │ ├── agents.yaml # Agent definitions
│ │ └── tasks.yaml # Task definitions
│ ├── crew.py # Crew orchestration logic
│ ├── main.py # Entry point
│ └── tools/ # Custom tool definitions
└── tests/ # Test files
Configure API Keys
Open the .env file in your project root and add the key for your provider:
# For OpenAI:
OPENAI_API_KEY=sk-your-openai-api-key-here
# Or for Anthropic Claude:
ANTHROPIC_API_KEY=sk-ant-your-key-here
# Or for local Ollama (no key needed):
OPENAI_API_BASE=http://localhost:11434/v1
OPENAI_MODEL_NAME=ollama/llama3
Security Warning: Never commit .env files to version control. Add .env to your .gitignore. For production, use a secrets manager or environment variables injected at runtime.
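Before deploying, you can sanity-check your .env for common mistakes, such as quoted values, which some loaders mishandle (see Troubleshooting below). This is a hypothetical stdlib helper, not part of CrewAI:

```python
import re

def check_env_file(text: str) -> list[str]:
    """Return a list of warnings for common .env mistakes."""
    warnings = []
    for i, line in enumerate(text.splitlines(), 1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "=" not in line:
            warnings.append(f"line {i}: not in KEY=VALUE form")
            continue
        key, value = line.split("=", 1)
        if not re.fullmatch(r"[A-Z][A-Z0-9_]*", key):
            warnings.append(f"line {i}: unusual key name {key!r}")
        if value[:1] in ("'", '"'):
            warnings.append(f"line {i}: quoted value may break some loaders")
    return warnings
```

Run it against your file with `check_env_file(open(".env").read())` and fix anything it flags.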
Configure Agents and Tasks
Define Your Agents
In config/agents.yaml:
researcher:
  role: Senior Research Analyst
  goal: >
    Discover and analyze the latest developments
    in {topic} with thorough, factual research
  backstory: >
    You are a seasoned research analyst with a keen
    eye for detail. You excel at finding and synthesizing
    information from multiple sources.
  verbose: true

writer:
  role: Technical Content Writer
  goal: >
    Transform research findings into clear, engaging
    content that is accurate and well-structured
  backstory: >
    You are an experienced technical writer known for
    making complex topics accessible. You always cite
    your sources and maintain factual accuracy.
  verbose: true
Define Your Tasks
In config/tasks.yaml:
research_task:
  description: >
    Conduct comprehensive research on {topic}.
    Identify key trends, major players, and recent
    developments. Provide detailed analysis with sources.
  expected_output: >
    A detailed research report with key findings,
    trends, and supporting data points.
  agent: researcher

writing_task:
  description: >
    Using the research findings, write a comprehensive
    article on {topic}. Ensure the content is engaging,
    well-structured, and factually accurate.
  expected_output: >
    A polished article ready for publication with
    proper formatting and citations.
  agent: writer
Run Your Crew
# Install project dependencies
crewai install
# Run the crew
crewai run
# Or with custom inputs
crewai run --inputs '{"topic": "AI agents in 2026"}'
When execution completes, your crew's output will be displayed in the console and written to a report file in your project directory.
Tip: CrewAI supports assigning different LLM models to different agents. Use a faster, cheaper model for simple research and a more capable model for writing. Configure this in crew.py with the llm parameter.
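A sketch of that per-agent setup in crew.py (the model names here are placeholders, not recommendations, and this assumes API-based models):

```python
from crewai import Agent, LLM

# A cheaper, faster model for high-volume research calls
fast_llm = LLM(model="gpt-4o-mini")
# A more capable model for the final writing pass
strong_llm = LLM(model="gpt-4o")

researcher = Agent(
    role="Senior Research Analyst",
    goal="Discover and analyze the latest developments",
    backstory="Seasoned research analyst",
    llm=fast_llm,
)

writer = Agent(
    role="Technical Content Writer",
    goal="Transform research findings into clear content",
    backstory="Experienced technical writer",
    llm=strong_llm,
)
```

Splitting models this way can cut costs substantially, since research steps typically generate far more LLM calls than the final writing task.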
Production Configuration
Running as a Systemd Service
# /etc/systemd/system/crewai.service
[Unit]
Description=CrewAI Agent Service
After=network.target
[Service]
Type=simple
User=crewai
WorkingDirectory=/home/crewai/my_research_crew
Environment=PATH=/home/crewai/.local/bin:/usr/bin
EnvironmentFile=/home/crewai/my_research_crew/.env
ExecStart=/home/crewai/.local/bin/crewai run
Restart=on-failure
RestartSec=30
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable crewai
sudo systemctl start crewai
# Check status
sudo systemctl status crewai
# View logs
journalctl -u crewai -f
Scheduling with Cron
# Edit crontab
crontab -e
# Run crew daily at 6 AM UTC
0 6 * * * cd /home/crewai/my_research_crew && /home/crewai/.local/bin/crewai run >> /home/crewai/logs/crew.log 2>&1
Environment Variable Security
# Restrict .env file permissions
sudo chmod 600 /home/crewai/my_research_crew/.env
sudo chown crewai:crewai /home/crewai/my_research_crew/.env
Using Local LLMs with Ollama (Optional)
Running models locally instead of using cloud APIs eliminates API costs and keeps data on your server, but it requires more server resources.
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull a model (llama3, mistral, etc.)
ollama pull llama3
# Verify it's running
ollama list
Point CrewAI at the local endpoint in your .env:
OPENAI_API_BASE=http://localhost:11434/v1
OPENAI_API_KEY=NA
OPENAI_MODEL_NAME=ollama/llama3
Alternatively, configure the model per agent in crew.py:
from crewai import Agent, LLM

llm = LLM(
    model="ollama/llama3",
    base_url="http://localhost:11434"
)

researcher = Agent(
    role="Research Analyst",
    goal="Find the latest information",
    backstory="Expert researcher",
    llm=llm
)
Resource Requirements: A 7B parameter model needs ~8 GB RAM minimum. For 13B+ models, you'll need 16–32 GB RAM. Consider RamNode's higher-tier VPS plans if running local models.
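To check that RAM guidance programmatically before pulling a model, a small stdlib helper (hypothetical, not part of Ollama or CrewAI) can parse /proc/meminfo:

```python
def mem_total_gb(meminfo: str) -> float:
    """Parse the MemTotal line of /proc/meminfo content (kB) into GB."""
    for line in meminfo.splitlines():
        if line.startswith("MemTotal:"):
            return int(line.split()[1]) / (1024 * 1024)
    raise ValueError("MemTotal not found")

def enough_ram_for_model(min_gb: float, path: str = "/proc/meminfo") -> bool:
    """True if the server reports at least min_gb of total RAM."""
    with open(path) as f:
        return mem_total_gb(f.read()) >= min_gb
```

For example, `enough_ram_for_model(8)` before `ollama pull llama3` catches undersized VPS plans early instead of failing mid-inference.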
Adding Custom Tools
Built-in Tools
pip install 'crewai[tools]'
| Tool | Purpose |
|---|---|
| SerperDevTool | Web search via Serper API |
| ScrapeWebsiteTool | Extract content from web pages |
| FileReadTool | Read and parse local files |
| PDFSearchTool | RAG-based search on PDF documents |
| GithubSearchTool | Search GitHub repositories |
Creating a Custom Tool
from crewai.tools import tool

@tool("Database Query Tool")
def query_database(query: str) -> str:
    """Execute a database query and return results."""
    import sqlite3
    conn = sqlite3.connect('data.db')
    cursor = conn.execute(query)
    results = cursor.fetchall()
    conn.close()
    return str(results)
Attach the tool to an agent in crew.py:
from tools.custom_tool import query_database

data_agent = Agent(
    role="Data Analyst",
    goal="Analyze data from the database",
    tools=[query_database],
    verbose=True
)
Monitoring and Logging
# In agents.yaml, set verbose: true
# Or in crew.py:
agent = Agent(
    role="Analyst",
    verbose=True  # Enables detailed logging
)
# Create logs directory
mkdir -p /home/crewai/logs
# Run with logging
crewai run >> /home/crewai/logs/crew-$(date +%Y%m%d).log 2>&1
# Real-time resource monitoring
htop
# Check memory usage
free -h
# Check disk usage
df -h
API Cost Monitoring: Track your LLM API usage carefully. CrewAI agents can make many API calls during a single crew execution. Set spending limits on your API provider's dashboard and use CrewAI's built-in callbacks to log token usage per task.
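To turn logged token counts into a spend estimate, a simple helper can apply your provider's per-million-token rates (the rates you pass in are whatever your provider currently charges; always check the pricing page, as the numbers below are illustrative):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_rate: float, completion_rate: float) -> float:
    """Estimate spend in USD.

    prompt_rate / completion_rate are USD per 1 million tokens.
    """
    return (prompt_tokens * prompt_rate
            + completion_tokens * completion_rate) / 1_000_000
```

Logging this per crew run alongside the token counts from your callbacks makes it easy to spot a runaway agent before the monthly bill does.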
Troubleshooting
| Issue | Solution |
|---|---|
| Python version mismatch | CrewAI requires Python 3.10–3.13. Use pyenv to manage multiple versions. |
| ModuleNotFoundError for crewai | Ensure your virtual environment is activated or uv tool path is in your shell PATH. |
| API key errors | Verify .env file syntax (no quotes around values). Ensure the key is valid and has sufficient credits. |
| Out of memory during execution | Upgrade your VPS plan, reduce concurrent agents, or use smaller LLM models. |
| Connection timeouts to LLM APIs | Check firewall rules (UFW). Ensure outbound HTTPS (port 443) is not blocked. |
| Crew hangs or loops indefinitely | Set max_iter on agents to limit iteration loops. Use guardrails and timeouts. |
| YAML syntax errors | Validate YAML files with a linter. Ensure proper indentation (2 spaces, no tabs). |
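For the hang/loop case above, one blunt safeguard alongside max_iter is a hard wall-clock limit on the whole run. This hypothetical wrapper (not a CrewAI feature) uses a subprocess timeout:

```python
import subprocess

def run_with_timeout(cmd: list[str], seconds: float):
    """Run a command, killing it if it exceeds the time budget.

    Returns the exit code, or None if the command timed out.
    """
    try:
        return subprocess.run(cmd, timeout=seconds).returncode
    except subprocess.TimeoutExpired:
        return None
```

For example, `run_with_timeout(["crewai", "run"], 1800)` from a cron wrapper script caps a scheduled run at 30 minutes.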
Next Steps
- Add more agents: Introduce specialized agents for editing, fact-checking, data visualization, or code generation.
- Implement Flows: Use CrewAI Flows for event-driven orchestration with conditional logic, branching, and state management.
- Build a web interface: Pair CrewAI with a FastAPI or Flask backend to trigger crews via HTTP endpoints.
- Set up CI/CD: Use Git-based deployment workflows to test and deploy crew updates automatically.
- Scale with Docker: Containerize your CrewAI project for reproducible deployments across multiple instances.
- Explore MCP integration: Connect agents to external services via the Model Context Protocol.
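The web-interface idea might be sketched as follows; this assumes the crew class that crewai create generates (the module and class names will match your own project, so adjust the import accordingly):

```python
from fastapi import FastAPI
from pydantic import BaseModel
# Assumed: the class generated by `crewai create crew my_research_crew`
from my_research_crew.crew import MyResearchCrew

app = FastAPI()

class RunRequest(BaseModel):
    topic: str

@app.post("/run")
def run_crew(req: RunRequest):
    # Kick off the crew synchronously; for long-running crews,
    # hand this off to a background task queue instead
    result = MyResearchCrew().crew().kickoff(inputs={"topic": req.topic})
    return {"output": str(result)}
```

Serve it with `uvicorn app:app --host 127.0.0.1 --port 8000` behind a reverse proxy, and remember to open only the proxy's ports in UFW.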
Useful Resources
- CrewAI Documentation: docs.crewai.com
- CrewAI GitHub: github.com/crewAIInc/crewAI
- CrewAI Examples: github.com/crewAIInc/crewAI-examples
CrewAI Deployed Successfully!
Your multi-agent AI framework is now running on RamNode. Configure your agents, define tasks, and start building collaborative AI workflows.
