n8n + Ollama — AI-Powered Automation
Zero API costs, infinite automation possibilities.
Completed Part 1 (Ollama), Docker, basic automation concepts
35–45 minutes
4GB ($20/mo) for light automation. 8GB ($40/mo) for production workflows
Looking for a quick-start guide? Check out our standalone n8n Deployment Guide for a streamlined setup walkthrough.
Introduction
n8n is a self-hosted workflow automation platform — think Zapier, but private and free. Combined with Ollama, you get AI-powered automation with zero per-execution API costs. Email triage, content generation, data extraction, sentiment analysis — all on your $40/month VPS.
💰 Cost comparison: Zapier AI Workflows: $50+/month with per-task limits. n8n + Ollama on your VPS: $0/execution — unlimited workflows, unlimited AI calls.
Deploying n8n
Create the project directory:

mkdir -p ~/ai-stack/n8n && cd ~/ai-stack/n8n

Then create a docker-compose.yml:

version: "3.8"
services:
n8n:
image: n8nio/n8n:latest
container_name: n8n
restart: unless-stopped
ports:
- "5678:5678"
environment:
- N8N_BASIC_AUTH_ACTIVE=true
- N8N_BASIC_AUTH_USER=admin
- N8N_BASIC_AUTH_PASSWORD=your-secure-password
- N8N_HOST=0.0.0.0
- WEBHOOK_URL=http://your-server-ip:5678/
- GENERIC_TIMEZONE=UTC
volumes:
- n8n-data:/home/node/.n8n
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
  n8n-data:

Start the stack:

docker compose up -d

Access n8n at http://your-server-ip:5678.
n8n + Ollama Connection
Configure n8n's AI nodes to use your local Ollama:
- In n8n, create a new workflow
- Add an "Ollama Chat Model" node
- Set the Base URL to http://host.docker.internal:11434
- Select your model (e.g., mistral)
- Configure temperature (0.3 for structured tasks, 0.7 for creative)
- Test the connection — you should see a successful response
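The same endpoint the n8n node talks to can be exercised directly. A minimal sketch of the request body Ollama's /api/chat endpoint expects — the helper name build_chat_request is ours, not part of any library:

```python
import json

OLLAMA_URL = "http://host.docker.internal:11434"  # as configured in the n8n node


def build_chat_request(model: str, prompt: str, temperature: float = 0.3) -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete response instead of token chunks
        "options": {"temperature": temperature},
    }


body = build_chat_request("mistral", "Reply with the single word OK.")
print(json.dumps(body, indent=2))
```

POSTing this body to `{OLLAMA_URL}/api/chat` should return a JSON object whose `message.content` field holds the model's reply — the same payload n8n sends under the hood.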
Workflow 1: Intelligent Email Triage
Build a workflow that classifies and routes incoming emails:
IMAP Trigger (new email)
│
▼
Ollama Chat Model
System: "Classify this email as: URGENT, ACTION_REQUIRED,
INFORMATIONAL, or SPAM. Also suggest a brief response."
User: "Subject: {{subject}}
Body: {{body}}"
│
▼
Switch Node (route by classification)
├── URGENT → Slack notification + Draft reply
├── ACTION_REQUIRED → Create task in Todoist/Notion
├── INFORMATIONAL → Archive + Summary log
    └── SPAM → Move to trash

Each node connects visually in n8n's canvas editor. The Ollama node processes the email content and returns a structured classification.
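The Switch node's routing logic can be sketched in a few lines. This is an illustrative stand-in, not n8n code — the route names and the fallback-to-ACTION_REQUIRED convention are our assumptions:

```python
# Labels the system prompt asks the model to choose from.
LABELS = ("URGENT", "ACTION_REQUIRED", "INFORMATIONAL", "SPAM")

# Hypothetical downstream actions, one per Switch branch.
ROUTES = {
    "URGENT": "slack_notify_and_draft_reply",
    "ACTION_REQUIRED": "create_task",
    "INFORMATIONAL": "archive_and_log",
    "SPAM": "move_to_trash",
}


def classify(model_reply: str) -> str:
    """Pick the first known label mentioned in the model's reply;
    fall back to ACTION_REQUIRED so no email is silently dropped."""
    upper = model_reply.upper()
    for label in LABELS:
        if label in upper:
            return label
    return "ACTION_REQUIRED"


print(ROUTES[classify("Classification: SPAM. Suggested response: none.")])
```

Scanning for a label rather than trusting the raw reply matters in practice: local models sometimes wrap the classification in extra prose, and a strict string-equality check would misroute those emails.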
Workflow 2: Content Generation Pipeline
Webhook Trigger (topic input)
│
▼
Ollama #1: Generate blog outline
│
▼
Ollama #2: Write draft from outline
│
▼
Ollama #3: Review and improve draft
│
▼
Save to Google Docs / Markdown file
│
▼
Send notification (email/Slack)

Chain multiple Ollama calls for iterative refinement. Each AI step receives the output of the previous step, progressively improving the content.
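The chaining pattern above reduces to a fold over prompt templates. A minimal sketch, with the model stubbed as a plain callable (in n8n, each call would go to an Ollama node instead):

```python
from typing import Callable

Model = Callable[[str], str]

# Each stage wraps the previous output in a new instruction,
# mirroring the three chained Ollama nodes in the pipeline.
STAGES = [
    "Create a blog outline for: {input}",
    "Write a first draft from this outline:\n{input}",
    "Review and improve this draft:\n{input}",
]


def run_pipeline(model: Model, topic: str) -> str:
    text = topic
    for template in STAGES:
        text = model(template.format(input=text))
    return text


# Stub model for illustration; it just echoes its prompt in brackets.
echo = lambda prompt: f"[{prompt}]"
print(run_pipeline(echo, "self-hosted automation"))
```

Because every stage has the same string-in, string-out shape, adding a fourth refinement pass is just one more template in the list.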
Workflow 3: Document Processing
Watch Folder (new files)
│
▼
Read File → Extract Text
│
▼
Ollama: "Summarize this document in 3 bullet points.
Extract any action items or deadlines."
│
▼
Store summary in database
│
▼
(Optional) Feed to RAG pipeline (Part 3)
│
▼
Send notification with summary

Connect to the RAG pipeline from Part 3 for automatic document ingestion into your knowledge base.
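Before storing the summary, the model's free-text reply usually needs light parsing. A sketch under an assumed convention (bulleted reply, action items flagged by words like "due" or "deadline") — the function and regex are illustrative, not a fixed n8n feature:

```python
import re


def parse_summary(reply: str) -> dict:
    """Split a model reply into bullet points, and pull out lines that
    look like action items or deadlines (heuristic, not exhaustive)."""
    bullets = [
        line.lstrip("-• ").strip()
        for line in reply.splitlines()
        if line.strip().startswith(("-", "•"))
    ]
    actions = [b for b in bullets
               if re.search(r"\b(deadline|due|by \w+ \d+)\b", b, re.I)]
    return {"bullets": bullets, "actions": actions}


reply = """- Q3 revenue up 12%
- Contract renewal due July 15
- New hire onboarding complete"""
print(parse_summary(reply))
```

Structured output like this is what makes the downstream database insert and the optional RAG ingestion reliable: the workflow stores fields, not a blob of prose.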
Advanced Patterns
Error Handling
Add retry logic on AI nodes (e.g., 3 retries with exponential backoff). Ollama may time out on long generations, so set generous timeout values (120s+).
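The retry-with-backoff behavior is worth seeing concretely. A minimal sketch (the helper name and the flaky stub are ours; n8n's built-in retry settings achieve the same effect declaratively):

```python
import time


def call_with_retry(fn, retries: int = 3, base_delay: float = 1.0):
    """Retry a flaky call with exponential backoff: waits of
    base_delay, 2*base_delay, 4*base_delay, ... between attempts."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == retries:
                raise  # out of retries: surface the real error
            time.sleep(base_delay * 2 ** attempt)


# Stub that fails twice, then succeeds — simulating a busy Ollama.
attempts = {"n": 0}


def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("Ollama still generating")
    return "ok"


print(call_with_retry(flaky, retries=3, base_delay=0))  # → ok
```

Exponential backoff matters here because a timed-out generation often means the model is still busy; retrying immediately just deepens the queue.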
Resource Contention
When multiple workflows hit Ollama simultaneously, queue them. Use n8n's concurrency settings to limit parallel AI executions to 1–2 at a time.
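The queuing idea maps directly onto a counting semaphore. A sketch of gating concurrent Ollama calls (names are illustrative; in n8n you'd set this via the concurrency settings rather than write code):

```python
import threading

# Allow at most two in-flight calls to Ollama; extra callers block
# until a slot frees up, mirroring n8n's parallel-execution limit.
OLLAMA_SLOTS = threading.Semaphore(2)


def guarded_generate(prompt: str) -> str:
    with OLLAMA_SLOTS:  # blocks here while both slots are taken
        return fake_ollama(prompt)


def fake_ollama(prompt: str) -> str:
    """Stand-in for the real HTTP call to Ollama."""
    return f"response to {prompt!r}"


results = []
threads = [
    threading.Thread(target=lambda p=p: results.append(guarded_generate(p)))
    for p in ("a", "b", "c", "d")
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 4
```

All four requests still complete; the semaphore only serializes access so the model never sees more than two generations at once — which is what keeps a 4–8GB VPS from thrashing.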
Human Approval Gates
Add "Wait for Approval" nodes before destructive actions. Send a preview via email/Slack and wait for human confirmation before proceeding.
Scheduling
Use cron triggers for recurring AI tasks: daily report generation, weekly competitor analysis, monthly performance summaries.
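For reference, n8n's Schedule Trigger accepts standard five-field cron expressions (minute, hour, day-of-month, month, day-of-week). A few examples matching the tasks above:

```text
0 7 * * *    # daily report, 07:00 (server timezone)
0 9 * * 1    # weekly analysis, Mondays at 09:00
0 8 1 * *    # monthly summary, 1st of the month at 08:00
```

Remember the GENERIC_TIMEZONE setting in docker-compose.yml — with UTC configured, these times are UTC, not local time.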
What's Next?
Every workflow you've built runs at zero marginal cost — no per-API-call billing, no usage caps. In Part 8: Production Hardening, we lock everything down with a reverse proxy, centralized auth, automated backups, and resource tuning.
