Prerequisites
RamNode VPS Requirements
Flowise is a Node.js application that is relatively lightweight but benefits from adequate RAM, especially when running multiple AI workflows or connecting to embedding models.
| Plan | vCPUs | RAM | Storage | Best For |
|---|---|---|---|---|
| Standard 2GB | 1 vCPU | 2 GB | 30 GB NVMe | Testing & light workflows |
| Standard 4GB | 2 vCPUs | 4 GB | 60 GB NVMe | Production use, multiple flows |
| Standard 8GB | 4 vCPUs | 8 GB | 120 GB NVMe | Heavy RAG workflows, multi-agent |
💡 Recommendation: For most Flowise deployments, the Standard 4GB plan provides the best balance of performance and cost. If you plan to use local embedding models or heavy document processing, consider the 8GB plan.
Software Requirements
- A RamNode VPS running Ubuntu 22.04 or 24.04 LTS
- A registered domain name with DNS pointed to your VPS IP address
- SSH access to your server (root or sudo user)
- Basic familiarity with the Linux command line
Initial Server Setup
```bash
ssh root@your-server-ip
apt update && apt upgrade -y
```
Create a Non-Root User
```bash
adduser flowise
usermod -aG sudo flowise
su - flowise
```
Configure the Firewall
```bash
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status
```
Install Docker & Docker Compose
```bash
# Install prerequisites
sudo apt install -y ca-certificates curl gnupg lsb-release

# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the Docker repository
echo "deb [arch=$(dpkg --print-architecture) \
  signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update
sudo apt install -y docker-ce docker-ce-cli \
  containerd.io docker-compose-plugin

# Configure Docker for non-root user
sudo usermod -aG docker flowise
newgrp docker

# Verify installation
docker --version
docker compose version
```
Deploy Flowise with Docker Compose
```bash
mkdir -p ~/flowise && cd ~/flowise
```
Create the Environment File
Create a .env file with your configuration. Replace placeholder values with your own secure credentials:
```
# ── Flowise Configuration ──
PORT=3000
FLOWISE_USERNAME=admin
FLOWISE_PASSWORD=your-secure-password-here
FLOWISE_SECRETKEY_OVERWRITE=your-secret-key-here
APIKEY_PATH=/root/.flowise
SECRETKEY_PATH=/root/.flowise
LOG_PATH=/root/.flowise/logs
BLOB_STORAGE_PATH=/root/.flowise/storage

# ── Database Configuration (PostgreSQL) ──
DATABASE_TYPE=postgres
DATABASE_PORT=5432
DATABASE_HOST=postgres
DATABASE_NAME=flowise
DATABASE_USER=flowise
DATABASE_PASSWORD=your-db-password-here

# ── Postgres Container Settings ──
POSTGRES_USER=flowise
POSTGRES_PASSWORD=your-db-password-here
POSTGRES_DB=flowise
```
Security Notice: Always use strong, unique passwords. Never commit the .env file to version control. The FLOWISE_SECRETKEY_OVERWRITE value is used to encrypt stored API keys; if it is lost, all saved credentials become unrecoverable.
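Rather than inventing these values by hand, you can generate them. A quick sketch using openssl (any cryptographically secure generator works equally well):

```shell
# Random password for FLOWISE_PASSWORD (and DATABASE_PASSWORD)
openssl rand -base64 24

# 64-character hex key for FLOWISE_SECRETKEY_OVERWRITE
openssl rand -hex 32
```

Paste the output into the corresponding .env entries, and keep a copy of the secret key somewhere safe.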
Create Docker Compose File
```yaml
version: "3.8"

services:
  flowise:
    image: flowiseai/flowise:latest
    container_name: flowise
    restart: always
    env_file:
      - .env
    ports:
      - "3000:3000"
    volumes:
      - flowise_data:/root/.flowise
    depends_on:
      postgres:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:3000"]
      interval: 30s
      timeout: 10s
      retries: 3

  postgres:
    image: postgres:16-alpine
    container_name: flowise-postgres
    restart: always
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  flowise_data:
  postgres_data:
```
```bash
docker compose up -d

# Verify both containers are running
docker compose ps
docker compose logs -f flowise
```
Flowise should now be accessible at http://your-server-ip:3000. You'll see the login screen prompting for the credentials you set in the .env file.
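Before wiring up the reverse proxy, it can be worth confirming from the server itself that the app is answering (expect a 2xx response, or a redirect to the login screen):

```bash
curl -I http://localhost:3000
```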
Configure Nginx & SSL
```bash
sudo apt install -y nginx certbot python3-certbot-nginx
```
Create a server block at /etc/nginx/sites-available/flowise:
```nginx
server {
    listen 80;
    server_name yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;

        # Increase timeouts for long-running AI inference requests
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;

        # Increase body size for file uploads
        client_max_body_size 50M;
    }
}
```
```bash
# Enable the site
sudo ln -s /etc/nginx/sites-available/flowise /etc/nginx/sites-enabled/
sudo rm -f /etc/nginx/sites-enabled/default

# Test configuration
sudo nginx -t

# Reload Nginx
sudo systemctl reload nginx

# Obtain and install SSL certificate
sudo certbot --nginx -d yourdomain.com
```
💡 Auto-Renewal: Let's Encrypt certificates expire every 90 days. Certbot installs a systemd timer that handles renewal automatically. Verify it's active with `sudo systemctl status certbot.timer`.
Post-Deployment Configuration
Accessing the Dashboard
Navigate to https://yourdomain.com in your browser. Log in using the FLOWISE_USERNAME and FLOWISE_PASSWORD credentials you set in the .env file.
Adding API Keys
- Click the Settings icon in the sidebar
- Navigate to "Credentials" and click "Add New"
- Select your provider (OpenAI, Anthropic, Google, etc.)
- Enter your API key and save
Flowise encrypts stored credentials using the FLOWISE_SECRETKEY_OVERWRITE value from your .env file.
Building Your First Flow
- Click "Chatflows" in the sidebar, then "Add New"
- Drag a Chat Model node (e.g., ChatOpenAI) onto the canvas
- Drag a Chain node (e.g., Conversation Chain) and connect them
- Add a Memory node (e.g., Buffer Memory) for conversation context
- Click the chat icon to test your flow inline
- Click "Save" and use the "Share Chatbot" button to embed it
Environment Variables Reference
Flowise supports a comprehensive set of environment variables for fine-tuning your deployment:
| Variable | Description | Default |
|---|---|---|
| FLOWISE_FILE_SIZE_LIMIT | Max upload file size in MB | 50 |
| NUMBER_OF_PROXIES | Rate limit: number of proxies | — |
| CORS_ORIGINS | Allowed CORS origins (comma-separated) | * |
| IFRAME_ORIGINS | Allowed iframe embedding origins | * |
| TOOL_FUNCTION_BUILTIN_DEP | Allowed built-in Node.js modules for custom tools | — |
| TOOL_FUNCTION_EXTERNAL_DEP | Allowed npm packages for custom tools | — |
| LOG_LEVEL | Logging verbosity (error, info, verbose, debug) | info |
| DEBUG | Enable debug mode | false |
| DISABLE_CHATFLOW_REUSE | Prevent chatflow instance reuse | false |
💡 External Dependencies: If your custom tool functions require npm packages (like cheerio or axios), add them to TOOL_FUNCTION_EXTERNAL_DEP.
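As a concrete illustration, the two tool-dependency variables might look like this in .env (the module and package names here are examples, not requirements):

```
# Built-in Node.js modules allowed inside custom tool functions
TOOL_FUNCTION_BUILTIN_DEP=crypto,fs

# npm packages allowed inside custom tool functions
TOOL_FUNCTION_EXTERNAL_DEP=axios,cheerio
```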
Maintenance & Operations
Security Hardening
Restrict Direct Port Access
After configuring Nginx, block direct access to port 3000 so all traffic goes through the reverse proxy:
```bash
sudo ufw deny 3000
```
Note that ports published by Docker bypass UFW (Docker manages its own iptables rules), so the reliable fix is to update docker-compose.yml to bind port 3000 only to localhost:
```yaml
# Change this line in docker-compose.yml:
ports:
  - "127.0.0.1:3000:3000"
```
Enable Rate Limiting in Nginx
```nginx
# Add to the http block in /etc/nginx/nginx.conf
limit_req_zone $binary_remote_addr zone=flowise:10m rate=10r/s;
```
```nginx
# Add inside the location block
limit_req zone=flowise burst=20 nodelay;
```
Set Up Fail2Ban
```bash
sudo apt install -y fail2ban
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
```
Security Checklist
| Item | Status | Action |
|---|---|---|
| SSH key authentication | Required | Disable password auth in sshd_config |
| Firewall (UFW) | Required | Allow only 22, 80, 443 |
| HTTPS (Let's Encrypt) | Required | Certbot auto-renewal enabled |
| Flowise auth enabled | Required | Set FLOWISE_USERNAME & PASSWORD |
| Port 3000 blocked | Recommended | Bind to localhost only |
| Fail2Ban active | Recommended | Auto-block brute force |
| Rate limiting | Recommended | Nginx limit_req configured |
| Automatic backups | Recommended | Cron + pg_dump daily |
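The backup item in the checklist can be implemented with a short script. A sketch assuming the container and database names used in this guide (adjust the backup path to taste):

```bash
#!/usr/bin/env bash
# Dump the Flowise Postgres database and keep the last 7 days of backups
set -euo pipefail

BACKUP_DIR="$HOME/backups"
mkdir -p "$BACKUP_DIR"

# pg_dump runs inside the container defined in docker-compose.yml
docker exec flowise-postgres pg_dump -U flowise flowise \
  | gzip > "$BACKUP_DIR/flowise-$(date +%F).sql.gz"

# Prune backups older than 7 days
find "$BACKUP_DIR" -name 'flowise-*.sql.gz' -mtime +7 -delete
```

Schedule it daily with cron, e.g. `crontab -e` and a line such as `0 3 * * * /home/flowise/backup.sh`.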
Troubleshooting
Flowise Deployed Successfully!
Your self-hosted AI agent builder is now running. Here are some next steps:
- Build RAG chatbots with document loaders and vector stores (Pinecone, Qdrant, Chroma)
- Create multi-agent workflows for complex tasks
- Embed the Flowise chat widget on your website
- Expose chatflows as REST API endpoints
- Set up queue mode with Redis and BullMQ for high-traffic deployments
- Explore MCP integration for connecting agents to external tools
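For the embedding step above, the "Share Chatbot" dialog generates a widget snippet along these lines (the chatflow ID is a placeholder; copy the exact snippet from the dialog):

```html
<script type="module">
  import Chatbot from "https://cdn.jsdelivr.net/npm/flowise-embed/dist/web.js";
  Chatbot.init({
    chatflowid: "<chatflow-id>",
    apiHost: "https://yourdomain.com",
  });
</script>
```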
