## Architecture Overview
Single-server architecture: Filebeat agents ship logs → Logstash processes them → Elasticsearch indexes → Kibana visualizes.
| Component | Default Port | Role |
|---|---|---|
| Elasticsearch | 9200, 9300 | Search & indexing engine |
| Logstash | 5044 | Data processing pipeline |
| Kibana | 5601 | Visualization & UI |
| Filebeat | N/A (shipper) | Lightweight log forwarder |
## Prerequisites & Recommended Plans

### Minimum Requirements
| Resource | Minimum | Recommended |
|---|---|---|
| RAM | 4 GB | 8 GB+ |
| CPU | 2 vCPUs | 4 vCPUs |
| Storage | 40 GB SSD | 80 GB+ NVMe |
| OS | Ubuntu 22.04 LTS | Ubuntu 24.04 LTS |
💡 Recommendation: For production workloads processing more than a few GB of logs per day, opt for an 8 GB RAM plan or higher. RamNode's NVMe storage is ideal for Elasticsearch's I/O-heavy indexing.
### Software Prerequisites
- Ubuntu 22.04 or 24.04 LTS (fresh installation)
- A non-root sudo user
- A registered domain name (for HTTPS access to Kibana)
- DNS A record pointing your domain to the VPS IP address
- Java 17 (bundled with Elasticsearch 8.x)
## Initial Server Setup

```bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y apt-transport-https curl gnupg2 wget software-properties-common
```

### Configure Swap (If RAM < 8 GB)
```bash
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo "/swapfile none swap sw 0 0" | sudo tee -a /etc/fstab

# Minimize swap usage for Elasticsearch
echo "vm.swappiness=1" | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
```

### Set System Limits
Append the following to `/etc/security/limits.conf`:

```
elasticsearch - nofile 65535
elasticsearch - nproc 4096
elasticsearch - memlock unlimited
```

```bash
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```

### Configure the Firewall

```bash
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
```

Important: Ports 9200, 9300, 5044, and 5601 should NOT be exposed publicly. Kibana will be accessed through the Nginx reverse proxy on port 443.
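One caveat worth knowing: `limits.conf` applies to login sessions, while the packaged Elasticsearch runs as a systemd service and takes its limits from the unit file. If you later enable `bootstrap.memory_lock` (see the Performance Tuning section), a systemd drop-in override is the documented way to raise the memlock limit. A minimal sketch:

```ini
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
```

Run `sudo systemctl daemon-reload` after creating the file so systemd picks up the override.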
Add the Elastic APT Repository
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | \
sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] \
https://artifacts.elastic.co/packages/8.x/apt stable main" | \
sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt updateInstalling Elasticsearch
```bash
sudo apt install -y elasticsearch
```

Save the output! The installation generates a superuser password and enrollment token. You will need these for Kibana enrollment and API access.
### Configure Elasticsearch

Edit `/etc/elasticsearch/elasticsearch.yml`:

```yaml
# ── Cluster ──
cluster.name: elk-ramnode
node.name: elk-node-1

# ── Paths ──
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

# ── Network ──
network.host: 127.0.0.1
http.port: 9200

# ── Discovery (single-node) ──
discovery.type: single-node

# ── Security (enabled by default in 8.x) ──
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
```

### Configure JVM Heap Size
Set JVM heap to no more than 50% of available RAM (never exceed 31 GB). For a 4 GB VPS, allocate 2 GB; for 8 GB, allocate 4 GB.
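For example, on a 4 GB VPS a drop-in file under `jvm.options.d` (the standard override directory for the packaged Elasticsearch 8.x) would pin both the initial and maximum heap to 2 GB:

```ini
# /etc/elasticsearch/jvm.options.d/heap.options
-Xms2g
-Xmx2g
```

Keep `-Xms` and `-Xmx` equal so the heap never resizes at runtime.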
```bash
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch

# Verify cluster health (enter the elastic password when prompted)
curl -k -u elastic https://localhost:9200
curl -k -u elastic https://localhost:9200/_cluster/health?pretty
```

You should see a JSON response with cluster status "green" (or "yellow" for single-node, which is expected).
## Installing Logstash

```bash
sudo apt install -y logstash
```

### Beats Input Pipeline

Create a pipeline file in `/etc/logstash/conf.d/` (e.g. `01-beats-input.conf`):
```
input {
  beats {
    port => 5044
    ssl_enabled => true
    ssl_certificate_authorities => ["/etc/logstash/certs/ca.crt"]
    ssl_certificate => "/etc/logstash/certs/logstash.crt"
    ssl_key => "/etc/logstash/certs/logstash.key"
  }
}
```

### Syslog Filter Pipeline
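The grok pattern in the filter below pulls a raw syslog line apart into named fields. As a rough illustration of what it extracts, here is a plain Python sketch (the regexes are simplified stand-ins for grok's `SYSLOGTIMESTAMP`, `SYSLOGHOST`, and `DATA` patterns, which are more permissive; this is not how Logstash itself runs the match):

```python
import re

# Approximate regex equivalents of the grok patterns used in the filter.
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[\w./-]+?)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

line = "Mar  1 12:34:56 elk-node-1 sshd[1042]: Accepted publickey for deploy"
fields = SYSLOG_RE.match(line).groupdict()
print(fields["syslog_program"], fields["syslog_pid"])  # sshd 1042
```

Each named group becomes a separate field on the event, which is what makes the logs searchable by program, host, or PID in Kibana.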
```
filter {
  if [fileset][module] == "system" {
    if [fileset][name] == "syslog" {
      grok {
        match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      }
      date {
        match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
    }
  }
}
```

### Elasticsearch Output Pipeline
```
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    user => "logstash_internal"
    password => "YOUR_LOGSTASH_PASSWORD"
    ssl_certificate_verification => true
    cacert => "/etc/logstash/certs/ca.crt"
  }
}
```

### Create Logstash User in Elasticsearch
```bash
curl -k -u elastic -X POST "https://localhost:9200/_security/user/logstash_internal" \
  -H 'Content-Type: application/json' -d '{
    "password": "YOUR_LOGSTASH_PASSWORD",
    "roles": ["logstash_writer"],
    "full_name": "Logstash Internal User"
  }'

sudo systemctl enable logstash
sudo systemctl start logstash

# Verify
sudo systemctl status logstash
sudo tail -f /var/log/logstash/logstash-plain.log
```

## Installing Kibana
```bash
sudo apt install -y kibana

# Generate enrollment token (if the original expired)
sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana

# Run the Kibana setup tool
sudo /usr/share/kibana/bin/kibana-setup --enrollment-token <YOUR_TOKEN>
```

Edit `/etc/kibana/kibana.yml`:

```yaml
server.port: 5601
server.host: "127.0.0.1"
server.publicBaseUrl: "https://elk.yourdomain.com"
# elasticsearch.hosts set automatically by enrollment
# elasticsearch.ssl.certificateAuthorities set automatically
```

```bash
sudo systemctl enable kibana
sudo systemctl start kibana

# Verify Kibana is listening
curl -s http://localhost:5601/api/status | head -c 200
```

## Configuring Filebeat
Filebeat is a lightweight log shipper. Install it on each server you want to monitor.
```bash
sudo apt install -y filebeat
```

Edit `/etc/filebeat/filebeat.yml`:

```yaml
# ── Filebeat Inputs ──
filebeat.inputs:
  - type: filestream
    id: syslog
    enabled: true
    paths:
      - /var/log/syslog
      - /var/log/auth.log

# ── Output to Logstash ──
output.logstash:
  hosts: ["YOUR_ELK_SERVER_IP:5044"]
  ssl.certificate_authorities:
    - "/etc/filebeat/certs/ca.crt"
  ssl.certificate: "/etc/filebeat/certs/filebeat.crt"
  ssl.key: "/etc/filebeat/certs/filebeat.key"

# ── Modules ──
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
```

```bash
sudo filebeat modules enable system
```
```bash
sudo filebeat setup --index-management \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["https://YOUR_ELK_SERVER_IP:9200"]' \
  -E 'output.elasticsearch.username="elastic"' \
  -E 'output.elasticsearch.password="YOUR_PASSWORD"' \
  -E 'output.elasticsearch.ssl.certificate_authorities=["/etc/filebeat/certs/ca.crt"]'

sudo systemctl enable filebeat
sudo systemctl start filebeat

# Verify
sudo filebeat test output
```

## Security & TLS Configuration
### Generate Certificates for Logstash & Filebeat
```bash
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert \
  --ca-cert /etc/elasticsearch/certs/http_ca.crt \
  --ca-key /etc/elasticsearch/certs/http_ca.key \
  --pem \
  --name logstash \
  --out /tmp/logstash-certs.zip

sudo unzip /tmp/logstash-certs.zip -d /etc/logstash/certs/
sudo chown -R logstash:logstash /etc/logstash/certs/
```

### Reset Passwords (If Needed)
```bash
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system
```

### Create Dedicated Roles & Users
```bash
curl -k -u elastic -X POST "https://localhost:9200/_security/role/logstash_writer" \
  -H 'Content-Type: application/json' -d '{
    "cluster": ["manage_index_templates", "monitor", "manage_ilm"],
    "indices": [{
      "names": ["logstash-*"],
      "privileges": ["write", "create_index", "manage", "auto_configure"]
    }]
  }'

curl -k -u elastic -X POST "https://localhost:9200/_security/role/analyst" \
  -H 'Content-Type: application/json' -d '{
    "indices": [{
      "names": ["logstash-*", "filebeat-*"],
      "privileges": ["read", "view_index_metadata"]
    }],
    "applications": [{
      "application": "kibana-.kibana",
      "privileges": ["feature_discover.read", "feature_dashboard.read"],
      "resources": ["*"]
    }]
  }'
```

### Firewall Hardening
```bash
sudo ufw allow from REMOTE_SERVER_IP to any port 5044 proto tcp
```

Warning: Never expose Elasticsearch port 9200 to the public internet. Always keep it bound to 127.0.0.1 or use SSH tunneling for remote access.
## Nginx Reverse Proxy for Kibana

```bash
sudo apt install -y nginx certbot python3-certbot-nginx
```

Create `/etc/nginx/sites-available/kibana`:

```nginx
server {
    listen 80;
    server_name elk.yourdomain.com;

    location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 90;
    }
}
```

```bash
sudo ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx

sudo certbot --nginx -d elk.yourdomain.com

# Verify auto-renewal
sudo certbot renew --dry-run
```

## Performance Tuning
### Elasticsearch Tuning
| Setting | Value | Purpose |
|---|---|---|
| indices.memory.index_buffer_size | 15% | Indexing buffer (default 10%) |
| thread_pool.write.queue_size | 1000 | Handles burst write loads |
| indices.queries.cache.size | 15% | Query cache allocation |
| bootstrap.memory_lock | true | Prevents heap swap-out |
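Applied in `/etc/elasticsearch/elasticsearch.yml`, the table above corresponds to the following fragment (a starting point to tune against your workload, not fixed values):

```yaml
indices.memory.index_buffer_size: 15%
thread_pool.write.queue_size: 1000
indices.queries.cache.size: 15%
bootstrap.memory_lock: true
```

Restart Elasticsearch after changing these settings; `bootstrap.memory_lock` also requires the memlock limit raised at the systemd level.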
### Index Lifecycle Management (ILM)
Automatically manage log retention and prevent disk exhaustion. This policy rolls over indices daily or at 50 GB, and deletes old data after 30 days.
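Note that an ILM policy only applies to indices that reference it, so after creating the policy you also need an index template that points new `logstash-*` indices at it. A minimal sketch (the template name `logstash-template` is illustrative):

```
curl -k -u elastic -X PUT "https://localhost:9200/_index_template/logstash-template" \
  -H 'Content-Type: application/json' -d '{
    "index_patterns": ["logstash-*"],
    "template": {
      "settings": { "index.lifecycle.name": "logs-policy" }
    }
  }'
```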
```bash
curl -k -u elastic -X PUT "https://localhost:9200/_ilm/policy/logs-policy" \
  -H 'Content-Type: application/json' -d '{
    "policy": {
      "phases": {
        "hot": { "actions": { "rollover": { "max_age": "1d", "max_size": "50gb" } } },
        "warm": { "min_age": "7d", "actions": { "shrink": { "number_of_shards": 1 } } },
        "delete": { "min_age": "30d", "actions": { "delete": {} } }
      }
    }
  }'
```

### Logstash Tuning

Adjust `/etc/logstash/logstash.yml`:

```yaml
pipeline.workers: 2      # Match CPU cores
pipeline.batch.size: 250 # Increase for higher throughput
pipeline.batch.delay: 50 # ms to wait for the batch to fill
```

## Monitoring & Maintenance
## Troubleshooting

### Common Issues & Solutions
| Symptom | Cause | Fix |
|---|---|---|
| ES won't start | Insufficient heap or file descriptors | Check jvm.options heap; verify limits.conf |
| Cluster status: red | Unassigned primary shards | Check _cluster/allocation/explain |
| Kibana: "server not ready" | ES connection failure | Verify ES is running; check SSL config |
| Logstash pipeline error | Grok pattern mismatch | Test in Kibana's Grok Debugger (Dev Tools); check logs |
| Filebeat: connection refused | Firewall blocking 5044 | Add UFW rule for source IP |
| High memory / OOM kills | Heap too large or no swap | Reduce JVM heap; add swap; upgrade plan |
### Key Log File Locations
| Component | Log Path |
|---|---|
| Elasticsearch | /var/log/elasticsearch/elk-ramnode.log |
| Logstash | /var/log/logstash/logstash-plain.log |
| Kibana | journalctl -u kibana -f |
| Filebeat | /var/log/filebeat/filebeat |
### Useful Diagnostic Commands

```bash
# Check cluster health and node stats
curl -k -u elastic https://localhost:9200/_cluster/health?pretty
curl -k -u elastic https://localhost:9200/_nodes/stats?pretty

# Check shard allocation issues
curl -k -u elastic https://localhost:9200/_cluster/allocation/explain?pretty

# List all indices with size and doc count
curl -k -u elastic 'https://localhost:9200/_cat/indices?v&s=store.size:desc'

# Test Logstash configuration
sudo /usr/share/logstash/bin/logstash --config.test_and_exit \
  -f /etc/logstash/conf.d/

# Verify Filebeat connectivity
sudo filebeat test config
sudo filebeat test output
```

## ELK Stack Deployed Successfully!
Your centralized log management and observability platform is now running. Access Kibana at your domain to explore logs, create dashboards, and set up alerts.
