Why Elasticsearch on RamNode?
Elasticsearch is a distributed, RESTful search and analytics engine built on Apache Lucene. It provides near real-time search capabilities, making it ideal for log analytics, full-text search, security intelligence, and business analytics applications.
Prerequisites
| Component | Minimum | Recommended |
|---|---|---|
| CPU Cores | 2 vCPU | 4+ vCPU |
| RAM | 4 GB | 8+ GB |
| Storage | 50 GB SSD | 100+ GB NVMe |
| OS | Ubuntu 22.04 LTS | Ubuntu 24.04 LTS |
| Java | JDK 17+ | OpenJDK 21 |
Memory Matters: Elasticsearch is memory-intensive. Set the JVM heap to about 50% of available RAM, and never above 32GB (beyond that threshold the JVM loses compressed object pointers). A 4GB VPS therefore allows a 2GB heap, suitable for small datasets. For production workloads, 8GB+ RAM is strongly recommended.
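To see how much memory your VPS actually reports before picking a heap size:
free -h   # plan the heap at roughly half the total shown here, never above 32GB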
RamNode VPS Recommendations
- Premium KVM VPS (4GB+ RAM) for development/testing
- Premium KVM VPS (8GB+ RAM) for production single-node deployments
- Multiple VPS instances across locations for high-availability clusters
Installation
Step 1: System Preparation
Connect to your RamNode VPS via SSH and update the system:
ssh root@your-vps-ip
apt update && apt upgrade -y
apt install -y apt-transport-https gnupg2 curl
Step 2: Install Java (OpenJDK)
Elasticsearch 8.x bundles its own JDK, but installing OpenJDK provides flexibility:
apt install -y openjdk-17-jdk
java -version
Step 3: Add Elasticsearch Repository
Import the Elasticsearch GPG key and add the official repository:
curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | \
gpg --dearmor -o /usr/share/keyrings/elastic.gpg
echo "deb [signed-by=/usr/share/keyrings/elastic.gpg] \
https://artifacts.elastic.co/packages/8.x/apt stable main" | \
tee /etc/apt/sources.list.d/elastic-8.x.list
Step 4: Install Elasticsearch
Install Elasticsearch and save the auto-generated security credentials:
apt update
apt install -y elasticsearch
Important: During installation, Elasticsearch 8.x automatically generates a password for the 'elastic' superuser and an enrollment token. Copy and securely store these credentials immediately - they are only displayed once!
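If you plan to follow the curl examples later in this guide, one convenient pattern (a sketch; the file path is arbitrary) is to keep the generated password in a root-only file and export it as a shell variable:
install -m 600 /dev/null /root/.es_credentials
echo 'export ELASTIC_PASSWORD=YOUR_GENERATED_PASSWORD' > /root/.es_credentials
source /root/.es_credentials
# Later commands can then use: curl -k -u "elastic:$ELASTIC_PASSWORD" https://localhost:9200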
Configuration
Main Configuration File
Edit the primary Elasticsearch configuration:
nano /etc/elasticsearch/elasticsearch.yml
Apply these essential settings for a single-node deployment:
# Cluster Settings
cluster.name: ramnode-elasticsearch
node.name: es-node-1
# Network Settings
network.host: 0.0.0.0   # listens on all interfaces; restrict access with the firewall rules below
http.port: 9200
# Discovery (single-node mode)
discovery.type: single-node
# Paths
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
# Security (enabled by default in 8.x)
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# TLS/SSL for HTTP
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# TLS/SSL for Transport
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
JVM Heap Configuration
Configure JVM heap size based on your VPS RAM (50% of total, max 32GB):
nano /etc/elasticsearch/jvm.options.d/heap.options
For a 4GB VPS:
-Xms2g
-Xmx2g
For an 8GB VPS:
-Xms4g
-Xmx4g
System Limits
Elasticsearch requires specific system limits. Create a limits configuration:
nano /etc/security/limits.conf
Add these lines:
elasticsearch soft nofile 65535
elasticsearch hard nofile 65535
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
Note: limits.conf applies to login sessions, so these entries mainly cover running Elasticsearch manually. The packaged systemd unit already raises the open-file limit for the service; if you enable bootstrap.memory_lock, raise memlock via a systemd override rather than limits.conf.
Configure virtual memory:
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p
Security Hardening
Firewall Configuration
Configure UFW to restrict access to Elasticsearch:
# Allow SSH first so you don't lock yourself out
ufw allow 22/tcp
# Allow Elasticsearch from specific IPs only
ufw allow from YOUR_APP_SERVER_IP to any port 9200 proto tcp
# For cluster nodes (if applicable)
ufw allow from CLUSTER_NODE_IP to any port 9300 proto tcp
# Enable UFW
ufw enable
# Check status
ufw status verbose
| Port | Protocol | Purpose |
|---|---|---|
| 9200 | TCP | REST API / HTTP interface |
| 9300 | TCP | Node-to-node communication |
| 22 | TCP | SSH (admin access only) |
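A quick way to confirm the rules behave as intended (assuming netcat is available on the client machines):
# From the allowed application server, the port should accept connections:
nc -zv YOUR_VPS_IP 9200
# From any other host, the request should time out or be refused:
curl -m 5 -k https://YOUR_VPS_IP:9200 || echo "blocked, as expected"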
Security Best Practice: Never expose port 9200 to the public internet without authentication. Use a reverse proxy (nginx) with additional authentication, or restrict access to specific application server IPs via firewall rules.
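If you choose the reverse-proxy route, a minimal nginx sketch looks like this; the hostname, certificate paths, and basic-auth user are placeholders to replace, and nginx itself is assumed to be installed from the Ubuntu repositories:
apt install -y nginx apache2-utils
htpasswd -c /etc/nginx/.es_htpasswd es_proxy_user   # extra basic-auth layer in front of Elasticsearch
cat > /etc/nginx/sites-available/elasticsearch <<'EOF'
server {
    listen 443 ssl;
    server_name search.example.com;
    ssl_certificate     /etc/ssl/certs/your-cert.pem;
    ssl_certificate_key /etc/ssl/private/your-key.pem;
    location / {
        auth_basic "Elasticsearch";
        auth_basic_user_file /etc/nginx/.es_htpasswd;
        proxy_pass https://127.0.0.1:9200;
        proxy_ssl_verify off;   # Elasticsearch 8.x ships a self-signed certificate by default
    }
}
EOF
ln -s /etc/nginx/sites-available/elasticsearch /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx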
Reset Elastic User Password
If you need to reset the elastic superuser password:
/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
Create Application User
Create a dedicated user for your application with limited privileges:
# Create a role with specific index permissions
curl -k -X POST 'https://localhost:9200/_security/role/app_writer' \
-u elastic:YOUR_PASSWORD \
-H 'Content-Type: application/json' \
-d '{
"indices": [{
"names": ["app-*"],
"privileges": ["read", "write", "create_index"]
}]
}'
# Create user with that role
curl -k -X POST 'https://localhost:9200/_security/user/app_user' \
-u elastic:YOUR_PASSWORD \
-H 'Content-Type: application/json' \
-d '{
"password": "secure_app_password",
"roles": ["app_writer"]
}'
Starting Elasticsearch
Enable and Start Service
Configure Elasticsearch to start automatically and launch the service:
systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch
Verify Installation
Check that Elasticsearch is running and responding:
# Check service status
systemctl status elasticsearch
# Test API response (with SSL)
curl -k -u elastic:YOUR_PASSWORD https://localhost:9200
# Check cluster health
curl -k -u elastic:YOUR_PASSWORD 'https://localhost:9200/_cluster/health?pretty'
Expected healthy response:
{
"cluster_name" : "ramnode-elasticsearch",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 0,
"active_shards" : 0,
...
}
Basic Operations
Index Management
Create an index with custom mappings:
curl -k -X PUT 'https://localhost:9200/products' \
-u elastic:YOUR_PASSWORD \
-H 'Content-Type: application/json' \
-d '{
"settings": {
"number_of_shards": 1,
"number_of_replicas": 0
},
"mappings": {
"properties": {
"name": { "type": "text" },
"price": { "type": "float" },
"category": { "type": "keyword" },
"created_at": { "type": "date" }
}
}
}'
Document Operations
Index a document:
curl -k -X POST 'https://localhost:9200/products/_doc' \
-u elastic:YOUR_PASSWORD \
-H 'Content-Type: application/json' \
-d '{
"name": "RamNode VPS",
"price": 5.00,
"category": "hosting",
"created_at": "2025-01-01"
}'
Search documents:
curl -k -X GET 'https://localhost:9200/products/_search' \
-u elastic:YOUR_PASSWORD \
-H 'Content-Type: application/json' \
-d '{
"query": {
"match": { "name": "VPS" }
}
}'
Performance Tuning
Index Settings for Write-Heavy Workloads
Optimize for log ingestion or high write throughput:
curl -k -X PUT 'https://localhost:9200/logs/_settings' \
-u elastic:YOUR_PASSWORD \
-H 'Content-Type: application/json' \
-d '{
"index": {
"refresh_interval": "30s",
"number_of_replicas": 0
}
}'
Memory and Cache Settings
For search-heavy workloads, add to elasticsearch.yml:
# Field data cache (for aggregations)
indices.fielddata.cache.size: 20%
# Query cache
indices.queries.cache.size: 10%
# Request circuit breaker
indices.breaker.request.limit: 40%
Disk I/O Optimization
RamNode NVMe storage provides excellent I/O, but you can further optimize:
- Use SSDs/NVMe for data paths (default on RamNode Premium VPS)
- Avoid swap usage: set vm.swappiness=1 in sysctl.conf
- Schedule force merges during low-traffic periods for read-heavy indices (see the examples after this list)
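A sketch of the last two items (the index name is illustrative):
# Keep the kernel from swapping aggressively
echo 'vm.swappiness=1' >> /etc/sysctl.conf && sysctl -p
# Force merge an index that is no longer being written to, during off-peak hours
curl -k -X POST 'https://localhost:9200/logs-2025.01/_forcemerge?max_num_segments=1' \
  -u elastic:YOUR_PASSWORD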
Monitoring & Maintenance
Health Monitoring Script
Create a simple health check script:
#!/bin/bash
# /opt/scripts/es-health-check.sh
# Requires jq (apt install -y jq)
HEALTH=$(curl -sk -u elastic:YOUR_PASSWORD \
  https://localhost:9200/_cluster/health | jq -r '.status')
if [ "$HEALTH" != "green" ]; then
  echo "WARNING: Elasticsearch cluster status is $HEALTH"
  # Add alerting logic here
fi
Key Metrics to Monitor
- Cluster health status (green/yellow/red)
- JVM heap usage (keep under 75%; see the _cat examples below)
- Disk usage (alert at 80%, critical at 90%)
- Search latency and indexing rate
- CPU utilization during queries
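The _cat APIs give a quick, human-readable view of most of these metrics:
# Heap, RAM, and CPU per node
curl -k -u elastic:YOUR_PASSWORD 'https://localhost:9200/_cat/nodes?v&h=name,heap.percent,ram.percent,cpu'
# Disk usage and shard count per node
curl -k -u elastic:YOUR_PASSWORD 'https://localhost:9200/_cat/allocation?v'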
Useful Monitoring Endpoints
# Cluster stats
curl -k -u elastic:YOUR_PASSWORD https://localhost:9200/_cluster/stats
# Node stats
curl -k -u elastic:YOUR_PASSWORD https://localhost:9200/_nodes/stats
# Index stats
curl -k -u elastic:YOUR_PASSWORD https://localhost:9200/_stats
# Pending tasks
curl -k -u elastic:YOUR_PASSWORD https://localhost:9200/_cluster/pending_tasks
Backup with Snapshots
Configure a snapshot repository for backups:
# Add to elasticsearch.yml
# path.repo: ["/var/backups/elasticsearch"]
# Create backup directory
mkdir -p /var/backups/elasticsearch
chown elasticsearch:elasticsearch /var/backups/elasticsearch
# Restart Elasticsearch
systemctl restart elasticsearch
# Register repository
curl -k -X PUT 'https://localhost:9200/_snapshot/backup' \
-u elastic:YOUR_PASSWORD \
-H 'Content-Type: application/json' \
-d '{
"type": "fs",
"settings": { "location": "/var/backups/elasticsearch" }
}'
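# Optional: verify the repository is registered and writable before taking snapshots
curl -k -X POST 'https://localhost:9200/_snapshot/backup/_verify' \
  -u elastic:YOUR_PASSWORD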
# Create snapshot
curl -k -X PUT 'https://localhost:9200/_snapshot/backup/snapshot_1' \
-u elastic:YOUR_PASSWORD
Troubleshooting
Elasticsearch won't start
Check logs for specific errors:
journalctl -u elasticsearch -f
cat /var/log/elasticsearch/ramnode-elasticsearch.log
Out of memory errors
Reduce heap size or upgrade VPS RAM:
# Check current memory usage
curl -k -u elastic:YOUR_PASSWORD \
'https://localhost:9200/_nodes/stats/jvm?pretty' | grep heap
Max virtual memory too low
Apply the vm.max_map_count setting:
sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
Bootstrap checks failed
Ensure all system limits are set:
# Verify limits
ulimit -n   # Should be 65535+
ulimit -l   # Should be unlimited
# For the running service, check its effective limits directly:
systemctl show elasticsearch | grep -E 'LimitNOFILE|LimitMEMLOCK'
Cluster status yellow/red
Check shard allocation:
curl -k -u elastic:YOUR_PASSWORD \
'https://localhost:9200/_cluster/allocation/explain?pretty'
Next Steps
- Install Kibana for visualization and management UI
- Set up Logstash or Beats for data ingestion pipelines
- Configure Index Lifecycle Management (ILM) for automatic data retention (see the example after this list)
- Deploy additional nodes for high availability clustering
- Implement automated snapshot backups to remote storage
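As a starting point for the ILM item, here is a minimal policy sketch; the policy name, rollover thresholds, and 30-day retention are illustrative values, not recommendations:
curl -k -X PUT 'https://localhost:9200/_ilm/policy/logs-30d-retention' \
  -u elastic:YOUR_PASSWORD \
  -H 'Content-Type: application/json' \
  -d '{
    "policy": {
      "phases": {
        "hot": {
          "actions": { "rollover": { "max_age": "7d", "max_primary_shard_size": "25gb" } }
        },
        "delete": { "min_age": "30d", "actions": { "delete": {} } }
      }
    }
  }'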
Deployment Complete!
Your Elasticsearch instance is now deployed and secured on RamNode. Start indexing data and building powerful search and analytics applications!
