Theory is useful, but you learn Kubernetes by running it. This guide walks through deploying a production-capable Kubernetes cluster on RamNode VPS instances using k3s—a lightweight distribution that runs well on modest hardware without sacrificing functionality.
We'll cover single-node setups for getting started, multi-node clusters for high availability, ingress configuration for routing traffic, and automatic SSL certificates with Let's Encrypt.
Why k3s?
k3s is a certified Kubernetes distribution designed for resource-constrained environments. Rancher Labs built it for edge computing, IoT, and situations where you can't dedicate 4GB of RAM just to run the control plane.
The "k3s" name is a play on k8s (Kubernetes)—it's "half the size" (5 letters vs 10).
What makes k3s ideal for VPS deployments:
- Single binary under 100MB
- Control plane runs in ~512MB RAM
- Bundles essential components (containerd, Traefik, CoreDNS, local-path storage)
- Fully compatible with standard Kubernetes APIs
- Easy single-command installation
- Production-ready with proper configuration
Planning Your Cluster
Before provisioning, decide on your architecture.
Single-Node Setup
Best for learning, development, or running lightweight production workloads where high availability isn't critical.
| Component | Specs | Monthly Cost |
|---|---|---|
| Server (control plane + worker) | 2GB RAM, 1 vCPU, 30GB SSD | ~$8 |
Multi-Node High-Availability Setup
For production workloads requiring redundancy.
| Component | Specs | Count | Purpose |
|---|---|---|---|
| Server nodes | 2GB RAM, 1 vCPU | 3 | Control plane (HA) |
| Agent nodes | 4GB RAM, 2 vCPU | 2+ | Run workloads |
Budget-Conscious Production Setup
A practical middle ground for self-hosters: start with one 4GB node and add agents as you grow.
Prerequisites
For each VPS:
- Ubuntu 22.04 or 24.04 LTS (Debian 11/12 also works)
- SSH access with sudo privileges
- A domain name pointed to your server IPs (for Ingress/SSL)
- Firewall access to required ports
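Before going further, it's worth confirming your DNS record actually resolves; a quick check (the hostname is a placeholder):
```bash
# Should print your server's public IP
dig +short hello.yourdomain.com
```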
Required Ports
For single-node:
- 80/443: HTTP/HTTPS traffic
- 6443: Kubernetes API (only needed if you manage the cluster remotely)
For multi-node clusters:
- 6443: Kubernetes API
- 2379-2380: etcd (server nodes only)
- 10250: kubelet
- 8472/UDP: Flannel VXLAN
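As a concrete example, the multi-node list might translate into ufw rules like the following. This is a sketch assuming ufw is your firewall and NODE_IP is a placeholder; repeat the restricted rules for each node's IP rather than opening cluster ports to the world:
```bash
# Public-facing traffic
sudo ufw allow 80,443/tcp

# Cluster-internal ports: restrict to your other nodes' IPs
sudo ufw allow from NODE_IP to any port 6443 proto tcp       # Kubernetes API
sudo ufw allow from NODE_IP to any port 2379:2380 proto tcp  # etcd (server nodes only)
sudo ufw allow from NODE_IP to any port 10250 proto tcp      # kubelet
sudo ufw allow from NODE_IP to any port 8472 proto udp       # Flannel VXLAN
```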
Single-Node Deployment
Let's start with a single-node cluster—the fastest path to a working Kubernetes environment.
Step 1: Prepare the Server
```bash
sudo apt update && sudo apt upgrade -y
sudo hostnamectl set-hostname k3s-server
```
Step 2: Install k3s
The official installation script handles everything:
```bash
curl -sfL https://get.k3s.io | sh -
```
That's it. Within a minute, you have a running Kubernetes cluster. Verify:
```bash
sudo kubectl get nodes
```
```
NAME         STATUS   ROLES                  AGE   VERSION
k3s-server   Ready    control-plane,master   30s   v1.28.5+k3s1
```
Step 3: Configure kubectl Access
By default, kubectl requires sudo. To use it as a regular user:
```bash
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
chmod 600 ~/.kube/config
```
Now you can run kubectl without sudo:
```bash
kubectl get nodes
kubectl get pods -A
```
Multi-Node Cluster Deployment
For high availability or additional capacity, add more nodes.
```
┌─────────────────────────────────────────────────────────────┐
│                     Load Balancer / DNS                     │
│                 (points to all server IPs)                  │
└─────────────────────────────────────────────────────────────┘
                             │
       ┌─────────────────────┼─────────────────────┐
       ▼                     ▼                     ▼
┌──────────────┐      ┌──────────────┐      ┌──────────────┐
│   Server 1   │◄────►│   Server 2   │◄────►│   Server 3   │
│  k3s server  │      │  k3s server  │      │  k3s server  │
│     etcd     │      │     etcd     │      │     etcd     │
└──────────────┘      └──────────────┘      └──────────────┘
       │                     │                     │
       └─────────────────────┼─────────────────────┘
                             │
       ┌─────────────────────┼─────────────────────┐
       ▼                     ▼                     ▼
┌──────────────┐      ┌──────────────┐      ┌──────────────┐
│   Agent 1    │      │   Agent 2    │      │   Agent N    │
│  k3s agent   │      │  k3s agent   │      │  k3s agent   │
│  workloads   │      │  workloads   │      │  workloads   │
└──────────────┘      └──────────────┘      └──────────────┘
```
Setting Up the First Server Node
```bash
# Set hostname
sudo hostnamectl set-hostname k3s-server-1

# Install k3s with cluster-init to enable embedded etcd
curl -sfL https://get.k3s.io | sh -s - server \
  --cluster-init \
  --tls-san=your-load-balancer-ip-or-domain \
  --tls-san=k3s-server-1-public-ip
```
Retrieve the node token (needed to join other nodes):
```bash
sudo cat /var/lib/rancher/k3s/server/node-token
```
Adding Additional Server Nodes
```bash
# Set hostname
sudo hostnamectl set-hostname k3s-server-2   # or k3s-server-3

# Join the cluster as a server
curl -sfL https://get.k3s.io | sh -s - server \
  --server https://k3s-server-1-ip:6443 \
  --token YOUR_NODE_TOKEN \
  --tls-san=your-load-balancer-ip-or-domain \
  --tls-san=this-server-public-ip
```
Adding Agent (Worker) Nodes
Agent nodes only run workloads, not the control plane:
```bash
# Set hostname
sudo hostnamectl set-hostname k3s-agent-1

# Join as an agent
curl -sfL https://get.k3s.io | sh -s - agent \
  --server https://k3s-server-1-ip:6443 \
  --token YOUR_NODE_TOKEN
```
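Once the agents have joined, a quick sanity check from any server node should list every server and agent as Ready:
```bash
# Servers report roles like control-plane,etcd,master; agents report <none>
sudo kubectl get nodes -o wide
```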
Configuring Remote kubectl Access
To manage your cluster from your local machine:
On the Server
```bash
sudo cat /etc/rancher/k3s/k3s.yaml
```
On Your Local Machine
Save the output to ~/.kube/config and replace 127.0.0.1 with your server's public IP:
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64-cert>
    server: https://YOUR-SERVER-IP:6443  # Change this
  name: default
```
```bash
chmod 600 ~/.kube/config
kubectl get nodes
```
Firewall Considerations
```bash
# On the server, allow only your IP
sudo ufw allow from YOUR_LOCAL_IP to any port 6443

# Alternative: SSH tunnel from your local machine
ssh -L 6443:127.0.0.1:6443 user@your-server-ip
# Then kubectl connects to localhost:6443
```
Ingress Controller Setup
k3s includes Traefik as the default ingress controller. It's already running and ready to route traffic.
```bash
kubectl get pods -n kube-system | grep traefik
```
Basic Ingress Example
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello
        image: nginxdemos/hello
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  rules:
  - host: hello.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 80
```
```bash
kubectl apply -f test-app.yaml
# Point hello.yourdomain.com to your server's IP, then visit it
```
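If DNS hasn't propagated yet, you can still exercise the Ingress by sending the expected Host header directly to the server (substitute your server's public IP):
```bash
# Traefik routes on the Host header, so this works before DNS does
curl -H "Host: hello.yourdomain.com" http://YOUR-SERVER-IP/
```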
Using Nginx Ingress Instead of Traefik
```bash
# During k3s installation
curl -sfL https://get.k3s.io | sh -s - --disable traefik

# Install nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.5/deploy/static/provider/baremetal/deploy.yaml

# Expose via external IP
kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec": {"type": "LoadBalancer", "externalIPs": ["YOUR-SERVER-IP"]}}'
```
With nginx-ingress, remember to set ingressClassName: nginx in the spec of your Ingress resources so the controller picks them up.
Automatic SSL with cert-manager
cert-manager automates obtaining and renewing TLS certificates from Let's Encrypt.
Install cert-manager
```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.2/cert-manager.yaml

# Wait for pods to be ready
kubectl get pods -n cert-manager -w
```
Configure Let's Encrypt Issuer
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@example.com  # Change this
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: traefik  # or "nginx" if using nginx-ingress
```
```bash
kubectl apply -f letsencrypt-issuer.yaml
```
Tip: while testing, point a duplicate issuer at Let's Encrypt's staging endpoint (https://acme-staging-v02.api.letsencrypt.org/directory) to avoid hitting production rate limits.
Enable TLS on Ingress
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - hello.yourdomain.com
    secretName: hello-world-tls
  rules:
  - host: hello.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 80
```
cert-manager automatically detects the annotation, creates a Certificate resource, completes the HTTP-01 challenge, stores the certificate, and renews it before expiration.
```bash
kubectl get certificates
kubectl describe certificate hello-world-tls
```
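If a certificate sits in a pending state, the chain of resources cert-manager creates (Certificate, CertificateRequest, Order, Challenge) usually shows why; these resource types ship with cert-manager's CRDs:
```bash
# Errors typically surface in the events of the Order or Challenge
kubectl get certificaterequests,orders,challenges -A
kubectl describe challenge -A
```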
Persistent Storage
k3s includes local-path-provisioner, which creates storage on the node's filesystem. Good for single-node setups, but data is tied to a specific node.
Using Local-Path Storage
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
```
Data is stored in /var/lib/rancher/k3s/storage/ on the node.
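To put the claim to use, mount it in a pod. A minimal sketch (the pod name and mount path here are arbitrary):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test  # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data  # the volume appears here inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-data  # the PVC defined above
```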
Distributed Storage with Longhorn
For multi-node clusters, Longhorn provides replicated storage across nodes.
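One prerequisite worth noting: Longhorn expects open-iscsi running on every node. On Ubuntu/Debian, something like:
```bash
# Run on each node before deploying Longhorn
sudo apt install -y open-iscsi
sudo systemctl enable --now iscsid
```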
```bash
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml

# Wait for pods
kubectl get pods -n longhorn-system -w

# Remove default from local-path
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

# Set Longhorn as default
kubectl patch storageclass longhorn -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Access Longhorn UI
kubectl port-forward -n longhorn-system svc/longhorn-frontend 8080:80
```
Deploying a Real Application
Let's deploy WordPress with MySQL—a practical test of Deployments, Services, Ingress, persistent storage, and secrets.
Create Namespace and Secrets
```bash
kubectl create namespace wordpress

kubectl create secret generic mysql-secret \
  --from-literal=mysql-root-password=your-root-password \
  --from-literal=mysql-password=your-wp-password \
  -n wordpress
```
Deploy MySQL
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-root-password
        - name: MYSQL_DATABASE
          value: wordpress
        - name: MYSQL_USER
          value: wordpress
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-password
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
        resources:
          limits:
            memory: "512Mi"
            cpu: "500m"
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: wordpress
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: wordpress
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```
Deploy WordPress
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:6.4-php8.2-apache
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql
        - name: WORDPRESS_DB_USER
          value: wordpress
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-password
        - name: WORDPRESS_DB_NAME
          value: wordpress
        ports:
        - containerPort: 80
        volumeMounts:
        - name: wordpress-data
          mountPath: /var/www/html
        resources:
          limits:
            memory: "256Mi"
            cpu: "250m"
      volumes:
      - name: wordpress-data
        persistentVolumeClaim:
          claimName: wordpress-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  namespace: wordpress
spec:
  selector:
    app: wordpress
  ports:
  - port: 80
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pvc
  namespace: wordpress
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress
  namespace: wordpress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - blog.yourdomain.com
    secretName: wordpress-tls
  rules:
  - host: blog.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wordpress
            port:
              number: 80
```
Note: because wordpress-pvc is ReadWriteOnce, both WordPress replicas must land on the same node. On a multi-node cluster, either set replicas: 1 or use a storage class that supports ReadWriteMany.
Apply and Verify
```bash
kubectl apply -f mysql.yaml
kubectl apply -f wordpress.yaml

# Verify
kubectl get pods -n wordpress
kubectl get svc -n wordpress
kubectl get ingress -n wordpress
kubectl get certificate -n wordpress

# Watch logs if needed
kubectl logs -f deployment/wordpress -n wordpress
```
Maintenance Commands
Updating k3s
```bash
# Re-running the installer upgrades to the latest stable release
curl -sfL https://get.k3s.io | sh -

# Or pin a specific version via the documented INSTALL_K3S_VERSION variable:
# curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.28.5+k3s1 sh -
```
Backup etcd (HA clusters)
```bash
# On a server node
sudo k3s etcd-snapshot save --name my-backup
# Snapshots stored in /var/lib/rancher/k3s/server/db/snapshots/
```
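To restore, k3s supports a cluster reset from a snapshot. A sketch: stop k3s first, and substitute the actual snapshot filename (which includes the node name and a timestamp):
```bash
sudo systemctl stop k3s
sudo k3s server \
  --cluster-reset \
  --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/my-backup-SNAPSHOT  # hypothetical filename
sudo systemctl start k3s
```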
Draining a Node for Maintenance
```bash
# Prevent new pods, evict existing ones
kubectl drain node-name --ignore-daemonsets --delete-emptydir-data

# Perform maintenance...

# Allow scheduling again
kubectl uncordon node-name
```
Removing a Node
```bash
# From the control plane
kubectl delete node node-name

# On the node itself
sudo /usr/local/bin/k3s-uninstall.sh        # server
sudo /usr/local/bin/k3s-agent-uninstall.sh  # agent
```
What's Next
You now have a functional Kubernetes cluster running on RamNode. You've configured ingress routing, automatic SSL, persistent storage, and deployed a real multi-tier application.
In Part 6, we'll cover production operations: monitoring your cluster with Prometheus and Grafana, setting up alerts, implementing horizontal pod autoscaling, backup strategies, and disaster recovery planning.
This guide is part of the Docker & Kubernetes series.
