Part 4 of 7

    Introduction to Kubernetes Concepts

    Understand what Kubernetes is, when you actually need it, and the core concepts that power container orchestration at scale.


    Docker Compose handles multi-container applications on a single server beautifully. But what happens when one server isn't enough? When you need high availability, automatic failover, or the ability to scale beyond what a single VPS can handle?

    That's where Kubernetes comes in. This guide explains what Kubernetes is, when you actually need it, and breaks down its core concepts so you can decide whether it's right for your use case.

    1. What is Kubernetes?

    Kubernetes (often abbreviated as K8s) is a container orchestration platform. Where Docker runs containers and Compose coordinates them on one host, Kubernetes coordinates containers across multiple machines, handling deployment, scaling, networking, and self-healing automatically.

    Originally developed by Google and now maintained by the Cloud Native Computing Foundation, Kubernetes has become the industry standard for running containers at scale.

    What Kubernetes provides:

    • Multi-node orchestration: Run containers across a cluster of machines
    • Service discovery: Containers find each other automatically, even as they move between nodes
    • Load balancing: Distribute traffic across container replicas
    • Auto-scaling: Add or remove container instances based on demand
    • Self-healing: Automatically restart failed containers and replace unhealthy nodes
    • Rolling updates: Deploy new versions with zero downtime
    • Storage orchestration: Automatically mount storage systems to containers
    • Secret management: Store and manage sensitive information securely

    2. When Do You Actually Need Kubernetes?

    Kubernetes is powerful but complex. Before diving in, honestly assess whether you need it.

    You probably DON'T need K8s if:
    • Running a few containers on a single VPS
    • Traffic fits comfortably on one server
    • Brief downtime during updates is acceptable
    • You're a solo developer or small team
    • You want the simplest possible setup

    You might need K8s if:
    • You need high availability (no single point of failure)
    • Running across multiple servers
    • Need automatic scaling based on load
    • Want zero-downtime deployments
    • Deploying many microservices
    • Learning K8s for career development

    The honest middle ground: Many teams adopt Kubernetes because it's trendy, not because they need it. Running K8s adds operational overhead—more things to monitor, secure, and troubleshoot. If a 2GB VPS running Docker Compose handles your workload, adding Kubernetes doesn't make it better; it makes it more complicated.

    3. Kubernetes Architecture

    Kubernetes runs as a cluster: a set of machines working together. The cluster has two types of nodes.

    Control Plane (Master)

    The control plane manages the cluster. Components include:

    • API Server: The front door to Kubernetes. All commands go through here.
    • etcd: A distributed key-value store holding all cluster state
    • Scheduler: Decides which node should run new containers
    • Controller Manager: Runs background processes that maintain desired state

    Worker Nodes

    Worker nodes run your actual containers. Each worker runs:

    • kubelet: An agent that talks to the control plane and manages containers on that node
    • Container runtime: Usually containerd or CRI-O; actually runs the containers
    • kube-proxy: Handles networking, routing traffic to the right containers

    Cluster architecture
    ┌─────────────────────────────────────────────────────────────┐
    │                      Control Plane                          │
    │  ┌──────────┐  ┌──────┐  ┌───────────┐  ┌────────────────┐ │
    │  │API Server│  │ etcd │  │ Scheduler │  │Controller Mgr  │ │
    │  └──────────┘  └──────┘  └───────────┘  └────────────────┘ │
    └─────────────────────────────────────────────────────────────┘
                                  │
                  ┌───────────────┼───────────────┐
                  ▼               ▼               ▼
    ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
    │   Worker Node   │ │   Worker Node   │ │   Worker Node   │
    │  ┌───────────┐  │ │  ┌───────────┐  │ │  ┌───────────┐  │
    │  │  kubelet  │  │ │  │  kubelet  │  │ │  │  kubelet  │  │
    │  ├───────────┤  │ │  ├───────────┤  │ │  ├───────────┤  │
    │  │kube-proxy │  │ │  │kube-proxy │  │ │  │kube-proxy │  │
    │  ├───────────┤  │ │  ├───────────┤  │ │  ├───────────┤  │
    │  │ Pods      │  │ │  │ Pods      │  │ │  │ Pods      │  │
    │  └───────────┘  │ │  └───────────┘  │ │  └───────────┘  │
    └─────────────────┘ └─────────────────┘ └─────────────────┘

    4. Core Kubernetes Concepts

    Kubernetes introduces several abstractions. Understanding these is essential before deploying anything.

    Pods

    A Pod is the smallest deployable unit—a wrapper around one or more containers that share storage and network. Most Pods contain a single container.

    Pod manifest
    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app
    spec:
      containers:
        - name: app
          image: nginx:1.25
          ports:
            - containerPort: 80

    Pods are ephemeral—they can be killed, moved, or recreated at any time. You almost never create Pods directly.

    Deployments

    A Deployment manages a set of identical Pods, ensuring the desired number are always running.

    Deployment manifest
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: app
              image: nginx:1.25
              ports:
                - containerPort: 80
              resources:
                limits:
                  memory: "128Mi"
                  cpu: "250m"

    This creates 3 Pods. If one fails, Kubernetes automatically creates a replacement.
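
    Saving this manifest (as deployment.yaml, a filename assumed here) lets you apply it and watch Kubernetes converge on the desired state:

```shell
# Create or update the Deployment from the manifest
kubectl apply -f deployment.yaml

# Wait until all 3 replicas are rolled out and ready
kubectl rollout status deployment/my-app

# List the Pods the Deployment created (selected by label)
kubectl get pods -l app=my-app
```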

    Services

    A Service provides a stable network endpoint that routes traffic to the right Pods.

    Service manifest
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app-service
    spec:
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 80
      type: ClusterIP

    Service types:

    • ClusterIP (default): Only accessible within the cluster
    • NodePort: Exposes on each node's IP at a static port
    • LoadBalancer: Provisions an external load balancer
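
    Inside the cluster, a Service is also reachable by DNS at <name>.<namespace>.svc.cluster.local. One way to check it (the throwaway Pod and curl image here are just an example):

```shell
# Run a one-off Pod and request the Service by its cluster DNS name;
# --rm deletes the Pod once the command exits
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -s http://my-app-service.default.svc.cluster.local
```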

    ConfigMaps and Secrets

    A ConfigMap holds non-sensitive configuration; a Secret holds sensitive values. Note that Secret data is base64-encoded, not encrypted, by default.

    ConfigMap and Secret
    # ConfigMap for non-sensitive config
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config
    data:
      DATABASE_HOST: "postgres-service"
      LOG_LEVEL: "info"
    ---
    # Secret for sensitive data
    apiVersion: v1
    kind: Secret
    metadata:
      name: app-secrets
    type: Opaque
    data:
      DATABASE_PASSWORD: cGFzc3dvcmQxMjM=  # base64 encoded
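
    Declaring these objects does nothing on its own; a Pod has to reference them. One common pattern, sketched here against the Deployment from earlier, is loading both as environment variables with envFrom:

```yaml
# Container spec fragment: every key in the ConfigMap and Secret
# becomes an environment variable in the container
containers:
  - name: app
    image: nginx:1.25
    envFrom:
      - configMapRef:
          name: app-config
      - secretRef:
          name: app-secrets
```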

    Persistent Volumes

    Pods are ephemeral, but data often isn't. PersistentVolumeClaims decouple storage from Pods.

    PersistentVolumeClaim
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: database-storage
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: standard
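
    A claim only requests storage; a Pod mounts it. A minimal sketch, assuming a Postgres container:

```yaml
# Pod spec fragment: mount the claim at Postgres's data directory
spec:
  containers:
    - name: postgres
      image: postgres:16
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: database-storage
```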

    Ingress

    Ingress manages external HTTP/HTTPS traffic with URL-based routing and SSL termination.

    Ingress manifest
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-ingress
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt
    spec:
      tls:
        - hosts:
            - app.example.com
          secretName: app-tls
      rules:
        - host: app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app-service
                    port:
                      number: 80
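
    This assumes an Ingress controller is running and that cert-manager provides the letsencrypt ClusterIssuer named in the annotation; neither ships with a bare cluster. Once applied, you can check that the Ingress was admitted and that routing works (replace <node-ip> with a real node address):

```shell
# Confirm the Ingress exists and has been assigned an address
kubectl get ingress my-ingress

# Exercise the routing rule by sending the expected Host header
curl -H "Host: app.example.com" http://<node-ip>/
```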

    5. Comparing Compose and Kubernetes

    Aspect              Docker Compose       Kubernetes
    Scope               Single host          Multi-host cluster
    Configuration       docker-compose.yml   Multiple YAML manifests
    Learning curve      Low                  Steep
    High availability   Manual               Built-in
    Scaling             Manual               Automatic (HPA)
    Rolling updates     Basic                Sophisticated
    Resource overhead   Minimal              Significant
    Best for            Small deployments    Large-scale production

    Translating Concepts

    Compose                Kubernetes
    Service (in compose)   Deployment + Service
    ports:                 Service + Ingress
    volumes: (named)       PersistentVolumeClaim
    environment:           ConfigMap + Secret
    restart: always        Default behavior (Deployment)

    6. Kubernetes Distributions

    "Kubernetes" isn't one thing you install—it's a specification with many implementations.

    For Learning and Development

    minikube: Single-node cluster in a VM. Great for learning.

    minikube
    minikube start
    kubectl get nodes

    kind (Kubernetes in Docker): Runs cluster nodes as Docker containers. Fast to create/destroy.

    kind
    kind create cluster
    kubectl get nodes

    For Production (Self-Hosted)

    k3s: Lightweight distribution perfect for VPS. Uses ~512MB RAM. We'll use this in Part 5.

    k3s
    curl -sfL https://get.k3s.io | sh -
    kubectl get nodes
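
    k3s bundles kubectl and writes its kubeconfig to /etc/rancher/k3s/k3s.yaml, so on a fresh install you can use the bundled wrapper or point a standalone kubectl at that file:

```shell
# The bundled wrapper picks up the right kubeconfig automatically
sudo k3s kubectl get nodes

# Or reuse the generated kubeconfig with a standalone kubectl
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
```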

    Other options:

    • k0s: Single binary, zero dependencies
    • kubeadm: Official bootstrapping tool, more complex but full-featured

    Managed Kubernetes

    Cloud providers handle the control plane:

    • Google Kubernetes Engine (GKE)
    • Amazon Elastic Kubernetes Service (EKS)
    • Azure Kubernetes Service (AKS)

    7. kubectl: The Kubernetes CLI

    kubectl is how you interact with Kubernetes. It communicates with the API server to manage resources.

    Cluster Info

    Cluster commands
    kubectl cluster-info
    kubectl get nodes

    Working with Resources

    Resource commands
    # List resources
    kubectl get pods
    kubectl get services
    kubectl get deployments
    
    # Detailed info
    kubectl describe pod my-pod
    kubectl describe service my-service
    
    # Create/apply resources
    kubectl apply -f deployment.yaml
    kubectl apply -f .  # Apply all YAML in directory
    
    # Delete resources
    kubectl delete pod my-pod
    kubectl delete -f deployment.yaml

    Debugging

    Debug commands
    # View logs
    kubectl logs my-pod
    kubectl logs -f my-pod  # Follow
    kubectl logs my-pod -c container-name  # Specific container
    
    # Execute commands
    kubectl exec -it my-pod -- /bin/bash
    
    # Port forward for local access
    kubectl port-forward my-pod 8080:80
    kubectl port-forward service/my-service 8080:80

    Scaling

    Scale deployment
    kubectl scale deployment my-app --replicas=5
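
    Manual scaling works for experiments; the automatic scaling mentioned earlier comes from a HorizontalPodAutoscaler. A minimal sketch, assuming the metrics server is installed:

```shell
# Keep between 2 and 10 replicas, targeting 80% average CPU utilization
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80

# Inspect the autoscaler's current target and replica count
kubectl get hpa my-app
```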

    Namespaces

    Namespace commands
    # List namespaces
    kubectl get namespaces
    
    # Resources in specific namespace
    kubectl get pods -n production
    
    # Set default namespace
    kubectl config set-context --current --namespace=production
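
    The commands above assume a production namespace already exists; creating one takes a single command:

```shell
# Namespaces are cheap; one per environment or team is common
kubectl create namespace production
```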

    8. Resource Requirements

    Kubernetes itself needs resources before your applications even run.

    Setup                        CPU            RAM        Disk
    Learning (single-node k3s)   1 core         1GB        10GB
    Production (3-node cluster)  2 cores each   4GB each   20GB each

    Realistic Self-Hosted Setup

    Role            Instance Size   Count   Purpose
    Control plane   2GB RAM         1-3     Runs k3s server
    Worker          4GB+ RAM        2+      Runs applications

    For a budget-friendly start, a single 4GB VPS running k3s handles both control plane and workloads. Scale out as needed.

    What's Next

    You now understand Kubernetes architecture and core concepts: Pods, Deployments, Services, ConfigMaps, Secrets, Volumes, and Ingress. You know when Kubernetes makes sense and when Docker Compose is the better choice. In Part 5, we'll get hands-on: deploying a lightweight Kubernetes distribution (k3s) on RamNode VPS instances, setting up a functional cluster, configuring Ingress with automatic SSL, and deploying your first real application.