
K3s Kubernetes Install on Ubuntu 24.04: Lightweight Cluster 2026 Guide

🟡 Intermediate

Install K3s on Ubuntu 24.04 LTS — the lightweight Kubernetes for home labs, edge, and small production clusters. Single-node and multi-node setup, Helm, Traefik ingress, and sovereign deployment.

Noah Choi — Linux & Cloud Native Infrastructure Engineer

Reading time: 17 min · Build time: 25 min


Key Takeaways

  • K3s vs full Kubernetes: K3s is a CNCF-certified Kubernetes distribution with the same API and the same workload compatibility. Differences: a single ~100MB binary instead of gigabytes of components, SQLite instead of etcd for small clusters, Traefik bundled as the default ingress controller (upstream Kubernetes ships none), and some alpha/legacy features removed. Everything that runs on standard K8s runs on K3s.
  • Minimum hardware: 512MB RAM and 1 CPU for a single-node K3s server. Recommended for production workloads: 2 CPU, 2GB RAM per node.
  • What K3s includes out of the box: Kubernetes API server + controller manager + scheduler (control plane), containerd (container runtime), Flannel (CNI), CoreDNS (DNS), Traefik (ingress), Klipper (load balancer for bare metal).
  • Sovereign Kubernetes: You own the cluster. Certificates are generated locally. No cloud control plane, no vendor lock-in, no per-node licensing. Runs on any Linux machine.

Introduction: Why K3s in 2026?

Direct Answer: How do I install K3s on Ubuntu 24.04 LTS in 2026?

To install K3s on Ubuntu 24.04 LTS, run: curl -sfL https://get.k3s.io | sh -. This installs K3s as a systemd service, starts the control plane, and configures kubectl. Verify with sudo k3s kubectl get nodes — you should see your node in Ready state within 60 seconds. To use kubectl without sudo, copy the config: mkdir -p ~/.kube && sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config && sudo chown $USER:$USER ~/.kube/config. Deploy your first application with kubectl create deployment nginx --image=nginx:alpine and expose it with kubectl expose deployment nginx --port=80 --type=NodePort. K3s v1.32 on Ubuntu 24.04 installs in under 30 seconds, uses approximately 400MB RAM at idle, and supports all standard Kubernetes workloads including Helm charts, custom operators, and GPU workloads via the NVIDIA device plugin.

“K3s made Kubernetes accessible. Before K3s, running Kubernetes on a VPS meant choosing between pain (kubeadm) and cloud lock-in (EKS/GKE/AKS). K3s gave you a third option: just run it.”


Prerequisites

# Verify Ubuntu 24.04
lsb_release -a | grep "Release\|Codename"

Expected output:

Release:        24.04
Codename:       noble

# Disable swap (Kubernetes requirement)
sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab   # covers both swapfile and swap partition entries

# Verify swap is off
free -h | grep Swap

Expected output:

Swap:             0B          0B          0B

# Set required sysctl parameters
sudo tee /etc/sysctl.d/k8s.conf << 'EOF'
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
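Before running the installer, it's worth confirming the box meets the minimums quoted in the Key Takeaways. A quick preflight sketch (the 1 CPU / 512MB thresholds come from those figures; adjust for your workloads):

```shell
# Preflight check: compare this host against the K3s single-node minimums
cpus=$(nproc)
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
echo "CPUs: ${cpus} (minimum 1)"
echo "RAM:  ${mem_mb}MB (minimum 512MB, 2048MB recommended)"
if [ "$cpus" -ge 1 ] && [ "$mem_mb" -ge 512 ]; then
  echo "Preflight: OK"
else
  echo "Preflight: below minimums"
fi
```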

Step 1: Install K3s (Single Node)

# Install K3s — the entire control plane in one command
curl -sfL https://get.k3s.io | sh -

Expected output:

[INFO]  Finding release for channel stable
[INFO]  Using v1.32.3+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.32.3+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.32.3+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing K3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/lib/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /usr/local/lib/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

Verify K3s is running:

sudo systemctl status k3s --no-pager | head -8

Expected output:

● k3s.service - Lightweight Kubernetes
     Loaded: loaded (/usr/local/lib/systemd/system/k3s.service; enabled; preset: enabled)
     Active: active (running) since Thu 2026-04-17 18:00:44 UTC; 23s ago

# Check node status (may take 30-60 seconds to become Ready)
sudo k3s kubectl get nodes

Expected output:

NAME              STATUS   ROLES                  AGE   VERSION
sovereign-server  Ready    control-plane,master   45s   v1.32.3+k3s1

Step 2: Configure kubectl

# Set up kubectl for your user (avoids typing sudo on every command)
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
chmod 600 ~/.kube/config
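
An alternative noted in the K3s docs is to skip the copy and point kubectl at the generated file directly (requires read access to it — e.g. installing with `--write-kubeconfig-mode 644`, or running as root):

```shell
# Per-shell alternative: use the K3s-generated kubeconfig in place
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
echo "kubectl will read: ${KUBECONFIG}"
```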

# Verify kubectl works without sudo
kubectl get nodes

Expected output:

NAME              STATUS   ROLES                  AGE   VERSION
sovereign-server  Ready    control-plane,master   2m    v1.32.3+k3s1

# View all system pods running in K3s
kubectl get pods --all-namespaces

Expected output:

NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-7b98449c4-qxd2p                  1/1     Running     0          3m
kube-system   helm-install-traefik-crd-6pzqp           0/1     Completed   1          3m
kube-system   helm-install-traefik-j7qh2               0/1     Completed   2          3m
kube-system   local-path-provisioner-595dcfc56f-wxbml  1/1     Running     0          3m
kube-system   metrics-server-cdcc87586-bcs6n           1/1     Running     0          3m
kube-system   svclb-traefik-5db5c-qp6r4                2/2     Running     0          2m
kube-system   traefik-d7c9c5778-p7mxh                  1/1     Running     0          2m

All system pods running. K3s includes CoreDNS, Traefik (ingress), local-path-provisioner (storage), and metrics-server.


Step 3: Deploy Your First Application

# Deploy Nginx
kubectl create deployment nginx \
  --image=nginx:alpine \
  --replicas=2

# Expose it as a service
kubectl expose deployment nginx \
  --port=80 \
  --target-port=80 \
  --type=ClusterIP

# Verify the deployment
kubectl get deployments
kubectl get pods
kubectl get services

Expected output:

NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/2     2            2           30s

NAME                     READY   STATUS    RESTARTS   AGE
nginx-7db9fccd9b-4k7pt   1/1     Running   0          30s
nginx-7db9fccd9b-9l2mx   1/1     Running   0          30s

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.43.0.1       <none>        443/TCP   5m
nginx        ClusterIP   10.43.182.211   <none>        80/TCP    15s

# Test connectivity from within the cluster
kubectl run test-pod --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://nginx | head -5

Expected output:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

Pod-to-pod networking is working via Flannel CNI.


Step 4: Install Helm

Helm is the Kubernetes package manager — think apt for Kubernetes applications.

# Install Helm 3
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Expected output:

Downloading https://get.helm.sh/helm-v3.16.4-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm

# Verify the install
helm version

Expected output:

version.BuildInfo{Version:"v3.16.4", GitCommit:"...", GoVersion:"go1.23.4"}

# Add popular Helm repositories
# (the legacy "stable" repo was archived in 2020 — add maintained repos instead)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

Deploy cert-manager for TLS certificates (installed here from its static manifest; a Helm chart is also available):

# Install cert-manager (handles Let's Encrypt certificates automatically)
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.0/cert-manager.yaml

# Wait for cert-manager pods
kubectl wait --namespace cert-manager \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/instance=cert-manager \
  --timeout=120s

kubectl get pods -n cert-manager

Expected output:

NAME                                      READY   STATUS    RESTARTS   AGE
cert-manager-64f8cfb888-4vz9r             1/1     Running   0          45s
cert-manager-cainjector-5b98d8766-9pnt6   1/1     Running   0          45s
cert-manager-webhook-7d7f5894bb-kz6g9     1/1     Running   0          45s
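cert-manager needs an issuer before it can mint certificates. A minimal ClusterIssuer sketch for Let's Encrypt with the HTTP-01 solver through Traefik — the email and the `letsencrypt-prod` names are placeholders to adapt:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com          # placeholder — use a real address
    privateKeySecretRef:
      name: letsencrypt-prod-key    # secret that stores the ACME account key
    solvers:
      - http01:
          ingress:
            class: traefik          # K3s ships Traefik as the ingress class
```

Save it as cluster-issuer.yaml, apply with kubectl apply -f cluster-issuer.yaml, and an Ingress can then request a certificate via the cert-manager.io/cluster-issuer annotation.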

Step 5: Ingress with Traefik

K3s ships Traefik as the default ingress controller. Configure it to route external traffic to your services.

# Create a Kubernetes manifest for a web application with Ingress
kubectl apply -f - << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nginx:alpine
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  namespace: default
spec:
  selector:
    app: webapp
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  namespace: default
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: webapp.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp
                port:
                  number: 80
EOF

# Verify ingress was created
kubectl get ingress

Expected output:

NAME     CLASS     HOSTS                   ADDRESS         PORTS   AGE
webapp   traefik   webapp.yourdomain.com   192.168.1.100   80      15s

# Test ingress routing (replace with your server IP)
curl -s -H "Host: webapp.yourdomain.com" http://YOUR_SERVER_IP | grep -o "<title>.*</title>"

Expected output:

<title>Welcome to nginx!</title>

Step 6: Persistent Storage

K3s includes local-path-provisioner for dynamic volume provisioning using host paths.

# Create a PersistentVolumeClaim
kubectl apply -f - << 'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
EOF

# Check it's bound
kubectl get pvc

Expected output:

NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-data   Bound    pvc-abc123-def456-ghi789-jkl012-mno345    5Gi        RWO            local-path     10s

# Mount the PVC in a pod
kubectl apply -f - << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: storage-test
  namespace: default
spec:
  containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "echo 'sovereign data' > /data/test.txt && sleep 3600"]
      volumeMounts:
        - mountPath: /data
          name: my-volume
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: my-data
EOF

kubectl wait pod storage-test --for=condition=Ready --timeout=60s
kubectl exec storage-test -- cat /data/test.txt

Expected output:

sovereign data

Data persists in /var/lib/rancher/k3s/storage/ on the host node.


Step 7: Multi-Node Cluster Setup

To add worker nodes, get the server token and install K3s in agent mode:

# ON THE CONTROL PLANE NODE: get the join token
sudo cat /var/lib/rancher/k3s/server/node-token
# Copy this token for the agent install

Expected output:

K10abc123def456ghi789jkl012mno345pqr678stu901::server:vwx234yza567bcd890efg123

# ON EACH WORKER NODE: join the cluster
# Replace K3S_URL with your control plane IP
# Replace K3S_TOKEN with the token from above
curl -sfL https://get.k3s.io | \
  K3S_URL=https://CONTROL_PLANE_IP:6443 \
  K3S_TOKEN=YOUR_TOKEN_HERE \
  sh -

Expected output on worker:

[INFO]  Finding release for channel stable
[INFO]  Using v1.32.3+k3s1 as release
[INFO]  Downloading binary...
[INFO]  Installing K3s to /usr/local/bin/k3s
[INFO]  systemd: Enabling k3s-agent unit
[INFO]  systemd: Starting k3s-agent

Verify on the control plane:

kubectl get nodes

Expected output (2-node cluster):

NAME              STATUS   ROLES                  AGE    VERSION
control-plane-1   Ready    control-plane,master   10m    v1.32.3+k3s1
worker-node-1     Ready    <none>                 45s    v1.32.3+k3s1

Step 8: The Sovereignty Layer — K3s Audit

echo "=== SOVEREIGN K3s AUDIT ==="
echo ""

echo "[ K3s version and node status ]"
kubectl version 2>/dev/null | awk '{print "    " $0}'   # --short was removed in kubectl v1.28
kubectl get nodes --no-headers | awk '{printf "    ✓ Node: %-20s Status: %-10s Version: %s\n", $1, $2, $5}'

echo ""
echo "[ All system pods healthy ]"
kubectl get pods -n kube-system --no-headers 2>/dev/null | \
  awk '{
    if ($3 == "Running" || $3 == "Completed")
      printf "    ✓ %-50s %s\n", $1, $3
    else
      printf "    ✗ %-50s %s\n", $1, $3
  }'

echo ""
echo "[ Control plane components ]"
# componentstatuses has been deprecated since Kubernetes v1.19 and may
# print nothing on current releases — harmless either way
kubectl get componentstatus 2>/dev/null | \
  awk 'NR>1 {printf "    %-20s %s\n", $1, $2}'

echo ""
echo "[ Storage provisioner ]"
kubectl get storageclass --no-headers 2>/dev/null | \
  awk '{printf "    ✓ StorageClass: %-20s Provisioner: %s\n", $1, $2}'

echo ""
echo "[ K3s data directory (sovereign — local disk only) ]"
du -sh /var/lib/rancher/k3s/ 2>/dev/null | \
  awk '{print "    ✓ K3s data on local disk: " $1}'

echo ""
echo "[ No cloud controller manager (sovereign cluster) ]"
# Note: awk exits 0 even with no input, so "grep | awk || echo" would never
# reach the fallback — branch on grep's exit status instead
if kubectl get pods -n kube-system 2>/dev/null | grep -qiE "cloud|aws|gke|aks|azure"; then
  kubectl get pods -n kube-system 2>/dev/null | grep -iE "cloud|aws|gke|aks|azure" | \
    awk '{print "    ⚠ Cloud component: " $1}'
else
  echo "    ✓ No cloud controller manager — sovereign cluster"
fi

Expected output:

=== SOVEREIGN K3s AUDIT ===

[ K3s version and node status ]
    Client Version: v1.32.3+k3s1
    Server Version: v1.32.3+k3s1
    ✓ Node: sovereign-server      Status: Ready      Version: v1.32.3+k3s1

[ All system pods healthy ]
    ✓ coredns-7b98449c4-qxd2p                         Running
    ✓ helm-install-traefik-crd-6pzqp                  Completed
    ✓ local-path-provisioner-595dcfc56f-wxbml          Running
    ✓ metrics-server-cdcc87586-bcs6n                   Running
    ✓ traefik-d7c9c5778-p7mxh                          Running

[ Storage provisioner ]
    ✓ StorageClass: local-path           Provisioner: rancher.io/local-path

[ K3s data directory (sovereign — local disk only) ]
    ✓ K3s data on local disk: 1.2G

[ No cloud controller manager (sovereign cluster) ]
    ✓ No cloud controller manager — sovereign cluster

SovereignScore: 96/100 — 4 points deducted for pulling container images from Docker Hub and the Rancher registry during setup. After initial image pulls, the cluster operates offline.


Essential kubectl Commands

# ── Cluster inspection ──────────────────────────────────────────────────────
kubectl cluster-info
kubectl get nodes
kubectl get pods --all-namespaces
kubectl top nodes
kubectl top pods

# ── Deployments ─────────────────────────────────────────────────────────────
kubectl create deployment myapp --image=myimage:latest
kubectl scale deployment myapp --replicas=3
kubectl rollout status deployment/myapp
kubectl rollout undo deployment/myapp       # Roll back to previous version
kubectl delete deployment myapp

# ── Services ────────────────────────────────────────────────────────────────
kubectl expose deployment myapp --port=80 --type=ClusterIP
kubectl get services
kubectl describe service myapp

# ── Debugging ───────────────────────────────────────────────────────────────
kubectl logs pod-name
kubectl logs pod-name -f                   # Follow logs
kubectl exec -it pod-name -- /bin/sh       # Shell into pod
kubectl describe pod pod-name              # Detailed pod info
kubectl get events --sort-by=.lastTimestamp  # Cluster events

# ── Apply manifests ─────────────────────────────────────────────────────────
kubectl apply -f manifest.yaml             # Apply or update
kubectl apply -f directory/               # Apply all files in directory
kubectl delete -f manifest.yaml           # Delete what the manifest defines

# ── Namespaces ──────────────────────────────────────────────────────────────
kubectl create namespace myapp
kubectl get all -n myapp

# ── K3s specific ────────────────────────────────────────────────────────────
sudo systemctl status k3s
sudo systemctl restart k3s
sudo k3s check-config                      # Verify K3s configuration
sudo k3s etcd-snapshot save               # Backup cluster state (embedded etcd only — unavailable with the default SQLite datastore)

Troubleshooting

Node stuck in NotReady state

Diagnosis:

kubectl describe node $(kubectl get nodes --no-headers | awk '{print $1}')
# Look for "Conditions" section — check NetworkReady, MemoryPressure, DiskPressure

Common fix: Free up memory or add RAM — K3s needs at least 512MB free. (Leave swap disabled; Kubernetes requires it off, as configured in the prerequisites.)
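A quick check of what is actually free (MemAvailable accounts for reclaimable cache, unlike MemFree):

```shell
# Report memory the kernel considers available for new workloads
avail_mb=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)
echo "available memory: ${avail_mb}MB"
```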

Pod stuck in Pending state

Diagnosis:

kubectl describe pod <pod-name>
# Look for "Events" section — usually shows scheduling failures

Common causes: Insufficient CPU/memory resources (resource requests too high), no matching nodes, PVC not bound.

ImagePullBackOff — can’t pull container image

Cause: Image doesn’t exist, wrong tag, or Docker Hub rate limit hit. Fix:

kubectl describe pod <pod-name> | grep "Failed\|Error"
# If rate limited: add Docker Hub credentials as a registry secret
kubectl create secret docker-registry regcred \
  --docker-server=docker.io \
  --docker-username=YOUR_USERNAME \
  --docker-password=YOUR_TOKEN
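
The secret only takes effect when a workload references it. A sketch of wiring it into a pod spec (the pod and image names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-app           # hypothetical example pod
spec:
  imagePullSecrets:
    - name: regcred           # the secret created above
  containers:
    - name: app
      image: docker.io/YOUR_USERNAME/private-image:latest
```

For a Deployment, the same imagePullSecrets field goes under spec.template.spec; you can also attach the secret to the namespace's default service account so every pod picks it up automatically.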

K3s uses too much memory for a small VPS

Fix: Disable unused components at install time:

# Install K3s without Traefik and metrics-server
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_EXEC="--disable traefik --disable metrics-server" \
  sh -

Disabling Traefik frees ~100MB RAM. You can add a lighter ingress later.
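The same flags can also live in K3s's declarative config file, which survives re-running the installer — a sketch of the equivalent /etc/rancher/k3s/config.yaml:

```yaml
# /etc/rancher/k3s/config.yaml — read by the k3s server on startup
disable:
  - traefik
  - metrics-server
```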


Conclusion

K3s is now running on Ubuntu 24.04 LTS: a single-node certified Kubernetes cluster with Traefik ingress, local-path storage provisioner, CoreDNS, and Helm. The sovereignty audit confirmed no cloud controller manager, all data on local disk, and no external control plane dependency. Your Kubernetes cluster is yours — you own the certificates, the API server, and all workload data.

The next step in the Kubernetes track is K3s Security Hardening 2026 — RBAC configuration, network policies, and pod security standards for production-grade workloads.


People Also Ask

What is the difference between K3s and K8s (full Kubernetes)?

K3s is a fully certified, production-ready Kubernetes distribution that strips approximately 20% of code from the reference implementation: legacy alpha features, cloud provider integrations (AWS ELB, GCE, Azure), and in-tree volume plugins that have been moved to CSI. K3s replaces etcd with SQLite for single-node clusters (PostgreSQL or MySQL for HA multi-node). The result is a 100MB binary vs gigabytes of components. The Kubernetes API is identical — kubectl apply -f works the same on K3s and GKE. Anything that runs on standard Kubernetes runs on K3s with zero changes.

Can K3s run on a Raspberry Pi?

Yes. K3s was specifically designed for edge and ARM deployments. The Raspberry Pi 5 (ARM Cortex-A76) with 8GB RAM runs a 3-node K3s cluster comfortably. Use Ubuntu 24.04 LTS ARM64 for the OS — it has the same commands as x86-64 Ubuntu. Install K3s with the same curl -sfL https://get.k3s.io | sh - command — K3s detects ARM64 automatically and downloads the ARM binary. The only difference: ARM clusters run slower for computationally intensive workloads, but for web services, APIs, and databases, performance is adequate for development and light production use.

Should I use K3s or Docker Compose for deploying applications?

Docker Compose is simpler and right for a single server running a predictable set of services. K3s makes sense when you need: rolling updates without downtime, automated recovery from crashed containers, horizontal scaling across multiple nodes, ingress routing from a single IP to multiple services, and standardised resource limits/requests. For a personal server or a small team with one or two services, Docker Compose is the better choice. For anything with more than 5 services, traffic routing requirements, or high-availability needs, K3s provides the operational tooling that Compose lacks.


Tested on: Ubuntu 24.04 LTS (Hetzner CX32, 4 vCPU 8GB), 3× Raspberry Pi 5 ARM64 cluster. K3s v1.32.3+k3s1. Last verified: April 17, 2026. Report a broken snippet if a K3s release changes the install procedure.
