Kubernetes Installation on Debian 13 (VM)
Welcome to this hands-on guide for installing Kubernetes on Debian 13! If you’re new to Kubernetes or just want to set up a development cluster to experiment with, you’re in the right place. This tutorial will walk you through every step with clear explanations, so you’ll understand not just what you’re doing, but why you’re doing it.
What You’ll Build
By the end of this guide, you’ll have a fully functional single-node Kubernetes cluster running on your Debian 13 virtual machine. This setup is perfect for learning, testing applications, or developing containerized workloads locally. While production clusters typically have multiple nodes (separate control plane and worker machines), a single-node setup is ideal for getting started.
Prerequisites
Before we begin, make sure you have:
- A fresh Debian 13 VM with at least 2 CPU cores and 2GB of RAM (more is better)
- Root or sudo access to run administrative commands
- A stable internet connection to download packages and container images
- Basic familiarity with the Linux command line (don’t worry, we’ll explain everything)
- Disabled swap memory (Kubernetes requires this - you can disable it with sudo swapoff -a and make it permanent by commenting out the swap entries in /etc/fstab, as shown in the sketch after this list)
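If you haven't disabled swap yet, here's a minimal sketch (the sed pattern is just one convenient way to comment out fstab lines that mention swap; review the file afterwards to make sure it matched only what you intended):
# Turn swap off for the current boot
sudo swapoff -a
# Comment out swap entries so they stay off after a reboot
sudo sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab
# Double-check the result
cat /etc/fstab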
Time commitment: Expect this installation to take about 30-45 minutes, depending on your internet speed and familiarity with Linux.
What is Kubernetes, Briefly?
Kubernetes (often shortened to “K8s”) is an orchestration platform that automates the deployment, scaling, and management of containerized applications. Think of it as a smart system that manages your containers across one or many machines, ensuring they stay running, can communicate with each other, and scale up or down as needed.
Now, let’s get started!
Step 1: Prepare Kernel Modules and Network Settings
Before installing Kubernetes, we need to configure the Linux kernel to support the networking features that Kubernetes relies on. This involves loading specific kernel modules and adjusting network settings.
What are kernel modules? They’re pieces of code that extend the Linux kernel’s functionality without requiring a reboot. We need two specific modules:
# Load br_netfilter module (needed so iptables can see bridged traffic)
sudo modprobe br_netfilter
# Load overlay module (needed for containerd to run containers)
sudo modprobe overlay
# Verify they're loaded
lsmod | grep br_netfilter
lsmod | grep overlay
Why do we need these?
- br_netfilter allows iptables rules to work on bridge networks, which Kubernetes uses for pod-to-pod communication
- overlay is a storage driver that containerd uses to efficiently layer container filesystems
Make the changes persistent so they load automatically on every reboot:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
Now configure networking parameters so Kubernetes can route packets correctly between containers:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply changes immediately without reboot
sudo sysctl --system
What do these settings do?
- net.bridge.bridge-nf-call-iptables = 1: Enables iptables to process traffic that passes through network bridges
- net.bridge.bridge-nf-call-ip6tables = 1: Same as above, but for IPv6 traffic
- net.ipv4.ip_forward = 1: Allows your system to forward network packets, essential for routing traffic between pods
Verify that IP forwarding is enabled (you should see 1 in the output):
sysctl net.ipv4.ip_forward
Step 2: Install Container Runtime (containerd)
Kubernetes doesn’t run containers directly. Instead, it relies on a container runtime to handle the low-level operations. We’ll use containerd, which is lightweight, widely supported, and maintained by the Cloud Native Computing Foundation (the same organization behind Kubernetes).
Why containerd? It’s the industry standard, has excellent performance, and is simpler than alternatives like Docker (which actually uses containerd under the hood anyway).
Download the containerd archive:
# Download containerd archive
wget https://github.com/containerd/containerd/releases/download/v2.1.4/containerd-2.1.4-linux-amd64.tar.gz
# Verify checksum (important for security - ensures the file wasn't tampered with)
echo "316d510a0428276d931023f72c09fdff1a6ba81d6cc36f31805fea6a3c88f515 containerd-2.1.4-linux-amd64.tar.gz" | sha256sum --check
# Extract it into /usr/local (where system-wide binaries are stored)
sudo tar Cxzvf /usr/local containerd-2.1.4-linux-amd64.tar.gz
If the checksum verification says “OK”, you’re good to proceed. This step ensures you downloaded the genuine, unmodified containerd release.
Now install the systemd service file so containerd runs automatically on startup:
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
The enable --now flag both enables the service to start on boot and starts it immediately.
Step 3: Install runc
runc is a low-level container runtime that actually spawns and runs containers according to the OCI (Open Container Initiative) specification. Containerd uses runc behind the scenes to do the heavy lifting of creating container processes.
Think of it this way: containerd is the manager, and runc is the worker that creates the actual container environments.
wget https://github.com/opencontainers/runc/releases/download/v1.3.1/runc.amd64
# Verify checksum for security
echo "53bfce31ca047e537e0767b21c9d529d4b5b3e1cb9c590ca81654f9a5615d80d runc.amd64" | sha256sum --check
# Install it as an executable in the system path
sudo install -m 755 runc.amd64 /usr/local/sbin/runc
The install command copies the file to /usr/local/sbin/ and sets the correct permissions (755 means the owner can read/write/execute, others can only read/execute).
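A quick way to confirm runc is installed (the full path is used here because /usr/local/sbin may not be on a regular user's PATH on Debian):
# Should print the runc version, commit, and OCI spec version
/usr/local/sbin/runc --version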
Step 4: Install CNI Plugins
CNI stands for Container Network Interface, and these plugins handle the networking between your pods. Without CNI plugins, containers wouldn’t be able to communicate with each other or the outside world.
What will these do? They’ll set up virtual network interfaces, assign IP addresses to pods, and configure routing rules so traffic flows correctly between containers.
wget https://github.com/containernetworking/plugins/releases/download/v1.8.0/cni-plugins-linux-amd64-v1.8.0.tgz
# Verify checksum
echo "ab3bda535f9d90766cccc90d3dddb5482003dd744d7f22bcf98186bf8eea8be6 cni-plugins-linux-amd64-v1.8.0.tgz" | sha256sum --check
# Extract into /opt/cni/bin (the standard location Kubernetes looks for CNI plugins)
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.8.0.tgz
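It's worth a quick sanity check that the plugins landed where Kubernetes will look for them:
# You should see binaries such as bridge, host-local, loopback, and portmap
ls /opt/cni/bin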
Step 5: Configure containerd
Now that containerd is installed, we need to configure it to work properly with Kubernetes. The default configuration needs a few tweaks.
First, check that containerd is running:
sudo systemctl status containerd
Generate a default configuration file:
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
Now we’ll edit this configuration file to make it Kubernetes-ready:
sudo nano /etc/containerd/config.toml
First modification: Find the section whose name ends in containerd.runtimes.runc.options. With the containerd 2.x release we installed, the generated config names it [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]; older 1.x releases used [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]. Under this section, set:
SystemdCgroup = true
Why? This tells containerd to use systemd for managing cgroups (control groups), which is how Linux limits and isolates resource usage for processes. Kubernetes expects this to be enabled.
Second modification: Pin the pause (sandbox) image version. In containerd 2.x configs this lives under [plugins.'io.containerd.cri.v1.images'.pinned_images]; older 1.x configs used a sandbox_image key under [plugins."io.containerd.grpc.v1.cri"] instead. Make sure yours reads:
sandbox = "registry.k8s.io/pause:3.10"
Why? The “pause” container is a special infrastructure container that holds the network namespace for a pod. We’re explicitly specifying which version to use to ensure compatibility.
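Putting both changes together, the relevant fragments of /etc/containerd/config.toml would look roughly like this (section names per the containerd 2.x layout described above; indentation in the generated file will be deeper):
[plugins.'io.containerd.cri.v1.images'.pinned_images]
  sandbox = 'registry.k8s.io/pause:3.10'

[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
  SystemdCgroup = true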
Save the file (in nano, press Ctrl+X, then Y, then Enter) and restart containerd:
sudo systemctl restart containerd
sudo systemctl status containerd
Make sure the status shows “active (running)” with no errors.
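As an extra check, the ctr client bundled in the containerd tarball can talk to the daemon directly:
# Prints both client and server versions if the daemon is reachable
sudo ctr version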
Step 6: Install Kubernetes Components
Now for the main event! We’ll install three essential Kubernetes command-line tools:
- kubelet: The agent that runs on each node and manages containers
- kubeadm: A tool for bootstrapping Kubernetes clusters
- kubectl: The command-line interface for interacting with your cluster
First, install some prerequisites:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
Add the official Kubernetes repository. This ensures you get the latest stable versions directly from the Kubernetes project:
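On some minimal installs the /etc/apt/keyrings directory may not exist yet; creating it first is harmless either way:
sudo mkdir -p -m 755 /etc/apt/keyrings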
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Now install the Kubernetes tools:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
# Mark these packages as "held" so they don't get automatically upgraded
# (you want to upgrade Kubernetes deliberately, not accidentally)
sudo apt-mark hold kubelet kubeadm kubectl
# Enable and start the kubelet service
sudo systemctl enable --now kubelet
Note: The kubelet will crash-loop until we initialize the cluster in the next step. This is expected behavior, so don’t worry if you see errors in systemctl status kubelet.
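If you want to confirm the errors really are the expected ones, you can follow the kubelet's logs (Ctrl+C to stop):
# Expect repeated connection errors until kubeadm init runs
sudo journalctl -u kubelet -f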
Step 7: Initialize the Cluster
This is where the magic happens! We’ll use kubeadm to initialize the Kubernetes control plane. The control plane is the brain of your cluster, responsible for making decisions about scheduling, detecting changes, and responding to events.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
What’s that --pod-network-cidr flag? It defines the IP address range that will be used for pods in your cluster. We’re using 10.244.0.0/16 because it’s the default range that Flannel (the network plugin we’ll install later) expects. This gives us over 65,000 IP addresses for our pods.
This command will take a few minutes. It’s downloading container images, generating certificates, and setting up all the control plane components. When it completes, you’ll see output with instructions and a kubeadm join command (save this if you ever want to add worker nodes).
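Don't worry if you lose that join command; you can regenerate it at any time from the control plane node:
# Prints a fresh "kubeadm join ..." command with a new token
sudo kubeadm token create --print-join-command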
Now configure kubectl so your regular user can interact with the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
What did we just do? We copied the cluster admin credentials to your home directory and set the correct permissions. This allows you to run kubectl commands without sudo.
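A quick check that kubectl can now reach the cluster with your credentials:
# Should print the control plane and CoreDNS endpoints
kubectl cluster-info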
Step 8: Install Pod Network (Flannel)
Right now your cluster exists, but pods can’t communicate with each other yet. We need to install a CNI network plugin to enable pod-to-pod networking. We’ll use Flannel, which is simple, reliable, and perfect for learning environments.
What does Flannel do? It creates an overlay network that spans all nodes in your cluster, assigning each node a subnet and handling the routing of packets between pods, even across different nodes.
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
This command downloads the Flannel configuration and applies it to your cluster. You’ll see several resources being created: a namespace, service accounts, roles, and DaemonSets (which ensure Flannel runs on every node).
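You can watch Flannel come up; it usually takes under a minute (Ctrl+C to stop watching):
kubectl get pods -n kube-flannel --watch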
Step 9: Allow Scheduling on Control Plane Node
By default, Kubernetes won’t schedule regular workloads on control plane nodes (they’re reserved for system components). Since we’re running a single-node cluster, we need to remove this restriction.
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
What’s a taint? It’s a Kubernetes mechanism that repels pods from nodes unless those pods have a matching “toleration”. By removing the control-plane taint, we’re saying “it’s okay to run regular workloads here too.”
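To confirm the taint is gone, check the Taints field on your node (it should show <none>):
kubectl describe node | grep -i taints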
Step 10: Verify the Cluster
Let’s make sure everything is working! Run these commands:
kubectl get nodes
You should see your node listed with the status “Ready”. If it says “NotReady”, wait a minute or two for the network plugin to fully initialize.
kubectl get pods -A
This shows all pods in all namespaces (-A is short for --all-namespaces). You should see pods for:
- kube-system namespace: Core Kubernetes components like kube-apiserver, kube-scheduler, kube-controller-manager, etcd, and CoreDNS
- kube-flannel namespace: The Flannel network plugin
All pods should show “Running” status. If some are still in “Pending” or “ContainerCreating”, give them a minute to download images and start.
Congratulations! 🎉 You now have a fully functional Kubernetes cluster running on your Debian 13 VM!
What’s Next?
Now that you have a working cluster, here are some ideas for next steps:
- Deploy your first application: Try running a simple web server with kubectl create deployment nginx --image=nginx (see the sketch after this list)
- Learn about pods and services: Explore how Kubernetes exposes applications with kubectl expose
- Experiment with scaling: Try kubectl scale deployment nginx --replicas=3
- Explore the Kubernetes dashboard: Install the web UI to visualize your cluster
- Read the official documentation: Visit kubernetes.io for comprehensive guides
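As a small sketch of how those first three ideas fit together end to end (the deployment name and NodePort service type are just illustrative choices):
# Deploy nginx, expose it via a NodePort service, and scale to 3 replicas
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl scale deployment nginx --replicas=3
# Look up the port Kubernetes assigned, then fetch the nginx welcome page
NODE_PORT=$(kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl "http://localhost:${NODE_PORT}"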
Remember, this single-node setup is for development and learning. When you’re ready for production, you’ll want to set up a multi-node cluster with separate control plane and worker nodes, implement proper security measures, and consider managed Kubernetes services like GKE, EKS, or AKS.
Happy learning, and welcome to the world of Kubernetes! 🚀