Just putting this out there for those who might need it. This will install Kubernetes from the official repositories, using containerd from Fedora’s repositories as the container runtime and Calico as the network fabric. It will also pull in Helm for deploying Helm charts.
Note: This is for development, learning, or hobbyist use, depending on your use case. These commands will get you started, but they will need tweaks for security and/or performance if you need Kubernetes in production.
Make sure the system is up-to-date (sudo dnf update -y) before running this. This is best run on a fresh install.
# First, set up the Kubernetes repo and install the packages. Change the
# version to whatever you want to use. Latest is 1.34 as of this writing.
kube_version=1.34
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v$kube_version/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v$kube_version/rpm/repodata/repomd.xml.key
EOF
sudo dnf update
sudo dnf install -y \
    kubectl \
    kubelet \
    kubeadm \
    kubernetes-cni \
    containerd \
    helm
# Make sure swap is off. Fedora enables zram-based swap by default, which
# removing zram-generator-defaults takes care of; swapoff -a handles any
# swap partition or swapfile.
sudo swapoff -a
sudo dnf remove -y zram-generator-defaults
# Most other Kubernetes setup tutorials would have you disable the
# firewall here, but whether that is necessary depends on your use case.
# For a single-node "cluster", it isn't. And for a multi-node cluster,
# you can allow the ports you need through rather than disabling the
# firewall entirely (example just below).
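# For example, on a multi-node cluster you could open the well-known
# control-plane ports with firewalld (a sketch; adjust for the components
# you actually run, and note that Calico also wants TCP 179 for BGP or
# UDP 4789 for VXLAN):
#   sudo firewall-cmd --permanent --add-port=6443/tcp       # API server
#   sudo firewall-cmd --permanent --add-port=2379-2380/tcp  # etcd
#   sudo firewall-cmd --permanent --add-port=10250/tcp      # kubelet
#   sudo firewall-cmd --permanent --add-port=10257/tcp      # controller manager
#   sudo firewall-cmd --permanent --add-port=10259/tcp      # scheduler
#   sudo firewall-cmd --reload
# Either way, Kubernetes needs these bridge/forwarding sysctls: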
cat <<EOF | sudo tee /etc/sysctl.d/99-k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
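# The bridge sysctls above only take effect while the br_netfilter module
# is loaded, so make sure it (and overlay, which containerd uses) loads
# at boot:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF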
# Put SELinux into permissive mode. setenforce applies it immediately;
# the sed makes it persist across reboots.
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Reboot!
After this, reboot to make sure everything takes effect. There are commands you can run to pull in the changes without rebooting, but a reboot is a near-guaranteed way to ensure everything loads as expected.
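If you would rather not reboot, loading the kernel modules by hand and re-reading the sysctl files should cover the same ground (swapoff and setenforce above already took effect immediately):
sudo modprobe overlay
sudo modprobe br_netfilter
sudo sysctl --system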
Next is setting up the “cluster”:
# Change the pod network CIDR to whatever you want to use (it must be a
# valid network address for its prefix length).
kube_subnet="192.168.0.0/16"
# Make sure the CNI configuration in the containerd config is pointing to the
# right path. This prevents the CoreDNS pods from getting stuck in
# "ContainerCreating".
sudo sed -i 's/\/usr\/libexec\/cni/\/opt\/cni\/bin/g' /etc/containerd/config.toml
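# Optional sanity check: the CNI paths should now point at /opt/cni/bin.
grep -n '/opt/cni/bin' /etc/containerd/config.toml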
# Enable and start containerd. Enable but do NOT start kubelet. Kubelet will
# be started as part of kubeadm's initialization.
sudo systemctl enable --now containerd
sudo systemctl enable kubelet
# Pull base images and initialize the control plane
sudo kubeadm config images pull
sudo kubeadm init --pod-network-cidr=$kube_subnet
# Now set up kubectl to finish the configuration
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
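# kubectl should now be able to reach the API server. The node will
# report NotReady until the Calico network below is applied.
kubectl get nodes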
# Set calico_version to the latest available:
# https://github.com/projectcalico/calico/releases/
calico_version=3.31.0
# Apply the Calico network fabric and remove the control-plane taint (the
# trailing "-" in the taint command) so workloads can run on this node.
kubectl apply -f "https://raw.githubusercontent.com/projectcalico/calico/refs/tags/v$calico_version/manifests/calico.yaml"
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
After this, run kubectl get pods --all-namespaces and, eventually, you should see something like this (the “vbox” in the names is because I was running the above on a VirtualBox VM):
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5766bdd7c-b2j58   1/1     Running   0          2m31s
kube-system   calico-node-rjpff                         1/1     Running   0          2m31s
kube-system   coredns-66bc5c9577-5zdl9                  1/1     Running   0          2m58s
kube-system   coredns-66bc5c9577-pkz4b                  1/1     Running   0          2m57s
kube-system   etcd-vbox                                 1/1     Running   0          3m4s
kube-system   kube-apiserver-vbox                       1/1     Running   0          3m4s
kube-system   kube-controller-manager-vbox              1/1     Running   0          3m4s
kube-system   kube-proxy-b8mtq                          1/1     Running   0          2m58s
kube-system   kube-scheduler-vbox                       1/1     Running   0          3m4s
And now you have a single-node Kubernetes “cluster” on Fedora 42 against which you can deploy new pods or Helm charts.
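For a quick smoke test, deploy something small (the deployment name and nginx image here are just placeholders):
kubectl create deployment hello --image=nginx
kubectl get pods --watch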