Set up a Kubernetes cluster with kubeadm and CRI-O on Debian

Install Debian on two machines: one master and one worker. Do not set up swap space; if swap already exists, disable it and comment out its entry in /etc/fstab.
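Disabling swap can be done like this; a sketch - the sed pattern below is one common approach, so review /etc/fstab afterwards (the FSTAB variable is only there so you can dry-run the edit on a copy first):

```shell
# Turn swap off for the current boot; kubelet refuses to start with swap on
swapoff -a

# Comment out swap entries so the change survives reboots.
# FSTAB defaults to /etc/fstab; point it at a copy to test the edit first.
FSTAB="${FSTAB:-/etc/fstab}"
sed -i '/\sswap\s/ s/^[^#]/#&/' "$FSTAB"
```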
The following steps must be done on BOTH hosts.
Enable kernel modules:

modprobe overlay
modprobe br_netfilter
Make them permanent - run as root:
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
Next, create the sysctl parameters:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
Apply the new sysctl configuration without reboot:
sysctl --system
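You can spot-check the result by reading the kernel interfaces directly (the bridge keys only appear once br_netfilter is loaded, hence the fallback message):

```shell
# Both values should read 1 on a correctly configured host
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null \
  || echo "br_netfilter not loaded yet"
```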
Install the prerequisite packages:
apt install gnupg2 apt-transport-https curl
Create environment variables for the CRI-O installation:
export OS=Debian_11
export VERSION=1.24
Add the CRI-O repository:
echo "deb [signed-by=/usr/share/keyrings/libcontainers-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb [signed-by=/usr/share/keyrings/libcontainers-crio-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
Add the GPG key for the CRI-O repository:
mkdir -p /usr/share/keyrings
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-archive-keyring.gpg
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-crio-archive-keyring.gpg
Next, refresh the package index:
apt update
Install CRI-O:
apt install cri-o cri-o-runc cri-tools -y
Edit the CRI-O configuration /etc/crio/crio.conf and, in the [crio.network] section, uncomment the network_dir and plugin_dirs options:
[crio.network]

# The default CNI network name to be selected. If not set or "", then
# CRI-O will pick-up the first one found in network_dir.
# cni_default_network = ""

# Path to the directory where CNI configuration files are located.
network_dir = "/etc/cni/net.d/"

# Paths to directories where CNI plugin binaries are located.
plugin_dirs = [
        "/opt/cni/bin/",
]
Next, edit the CRI-O bridge configuration /etc/cni/net.d/100-crio-bridge.conf and change the default subnet to your custom subnet; in my case it is 10.42.0.0/24:
        "ranges": [
            [{ "subnet": "10.42.0.0/24" }],
            [{ "subnet": "1100:200::/24" }]
        ]
Restart and enable the CRI-O service, then check its status:
systemctl restart crio
systemctl enable crio
systemctl status crio
Add the Kubernetes repository and GPG key:
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Refresh the package index again:
apt update
Install Kubernetes packages:
apt install kubelet kubeadm kubectl
Pin the current version of the newly installed packages:
apt-mark hold kubelet kubeadm kubectl
Start the control plane installation - run on the master only. First, check again that the br_netfilter kernel module is loaded:
lsmod | grep br_netfilter
Pull the images:
kubeadm config images pull
List images:
crictl images
Run the kubeadm init command to initialize the Kubernetes cluster (on master only):
kubeadm init --pod-network-cidr=10.42.0.0/24 \
--apiserver-advertise-address=192.168.122.176 \
--cri-socket=unix:///var/run/crio/crio.sock
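The same settings can also be expressed as a kubeadm configuration file and passed via kubeadm init --config <file>; a sketch using the v1beta3 kubeadm API with the values from the command above (adjust the address and subnet to your environment):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.122.176
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.42.0.0/24
```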
After init completes, set up the kubectl credentials - the commands are printed in the kubeadm init output. Also note down the join command:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
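As the kubeadm output also notes, a root user can instead point kubectl at the admin config directly:

```shell
# Alternative for the root user: use the admin kubeconfig as-is
export KUBECONFIG=/etc/kubernetes/admin.conf
```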
Check the Kubernetes cluster information:
kubectl cluster-info
Set up the Calico CNI plugin - be sure to grab the correct version:
curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O
Edit the manifest calico.yaml, uncomment the CALICO_IPV4POOL_CIDR setting, and change the subnet to match your configuration; mine is 10.42.0.0/24:
            - name: CALICO_IPV4POOL_CIDR
              value: "10.42.0.0/24"
Save and deploy the Calico CNI plugin:
kubectl apply -f calico.yaml
Add the worker node to the cluster using the join command noted earlier. My command was:
kubeadm join 192.168.122.176:6443 --token ut2s5w.12zu5xhkenv8934h \
	--discovery-token-ca-cert-hash sha256:999d0819410b9946640a5ec1f752e291e25039237b311bf9f22dd89940df205c
Go back to master node and verify:
kubectl get nodes -o wide
Test the configuration - deploy nginx from the master:
kubectl create deployment nginx --image=nginx:alpine --replicas=2
kubectl create service nodeport nginx --tcp=80:80
List the services:
kubectl get svc
You should see that the nginx NodePort service exposes port 80 on a random high port on the Kubernetes hosts. Mine was 32754:
nginx        NodePort    10.110.199.45   <none>        80:32754/TCP   13s
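For scripting, the port can be pulled out of such a line with sed (on a live cluster, kubectl's jsonpath output is the more robust option). A self-contained sketch using the sample line above - svc_line and node_port are illustrative names:

```shell
# Sample "kubectl get svc" line (no headers); extract the NodePort number
svc_line='nginx        NodePort    10.110.199.45   <none>        80:32754/TCP   13s'
node_port=$(printf '%s\n' "$svc_line" | sed -E 's/.*:([0-9]+)\/TCP.*/\1/')
echo "$node_port"    # -> 32754
```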
curl the worker IP and service port to access your nginx deployment:
curl 192.168.122.121:32754
The nginx welcome page should be returned.