Set up a Kubernetes cluster using kubeadm, Vagrant, CentOS 7 and libvirt (KVM)

Create working directories for the Vagrant machines:

mkdir machines
cd machines/
mkdir master worker{1,2,3}
Add the latest CentOS 7 box:
vagrant box add centos/7 --provider libvirt
Create a Vagrantfile for each machine:
cd master/
cat Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.hostname = "master.example.com"
  config.vm.provider "libvirt" do |lv|
     lv.memory = "2048"
     lv.cpus = 2
  end
  config.vm.provision "shell", inline: <<-SHELL
     yum install -y vim bash-completion git
   SHELL
end
cat ../worker1/Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.hostname = "worker1.example.com"
  config.vm.provider "libvirt" do |lv|
     lv.memory = "1024"
  end
  config.vm.provision "shell", inline: <<-SHELL
     yum install -y vim bash-completion git
   SHELL
end
...
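The worker2 and worker3 Vagrantfiles differ from worker1's only in the hostname. A quick way to generate them (a sketch, assuming you are still in master/ and that the hostname really is the only difference):
for i in 2 3; do
  sed "s/worker1/worker${i}/" ../worker1/Vagrantfile > ../worker${i}/Vagrantfile
done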
Power up the machines. For each machine, in a separate console tab, run from its directory:
vagrant up --provider=libvirt
vagrant ssh
On all instances (vagrant ssh logs you in as the vagrant user, so become root first, e.g. with sudo -i):
Install Docker runtime prerequisites:
yum install -y vim yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install Docker runtime and configure it:
yum install -y docker-ce
[ ! -d /etc/docker ] && mkdir /etc/docker
Configure Docker to use the systemd cgroup driver and the overlay2 storage driver:
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
Create the systemd drop-in directory and (re)start Docker:
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
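After the restart you can confirm that Docker picked up the configuration; it should report the systemd cgroup driver and the overlay2 storage driver:
docker info | grep -iE 'cgroup driver|storage driver'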
Disable firewall:
systemctl disable --now firewalld
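Disabling firewalld is the quick option for a lab cluster. If you would rather keep it running, a sketch of the alternative is to open the ports kubeadm needs (control-plane ports on the master, kubelet and NodePort range on the workers):
# On the master:
firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd
firewall-cmd --permanent --add-port=10250-10252/tcp # kubelet, scheduler, controller-manager
# On the workers:
firewall-cmd --permanent --add-port=10250/tcp       # kubelet
firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort services
firewall-cmd --reload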
Add the Kubernetes tools repository (the exclude line stops routine yum updates from touching the cluster packages; the --disableexcludes flag used below overrides it deliberately):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
Disable SELinux:
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
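You can verify the runtime change took effect:
getenforce
This should print Permissive; the sed edit keeps it that way across reboots.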
Install kubeadm together with kubelet and kubectl:
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
Let iptables see bridged traffic:
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
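These sysctls only take effect if the br_netfilter kernel module is loaded. Load it now, make it persistent across reboots, and re-apply the settings:
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf
sysctl --system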
Disable swap by commenting out the swap line in /etc/fstab:
#/swapfile none swap defaults 0 0
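The fstab edit keeps swap off after the reboot below; to turn it off immediately without rebooting, you can also run:
swapoff -a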
Get the IP address of each machine and add entries to /etc/hosts on all of them:
ip a # to get the IPs
cat /etc/hosts
192.168.121.65 master.example.com
192.168.121.45 worker1.example.com
192.168.121.46 worker2.example.com
192.168.121.47 worker3.example.com
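One way to append the entries (reusing this example's addresses; substitute the ones ip a reported):
cat <<EOF >> /etc/hosts
192.168.121.65 master.example.com
192.168.121.45 worker1.example.com
192.168.121.46 worker2.example.com
192.168.121.47 worker3.example.com
EOF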
Reboot each Vagrant machine:
reboot
On the master, as the root user, issue:
kubeadm init
and note the following output lines:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
...
kubeadm join 192.168.121.65:6443 --token o634ic.evnh5g70kag5j8h1 \
    --discovery-token-ca-cert-hash sha256:78a97bb327b67c6be786af2d070db04710a97d365dce91d6eb0b55b77fb5cad0
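If the API server comes up on the wrong interface (possible on multi-NIC VMs), you can reset and pin the advertise address explicitly; for example, with this guide's master IP:
kubeadm reset -f
kubeadm init --apiserver-advertise-address=192.168.121.65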
Configure kubectl on the master. Switch back to the vagrant user and issue the first three saved commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check cluster status:
kubectl cluster-info
You should get the following output:
Kubernetes control plane is running at https://192.168.121.65:6443
KubeDNS is running at https://192.168.121.65:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
One more check:
kubectl get nodes
Output:
NAME                 STATUS     ROLES                  AGE     VERSION
master.example.com   NotReady   control-plane,master   2m47s   v1.20.2
Install the Weave Net network plugin. Its homepage gives the following installation command:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
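You can watch the plugin (and the rest of the control plane) come up with:
kubectl get pods -n kube-system
Once the weave-net and coredns pods are Running, the node should flip to Ready.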
Check the master node again:
kubectl get nodes
The master node should now report the Ready state:
NAME                 STATUS   ROLES                  AGE     VERSION
master.example.com   Ready    control-plane,master   5m49s   v1.20.2
On the workers, join the cluster by issuing the saved command as root:
kubeadm join 192.168.121.65:6443 --token o634ic.evnh5g70kag5j8h1 \
--discovery-token-ca-cert-hash sha256:78a97bb327b67c6be786af2d070db04710a97d365dce91d6eb0b55b77fb5cad0
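If you have lost the output or the token has expired (tokens are valid for 24 hours by default), generate a fresh join command on the master:
kubeadm token create --print-join-command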
Within a couple of minutes the worker should join the cluster. Check it from the master:
kubectl get nodes
NAME                  STATUS   ROLES                  AGE   VERSION
worker1.example.com   Ready    <none>                 78s   v1.20.2
master.example.com    Ready    control-plane,master   8m    v1.20.2
Repeat for the other workers. In about ten minutes you'll have your kubeadm Vagrant cluster ready to go.
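As a final smoke test, you can deploy something small and confirm it lands on the workers:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide
The -o wide output shows which node each pod was scheduled on.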