Kubernetes CKA sample exam question 5 with answer

Question
Upgrade the cluster (Master and Worker node) from 1.18.0 to 1.19.0
Make sure to first drain both Nodes and make them available after upgrade.

Answer
Get the nodes and their version:

kubectl get nodes
From the output we can see one master node, controlplane, and one worker node, node01, both at version v1.18.0.
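Before the upgrade, the output looks roughly like the sample below (node names and roles from the lab; the AGE values are illustrative). A small awk one-liner can pull out just the name and version columns:

```shell
# Approximate pre-upgrade `kubectl get nodes` output (AGE values are made up):
nodes_output='NAME           STATUS   ROLES    AGE   VERSION
controlplane   Ready    master   10d   v1.18.0
node01         Ready    <none>   10d   v1.18.0'

# Print only the node name and version (skip the header row):
echo "$nodes_output" | awk 'NR>1 {print $1, $5}'
```
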
Let's begin with controlplane. We will drain it:
kubectl drain controlplane --ignore-daemonsets
After this, check again the status:
kubectl get nodes
Make sure that controlplane has SchedulingDisabled in the STATUS field.
Check where the existing pods are scheduled:
kubectl get po -o wide
All pods are now scheduled on node01, so we can proceed with the next steps.
On controlplane node update system package cache:
apt update
Install kubeadm 1.19.0-00:
apt install kubeadm=1.19.0-00
Perform kubeadm upgrade on controlplane - it will take some time:
kubeadm upgrade apply v1.19.0
Install kubelet 1.19.0-00 and restart the kubelet service:
apt install kubelet=1.19.0-00
systemctl restart kubelet
Get the nodes and their version:
kubectl get nodes
You should now see the new version for controlplane.
Now uncordon controlplane to make it schedulable again:
kubectl uncordon controlplane
kubectl get nodes
Now, the controlplane is ready.
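The controlplane steps above can be consolidated into one script. This is a sketch, assuming a Debian/Ubuntu host with the Kubernetes apt repository already configured (as in the lab); the DRY_RUN helper is my addition so the sequence can be previewed safely, and by default it only prints each command:

```shell
#!/usr/bin/env bash
# Sketch of the controlplane upgrade sequence. Dry-run by default:
# set DRY_RUN=0 on a real controlplane node to actually execute.
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"
VERSION="1.19.0"

# Print the command in dry-run mode, execute it otherwise.
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

upgrade_controlplane() {
  run kubectl drain controlplane --ignore-daemonsets
  run apt update
  run apt install -y kubeadm="${VERSION}-00"
  run kubeadm upgrade apply "v${VERSION}"
  run apt install -y kubelet="${VERSION}-00"
  run systemctl restart kubelet
  run kubectl uncordon controlplane
}

upgrade_controlplane
```
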
Next, proceed to node01 - drain it:
kubectl drain node01 --ignore-daemonsets
Check where the existing pods are scheduled:
kubectl get po -o wide
All pods are now scheduled on controlplane, so we can proceed with the next steps.
SSH to node01 and perform package cache update:
ssh node01
apt update
On node01, install kubeadm 1.19.0-00:
apt install kubeadm=1.19.0-00
Perform the node upgrade with kubeadm (on a worker node this only upgrades the local kubelet configuration, so it is much faster than upgrade apply):
kubeadm upgrade node
Still on node01, install kubelet 1.19.0-00 and restart the service:
apt install kubelet=1.19.0-00
systemctl restart kubelet
Exit node01 and verify the cluster status:
exit
kubectl get nodes
You should now see the new version for node01 as well.
Uncordon node01 so workloads can be scheduled on it again, and check the cluster status:
kubectl uncordon node01
kubectl get nodes
After this, the cluster upgrade is finished.
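The worker-node steps can be sketched the same way. This assumes passwordless ssh from the controlplane to node01 (as in the lab) and reuses the same illustrative dry-run helper, which by default prints each command instead of executing it:

```shell
#!/usr/bin/env bash
# Sketch of the node01 upgrade sequence. The drain/uncordon commands run
# from the controlplane; the package and kubeadm steps run on node01 over
# ssh. Dry-run by default: set DRY_RUN=0 to actually execute.
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"
VERSION="1.19.0"

run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

upgrade_node01() {
  run kubectl drain node01 --ignore-daemonsets
  # Commands executed on the worker node itself:
  run ssh node01 "apt update && apt install -y kubeadm=${VERSION}-00 && kubeadm upgrade node && apt install -y kubelet=${VERSION}-00 && systemctl restart kubelet"
  run kubectl uncordon node01
}

upgrade_node01
```
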