Question
SSH into the master node with ssh cluster2-master1. Temporarily stop the kube-scheduler, in a way that allows you to start it again afterwards.
Create a single pod named manual-schedule using image httpd:2.4-alpine and confirm that it's created but not scheduled on any node.
Now, acting as the scheduler with all its power, manually schedule that pod onto node cluster2-master1. Make sure it's running.
Start the kube-scheduler again and confirm that it's working correctly by creating a pod named manual-schedule2 using image httpd:2.4-alpine and checking that it's running on cluster2-worker1.
Answer
SSH to the master node:
ssh cluster2-master1
Check the pods in the kube-system namespace:
kubectl -n kube-system get po
Notice that the kube-scheduler runs as a static pod. Move its manifest out of the static pod directory:
cd /etc/kubernetes/manifests
mv kube-scheduler.yaml /etc/kubernetes
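The kubelet only runs static pods whose manifests are in its configured directory, so moving the file out is enough to stop the scheduler. If you want to confirm that directory (it's /etc/kubernetes/manifests on a default kubeadm setup, and the config path below assumes that layout):
grep staticPodPath /var/lib/kubelet/config.yaml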
Check the pods again. The kube-scheduler pod should no longer be running:
kubectl -n kube-system get po
Run the first pod:
kubectl run manual-schedule --image=httpd:2.4-alpine
Check the pod status:
kubectl get po manual-schedule
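You can also confirm that no node has been assigned yet; spec.nodeName is empty for an unscheduled pod, so the following should print nothing:
kubectl get po manual-schedule -o jsonpath='{.spec.nodeName}'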
It should be in Pending state, because there is no scheduler to place it on a node. Now act as the scheduler yourself and edit the pod:
kubectl edit po manual-schedule
Manually schedule it by adding nodeName under spec:
...
spec:
nodeName: cluster2-master1
...
Saving the edit will fail because nodeName can't be changed on an existing pod; kubectl writes your change to a temporary file instead. Force-replace the pod with that file (the exact filename appears in kubectl's message):
kubectl replace --force -f /tmp/kubectl-123456789.yaml
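If editing in place feels awkward, an equivalent route is to export the pod manifest to a file (the filename manual-schedule.yaml here is just an example), add nodeName there, and force-replace:
kubectl get po manual-schedule -o yaml > manual-schedule.yaml
# add nodeName: cluster2-master1 under spec in manual-schedule.yaml
kubectl replace --force -f manual-schedule.yaml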
And check:
kubectl get po manual-schedule -o wide
It should now be running on cluster2-master1.
Start the kube-scheduler again by moving its manifest back:
mv /etc/kubernetes/kube-scheduler.yaml /etc/kubernetes/manifests
Check the status and wait until it is running:
kubectl -n kube-system get po -w
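For extra confirmation that the scheduler is healthy, you can read its logs; the pod name below assumes the usual kube-scheduler-<node-name> naming of static pods:
kubectl -n kube-system logs kube-scheduler-cluster2-master1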
Run the second pod:
kubectl run manual-schedule2 --image=httpd:2.4-alpine
And check:
kubectl get po manual-schedule2 -o wide
It should be running on the cluster2-worker1 node.
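As a final check, list both pods side by side with the nodes they landed on:
kubectl get po manual-schedule manual-schedule2 -o wide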