Details
- Bug
- Resolution: Duplicate
- Undefined
- 4.11.z
- Moderate
- No
- False
Description
Description of problem:
After a fresh install of a ROSA cluster, the openshift-kube-scheduler-guard pod for master node ip-10-0-216-251.ca-central-1.compute.internal is in a 0/1 state and the associated kube-scheduler pod is not functioning. Deleting the kube-scheduler pod on that node did not result in the scheduler being replaced, nor did moving the static manifest YAML out of, and back into, /etc/kubernetes/manifests. The only way to bring the node back into a functioning state with kube-scheduler was to reboot it (although restarting kubelet would likely have had the same effect, we believe).
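For clarity, the remediation attempts described above roughly correspond to the commands below. This is a sketch only: the guard/scheduler pod name and the static manifest filename are assumptions based on typical OpenShift 4 naming, not taken from this report.

    # Attempt 1 (did not recover the scheduler): delete the stuck kube-scheduler pod on the affected master
    oc -n openshift-kube-scheduler delete pod openshift-kube-scheduler-ip-10-0-216-251.ca-central-1.compute.internal

    # Attempt 2 (did not recover the scheduler): move the static pod manifest out and back in on the master
    # (manifest filename is an assumption)
    mv /etc/kubernetes/manifests/kube-scheduler-pod.yaml /root/
    sleep 60
    mv /root/kube-scheduler-pod.yaml /etc/kubernetes/manifests/

    # What actually recovered the node: a reboot (restarting kubelet would likely also have worked)
    systemctl reboot        # or: systemctl restart kubelet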
Version-Release number of selected component (if applicable):
4.11.38
How reproducible:
Has been seen on at least one prior occasion on a ROSA cluster.