Type: Bug
Resolution: Unresolved
Priority: Normal
Affects Version/s: 4.13, 4.12, 4.14
Description of problem:
migrator pod in `openshift-kube-storage-version-migrator` project stuck in Pending state
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
1. Add a default cluster-wide node selector with a label that does not match any node label:

$ oc edit scheduler cluster

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
...
spec:
  defaultNodeSelector: node-role.kubernetes.io/role=app
  mastersSchedulable: false

2. Delete the migrator pod running in the `openshift-kube-storage-version-migrator` project:

$ oc delete pod migrator-6b78665974-zqd47 -n openshift-kube-storage-version-migrator

3. Check whether the replacement migrator pod comes up in the Running state:

$ oc get pods -n openshift-kube-storage-version-migrator
NAME                        READY   STATUS    RESTARTS   AGE
migrator-6b78665974-j4jwp   0/1     Pending   0          2m41s
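To confirm that the default node selector was injected into the replacement pod, its spec can be inspected. A minimal check, assuming the pending migrator pod is the only pod in the namespace:

$ oc get pods -n openshift-kube-storage-version-migrator -o jsonpath='{.items[0].spec.nodeSelector}'

If the cluster-wide default was applied at admission, this should print a selector map containing node-role.kubernetes.io/role=app.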
Actual results:
The pod stays in the Pending state because the cluster-wide default node selector is merged into its spec, restricting it to nodes carrying the label `node-role.kubernetes.io/role=app`, and no node in the cluster has that label.
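The scheduling failure is visible in the pod's events (FailedScheduling is the standard scheduler event reason when no node matches a pod's node selector):

$ oc describe pod migrator-6b78665974-j4jwp -n openshift-kube-storage-version-migrator

The Events section should report a FailedScheduling event indicating that no node matched the pod's node affinity/selector.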
Expected results:
The pod should come up in the Running state; it should not be affected by the cluster-wide default node selector.
Additional info:
Setting the annotation `openshift.io/node-selector=` on the `openshift-kube-storage-version-migrator` project and then deleting the pending migrator pod brings the replacement pod up, because an empty project-level node selector overrides the cluster-wide default.
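A sketch of that workaround, assuming the migrator pods carry the label `app=migrator` (the label is not confirmed in this report; the pod can also be deleted by name as in step 2 above):

$ oc annotate namespace openshift-kube-storage-version-migrator openshift.io/node-selector=
$ oc delete pod -n openshift-kube-storage-version-migrator -l app=migrator

The deployment's new pod should then schedule normally, since the empty project annotation takes precedence over the cluster-wide defaultNodeSelector.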
The expectation in this bug is that the `openshift-kube-storage-version-migrator` project should ship with the annotation `openshift.io/node-selector=`, so that pods running in this project are not affected by a misconfigured cluster-wide node selector.
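A minimal sketch of what the fixed namespace would carry (the surrounding metadata is illustrative; the annotation is the point):

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-kube-storage-version-migrator
  annotations:
    # An empty value overrides the cluster-wide defaultNodeSelector for this project.
    openshift.io/node-selector: ""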