Type: Bug
Resolution: Not a Bug
Affects Versions: 4.13.z, 4.12.z, 4.11.z
Severity: Moderate
Description of problem:
When the scheduler profile is set to HighNodeUtilization, multiple pods of the same application are scheduled on the same node, even though pod anti-affinity is configured for the pods to ensure that two pods do not run on the same node.
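To confirm which anti-affinity rules the ES deployments actually carry, they can be dumped straight from the deployment spec. A minimal sketch, assuming the ES deployments live in the openshift-logging namespace and carry the component=elasticsearch label used elsewhere in this report:
$ oc get deployment.apps -l component=elasticsearch -n openshift-logging -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{.spec.template.spec.affinity.podAntiAffinity}{"\n"}{end}'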
Version-Release number of selected component (if applicable):
All OCP versions where the scheduler profile can be set
How reproducible:
Reproducible with the steps below
Steps to Reproduce:
1. Install ES (Elasticsearch) and CLO (Cluster Logging Operator) with the ES node count set to 3.
2. Once installed, check the three ES CDM pods; each will be running on a different node.
3. Set the scheduler profile to HighNodeUtilization as described in the documentation (https://docs.openshift.com/container-platform/4.13/nodes/scheduling/nodes-scheduler-profiles.html).
4. Scale all ES deployments down to 0 and, once all the pods are deleted, scale them back up to 1:
=> Scale down
$ for pod in `oc get deployment.apps -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do oc scale deployment.apps/$pod --replicas=0; done
=> Scale up
$ for pod in `oc get deployment.apps -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do oc scale deployment.apps/$pod --replicas=1; done
5. Check which nodes the ES pods are scheduled on; two pods will be running on the same node (see the verification commands after this list).
6. Delete any one ES CDM pod; the replacement pod will be scheduled on a node that has no ES pod already running.
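For steps 3 and 5, the profile can be applied and the pod placement checked from the CLI. A sketch, assuming the ES pods run in the openshift-logging namespace; the patch form below is equivalent to editing the Scheduler resource as the linked documentation describes:
=> Apply the profile
$ oc patch scheduler cluster --type merge -p '{"spec":{"profile":"HighNodeUtilization"}}'
=> Check pod placement (the NODE column shows where each ES pod landed)
$ oc get pods -l component=elasticsearch -n openshift-logging -o wide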
Actual results:
Two ES pods are running on the same node.
Expected results:
No two ES pods should be running on the same node.
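If the pods must never share a node, a hard rule (requiredDuringSchedulingIgnoredDuringExecution) would enforce this regardless of the scheduler profile, since a preferred anti-affinity rule is only a scoring preference that the profile's bin-packing scoring can outweigh. A hedged sketch, assuming a hypothetical deployment name placeholder and the openshift-logging namespace; note the ES operator may reconcile such a manual change away:
$ oc patch deployment.apps/<es-deployment-name> -n openshift-logging --type merge -p '
  {"spec":{"template":{"spec":{"affinity":{"podAntiAffinity":
    {"requiredDuringSchedulingIgnoredDuringExecution":[
      {"labelSelector":{"matchLabels":{"component":"elasticsearch"}},
       "topologyKey":"kubernetes.io/hostname"}]}}}}}}'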
Additional info:
I have already reproduced this in a lab environment and would be happy to provide a demo if needed.