Type: Bug
Resolution: Done-Errata
Priority: Critical
Affects Version: 4.14.0
Component area: Quality / Stability / Reliability
Severity: Critical
Sprint: Hypershift Sprint 254
Description of problem:
The HyperShift Operator pods run with a higher PriorityClass, but external-dns is set to the default class with a lower preemption priority, which causes the pod to be preempted during migration.
Observed while performance testing dynamic serving spec migration on the MC.
# oc get pods -n hypershift
NAME READY STATUS RESTARTS AGE
external-dns-7f95b5cdc-9hnjs 0/1 Pending 0 23m
operator-956bdb486-djjvb 1/1 Running 0 116m
operator-956bdb486-ppgzt 1/1 Running 0 115m
external-dns pod.spec:
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  priorityClassName: default
operator pod.spec:
  preemptionPolicy: PreemptLowerPriority
  priority: 100003000
  priorityClassName: hypershift-operator
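One way to address the gap shown above would be a dedicated PriorityClass for external-dns. This is only a sketch; the class name and value here are assumptions, not part of the shipped fix, chosen to match the hypershift-operator priority (100003000) so external-dns cannot be preempted by it:

```yaml
# Hypothetical PriorityClass for the external-dns pod.
# Name and value are assumptions; any value comparable to the
# hypershift-operator class would prevent the preemption seen above.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: hypershift-external-dns
value: 100003000
globalDefault: false
description: "Priority for the HyperShift external-dns pod"
```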
Version-Release number of selected component (if applicable):
On Management Cluster 4.14.7
How reproducible:
Always
Steps to Reproduce:
1. Set up an MC with request-serving and autoscaling machinesets
2. Load the MC up to its maximum capacity
3. Watch the external-dns pod get preempted when its resources are needed by higher-priority pods
Actual results:
The external-dns pod stays in Pending state until a new node comes up.
Expected results:
Since external-dns is also a critical pod, like the hypershift operator, and its preemption affects HC DNS configuration, it needs to run with a higher priority as well.
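A minimal sketch of the expected change, as a pod-template fragment for the external-dns deployment. Reusing the existing hypershift-operator PriorityClass is an assumption for illustration; a separate high-value class would serve equally well:

```yaml
# Fragment of the external-dns Deployment spec (sketch, not the actual fix):
# assigning a high PriorityClass so the scheduler no longer preempts the pod.
spec:
  template:
    spec:
      priorityClassName: hypershift-operator  # assumption: reuse existing class
```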
Additional info:
stage: perf3 sector
links to: RHEA-2024:3718 (OpenShift Container Platform 4.17.z bug fix update)