Type: Story
Resolution: Unresolved
Priority: Normal
Target Version: openshift-4.13
Labels: Upstream
Parent: OCPSTRAT-46 - Strategic Upstream Work - OCP Control Plane and Node Lifecycle Group
Sprints: Workloads Sprint 234, Workloads Sprint 235, Workloads Sprint 236, Workloads Sprint 237, Workloads Sprint 238, Workloads Sprint 239, Workloads Sprint 240, Workloads Sprint 241, Workloads Sprint 248, Workloads Sprint 249, Workloads Sprint 250, Workloads Sprint 251, Workloads Sprint 252, Workloads Sprint 254, Workloads Sprint 255, Workloads Sprint 256, Workloads Sprint 257, Workloads Sprint 258, Workloads Sprint 261, Workloads Sprint 262
In some cases, people are surprised that their Deployment can momentarily run more pods during a rollout than the documented bounds describe (replicas - maxUnavailable < availableReplicas < replicas + maxSurge). The culprit is Terminating pods, which can run in addition to the Running and Starting pods.
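As a concrete illustration, here is a minimal Deployment sketch (the name, labels, and image are hypothetical placeholders) with the bounds above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example            # hypothetical name for illustration
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # rollout may create up to replicas + 1 = 3 new/old pods
      maxUnavailable: 0    # at least replicas = 2 pods must stay available
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: registry.example.com/app:latest  # placeholder image
```

With these settings the controller keeps availableReplicas at 2 or more and creates at most 3 Running/Starting pods, but pods from the old ReplicaSet that are still Terminating are not counted, so `kubectl get pods` can briefly show 4 or more pods consuming node resources.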
Even though Terminating pods are not considered part of a Deployment, they can cause problems with resource usage and scheduling, as described in https://github.com/kubernetes/kubernetes/issues/107920
There is a KEP that tries to solve this issue for Deployments/ReplicaSets, and also for a similar use case in Jobs: https://github.com/kubernetes/enhancements/pull/3940 . A similar feature gap exists in DaemonSets. We should help push this feature forward.
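For the Job side, a sketch of what the resulting API looks like, assuming the podReplacementPolicy field that grew out of this upstream work (gated by the JobPodReplacementPolicy feature gate on older clusters); treat this as illustrative rather than the exact proposal in the PR:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job        # hypothetical name for illustration
spec:
  completions: 2
  parallelism: 2
  # With Failed, replacement pods are created only once a pod is fully
  # terminated (phase Failed), not while it is still Terminating, which
  # avoids running more pods than parallelism at any moment.
  podReplacementPolicy: Failed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: registry.example.com/worker:latest  # placeholder image
```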
This has customer impact, and we are tracking it as an RFE: https://issues.redhat.com/browse/RFE-2328
Issue Links:
- blocks: RFE-2328 - FailedScheduling event in specific environments with two replicas (Accepted)
- links to