Type: Bug
Resolution: Unresolved
Priority: Normal
Affects Version: 4.21.0
Quality / Stability / Reliability
Severity: Moderate
Sprint: Node Green Sprint 280
Labels: contract-priority
This is a follow-up to https://issues.redhat.com/browse/OCPBUGS-59403 ("kubelet garbage collection causes out of order high perf hook runs leading to wrong IRQ SMP affinity").
The root cause is the kubelet's container garbage collection:
https://kubernetes.io/docs/concepts/architecture/garbage-collection/#container-image-garbage-collection
https://github.com/openshift/kubernetes/blob/e2e5d628ce43695982002737a4d1f1c6eb30eb11/pkg/kubelet/kubelet.go#L794
klet.containerDeletor = newPodContainerDeletor(klet.containerRuntime, max(containerGCPolicy.MaxPerPodContainer, minDeadContainerInPod))
with:
minDeadContainerInPod = 1
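A minimal sketch (a simplified model, not the actual kubelet code) of what that `max(...)` means in practice: even if the GC policy asks to keep zero dead containers per pod, the pod container deletor retains at least one, so a stopped container lingers until a later GC pass.

```go
package main

import "fmt"

// minDeadContainerInPod mirrors the kubelet constant: the deletor never
// keeps fewer than this many dead containers per pod.
const minDeadContainerInPod = 1

// containersToKeep models max(containerGCPolicy.MaxPerPodContainer,
// minDeadContainerInPod) from newPodContainerDeletor.
func containersToKeep(maxPerPodContainer int) int {
	if maxPerPodContainer > minDeadContainerInPod {
		return maxPerPodContainer
	}
	return minDeadContainerInPod
}

func main() {
	// Even with MaxPerPodContainer = 0 ("delete all dead containers"),
	// one dead container survives, so _0 is not removed at its own stop.
	fmt.Println(containersToKeep(0)) // 1
	fmt.Println(containersToKeep(2)) // 2
}
```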
The high performance hooks get called once on shutdown, but container _0 isn't garbage collected at that point. Later, when _1 dies and _2 starts, the kubelet garbage collects the _0 container, and the CRI-O hooks are run again.
The problem is that this yields the following order:
i) add _0
ii) stop _0
iii) add _1
iv) stop _1
v) add _2
vi) garbage collect -> remove _0
Because of this, the high perf hook is run again at step vi, and the IRQ SMP affinity bits are re-enabled even though a static container is still running on those CPUs.
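A minimal simulation of the sequence above (hypothetical names; not CRI-O's real hook API) showing how the delayed removal of _0 flips the IRQ mask back on while _2 is running:

```go
package main

import "fmt"

// irqEnabled models whether IRQs may be routed to the pod's isolated CPUs.
// The add hook masks those CPUs; the remove hook unmasks them.
var irqEnabled = true

func hook(event, container string) {
	switch event {
	case "add":
		irqEnabled = false // mask the pod's CPUs from IRQ balancing
	case "remove":
		irqEnabled = true // unmask, assuming the pod is gone
	}
	fmt.Printf("%-6s %-10s irqEnabled=%v\n", event, container, irqEnabled)
}

func main() {
	hook("add", "_0")
	hook("remove", "_0(stop)") // hooks run once on shutdown
	hook("add", "_1")
	hook("remove", "_1(stop)")
	hook("add", "_2") // _2 now running, IRQs masked as intended
	// kubelet GC finally deletes _0; the remove hook fires a second time,
	// re-enabling IRQs on CPUs that _2 still owns.
	hook("remove", "_0(gc)")
	fmt.Println("final irqEnabled:", irqEnabled) // true, despite _2 running
}
```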