- Bug
- Resolution: Done
- Undefined
- None
- 4.12.z
- Moderate
- No
- False
Description of problem:
Multiple pods get restarted within the first 24 hours after OCP installation:

$ grep "^Sep 21 0[4-6]" journalctl_--no-pager | grep -v stabi | egrep 'Killing container with a grace period\" pod=\"openshift-kube-apiserver' | wc -l
19
$ grep "^Sep 21 0[4-6]" journalctl_--no-pager | grep -v stabi | egrep 'Killing container with a grace period\" pod=\"openshift-route-controller-manager/' | wc -l
5
$ grep "^Sep 21 0[4-6]" journalctl_--no-pager | grep -v stabi | egrep 'Killing container with a grace period\" pod=\"openshift-controller-manager/' | wc -l
5
$ grep "^Sep 21 0[4-6]" journalctl_--no-pager | grep -v stabi | egrep 'Killing container with a grace period\" pod=\"openshift-monitoring/prometheus-adapter' | wc -l
2
Looking at https://access.redhat.com/solutions/6961987, it seems only `kube-apiserver` is expected to do so; we are looking for clarification on whether those other pods are expected to get restarted as well.
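For reference, the four per-pod greps above can be collapsed into a single aggregation that counts kill events per pod. This is a minimal sketch, assuming the same journal dump file (journalctl_--no-pager) and GNU grep's -o/-E options:

# Count "Killing container with a grace period" events per pod
# (assumes the journal dump file journalctl_--no-pager from above)
$ grep 'Killing container with a grace period' journalctl_--no-pager \
    | grep -oE 'pod="[^"]+"' | sort | uniq -c | sort -rn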
Version-Release number of selected component (if applicable):
OCP 4.12.32
OCP 4.12.34
How reproducible:
Install OCP 4.12.32 SNO, wait 24 hours, and check the journal for "Killing container with a grace period".
Steps to Reproduce:
1. Install OCP 4.12.32 (with disconnected AI)
2. After 24 hours, observe that the following pods got restarted (restart counts can be confirmed with the sketch after these steps):
   - kube-apiserver-master0
   - route-controller-manager
   - controller-manager
   - prometheus-adapter
3. After 24 hours, check the journal for "Killing container with a grace period"
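Instead of grepping the journal, the restart counts can also be read straight from the cluster. A minimal sketch, assuming the default column layout of `oc get pods -A` (RESTARTS is the fifth column):

# List every pod with a non-zero restart count
# (assumes default columns NAMESPACE NAME READY STATUS RESTARTS AGE)
$ oc get pods -A --no-headers | awk '$5 > 0 {print $1, $2, $5}'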
Actual results:
We found the following pods restarted (a sketch for inspecting why follows the list):
- kube-apiserver-master-xxxx
- route-controller-manager-xxxx
- controller-manager-xxx
- prometheus-adapter-xxxx
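To check why a given container was restarted, its last termination state can be inspected; the pod name here is illustrative:

# Show the last termination state (reason, exit code) of a pod's containers
# (pod name is illustrative; substitute the actual suffix)
$ oc -n openshift-kube-apiserver describe pod kube-apiserver-master-xxxx | grep -A 5 'Last State'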
Expected results:
Looking for guidance on whether this is expected for the above pods, and why.
Additional info: