-
Bug
-
Resolution: Cannot Reproduce
-
Normal
-
None
-
4.18.z
-
Quality / Stability / Reliability
-
False
During an investigation of the latest round of unexpected node-not-ready failures on metal upgrade jobs, I found two jobs with similar issues. These failures are new to me. It seems that both occur during a graceful shutdown of the kube-apiserver.
Drilling into job run 1 (run 2 has a similar problem):
I see that the UnexpectedNodeNotReady failure happens during a graceful shutdown of the apiserver.
There are interesting events in etcd, and kube-apiserver pods are being killed at this time.
Could the apiserver team look deeper into these issues?
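For whoever picks this up, here is a minimal sketch of how one might pull the apiserver and node events around the shutdown window to correlate them. This is not taken from the job artifacts; the namespace name, the field selector, and the time window are assumptions and would need to be adjusted to the actual job run.

```python
from datetime import datetime, timezone
from kubernetes import client, config

# Assumed time window around the graceful shutdown; adjust to the job run being investigated.
WINDOW_START = datetime(2024, 1, 1, 0, 0, tzinfo=timezone.utc)
WINDOW_END = datetime(2024, 1, 1, 0, 30, tzinfo=timezone.utc)

config.load_kube_config()
v1 = client.CoreV1Api()

# Events from the (assumed) kube-apiserver namespace: pod kills, termination, probe failures.
apiserver_events = v1.list_namespaced_event("openshift-kube-apiserver").items

# Node lifecycle events (e.g. NodeNotReady) are recorded against Node objects in "default".
node_events = v1.list_namespaced_event(
    "default", field_selector="involvedObject.kind=Node"
).items

# Print both streams in timestamp order so apiserver shutdown and node-not-ready line up.
for ev in sorted(apiserver_events + node_events,
                 key=lambda e: e.last_timestamp or WINDOW_START):
    ts = ev.last_timestamp
    if ts and WINDOW_START <= ts <= WINDOW_END:
        print(ts, ev.involved_object.kind, ev.involved_object.name, ev.reason, ev.message)
```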