Issue Type: Bug
Resolution: Unresolved
Priority: Minor
Affects Version: 4.20
Quality / Stability / Reliability
Once kube-scheduler hits a static pod lifecycle failure, it never clears the Degraded condition on its own:
    Last Transition Time:  2025-07-25T13:22:36Z
    Message:               GuardControllerDegraded: Missing operand on node master-1
                           MissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: "openshift-kube-scheduler" in namespace: "openshift-kube-scheduler" for revision: 6 on node: "master-1" didn't show up, waited: 3m0s
    Reason:                GuardController_SyncError::MissingStaticPodController_SyncError
    Status:                True
    Type:                  Degraded
    Last Transition Time:  2025-07-25T13:16:24Z
    Message:               NodeInstallerProgressing: 2 nodes are at revision 6
    Reason:                AsExpected
    Status:                False
    Type:                  Progressing
    Last Transition Time:  2025-07-24T16:28:53Z
    Message:               StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 6
    Reason:                AsExpected
    Status:                True
    Type:                  Available
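For reference, the stuck condition can be read directly from the cluster operator status. A minimal sketch, assuming a standard oc client and the default kube-scheduler cluster operator name:

    # Print the Degraded condition message of the kube-scheduler cluster operator
    # (standard kubectl/oc JSONPath filter over status.conditions).
    oc get clusteroperator kube-scheduler \
      -o jsonpath='{.status.conditions[?(@.type=="Degraded")].message}'

Per the description above, this message keeps being reported even after the underlying static pod lifecycle failure has passed, because the operator never resets the condition by itself.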
is cloned by: OCPBUGS-59838 "kube-controller-manager operator will forever stay degraded if GuardControllerDegraded on timeout" (New)