- Bug
- Resolution: Unresolved
- Minor
- 4.20
- Quality / Stability / Reliability
Once kube-controller-manager hits a static pod lifecycle failure, the operator never clears the Degraded condition on its own:
Status:
  Conditions:
    Last Transition Time:  2025-07-25T13:22:36Z
    Message:               GuardControllerDegraded: Missing operand on node master-1
                           MissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: "kube-controller-manager" in namespace: "openshift-kube-controller-manager" for revision: 6 on node: "master-1" didn't show up, waited: 3m0s
    Reason:                GuardController_SyncError::MissingStaticPodController_SyncError
    Status:                True
    Type:                  Degraded
    Last Transition Time:  2025-07-25T13:16:34Z
    Message:               NodeInstallerProgressing: 2 nodes are at revision 6
    Reason:                AsExpected
    Status:                False
    Type:                  Progressing
    Last Transition Time:  2025-07-24T16:31:14Z
    Message:               StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 6
    Reason:                AsExpected
    Status:                True
    Type:                  Available
    Last Transition Time:  2025-07-24T16:25:28Z
    Message:               All is well
    Reason:                AsExpected
    Status:                True
    Type:                  Upgradeable
    Last Transition Time:  2025-07-24T16:25:28Z
    Reason:                NoData
    Status:                Unknown
    Type:                  EvaluationConditionsDetected
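For reference, the stuck condition can be inspected, and a new static pod revision forced, from the CLI. This is a manual-recovery sketch, not a fix for the underlying bug; it assumes the standard `forceRedeploymentReason` field on the `kubecontrollermanager/cluster` operator resource, which must be set to a new value each time to trigger a rollout:

```shell
# Show the Degraded condition message on the cluster operator
oc get clusteroperator kube-controller-manager \
  -o jsonpath='{.status.conditions[?(@.type=="Degraded")].message}'

# Workaround sketch: force a new static pod revision so the
# installer retries and the Degraded condition can clear
oc patch kubecontrollermanager/cluster --type=merge \
  -p "{\"spec\":{\"forceRedeploymentReason\":\"recover-$(date +%s)\"}}"
```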
Clones:
- OCPBUGS-59837 kube-scheduler operator will forever stay degraded if GuardControllerDegraded on timeout (Status: New)