Bug
Resolution: Unresolved
Critical
4.20.0
Quality / Stability / Reliability
Important
Proposed
Metal Platform 271, Metal Platform 272, Metal Platform 273
There is an issue in the status logic at https://github.com/openshift/cluster-baremetal-operator/blob/2a6cf5336fd7aafa8df59331cfb12c968d104c64/controllers/clusteroperator.go#L242-L260: ReasonResourceNotFound is not a fatal error; it is a temporary condition that holds while the Metal3 pod does not exist yet. Marking the operator Degraded in this case is incorrect, since nothing is actually wrong. We should instead report Degraded=False, Progressing=True, Available=False.
Spotted in https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/openshift_baremetal-operator/418/pull-ci-openshift-baremetal-operator-main-e2e-metal-ipi-serial-ipv4/1927125756081606656; it may be tricky to reproduce manually. One option is to watch the cluster operators' status and the associated errors while the installer is still running.
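A minimal sketch of the proposed fix, using a simplified stand-in for the ClusterOperator condition types (the `Condition` struct, `conditionsFor` helper, and reason string here are illustrative, not the operator's actual API): when the error reason is ResourceNotFound, report Progressing rather than Degraded.

```go
package main

import "fmt"

// Condition is a simplified stand-in for a ClusterOperator status condition.
type Condition struct {
	Type   string
	Status string
}

const ReasonResourceNotFound = "ResourceNotFound"

// conditionsFor maps an error reason to operator conditions. For
// ReasonResourceNotFound (the Metal3 pod has not been created yet), the
// operator is still progressing, not degraded; other reasons keep the
// existing Degraded=True behavior.
func conditionsFor(reason string) []Condition {
	if reason == ReasonResourceNotFound {
		return []Condition{
			{"Degraded", "False"},
			{"Progressing", "True"},
			{"Available", "False"},
		}
	}
	return []Condition{
		{"Degraded", "True"},
		{"Progressing", "False"},
		{"Available", "False"},
	}
}

func main() {
	for _, c := range conditionsFor(ReasonResourceNotFound) {
		fmt.Printf("%s=%s\n", c.Type, c.Status)
	}
}
```

This keeps the fatal-error path unchanged while treating the not-yet-created Metal3 pod as an ordinary startup transition.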
- blocks: OCPBUGS-57504 CBO may show as Degraded briefly during Metal3 initialization (Closed)
- is cloned by: OCPBUGS-57504 CBO may show as Degraded briefly during Metal3 initialization (Closed)
- links to