-
Bug
-
Resolution: Unresolved
-
Major
-
4.15.z, 4.17.z, 4.16.z, 4.18.z, 4.19.z
-
Quality / Stability / Reliability
-
False
-
-
None
-
Important
-
None
-
None
-
Rejected
-
Horsea OLM Sprint 268, Ivysaur OLM Sprint 269, Jigglypuff OLM Sprint 270, Kabuto Sprint 271
-
4
-
Done
-
Bug Fix
-
-
-
-
-
None
Issue: olm-operator pod going into CrashLoopBackOff (CLBO) with the message "detected that every object is labelled, exiting to re-start the process"
1- After upgrading the cluster from 4.14.44 to 4.15.44, the olm-operator pod repeatedly goes into CLBO with the logs below.
2025-03-14T06:44:50.225272058Z time="2025-03-14T06:44:50Z" level=info msg="labeller complete" gvr="rbac.authorization.k8s.io/v1, Resource=clusterroles" index=0
2025-03-14T06:44:50.225553923Z time="2025-03-14T06:44:50Z" level=info msg="labeller complete" gvr="rbac.authorization.k8s.io/v1, Resource=clusterrolebindings" index=0
2025-03-14T06:44:50.225553923Z time="2025-03-14T06:44:50Z" level=info msg="detected that every object is labelled, exiting to re-start the process..."
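For reference, the restart loop and the logs of the previously exited container can be confirmed with standard oc commands (a sketch; the pod name is the example one from the events in point 2):
$ oc get pods -n openshift-operator-lifecycle-manager
$ oc logs olm-operator-c49ddd47b-x7dtc -n openshift-operator-lifecycle-manager --previous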
2- The namespace events also do not provide enough detail (see the event-inspection sketch after the output below).
$ oc get event | grep -i olm-operator-c49ddd47b-x7dtc
9m   Normal    Scheduled          pod/olm-operator-c49ddd47b-x7dtc    Successfully assigned openshift-operator-lifecycle-manager/olm-operator-c49ddd47b-x7dtc to node.yy.com
9m   Normal    AddedInterface     pod/olm-operator-c49ddd47b-x7dtc    Add eth0 [x.x.x.x/23] from openshift-sdn
8m   Normal    Pulled             pod/olm-operator-c49ddd47b-x7dtc    Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b11bb4b7da75ab1dca818dcb1f59c6e0cef5c8c8f9bea2af4e353942ad91f29" already present on machine
8m   Normal    Created            pod/olm-operator-c49ddd47b-x7dtc    Created container olm-operator
8m   Normal    Started            pod/olm-operator-c49ddd47b-x7dtc    Started container olm-operator
4m   Warning   BackOff            pod/olm-operator-c49ddd47b-x7dtc    Back-off restarting failed container olm-operator in pod olm-operator-c49ddd47b-x7dtc_openshift-operator-lifecycle-manager(1f907dd8-466c-496e-9715-d07e4d762a36)
9m   Normal    SuccessfulCreate   replicaset/olm-operator-c49ddd47b   Created pod: olm-operator-c49ddd47b-x7dtc
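When the one-line event listing is not enough, the full event records and the container's last termination state can be pulled with (a sketch using the same pod name):
$ oc describe pod olm-operator-c49ddd47b-x7dtc -n openshift-operator-lifecycle-manager
$ oc get events -n openshift-operator-lifecycle-manager --sort-by='.lastTimestamp'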
3- Even after enabling debug logging for OLM, the olm-operator pod logs do not provide much additional information.
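Since the component that exits is the labeller, one extra data point that may be worth collecting is whether the RBAC objects it reports on actually carry the olm.managed=true label it checks for (a sketch; the label selector is an assumption based on the label OLM applies from 4.15 onwards):
$ oc get clusterroles -l olm.managed=true --no-headers | wc -l
$ oc get clusterrolebindings -l olm.managed=true --no-headers | wc -l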
4- A similar bug [a] was reported earlier, but it was fixed in 4.15.
5- Tried rescheduling the pod to a different node (as sketched below); the same issue occurs.
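One way to force the pod onto a different node, in case anyone wants to reproduce this step, is (a sketch; <node-name> is a placeholder):
$ oc adm cordon <node-name>
$ oc delete pod olm-operator-c49ddd47b-x7dtc -n openshift-operator-lifecycle-manager
$ oc adm uncordon <node-name>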
6- Did not find any third-party agents running on the node that might contribute to the issue.
Need help to identify and fix the issue.
Must-gather and other logs are available in support case 04062274.
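If a fresh data set is needed, a default must-gather can be collected with (a sketch; the destination directory is arbitrary):
$ oc adm must-gather --dest-dir=./must-gather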
- blocks OCPBUGS-56098 olm-operator pod going to CLBO with a message "detected that every object is labelled, exiting to re-start the process" (Closed)
- clones OCPBUGS-53161 olm-operator pod going to CLBO with a message "detected that every object is labelled, exiting to re-start the process" (Verified)
- depends on OCPBUGS-53161 olm-operator pod going to CLBO with a message "detected that every object is labelled, exiting to re-start the process" (Verified)
- is cloned by OCPBUGS-56098 olm-operator pod going to CLBO with a message "detected that every object is labelled, exiting to re-start the process" (Closed)
- links to RHEA-2024:11038 OpenShift Container Platform 4.19.z bug fix update