OpenShift Bugs / OCPBUGS-56034

olm-operator pod going to CLBO with a message "detected that every object is labelled, exiting to re-start the process"


    • Quality / Stability / Reliability
    • Important
    • Rejected
    • Horsea OLM Sprint 268, Ivysaur OLM Sprint 269, Jigglypuff OLM Sprint 270, Kabuto Sprint 271
    • 4
    • Done
    • Bug Fix
    • Before this release, if an Operator did not have the required `olm.managed=true` label, the Operator might fail and enter a `CrashLoopBackOff` state. When this happened, the logs did not report the status as an error. As a result, the failure was difficult to diagnose. With this update, this type of failure is reported as an error. (link:https://issues.redhat.com/browse/OCPBUGS-56034[OCPBUGS-56034]) A label check is sketched after this list.
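      A quick way to confirm the label state that the release note and the labeller logs below refer to (a rough sketch: the resource types mirror the GVRs in the logs, and the second command is just a crude grep for objects that still lack the label):

      $ oc get clusterroles,clusterrolebindings -l olm.managed=true
      $ oc get clusterroles,clusterrolebindings --show-labels | grep -v olm.managed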

      Issue: the olm-operator pod goes into CrashLoopBackOff (CLBO) with the message "detected that every object is labelled, exiting to re-start the process"

      1- After upgrading the cluster from 4.14.44 to 4.15.44, the olm-operator pod repeatedly enters CrashLoopBackOff (CLBO) with the following logs.

       

      2025-03-14T06:44:50.225272058Z time="2025-03-14T06:44:50Z" level=info msg="labeller complete" gvr="rbac.authorization.k8s.io/v1, Resource=clusterroles" index=0
      2025-03-14T06:44:50.225553923Z time="2025-03-14T06:44:50Z" level=info msg="labeller complete" gvr="rbac.authorization.k8s.io/v1, Resource=clusterrolebindings" index=0
      2025-03-14T06:44:50.225553923Z time="2025-03-14T06:44:50Z" level=info msg="detected that every object is labelled, exiting to re-start the process..." 

       
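      To capture the full labeller output from the crashed container rather than the currently running one, the previous container's logs can be pulled (pod name taken from the events below):

      $ oc -n openshift-operator-lifecycle-manager logs olm-operator-c49ddd47b-x7dtc --previous | grep -i labeller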

      2- The namespace events also do not provide enough detail.

       

      $ oc get event | grep -i olm-operator-c49ddd47b-x7dtc 
      9m          Normal    Scheduled           pod/olm-operator-c49ddd47b-x7dtc         Successfully assigned openshift-operator-lifecycle-manager/olm-operator-c49ddd47b-x7dtc to on node.yy.com
      9m          Normal    AddedInterface      pod/olm-operator-c49ddd47b-x7dtc         Add eth0 [x.x.x.x/23] from openshift-sdn
      8m          Normal    Pulled              pod/olm-operator-c49ddd47b-x7dtc         Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b11bb4b7da75ab1dca818dcb1f59c6e0cef5c8c8f9bea2af4e353942ad91f29" already present on machine
      8m          Normal    Created             pod/olm-operator-c49ddd47b-x7dtc         Created container olm-operator
      8m          Normal    Started             pod/olm-operator-c49ddd47b-x7dtc         Started container olm-operator
      4m          Warning   BackOff             pod/olm-operator-c49ddd47b-x7dtc         Back-off restarting failed container olm-operator in pod olm-operator-c49ddd47b-x7dtc_openshift-operator-lifecycle-manager(1f907dd8-466c-496e-9715-d07e4d762a36)
      9m          Normal    SuccessfulCreate    replicaset/olm-operator-c49ddd47b        Created pod: olm-operator-c49ddd47b-x7dtc 

       
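      The exit code and reason from the last restart can add detail that the events lack (a sketch, using the pod name above):

      $ oc -n openshift-operator-lifecycle-manager get pod olm-operator-c49ddd47b-x7dtc \
          -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'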

      3- Even after enabling debug logging for OLM, the olm-operator pod logs do not provide much more information.
      4- A similar bug [a] was reported earlier, but it was fixed in 4.15.

      5- Tried rescheduling the pod to a different node (see the sketch after this list); the same issue occurred.

      6- Did not find any third-party agents running on the node that might contribute to the issue.
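      For reference, the rescheduling in step 5 can be reproduced by cordoning the current node (node name taken from the events above) and deleting the pod so the ReplicaSet recreates it elsewhere:

      $ oc adm cordon node.yy.com
      $ oc -n openshift-operator-lifecycle-manager delete pod olm-operator-c49ddd47b-x7dtc
      $ oc adm uncordon node.yy.com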

      Need help identifying and fixing the issue.

      Must-gather and other logs are available in support case 04062274.
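      If a fresh collection is needed outside the support case, the standard gather commands apply, for example:

      $ oc adm must-gather
      $ oc adm inspect ns/openshift-operator-lifecycle-manager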

       [a] https://issues.redhat.com/browse/OCPBUGS-25802

              Camila Macedo (rh-ee-cmacedo)
              MUHAMMED ASLAM V K (rhn-support-amuhamme)
              Jian Zhang