OCPBUGS-19778

duplicated running pods for certified-operators/community-operators/redhat-marketplace/redhat-operators in the project openshift-marketplace


    • Bug
    • Resolution: Not a Bug
    • Major
    • None
    • 4.10.z
    • OLM
    • Critical
    • No
    • Rejected
    • False
    • Customer Escalated

      Description of problem:

      Duplicated pods can be seen in CrashLoopBackOff state in the openshift-marketplace namespace:

      $ oc get pods -n openshift-marketplace
      NAME                                                              READY   STATUS             RESTARTS   AGE
      5799beea233b33b6f3b550ce9d4386be1ce0eaf18538c6e51a4da28265vhkst   0/1     Completed          0          1d
      certified-operators-flklx                                         0/1     CrashLoopBackOff   8          34m
      certified-operators-zlpxd                                         0/1     CrashLoopBackOff   8          34m
      community-operators-cqbjn                                         0/1     CrashLoopBackOff   8          33m
      community-operators-s7gxf                                         0/1     CrashLoopBackOff   8          33m
      marketplace-operator-5fd8df5986-8tszf                             1/1     Running            0          33m
      redhat-marketplace-2fmk5                                          0/1     CrashLoopBackOff   6          23m
      redhat-marketplace-8xkbg                                          0/1     CrashLoopBackOff   8          33m
      redhat-operators-9lddb                                            0/1     CrashLoopBackOff   8          33m
      redhat-operators-blbcf                                            0/1     CrashLoopBackOff   8          34m 

      Also, the `catalog-operator` pod is restarting after printing a stack trace with the fatal error `concurrent map writes`:

      $ oc get pods -n  openshift-operator-lifecycle-manager 
      NAME                                    READY   STATUS      RESTARTS   AGE
      catalog-operator-f85c49c5-m6b4d         1/1     Running     277        11d
       
      $ oc logs catalog-operator-f85c49c5-m6b4d -p -n  openshift-operator-lifecycle-manager 
      ...
      2023-09-25T07:25:36.269842917Z time="2023-09-25T07:25:36Z" level=info msg=syncing event=update reconciling="*v1alpha1.Subscription" selflink=
      2023-09-25T07:25:36.270101523Z fatal error: concurrent map writes
      2023-09-25T07:25:36.273353697Z 
      2023-09-25T07:25:36.273353697Z goroutine 638 [running]:
      2023-09-25T07:25:36.273353697Z runtime.throw({0x1ee9827, 0xc003a645a8})
      2023-09-25T07:25:36.273407198Z  /usr/lib/golang/src/runtime/panic.go:1198 +0x71 fp=0xc003a64550 sp=0xc003a64520 pc=0x43bcd1
      2023-09-25T07:25:36.273407198Z runtime.mapassign_faststr(0x1be0e40, 0xc006651d40, {0x1f17e43, 0x2e...
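
      The fatal error above is the Go runtime aborting the process because two goroutines wrote to the same plain map without synchronization (the `runtime.mapassign_faststr` frame is a map assignment). As a purely illustrative, hypothetical sketch of that bug class and of the usual mutex-based fix (not the actual catalog-operator code), assuming nothing beyond the standard library:

      package main

      import "sync"

      // Unsynchronized writes to a plain map from several goroutines trip the
      // runtime's detector and abort the whole process with
      // "fatal error: concurrent map writes", the failure seen in the log above.
      func unsafeWrites(m map[string]int) {
          var wg sync.WaitGroup
          for i := 0; i < 8; i++ {
              wg.Add(1)
              go func(n int) {
                  defer wg.Done()
                  m["key"] = n // data race: may abort the process
              }(i)
          }
          wg.Wait()
      }

      // Guarding the map with a mutex (or switching to sync.Map) removes the race.
      func safeWrites(m map[string]int, mu *sync.Mutex) {
          var wg sync.WaitGroup
          for i := 0; i < 8; i++ {
              wg.Add(1)
              go func(n int) {
                  defer wg.Done()
                  mu.Lock()
                  m["key"] = n
                  mu.Unlock()
              }(i)
          }
          wg.Wait()
      }

      func main() {
          var mu sync.Mutex
          safeWrites(map[string]int{}, &mu)
          // unsafeWrites(map[string]int{}) // would likely crash if enabled
      }

      Running such a binary with `go run -race` reports the data race even when the fatal error does not trigger, which is the usual way this kind of issue is pinned down.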

      Version-Release number of selected component (if applicable):

      $ oc get clusterversion
      NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
      version   4.10.41   True        False         11d     Cluster version is 4.10.41
      

      How reproducible:

      Not able to reproduce

      Steps to Reproduce:

      N/A
      

      Actual results:

      Two pods exist for each of certified-operators/community-operators/redhat-marketplace/redhat-operators in the project openshift-marketplace, and the catalog-operator pod is printing a stack trace.

      Expected results:

      Only one pod exists for each of certified-operators/community-operators/redhat-marketplace/redhat-operators, and the catalog-operator does not print a stack trace.

      Additional info:

      Even though OCP 4.10 is out of Maintenance Support, this bug is being filed before starting the upgrade activity; the goal is to resolve it so the cluster can move forward and upgrade to OCP 4.11.

            Assignee: Alexander Greene (agreene1991)
            Reporter: Oscar Casal Sanchez (rhn-support-ocasalsa)
            QA Contact: Jian Zhang