OCPBUGS-12236: Catalogs rebuilt by oc-mirror are in crashloop: cache is invalid


      Description of problem:

      Since https://github.com/openshift/operator-framework-olm/pull/435 merged, the public catalog images have --cache-dir in their CMD and ship a precomputed cache.

      oc-mirror 4.13 and 4.14 writes the rebuilt catalog with --cache-dir still in the CMD.
      The catalogs rebuilt by oc-mirror crashloop, complaining that the cache is invalid.
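
      A quick way to see the new CMD is to inspect the published catalog's image config (a sketch; the catalog reference is the one from the steps below, and the exact cache path can vary per release):

      skopeo inspect --config docker://registry.redhat.io/redhat/redhat-operator-registry:4.12 | jq '.config.Cmd'
      # expected to print something like ["serve", "/configs", "--cache-dir=/tmp/cache"]
      # oc-mirror rebuilds /configs but keeps this CMD, so the precomputed cache
      # no longer matches the rebuilt content and opm refuses to serve it.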

      Version-Release number of selected component (if applicable):

      4.14
      4.13

      How reproducible:

      always

      Steps to Reproduce:

      1. Download oc-mirror 4.13 or 4.14
      2. Mirror registry.redhat.io/redhat/redhat-operator-registry:4.12 with oc-mirror to a registry (see the sketch after this list)
      3. Try to install an operator on a disconnected cluster from that mirrored registry
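
      For reference, step 2 looks roughly like this (a sketch; the storage path and target registry are illustrative placeholders, not taken from this report):

      # imageset-config.yaml (sketch)
      kind: ImageSetConfiguration
      apiVersion: mirror.openshift.io/v1alpha2
      storageConfig:
        local:
          path: ./metadata
      mirror:
        operators:
        - catalog: registry.redhat.io/redhat/redhat-operator-registry:4.12

      # mirror the catalog and its operator images to the target registry
      oc mirror --config imageset-config.yaml docker://registry.example.com/mirror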
      

      Actual results:

      [root@localhost 2633]# oc get pod
      NAME                                    READY   STATUS             RESTARTS     AGE
      marketplace-operator-7b6c78cdd6-wsrk4   1/1     Running            0            38m
      qe-app-registry-gbpcv                   1/1     Running            0            37m
      redhat-operator-index-t7svb             0/1     CrashLoopBackOff   2 (6s ago)   33s
      oc logs -f po/redhat-operator-index-t7svb
      time="2023-04-20T05:24:15Z" level=fatal msg="cache requires rebuild: cache reports digest as \"287527d3b9abae65\", but computed digest is \"b0cabdf1401063a2\""
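
      The same failure can be reproduced outside the cluster by running the mirrored catalog image directly, since the digest check runs in the image's own entrypoint (the image reference below is a placeholder for whatever the mirror produced):

      podman run --rm registry.example.com/mirror/redhat/redhat-operator-registry:4.12
      # exits immediately with the same fatal error:
      # "cache requires rebuild: cache reports digest as ..., but computed digest is ..."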

      Expected results:

      I can install and upgrade operators from a mirrored catalog

      Additional info:

      https://github.com/openshift/operator-framework-olm/pull/435
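
      The digest in the fatal error comes from opm's cache integrity check: the catalog CMD runs opm serve with --cache-dir, and opm compares a hash of the declarative config directory it is serving against the digest recorded in the precomputed cache. The public catalogs precompute that cache at build time with a step along these lines (a sketch of the pattern from the PR above; paths are illustrative), and a rebuilt /configs needs the same step rerun for the baked-in cache to stay valid:

      # precompute the cache for the current /configs content and exit
      opm serve /configs --cache-dir=/tmp/cache --cache-only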
      
      
