      Description of problem:

      Sometimes, when OCL is enabled in a cluster, MCPs report a degraded status because of a NodeDegraded condition like the following:

        - lastTransitionTime: "2025-03-20T14:17:28Z"
          message: 'Node ip-10-0-0-92.us-east-2.compute.internal is reporting: "failed to
            update OS to quay.io/mcoqe/layering@sha256:a14d1d859ce8a3a0478be6821cfd1aeafabbf3707f7fd7511cda8e46338206b3:
            error running rpm-ostree rebase --experimental ostree-unverified-registry:quay.io/mcoqe/layering@sha256:a14d1d859ce8a3a0478be6821cfd1aeafabbf3707f7fd7511cda8e46338206b3:
            error: Old and new refs are equal: ostree-unverified-registry:quay.io/mcoqe/layering@sha256:a14d1d859ce8a3a0478be6821cfd1aeafabbf3707f7fd7511cda8e46338206b3\n:
            exit status 1"'
          reason: 1 nodes are reporting degraded status on sync
          status: "True"
          type: NodeDegraded
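
      For reference, the condition and the underlying error can be inspected with standard oc commands. A minimal sketch, assuming cluster-admin access and that the affected pool and node are the ones named above:

        # Show the NodeDegraded condition on the worker pool
        oc get mcp worker -o jsonpath='{.status.conditions[?(@.type=="NodeDegraded")]}'

        # Find the machine-config-daemon pod running on the degraded node ...
        oc -n openshift-machine-config-operator get pods -o wide \
          --field-selector spec.nodeName=ip-10-0-0-92.us-east-2.compute.internal

        # ... and grep its logs for the rpm-ostree rebase failure
        oc -n openshift-machine-config-operator logs <machine-config-daemon-pod> \
          -c machine-config-daemon | grep "Old and new refs are equal"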
      
          

      Version-Release number of selected component (if applicable):

      IPI on AWS
      Version 4.19.0-0.nightly-2025-03-20-062111
          

      How reproducible:

      Intermittent
          

      Steps to Reproduce:

          1. We cannot reproduce it at will. However, repeatedly executing the automated test case ocp-69197 in a cluster with OCL enabled in the master and worker pools will eventually trigger the degraded condition (a sketch of the kind of MachineConfig churn involved is shown below).
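
      The following is a minimal, hypothetical sketch of such churn; it is not the actual ocp-69197 test. It repeatedly creates and deletes a trivial MachineConfig targeting the worker pool and stops if the NodeDegraded condition appears. The MachineConfig name, file path, iteration count, and timeouts are made up for illustration.

      A trivial MachineConfig, saved for example as repro-mc.yaml:

        apiVersion: machineconfiguration.openshift.io/v1
        kind: MachineConfig
        metadata:
          name: 99-worker-repro-test
          labels:
            machineconfiguration.openshift.io/role: worker
        spec:
          config:
            ignition:
              version: 3.4.0
            storage:
              files:
              - path: /etc/repro-test.txt
                mode: 0644
                contents:
                  source: data:,repro

      A loop that applies and deletes it, waiting for the pool to settle each time:

        # Create/delete the MachineConfig repeatedly and watch for NodeDegraded
        for i in $(seq 1 20); do
          oc apply -f repro-mc.yaml
          oc wait mcp/worker --for=condition=Updating=True --timeout=30m
          oc wait mcp/worker --for=condition=Updated=True --timeout=60m
          oc delete -f repro-mc.yaml
          oc wait mcp/worker --for=condition=Updating=True --timeout=30m
          oc wait mcp/worker --for=condition=Updated=True --timeout=60m
          degraded=$(oc get mcp worker -o jsonpath='{.status.conditions[?(@.type=="NodeDegraded")].status}')
          if [ "$degraded" = "True" ]; then
            echo "worker pool degraded on iteration $i"
            break
          fi
        done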
          
          

      Actual results:

      The worker pool is reporting a degraded status:
      
        - lastTransitionTime: "2025-03-20T14:17:28Z"
          message: 'Node ip-10-0-0-92.us-east-2.compute.internal is reporting: "failed to
            update OS to quay.io/mcoqe/layering@sha256:a14d1d859ce8a3a0478be6821cfd1aeafabbf3707f7fd7511cda8e46338206b3:
            error running rpm-ostree rebase --experimental ostree-unverified-registry:quay.io/mcoqe/layering@sha256:a14d1d859ce8a3a0478be6821cfd1aeafabbf3707f7fd7511cda8e46338206b3:
            error: Old and new refs are equal: ostree-unverified-registry:quay.io/mcoqe/layering@sha256:a14d1d859ce8a3a0478be6821cfd1aeafabbf3707f7fd7511cda8e46338206b3\n:
            exit status 1"'
          reason: 1 nodes are reporting degraded status on sync
          status: "True"
          type: NodeDegraded
      
          

      Expected results:

      No degradation should happen when we create or delete MachineConfigs in a cluster where OCL is enabled.
          

      Additional info:

          

            [OCPBUGS-53408] In OCL. error: Old and new refs are equal

            Prachiti Talgulkar added a comment - Pre-merge verified here https://github.com/openshift/machine-config-operator/pull/4924#issuecomment-2782264061

            Sergio Regidor de la Rosa added a comment - It happened here too https://qe-private-deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gs/qe-private-deck/logs/periodic-ci-openshift-openshift-tests-private-release-4.19-amd64-nightly-aws-ipi-longrun-mco-tp-ocl-p3-f7/1899571013784440832 while executing test case "OCP-73631 - Pinned images garbage collection" in a cluster with OCL enabled in the master and worker pools.

            But we were not able to capture the must-gather file. We only have the logs: blob:https://qe-private-deck-ci.apps.ci.l2s4.p1.openshiftapps.com/e9fb38c7-8057-4ad7-8a71-8511b73d466c

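            If the failure is reproduced again, the missing diagnostics can be captured with a standard must-gather. A minimal sketch, assuming access to the affected cluster (the destination directory name and log file name are arbitrary):

              # Collect a full must-gather archive for later analysis
              oc adm must-gather --dest-dir=./must-gather-ocpbugs-53408

              # Additionally save the machine-config-daemon log from the degraded node
              oc -n openshift-machine-config-operator logs <machine-config-daemon-pod> \
                -c machine-config-daemon > mcd-degraded-node.log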
