- Bug
- Resolution: Unresolved
- Normal
- None
- 4.19
- Important
- None
- 3
- MCO Sprint 268, MCO Sprint 269
- 2
- Proposed
- False
Description of problem:
Sometimes, when OCL is enabled in a cluster, MCPs report a degraded status because of the following condition:

  - lastTransitionTime: "2025-03-20T14:17:28Z"
    message: 'Node ip-10-0-0-92.us-east-2.compute.internal is reporting: "failed to update OS to quay.io/mcoqe/layering@sha256:a14d1d859ce8a3a0478be6821cfd1aeafabbf3707f7fd7511cda8e46338206b3: error running rpm-ostree rebase --experimental ostree-unverified-registry:quay.io/mcoqe/layering@sha256:a14d1d859ce8a3a0478be6821cfd1aeafabbf3707f7fd7511cda8e46338206b3: error: Old and new refs are equal: ostree-unverified-registry:quay.io/mcoqe/layering@sha256:a14d1d859ce8a3a0478be6821cfd1aeafabbf3707f7fd7511cda8e46338206b3\n: exit status 1"'
    reason: 1 nodes are reporting degraded status on sync
    status: "True"
    type: NodeDegraded
Version-Release number of selected component (if applicable):
IPI on AWS Version 4.19.0-0.nightly-2025-03-20-062111
How reproducible:
Intermittent
Steps to Reproduce:
1. We cannot reproduce it at will. However, repeatedly executing the automated test case ocp-69197 in a cluster with OCL enabled in the master and worker pools will eventually trigger the degraded condition.
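The polling part of the reproduction loop above can be sketched as follows. This is a hypothetical helper (not part of the original report); it assumes a logged-in `oc` session against the affected cluster, and that the automated test case is being re-run separately:

```shell
#!/bin/sh
# Sketch: poll the worker MachineConfigPool until it reports NodeDegraded=True.
# Assumes `oc` is on PATH and authenticated against the OCL-enabled cluster.
while true; do
  status=$(oc get mcp worker \
    -o jsonpath='{.status.conditions[?(@.type=="NodeDegraded")].status}')
  if [ "$status" = "True" ]; then
    # Dump the pool once degradation is observed, to capture the full condition.
    oc get mcp worker -o yaml
    break
  fi
  sleep 30
done
```

This only observes the failure; it does not trigger it. The trigger is the repeated creation/deletion of MachineConfigs performed by the test case.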
Actual results:
Worker pool is reporting a degraded status:

  - lastTransitionTime: "2025-03-20T14:17:28Z"
    message: 'Node ip-10-0-0-92.us-east-2.compute.internal is reporting: "failed to update OS to quay.io/mcoqe/layering@sha256:a14d1d859ce8a3a0478be6821cfd1aeafabbf3707f7fd7511cda8e46338206b3: error running rpm-ostree rebase --experimental ostree-unverified-registry:quay.io/mcoqe/layering@sha256:a14d1d859ce8a3a0478be6821cfd1aeafabbf3707f7fd7511cda8e46338206b3: error: Old and new refs are equal: ostree-unverified-registry:quay.io/mcoqe/layering@sha256:a14d1d859ce8a3a0478be6821cfd1aeafabbf3707f7fd7511cda8e46338206b3\n: exit status 1"'
    reason: 1 nodes are reporting degraded status on sync
    status: "True"
    type: NodeDegraded
Expected results:
No degradation should happen when we create/delete MachineConfigs in a cluster where OCL is enabled.
Additional info:
Pre-merge verified here: https://github.com/openshift/machine-config-operator/pull/4924#issuecomment-2782264061