OpenShift Bugs / OCPBUGS-60914

Multiple UpdateService instances can race for graph-data-tag-digest Pod ownership


    • Quality / Stability / Reliability
    • Severity: Low
      Description of problem:

      While reconciling an UpdateService, the OSUS Operator creates a graph-data-tag-digest Pod to resolve a potentially tag-based pullspec into a digest-based one. But there seems to be no mechanism in place that lets two UpdateService instances coexist: both appear to race for ownership of the same graph-data-tag-digest Pod, so one of the instances may end up using incorrect data.
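      A quick way to observe the presumed sharing (the Pod name comes from this report; the namespace and any name prefix are assumptions, so adjust the filter as needed):

      # With two UpdateService instances, one resolver Pod per instance would be
      # expected; if the operator reuses a single fixed Pod name, only one shows up
      oc get pods --all-namespaces | grep graph-data-tag-digest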

      Version-Release number of selected component (if applicable):

      5.0.3

      How reproducible:

      Always

      Steps to Reproduce:

      1. Create two UpdateService instances with different .spec.graphDataImage images (see the example manifests after these steps)
      2. Inspect the updateservice.operator.openshift.io/graph-data-image annotation on the Deployment instances:

      oc get deployment another-sample -o yaml | yq '.spec.template.metadata.annotations["updateservice.operator.openshift.io/graph-data-image"]'
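
      A minimal sketch of the manifests for step 1, assuming the usual UpdateService fields; the replicas and releases values here are placeholders, not part of the reproducer:

      oc apply -f - <<EOF
      apiVersion: updateservice.operator.openshift.io/v1
      kind: UpdateService
      metadata:
        name: sample
      spec:
        replicas: 1
        releases: quay.io/openshift-release-dev/ocp-release
        graphDataImage: quay.io/petr-muller/cincinnati-graph-data-container:master
      ---
      apiVersion: updateservice.operator.openshift.io/v1
      kind: UpdateService
      metadata:
        name: another-sample
      spec:
        replicas: 1
        releases: quay.io/openshift-release-dev/ocp-release
        graphDataImage: quay.io/petr-muller/cincinnati-graph-data-container:20250826
      EOF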
      

      Actual results:

      ❯ oc get updateservice sample -o yaml | yq .spec.graphDataImage
      quay.io/petr-muller/cincinnati-graph-data-container:master
      ❯ oc get updateservice another-sample -o yaml | yq .spec.graphDataImage
      quay.io/petr-muller/cincinnati-graph-data-container:20250826
      

      => different graph data images

      ❯ skopeo inspect docker://quay.io/petr-muller/cincinnati-graph-data-container:master | jq -r '.Digest'
      sha256:e7b568bf2521815e7be5b0684a86df01e269828b90fe7d9517ee0b51663a9bf3
      ❯ skopeo inspect docker://quay.io/petr-muller/cincinnati-graph-data-container:20250826 | jq -r '.Digest'
      sha256:1e72623f18747ed3bcf07558ba4ac88372231ab36126126e99718c25b9d4a55e
      

      => with different digests

      ❯ oc get deployment sample -o yaml | yq '.spec.template.metadata.annotations["updateservice.operator.openshift.io/graph-data-image"]'
      quay.io/petr-muller/cincinnati-graph-data-container@sha256:e7b568bf2521815e7be5b0684a86df01e269828b90fe7d9517ee0b51663a9bf3
      ❯ oc get deployment another-sample -o yaml | yq '.spec.template.metadata.annotations["updateservice.operator.openshift.io/graph-data-image"]'
      quay.io/petr-muller/cincinnati-graph-data-container@sha256:e7b568bf2521815e7be5b0684a86df01e269828b90fe7d9517ee0b51663a9bf3
      

      => identical annotations (meaning the annotation on another-sample is wrong, leading to missed graph-data updates if quay.io/petr-muller/cincinnati-graph-data-container:20250826 is later re-tagged to point at a different image)

      Expected results:

      There are two separate graph-data-tag-digest Pods, and the annotation on each Deployment matches the digest of its own .spec.graphDataImage.
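
      A hedged way to verify this, reusing the commands from above (assumes the Deployment names match the UpdateService names, as they do in this report):

      for us in sample another-sample; do
        image=$(oc get updateservice "$us" -o yaml | yq .spec.graphDataImage)
        want=$(skopeo inspect "docker://$image" | jq -r '.Digest')
        got=$(oc get deployment "$us" -o yaml | yq '.spec.template.metadata.annotations["updateservice.operator.openshift.io/graph-data-image"]')
        # the annotation holds a by-digest pullspec; compare only its digest part
        echo "$us: want $want got ${got##*@}"
      done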

      Additional info:

      This seems like a rare scenario, so low severity.

              Assignee: Unassigned
              Reporter: Petr Muller (afri@afri.cz)
              QA Contact: Jia Liu