OpenShift Over the Air / OTA-1601

status cmd: improve line breaks handling in the "Message" column of the "Updating Cluster Operators" section


    • Type: Bug
    • Resolution: Done
    • Priority: Undefined
    • Component: oc adm upgrade
    • Category: Quality / Stability / Reliability
    • Severity: Moderate
    • Sprint: OTA 274, OTA 275

      Found the issue in this Slack thread: https://redhat-internal.slack.com/archives/CJ1J9C3V4/p1755122638838849?thread_ts=1755112023.153649&cid=CJ1J9C3V4

      Example CI job:

      https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/30105/pull-ci-openshift-origin-main-e2e-aws-ovn-upgrade/1955686702345359360

      Example output:

      https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/pr-logs/pull/30105/pull-ci-openshift-origin-main-e2e-aws-ovn-upgrade/1955686702345359360/artifacts/e2e-aws-ovn-upgrade/openshift-e2e-test/artifacts/junit/adm-upgrade-status/adm-upgrade-status-2025-08-13%2019:58:23.124650721%20+0000%20UTC%20m=+1919.964244606__20250813-192642.txt

      Result:

      curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/pr-logs/pull/30105/pull-ci-openshift-origin-main-e2e-aws-ovn-upgrade/1955686702345359360/artifacts/e2e-aws-ovn-upgrade/openshift-e2e-test/artifacts/junit/adm-upgrade-status/adm-upgrade-status-2025-08-13%2019:58:23.124650721%20+0000%20UTC%20m\=+1919.964244606__20250813-192642.txt
      Unable to fetch alerts, ignoring alerts in 'Update Health':  failed to get alerts from Thanos: no token is currently in use for this session
      = Control Plane =
      Assessment:      Progressing
      Target Version:  4.20.0-0.ci-2025-08-13-182454-test-ci-op-5wilvz46-latest (from 4.20.0-0.ci-2025-08-13-174821-test-ci-op-5wilvz46-initial)
      Updating:        image-registry, monitoring, openshift-controller-manager
      Completion:      50% (17 operators updated, 3 updating, 14 waiting)
      Duration:        24m (Est. Time Remaining: 45m)
      Operator Health: 34 Healthy

      Updating Cluster Operators
      NAME             SINCE   REASON                                            MESSAGE
      image-registry   6s      DeploymentNotCompleted::NodeCADaemonUnavailable   NodeCADaemonProgressing: The daemon set node-ca is deploying node pods
      Progressing: The deployment has not completed
      monitoring                     4s    RollOutInProgress                                                                Rolling out the stack.
      openshift-controller-manager   11s   RouteControllerManager_DesiredStateNotYetAchieved::_DesiredStateNotYetAchieved   Progressing: deployment/controller-manager: observed generation is 10, desired generation is 11
      Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3
      RouteControllerManagerProgressing: deployment/route-controller-manager: observed generation is 7, desired generation is 8
      RouteControllerManagerProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3

      Control Plane Nodes
      NAME                          ASSESSMENT   PHASE     VERSION                                                     EST   MESSAGE
      ip-10-0-10-232.ec2.internal   Outdated     Pending   4.20.0-0.ci-2025-08-13-174821-test-ci-op-5wilvz46-initial   ?
      ip-10-0-8-129.ec2.internal    Outdated     Pending   4.20.0-0.ci-2025-08-13-174821-test-ci-op-5wilvz46-initial   ?
      ip-10-0-88-44.ec2.internal    Outdated     Pending   4.20.0-0.ci-2025-08-13-174821-test-ci-op-5wilvz46-initial   ?

      = Worker Upgrade =

      WORKER POOL   ASSESSMENT   COMPLETION   STATUS
      worker        Pending      0% (0/3)     3 Available, 0 Progressing, 0 Draining

      Worker Pool Nodes: worker
      NAME                          ASSESSMENT   PHASE     VERSION                                                     EST   MESSAGE
      ip-10-0-47-75.ec2.internal    Outdated     Pending   4.20.0-0.ci-2025-08-13-174821-test-ci-op-5wilvz46-initial   ?
      ip-10-0-57-235.ec2.internal   Outdated     Pending   4.20.0-0.ci-2025-08-13-174821-test-ci-op-5wilvz46-initial   ?
      ip-10-0-64-121.ec2.internal   Outdated     Pending   4.20.0-0.ci-2025-08-13-174821-test-ci-op-5wilvz46-initial   ?

      = Update Health =
      SINCE    LEVEL   IMPACT   MESSAGE
      24m12s   Info    None     Update is proceeding well

      Expected:

      Line breaks inside a message must not break the table layout; each operator should occupy a single row.

      Possible cause: the ClusterOperator's status.conditions.message contains line breaks.
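A minimal sketch of why such a message breaks the layout, assuming the status table is rendered with Go's text/tabwriter (a common choice for this kind of CLI output; not verified against the oc source — function names here are hypothetical):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"strings"
	"text/tabwriter"
)

// renderRow lays out one operator row under a header with text/tabwriter.
// tabwriter treats '\n' as a row terminator, so a newline embedded in the
// MESSAGE cell spills the rest of the message into a new, unaligned row —
// exactly the corruption seen in the "Updating Cluster Operators" table.
func renderRow(name, since, msg string) string {
	var buf bytes.Buffer
	w := tabwriter.NewWriter(&buf, 0, 8, 3, ' ', 0)
	fmt.Fprintln(w, "NAME\tSINCE\tMESSAGE")
	fmt.Fprintf(w, "%s\t%s\t%s\n", name, since, msg)
	w.Flush()
	return buf.String()
}

// lineCount counts the physical lines of rendered output: a clean table is
// header + one row = 2 lines; an embedded newline pushes it to 3.
func lineCount(s string) int {
	return strings.Count(strings.TrimRight(s, "\n"), "\n") + 1
}

func main() {
	msg := "NodeCADaemonProgressing: The daemon set node-ca is deploying node pods\n" +
		"Progressing: The deployment has not completed"
	os.Stdout.WriteString(renderRow("image-registry", "6s", msg))
}
```

Running this reproduces the symptom from the CI artifact: the `Progressing: ...` fragment lands at column 0 on its own line instead of staying inside the MESSAGE column.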

      curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/pr-logs/pull/30105/pull-ci-openshift-origin-main-e2e-aws-ovn-upgrade/1955686702345359360/artifacts/e2e-aws-ovn-upgrade/gather-extra/artifacts/inspect/cluster-scoped-resources/config.openshift.io/clusteroperators/image-registry.yaml
      ---
      apiVersion: config.openshift.io/v1
      kind: ClusterOperator
      metadata:
        annotations:
          capability.openshift.io/name: ImageRegistry
          include.release.openshift.io/hypershift: "true"
          include.release.openshift.io/ibm-cloud-managed: "true"
          include.release.openshift.io/self-managed-high-availability: "true"
          include.release.openshift.io/single-node-developer: "true"
        creationTimestamp: "2025-08-13T18:52:02Z"
        generation: 1
        managedFields:
        - apiVersion: config.openshift.io/v1
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:annotations:
                .: {}
                f:capability.openshift.io/name: {}
                f:include.release.openshift.io/hypershift: {}
                f:include.release.openshift.io/ibm-cloud-managed: {}
                f:include.release.openshift.io/self-managed-high-availability: {}
                f:include.release.openshift.io/single-node-developer: {}
              f:ownerReferences:
                .: {}
                k:{"uid":"afc2ae89-a344-4fa6-92a3-294edb109067"}: {}
            f:spec: {}
          manager: cluster-version-operator
          operation: Update
          time: "2025-08-13T18:52:02Z"
        - apiVersion: config.openshift.io/v1
          fieldsType: FieldsV1
          fieldsV1:
            f:status:
              .: {}
              f:extension: {}
          manager: cluster-version-operator
          operation: Update
          subresource: status
          time: "2025-08-13T18:52:03Z"
        - apiVersion: config.openshift.io/v1
          fieldsType: FieldsV1
          fieldsV1:
            f:status:
              f:conditions:
                .: {}
                k:{"type":"Available"}:
                  .: {}
                  f:lastTransitionTime: {}
                  f:message: {}
                  f:reason: {}
                  f:status: {}
                  f:type: {}
                k:{"type":"Degraded"}:
                  .: {}
                  f:lastTransitionTime: {}
                  f:reason: {}
                  f:status: {}
                  f:type: {}
                k:{"type":"Progressing"}:
                  .: {}
                  f:lastTransitionTime: {}
                  f:message: {}
                  f:reason: {}
                  f:status: {}
                  f:type: {}
              f:relatedObjects: {}
              f:versions: {}
          manager: cluster-image-registry-operator
          operation: Update
          subresource: status
          time: "2025-08-13T20:32:38Z"
        name: image-registry
        ownerReferences:
        - apiVersion: config.openshift.io/v1
          controller: true
          kind: ClusterVersion
          name: version
          uid: afc2ae89-a344-4fa6-92a3-294edb109067
        resourceVersion: "77131"
        uid: f34b8002-1e36-4acd-a548-37ca46385a70
      spec: {}
      status:
        conditions:
        - lastTransitionTime: "2025-08-13T19:07:20Z"
          message: |-
            NodeCADaemonAvailable: The daemon set node-ca has available replicas
            Available: The registry is ready
            ImagePrunerAvailable: Pruner CronJob has been created
          reason: Ready
          status: "True"
          type: Available
        - lastTransitionTime: "2025-08-13T20:32:38Z"
          message: |-
            NodeCADaemonProgressing: The daemon set node-ca is deployed
            Progressing: The registry is ready
          reason: Ready
          status: "False"
          type: Progressing
        - lastTransitionTime: "2025-08-13T19:06:52Z"
          reason: AsExpected
          status: "False"
          type: Degraded
        extension: null
        relatedObjects:
        - group: imageregistry.operator.openshift.io
          name: cluster
          resource: configs
        - group: imageregistry.operator.openshift.io
          name: cluster
          resource: imagepruners
        - group: rbac.authorization.k8s.io
          name: system:registry
          resource: clusterroles
        - group: rbac.authorization.k8s.io
          name: registry-registry-role
          resource: clusterrolebindings
        - group: rbac.authorization.k8s.io
          name: openshift-image-registry-pruner
          resource: clusterrolebindings
        - group: ""
          name: openshift-image-registry
          resource: namespaces
        versions:
        - name: operator
          version: 4.20.0-0.ci-2025-08-13-182454-test-ci-op-5wilvz46-latest 

      One simple fix could be to replace the line breaks in the message before rendering the table.
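The suggested fix could be sketched as follows; the helper name and the `//` separator are my assumptions, not the oc implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// collapseNewlines joins a multi-line status.conditions.message into a single
// line so the MESSAGE cell stays on one table row. The " // " separator is an
// arbitrary visible delimiter between the original message lines.
func collapseNewlines(msg string) string {
	lines := strings.Split(strings.TrimSpace(msg), "\n")
	for i, l := range lines {
		lines[i] = strings.TrimSpace(l)
	}
	return strings.Join(lines, " // ")
}

func main() {
	msg := "NodeCADaemonProgressing: The daemon set node-ca is deploying node pods\n" +
		"Progressing: The deployment has not completed"
	fmt.Println(collapseNewlines(msg))
	// → NodeCADaemonProgressing: The daemon set node-ca is deploying node pods // Progressing: The deployment has not completed
}
```

Trimming each line before joining also guards against the trailing whitespace that block-scalar (`|-`) YAML messages sometimes carry.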

      Attached inspect.zip from the above job's artifacts.

              Assignee: Hongkai Liu (hongkliu)
              Reporter: Hongkai Liu (hongkliu)
              Votes: 0
              Watchers: 2