Red Hat Advanced Cluster Management / ACM-23312

ClusterPermission reports success status even when ManifestWork fails


      Description of problem:

      When a ClusterPermission is created, the status looks like this:

      ubuntu@ubuntu2404:~/REPOS/console_mshort55$ oc get clusterpermission -n sno-2-b9657 sno-2-b9657-kubevirt-admin -oyaml
      ...
      status:
        conditions:
        - lastTransitionTime: "2025-08-05T20:29:24Z"
          message: |-
            Run the following command to check the ManifestWork status:
            kubectl -n sno-2-b9657 get ManifestWork sno-2-b9657-kubevirt-admin-5a886 -o yaml
          reason: AppliedRBACManifestWork
          status: "True"
          type: AppliedRBACManifestWork 

      If we rely on this status, it appears that the ClusterPermission was applied successfully. However, if there is an underlying problem with the ManifestWork, it is not reported on the ClusterPermission status:

      ubuntu@ubuntu2404:~/REPOS/console_mshort55$ kubectl -n sno-2-b9657 get ManifestWork sno-2-b9657-kubevirt-admin-5a886 -o yaml
      ...
      status:
        conditions:
        - lastTransitionTime: "2025-08-14T17:46:11Z"
          message: Failed to apply manifest work
          observedGeneration: 2
          reason: AppliedManifestWorkFailed
          status: "False"
          type: Applied
        - lastTransitionTime: "2025-08-10T10:29:02Z"
          message: All resources are available
          observedGeneration: 2
          reason: ResourcesAvailable
          status: "True"
          type: Available
        resourceStatus:
          manifests:
          - conditions:
            - lastTransitionTime: "2025-08-14T17:46:11Z"
              message: 'Failed to apply manifest: ClusterRoleBinding.rbac.authorization.k8s.io
                "sno-2-b9657-kubevirt-admin" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io",
                Kind:"ClusterRole", Name:"kubevirt.io:adminn"}: cannot change roleRef'
              reason: AppliedManifestFailed
              status: "False"
              type: Applied
            - lastTransitionTime: "2025-08-10T10:29:02Z"
              message: Resource is available
              reason: ResourceAvailable
              status: "True"
              type: Available
            - lastTransitionTime: "2025-08-05T20:29:24Z"
              message: ""
              reason: NoStatusFeedbackSynced
              status: "True"
              type: StatusFeedbackSynced
            resourceMeta:
              group: rbac.authorization.k8s.io
              kind: ClusterRoleBinding
              name: sno-2-b9657-kubevirt-admin
              namespace: ""
              ordinal: 0
              resource: clusterrolebindings
              version: v1 

      This bug is to fix the ClusterPermission status so that it reports errors from the underlying ManifestWork. Clients such as the UI should be able to rely on the ClusterPermission status without having to check the underlying resources.
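
      For illustration only, a failure propagated from the ManifestWork might surface on the ClusterPermission status along these lines (the reason and message wording here are hypothetical, not current controller output; the error text is taken from the ManifestWork status above):

      status:
        conditions:
        - lastTransitionTime: "2025-08-14T17:46:11Z"
          message: |-
            ManifestWork sno-2-b9657-kubevirt-admin-5a886 failed to apply:
            ClusterRoleBinding "sno-2-b9657-kubevirt-admin" is invalid: roleRef: cannot change roleRef
          reason: FailedRBACManifestWork
          status: "False"
          type: AppliedRBACManifestWork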

      Version-Release number of selected component (if applicable): ACM 2.14

      How reproducible:

      Every time.

      Steps to Reproduce:

      1. Create a ClusterPermission successfully
      2. Edit the ClusterPermission and change the role name in roleRef (see the example manifest below)
      3. The ClusterPermission will not show the ManifestWork error
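
      A minimal sketch of such a ClusterPermission, assuming the rbac.open-cluster-management.io/v1alpha1 API group and the clusterRoleBinding field of the ClusterPermission CRD (the subject shown is illustrative):

      apiVersion: rbac.open-cluster-management.io/v1alpha1
      kind: ClusterPermission
      metadata:
        name: sno-2-b9657-kubevirt-admin
        namespace: sno-2-b9657
      spec:
        clusterRoleBinding:
          roleRef:
            apiGroup: rbac.authorization.k8s.io
            kind: ClusterRole
            name: kubevirt.io:admin   # step 2: change this (e.g. to kubevirt.io:adminn) to trigger "cannot change roleRef"
          subject:
            kind: User
            name: example-user        # illustrative subject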

      Actual results:

      ClusterPermission does not show an error status, even though the ManifestWork has failed.

      Expected results:

      ClusterPermission should report ManifestWork failures in its status.
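
      With that fix, a client should only need to query the ClusterPermission itself, for example (jsonpath query shown for illustration):

      kubectl -n sno-2-b9657 get clusterpermission sno-2-b9657-kubevirt-admin \
        -o jsonpath='{.status.conditions[?(@.type=="AppliedRBACManifestWork")].status}'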

      Additional info:
