ACM-15634

Observability resources remain when deleting cluster


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Undefined
    • Affects Version/s: ACM 2.10.0
    • Component/s: Observability

      Description of problem:

      After deleting a cluster using ZTP GitOps, some resources related to Observability remain on the hub cluster, and the cluster namespace cannot be cleanly removed:

       

      $ oc get ns se..00152 -oyaml|grep message -A 2
          message: All resources successfully discovered
          reason: ResourcesDiscovered
          status: "False"
      --
          message: All legacy kube types successfully parsed
          reason: ParsedGroupVersions
          status: "False"
      --
          message: All content successfully deleted, may be waiting on finalization
          reason: ContentDeleted
          status: "False"
      --
          message: 'Some resources are remaining: manifestworks.work.open-cluster-management.io
            has 1 resource instances, rolebindings.authorization.openshift.io has 1 resource
            instances, rolebindings.rbac.authorization.k8s.io has 1 resource instances'
      --
          message: 'Some content in the namespace has finalizers remaining: cluster.open-cluster-management.io/manifest-work-cleanup
            in 3 resource instances'
          reason: SomeFinalizersRemain
      
      $ oc get ns se..00152
      NAME            STATUS        AGE
      se..00152   Terminating   5d21h 

       

       

      As a workaround, we can manually delete all the finalizers on the `manifestworks` and `rolebindings`, after which the cluster deletion finalizes (see the sketch below).
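
      A minimal sketch of that workaround, assuming the stuck namespace from the output above (the namespace variable and loops are illustrative; clearing finalizers bypasses the normal cleanup, so it should only be done on namespaces already confirmed orphaned):

      $ NS=<stuck-cluster-namespace>   # e.g. the se..00152 namespace above
      # See what is still blocking the namespace termination.
      $ oc get manifestworks,rolebindings -n "$NS"
      # A JSON merge patch replaces the finalizers list with an empty one.
      $ for mw in $(oc get manifestworks -n "$NS" -o name); do oc patch "$mw" -n "$NS" --type=merge -p '{"metadata":{"finalizers":[]}}'; done
      $ for rb in $(oc get rolebindings -n "$NS" -o name); do oc patch "$rb" -n "$NS" --type=merge -p '{"metadata":{"finalizers":[]}}'; done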

      In the past, something similar was solved by changing the Argo CD deletion mode:

      https://redhat-internal.slack.com/archives/CUU609ZQC/p1720772164011829?thread_ts=1719928137.929819&cid=CUU609ZQC

      but that fix was already included here (and has prevented many cluster deletion errors).

      This seems to be a different case.
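
      For reference, a minimal sketch of that earlier deletion-mode change, assuming it refers to Argo CD's cascading-deletion finalizer (the Application name and namespace are illustrative): adding the /background variant of the finalizer makes Argo CD delete the Application's resources with background propagation instead of the default foreground mode.

      # Illustrative: switch an Application to background cascading deletion.
      # The default (foreground) finalizer is resources-finalizer.argocd.argoproj.io.
      $ oc patch application example-cluster -n openshift-gitops --type=merge \
          -p '{"metadata":{"finalizers":["resources-finalizer.argocd.argoproj.io/background"]}}'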

       

      Version-Release number of selected component (if applicable):

      OCP 4.14 and RHACM 2.10

      How reproducible:

      Not reliably reproducible: it only happens occasionally, and we have not been able to find the trigger. We have a hub with hundreds of spokes and a lot of cluster creation/removal activity, and the issue shows up a few times per month.

      Steps to Reproduce:

      1.  
      2.  
      3. ...

      Actual results:

      Expected results:

      Additional info:

              mzardab@redhat.com Moad Zardab
              jgato@redhat.com Jose Gato Luis
              Xiang Yin