OpenShift Request For Enhancement
RFE-6454

Add ability to delete storage nodes using GitOps workflow without needing any manual intervention


    • Type: Feature Request
    • Resolution: Unresolved
    • Priority: Normal
    • Component: Storage

      1. Proposed title of this feature request

      Add ability to delete storage nodes using GitOps workflow without needing any manual intervention

      2. What is the nature and description of the request?

      We have three storage nodes, i.e. storage0, storage1, and storage2, running in our managed cluster. The intent of the test is to remove one storage node from a healthy cluster, re-add a new node, and make sure Ceph comes back to a healthy state. In our case we remove one identified storage node using SiteConfig, clean its disks, and re-add the same node back to the managed cluster. We have identified storage0 to test the physical node replacement of an RHODF node in our lab, and I have attached the document with the screenshots and the procedure we followed.
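      For reference, here is a minimal sketch of the kind of SiteConfig node entry involved; the site name, domain, and BMC details below are placeholders and not taken from our environment. Removing the storage0 entry from the SiteConfig in Git is what is expected to trigger the node deletion through the GitOps workflow.

          apiVersion: ran.openshift.io/v1
          kind: SiteConfig
          metadata:
            name: example-site
            namespace: example-site
          spec:
            baseDomain: example.com
            clusters:
            - clusterName: example-cluster
              nodes:
              # Deleting this entry from Git should remove the node;
              # today it still leaves a NotReady node object behind (see issues below).
              - hostName: storage0.example.com
                role: worker
                bmcAddress: idrac-virtualmedia://192.0.2.10/redfish/v1/Systems/System.Embedded.1
                bmcCredentialsName:
                  name: storage0-bmc-secret
                bootMACAddress: "AA:BB:CC:DD:EE:00"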

      We have noticed the following issues:

      1. Once the node deletion is started, the node is successfully deleted from the BMH, but the storage0 node object is not removed from the cluster and stays in NotReady state. We had to remove it manually using "oc delete node" (see the sketch of the manual steps after this list).
      2. Once the new node is added back to the cluster (in our case it is the same node after cleaning the disks), the node is provisioned successfully and the BMH status is updated to Provisioned, but the node is still in NotReady state.
      3. We have noticed that the cluster has pending CSRs for the storage0 node. Once the CSRs are approved, the storage0 node comes to Ready state, but without the worker role; only the storage role is assigned to the node.
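      To make the gap concrete, these are roughly the manual commands we have to run today after the GitOps-driven deletion and re-add; node and CSR names are examples from our lab:

          # Remove the stale node object left behind after the BMH is deleted
          oc delete node storage0

          # After the node is re-provisioned, it stays NotReady until its CSRs are approved
          oc get csr
          oc adm certificate approve <csr-name>

      The request is that none of these steps should be needed when the node is removed and re-added purely through the GitOps workflow.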

      We have done the same procedure for a worker node and did not face any of the above issues. It is removed successfully from both the BMH and the cluster, and when we re-add the new node (in our case the same node), it joins the cluster without approving any CSRs or any other manual intervention.

      3. Why does the customer need this? (List the business requirements here)
      The customer needs this to be able to redeploy storage nodes with GitOps ZTP.

      4. List any affected packages or components.
