OpenShift Bugs / OCPBUGS-54425

Manual removal of BareMetalHost does not affect node in HostedCluster's NodePool

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Undefined
    • Affects Version: 4.18
    • Component: HyperShift / Agent
    • Category: Quality / Stability / Reliability
    • Severity: Important
    • Status: In Progress
    • Release Note Type: Release Note Not Required

      Description of problem:

      
      For ZTP to work with HyperShift clusters (TELCOSTRAT-177), a member cluster's node set must be controllable by ZTP. ZTP should exercise that control by deleting, or otherwise marking, nodes that have been removed from the ClusterInstance on the management cluster, so that HyperShift cordons and removes exactly the nodes ZTP addressed.
      
      Currently, scaling down a NodePool results in an arbitrary node being drained and removed, and manually deleting a BareMetalHost (BMH) from the management cluster has no effect on the hosted cluster, which continues using the now-unmanaged node.
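      
      For illustration, the disassociation can be observed with commands along these lines (the resource names and namespaces are placeholders, not values from this report):
      
          # On the management cluster: delete the BMH backing one hosted node
          oc delete bmh <bmh-name> -n <hardware-namespace>
      
          # Against the hosted cluster: the corresponding node remains Ready
          oc --kubeconfig <hostedcluster-kubeconfig> get nodes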
      
          

      Version-Release number of selected component (if applicable):

      
      version   4.18.1    True        False         31d     Cluster version is 4.18.1
      advanced-cluster-management.v2.13.0-83
      
          

      How reproducible:

      Always
      
          

      Steps to Reproduce:

          1. Create a HostedCluster with N nodes
          2. Delete the BMH backing one of the nodes
          3. Scale down the NodePool to N-1 (see the command sketch below)
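      
      A minimal command sketch for steps 2 and 3, assuming placeholder resource names and namespaces:
      
          # Step 2: delete the BMH of one of the nodes (management cluster)
          oc delete bmh <bmh-name> -n <hardware-namespace>
      
          # Step 3: scale the NodePool down from N to N-1
          oc scale nodepool <nodepool-name> -n <clusters-namespace> --replicas=<N-1>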
          

      Actual results:

      The deleted BMH has no correlation with which node is removed from the HostedCluster.
          

      Expected results:

      Deleting a BMH should cordon and deprovision the corresponding hosted node before the delete is finalized.
          

      Additional info:

      
      I also experimentally tried deleting a particular BMH and then scaling down the NodePool, but the disassociation persisted: the deleted BMH backed node 1, yet the scale-down removed and deprovisioned node 2.
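      
      A possible workaround sketch, not verified in this report: NodePools are backed by Cluster API machines, and CAPI MachineSets prefer machines carrying the cluster.x-k8s.io/delete-machine annotation when scaling down. If that holds for agent-based NodePools, the targeted node could in principle be selected for removal (all names below are placeholders):
      
          # Find the Machine backing the node whose BMH was removed
          oc get machines.cluster.x-k8s.io -n <hosted-control-plane-namespace>
      
          # Mark that Machine as preferred for deletion on scale-down
          oc annotate machines.cluster.x-k8s.io <machine-name> \
              -n <hosted-control-plane-namespace> cluster.x-k8s.io/delete-machine=yes
      
          # Scale the NodePool down by one
          oc scale nodepool <nodepool-name> -n <clusters-namespace> --replicas=<N-1>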
      
          

              Assignee: Crystal Chun (cchun@redhat.com)
              Reporter: Chandler Wilkerson (rh_cwilkers)
              QA Contact: Liangquan Li