Bug
Resolution: Unresolved
4.18
Quality / Stability / Reliability
Important
In Progress
Release Note Not Required
Description of problem:
For ZTP to work with HyperShift clusters (TELCOSTRAT-177), a member cluster's node set must be controllable by ZTP. ZTP should exercise this control by deleting, or otherwise marking, nodes removed from the ClusterInstance in the management cluster, so that HyperShift cordons and removes exactly the nodes ZTP addressed. Currently, scaling down a NodePool drains and removes a random node, and manually deleting a BMH from the management cluster has no effect on the hosted cluster, which continues using the now-unmanaged node.
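As a sketch of the "otherwise marking" path: upstream Cluster API honors the cluster.x-k8s.io/delete-machine annotation when choosing which Machine to remove on scale-down. Whether HyperShift's NodePool machinery respects that annotation here is an assumption, and the cluster and namespace names below are illustrative only:

  # Mark the Machine backing the node ZTP wants removed (names illustrative)
  oc annotate machine.cluster.x-k8s.io example-worker-1 -n clusters-example cluster.x-k8s.io/delete-machine=true
  # Then scale the NodePool down; upstream CAPI prefers annotated Machines for removal
  oc scale nodepool example -n clusters --replicas=2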
Version-Release number of selected component (if applicable):
OpenShift 4.18.1 (oc get clusterversion: Available=True, Progressing=False, Since=31d); advanced-cluster-management.v2.13.0-83
How reproducible:
Always
Steps to Reproduce:
1. Create a HostedCluster with N nodes.
2. Delete the BMH of one of the nodes.
3. Scale the NodePool down to N-1 (commands sketched below).
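A minimal command-line sketch of steps 2 and 3, assuming a hosted cluster named example in the clusters namespace with its BMHs in clusters-example (all names illustrative):

  # Step 2: delete the BMH backing one specific node
  oc delete bmh example-worker-1 -n clusters-example
  # Step 3: scale the NodePool down by one (here N=3, so N-1=2)
  oc scale nodepool example -n clusters --replicas=2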
Actual results:
The deleted BMH has no correlation with which node is removed from the hosted cluster; the scale-down drains and removes an arbitrary node.
Expected results:
Deleting a BMH should cordon and deprovision the corresponding hosted-cluster node before the delete is finalized.
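For comparison, the manual equivalent of the expected behavior, run against the hosted cluster (node name and kubeconfig path are illustrative), would be roughly:

  # Cordon, then drain, the node backed by the deleted BMH
  oc adm cordon example-worker-1 --kubeconfig=hosted.kubeconfig
  oc adm drain example-worker-1 --ignore-daemonsets --delete-emptydir-data --kubeconfig=hosted.kubeconfig
  # ...after which the underlying host is deprovisioned and the BMH delete finalizes.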
Additional info:
I also experimented with deleting a particular BMH and then scaling down the NodePool, but the disassociation persisted: the deleted BMH backed node 1, yet the scale-down removed and deprovisioned node 2.
Is related to: OCPSTRAT-2203 Allow Node-Level Management of Nodes in HyperShift NodePools (Backlog)