Type: Bug
Resolution: Unresolved
Priority: Major
Severity: Important
Affects Version: 4.19
Quality / Stability / Reliability
Sprint: AUTOSCALE - Sprint 279, AUTOSCALE - Sprint 280
Description of problem:
When a cluster has multiple node groups managed by the Cluster Autoscaler and one node group fails to produce a template node for any reason, such as:
- all of its nodes being tainted as unschedulable (related to OCPBUGS-57131), or
- none of its nodes being in Ready status,
then every other node group in the cluster also stops scaling up until the problematic node group returns to normal. This happens even when the pending pods could be scheduled on the other, healthy node groups.
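The Go sketch below is illustrative only; the types and function names are hypothetical and are not the upstream cluster-autoscaler code. It shows the aggregation pattern implied by the behavior above: if template-node construction is aggregated across node groups and a single failure aborts the whole computation, one unhealthy group blocks scale-up for every group.

package main

import "fmt"

// NodeGroup and TemplateNode are simplified stand-ins for the autoscaler's
// node-group and node-info abstractions; they exist only for this sketch.
type NodeGroup struct {
	Name    string
	Healthy bool // false: no ready, untainted node to use as a template
}

type TemplateNode struct {
	Group string
}

// templateNodeFor fails when the group has no usable node to derive a template from.
func templateNodeFor(g NodeGroup) (TemplateNode, error) {
	if !g.Healthy {
		return TemplateNode{}, fmt.Errorf("failed to find template node for node group %s", g.Name)
	}
	return TemplateNode{Group: g.Name}, nil
}

// upcomingNodes mirrors the observed behavior: the first failure aborts the
// whole computation, so no node group at all is considered for scale-up.
func upcomingNodes(groups []NodeGroup) ([]TemplateNode, error) {
	var out []TemplateNode
	for _, g := range groups {
		tn, err := templateNodeFor(g)
		if err != nil {
			return nil, fmt.Errorf("could not get upcoming nodes: %w", err)
		}
		out = append(out, tn)
	}
	return out, nil
}

func main() {
	groups := []NodeGroup{
		{Name: "m7i-2xlarge", Healthy: true},
		{Name: "rosa-core-0", Healthy: false}, // e.g. all nodes tainted or not ready
		{Name: "m7a-8xlarge", Healthy: true},
	}
	if _, err := upcomingNodes(groups); err != nil {
		// Scale-up is skipped for every group, including the healthy ones.
		fmt.Println("Failed to scale up:", err)
	}
}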
Version-Release number of selected component (if applicable):
- Observed in: 4.19.15 (HCP cluster)
- Likely affects: all versions with Cluster Autoscaler
How reproducible:
Consistently reproducible whenever one node group enters a state in which its template node cannot be determined.
Steps to Reproduce:
1. Create a HyperShift cluster with multiple node pools (e.g., three node pools: m7i-2xlarge, m7a-8xlarge, rosa-core-0).
2. Cause one node pool to lose its template, e.g., delete and recreate a node pool so that its nodes are temporarily in a non-ready state, or taint all of its nodes.
3. Deploy pods that request resources and should trigger autoscaling (see the sketch after this list).
4. Observe that the pods remain Pending even though the other, healthy node groups could accommodate them.
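For step 3, one way to create such pods is the client-go sketch below. The namespace, image, replica count, and request sizes are assumptions chosen for illustration, not values taken from this report; the intent is simply a Deployment whose aggregate requests exceed spare capacity so that some replicas go Pending and exercise the autoscaler.

package main

import (
	"context"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Assumes a kubeconfig at the default location; adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// A Deployment whose total CPU/memory requests exceed current spare
	// capacity, so the Pending replicas should trigger the Cluster Autoscaler.
	deploy := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "autoscale-trigger", Namespace: "default"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "autoscale-trigger"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "autoscale-trigger"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pause",
						Image: "registry.k8s.io/pause:3.9",
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{
								corev1.ResourceCPU:    resource.MustParse("2"),
								corev1.ResourceMemory: resource.MustParse("4Gi"),
							},
						},
					}},
				},
			},
		},
	}

	if _, err := clientset.AppsV1().Deployments("default").Create(context.TODO(), deploy, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("created deployment; watch for Pending pods and autoscaler scale-up decisions")
}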
Actual results:
E1015 11:54:15.119968 1 static_autoscaler.go:518] Failed to scale up: could not get upcoming nodes: failed to find template node for node group MachineDeployment/ocm-production-2jb1l855j59bj002nqjmnht6p8237uc5-rosaint-use1-t/rosaint-use1-t-r7-2xlarge-0

- The error occurs even when the pending pods have tolerations and could be scheduled on other node groups.
- All node groups stop scaling up, not just the problematic one.
- Multiple regular pods remain in Pending state across the cluster.
Expected results:
- The Cluster Autoscaler should skip the problematic node group and continue evaluating the other, healthy node groups for scale-up.
- Only pods that specifically require the problematic node group (via nodeSelector, affinity, or unique tolerations) should remain Pending.
- Pods that can be scheduled on healthy node groups should trigger scale-up on those groups.
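For contrast with the earlier sketch, a hedged sketch of the expected behavior, reusing the hypothetical NodeGroup, TemplateNode, and templateNodeFor types from the sketch under "Description of problem" (again, not the upstream autoscaler API): node groups whose template cannot be determined are skipped with a recorded error, and the remaining healthy groups still produce upcoming nodes for scale-up.

// upcomingNodesSkippingBroken sketches the expected behavior: node groups whose
// template node cannot be determined are skipped and their errors collected,
// while the healthy groups still contribute upcoming nodes for scale-up.
func upcomingNodesSkippingBroken(groups []NodeGroup) ([]TemplateNode, []error) {
	var out []TemplateNode
	var skipped []error
	for _, g := range groups {
		tn, err := templateNodeFor(g)
		if err != nil {
			// Only pods that strictly require this group (nodeSelector,
			// affinity, unique tolerations) would then stay Pending.
			skipped = append(skipped, fmt.Errorf("skipping node group %s: %w", g.Name, err))
			continue
		}
		out = append(out, tn)
	}
	return out, skipped
}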
Additional info: