Bug
Resolution: Unresolved
Normal
4.14.z
Quality / Stability / Reliability
Hypershift Sprint 261, Hypershift Sprint 262, Hypershift Sprint 263
Description of problem:
For a cluster using a 4.14 nightly build, the cluster is deleted but the worker nodes are leaked on AWS.
How reproducible:
30%
Steps to Reproduce:
- Create an HCP cluster:
  rosa create cluster -c ci-rosa-h-f3jj -y --version 4.14.0-0.nightly-2024-08-19-005726 \
    --channel-group nightly --region us-west-2 \
    --role-arn arn:aws:iam::*************:role/ci-rosa-h-f3jj-HCP-ROSA-Installer-Role \
    --support-role-arn arn:aws:iam::*************:role/ci-rosa-h-f3jj-HCP-ROSA-Support-Role \
    --worker-iam-role arn:aws:iam::*************:role/ci-rosa-h-f3jj-HCP-ROSA-Worker-Role \
    --oidc-config-id 2d8jsbeov7frdt8hul52b79ip4ois7op \
    --operator-roles-prefix ci-rosa-h-f3jj \
    --subnet-ids subnet-09c3a478f244414b8,subnet-0bdbc4c665170c2b2 \
    --http-proxy http://XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX@ip-10-0-2-15.us-west-2.compute.internal:3128 \
    --https-proxy https://XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX@ip-10-0-2-15.us-west-2.compute.internal:3128 \
    --hosted-cp \
    --multi-az
- Do some day-2 actions.
- Delete the cluster (see the sketch below).
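For reference, a minimal sketch of the deletion step, assuming the standard ROSA CLI flow (the log-watch command is optional and only included here to confirm the uninstall completes):

  # Delete the hosted cluster, then tail the uninstall logs until completion
  rosa delete cluster -c ci-rosa-h-f3jj -y
  rosa logs uninstall -c ci-rosa-h-f3jj --watch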
Actual results:
The node pool EC2 instances are not deleted after the cluster is deleted.
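One way to confirm the leak is to list instances that still reference the cluster after deletion. This is only a sketch: the tag key and values below are assumptions and may differ depending on how the HCP worker instances are tagged in your account.

  # Hypothetical check: adjust the tag key/value to match your environment
  aws ec2 describe-instances --region us-west-2 \
    --filters "Name=tag:api.openshift.com/name,Values=ci-rosa-h-f3jj" \
              "Name=instance-state-name,Values=pending,running,stopping,stopped" \
    --query "Reservations[].Instances[].[InstanceId,State.Name,LaunchTime]" \
    --output table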
Expected results:
The node pool instances should be deleted along with the cluster.
We have hit this in the past. Some history: OHSS-35578, OCM-5638.
Related thread: https://redhat-internal.slack.com/archives/C03UNV9DV9N/p1724135119939169