Bug
Resolution: Not a Bug
Critical
None
4.14.z
Critical
No
Hypershift Sprint 250, Hypershift Sprint 251, Hypershift Sprint 252
3
False
Description of problem:
The customer wants 2 nodepools in 2 different subnets (subnet-a and subnet-b) in the same AZ. The VPC Endpoint can only go in subnet-a or subnet-b: VPC Endpoints can be associated with only one subnet per AZ per VPC, but customers can supply multiple subnets per AZ per VPC for their nodepools.
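A minimal sketch of the AWS constraint described above: at most one endpoint subnet per availability zone. The function name `onePerAZ` and the subnet-to-AZ map are hypothetical (in the real controller the AZ would come from an EC2 DescribeSubnets call); this only illustrates the filtering a caller would need before creating the endpoint, not the HyperShift implementation.

```go
package main

import "fmt"

// onePerAZ keeps only the first subnet seen for each availability zone.
// AWS rejects a VPC endpoint with two subnets in the same AZ
// (DuplicateSubnetsInSameZone), so any extra same-AZ subnets must be
// dropped before the endpoint is created or modified.
// subnetAZ maps subnet ID -> availability zone (hypothetical input).
func onePerAZ(subnetIDs []string, subnetAZ map[string]string) []string {
	seen := map[string]bool{}
	var out []string
	for _, id := range subnetIDs {
		az := subnetAZ[id]
		if seen[az] {
			continue // this AZ already has an endpoint subnet selected
		}
		seen[az] = true
		out = append(out, id)
	}
	return out
}

func main() {
	azs := map[string]string{
		"subnet-a": "us-east-1a",
		"subnet-b": "us-east-1a", // same AZ as subnet-a: would trigger the error
	}
	fmt.Println(onePerAZ([]string{"subnet-a", "subnet-b"}, azs))
}
```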
Version-Release number of selected component (if applicable):
4.14.9
How reproducible:
Steps to Reproduce:
1. Create an HCP cluster and provide 2 subnets, or create another nodepool that runs in a different subnet.
2. The awsendpointservice fails with "failed to create vpc endpoint: DuplicateSubnetsInSameZone".
3. The control plane operator logs the following error:

    {
      "level": "error",
      "ts": "2024-02-19T03:02:21Z",
      "msg": "failed to modify vpc endpoint",
      "controller": "awsendpointservice",
      "controllerGroup": "hypershift.openshift.io",
      "controllerKind": "AWSEndpointService",
      "AWSEndpointService": {
        "name": "private-router",
        "namespace": "ocm-production-292mlvmsfvo3jf27iclduu60885o22fq-aisrhods-chris"
      },
      "namespace": "ocm-production-292mlvmsfvo3jf27iclduu60885o22fq-aisrhods-chris",
      "name": "private-router",
      "reconcileID": "e292bd47-02fa-4fcd-8ac7-ae164c5af266",
      "error": "DuplicateSubnetsInSameZone: Found another VPC endpoint subnet in the availability zone of subnet-06f0819a60ec83b06. VPC endpoint subnets should be in different availability zones supported by the VPC endpoint service.\n\tstatus code: 400, request id: 1c0b22ce-5d21-49bd-880b-c6b5c4e16708",
      "stacktrace": "github.com/openshift/hypershift/control-plane-operator/controllers/awsprivatelink.(*AWSEndpointServiceReconciler).reconcileAWSEndpointService\n\t/hypershift/control-plane-operator/controllers/awsprivatelink/awsprivatelink_controller.go:473\ngithub.com/openshift/hypershift/control-plane-operator/controllers/awsprivatelink.(*AWSEndpointServiceReconciler).Reconcile\n\t/hypershift/control-plane-operator/controllers/awsprivatelink/awsprivatelink_controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/hypershift/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:121\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/hypershift/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:320\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/hypershift/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/hypershift/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:234"
    }
Actual results:
The customer tried to stop the instances, and cluster-api failed to drain the nodes when it tried to scale up.
Expected results:
The VPC Endpoint is created without error.
Additional info: