-
Bug
-
Resolution: Duplicate
-
Major
-
None
-
4.14
-
None
-
Important
-
No
-
Proposed
-
False
-
This is a clone of issue OCPBUGS-10504. The following is the description of the original issue:
—
Description of problem:
When you migrate a HostedCluster, the AWSEndpointService carried over from the old management cluster conflicts with the one expected by the new management cluster, and the AWSPrivateLink controller performs no validation when this happens. That validation is needed for the Disaster Recovery HostedCluster migration to work: the issue shows up when the nodes of the HostedCluster cannot join the new management cluster because the AWSEndpointServiceName still points to the old one.
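As an illustration of the kind of validation the report asks for, the Go sketch below checks whether a recorded VPC Endpoint Service name is actually owned by the AWS account/region that the destination management cluster uses; if not, the stale value would need to be cleared so the controller can recreate it. This is only a sketch, not the actual HyperShift controller code: the function validateEndpointServiceName and the example service name are hypothetical, and it assumes the aws-sdk-go v1 EC2 client.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// validateEndpointServiceName returns true when serviceName is backed by a VPC
// Endpoint Service configuration visible to the current credentials, i.e. one
// that the destination management cluster actually owns.
func validateEndpointServiceName(client *ec2.EC2, serviceName string) (bool, error) {
	out, err := client.DescribeVpcEndpointServiceConfigurations(
		&ec2.DescribeVpcEndpointServiceConfigurationsInput{
			Filters: []*ec2.Filter{{
				Name:   aws.String("service-name"),
				Values: []*string{aws.String(serviceName)},
			}},
		})
	if err != nil {
		return false, err
	}
	return len(out.ServiceConfigurations) > 0, nil
}

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-1")))
	client := ec2.New(sess)

	// staleName stands in for the AWSEndpointServiceName copied over from the
	// old management cluster during a HostedCluster migration (hypothetical value).
	staleName := "com.amazonaws.vpce.us-west-1.vpce-svc-0123456789abcdef0"

	ok, err := validateEndpointServiceName(client, staleName)
	if err != nil {
		panic(err)
	}
	if !ok {
		fmt.Println("endpoint service name is stale; clear it so the controller can recreate it")
	}
}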
Version-Release number of selected component (if applicable):
4.12, 4.13, 4.14
How reproducible:
Follow the migration procedure from the upstream documentation; the nodes in the destination HostedCluster will remain in the NotReady state.
Steps to Reproduce:
1. Set up a management cluster with the 4.12/4.13/4.14/main version of the HyperShift operator.
2. Run the in-place node DR Migrate E2E test from https://github.com/openshift/hypershift/pull/2138:

bin/test-e2e \
  -test.v \
  -test.timeout=2h10m \
  -test.run=TestInPlaceUpgradeNodePool \
  --e2e.aws-credentials-file=$HOME/.aws/credentials \
  --e2e.aws-region=us-west-1 \
  --e2e.aws-zones=us-west-1a \
  --e2e.pull-secret-file=$HOME/.pull-secret \
  --e2e.base-domain=www.mydomain.com \
  --e2e.latest-release-image="registry.ci.openshift.org/ocp/release:4.13.0-0.nightly-2023-03-17-063546" \
  --e2e.previous-release-image="registry.ci.openshift.org/ocp/release:4.13.0-0.nightly-2023-03-17-063546" \
  --e2e.skip-api-budget \
  --e2e.aws-endpoint-access=PublicAndPrivate
Actual results:
The nodes stay in the NotReady state
Expected results:
The nodes should join the migrated HostedCluster
Additional info:
- clones
-
OCPBUGS-10504 AWSPrivateLink is not updated on conflicting entries with VPCEndpointServiceName field
- Closed