- Bug
- Resolution: Unresolved
- Major
- 4.17, 4.18, 4.19
- Quality / Stability / Reliability
- Moderate
IPI OCP 4.18.x installation in Azure fails with the error:
ERROR failed to fetch Cluster: failed to generate asset "Cluster": failed to create cluster: failed to create infrastructure manifest: admission webhook "validation.azurecluster.infrastructure.cluster.x-k8s.io" denied the request: AzureCluster.infrastructure.cluster.x-k8s.io "csaggin-repro-0419050-jzg58" is invalid: spec.networkSpec.apiServerLB.frontendIPConfigs[0].privateIP: Invalid value: "172.16.0.52": Internal LB IP address needs to be in control plane subnet range ([172.16.0.0/27])
The validation fails due to a wrong subnet association when the VNet and its subnets already exist.
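For context, here is a minimal sketch (using Go's net/netip, not the webhook's actual source) of the containment check that is failing: 172.16.0.52 is outside the 172.16.0.0/27 range the installer assigns to the control plane subnet, but inside the range the controlplan subnet really has (172.16.0.48/28, see the repro below).

package main

import (
	"fmt"
	"net/netip"
)

// Illustration only: the admission webhook requires the internal LB
// frontend IP to fall inside the control plane subnet CIDR.
func main() {
	lbIP := netip.MustParseAddr("172.16.0.52")

	wrongCIDR := netip.MustParsePrefix("172.16.0.0/27")  // CIDR the installer wrongly assigns to "controlplan"
	rightCIDR := netip.MustParsePrefix("172.16.0.48/28") // CIDR "controlplan" actually has in the VNet

	fmt.Println(wrongCIDR.Contains(lbIP)) // false -> webhook rejects the AzureCluster
	fmt.Println(rightCIDR.Contains(lbIP)) // true  -> would pass with the correct association
}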
It is very easy to reproduce:
1) Create the VNet subnets, for example:
   priv-endpoint: 172.16.0.32/29
   controlplan: 172.16.0.48/28
   workers: 172.16.0.0/27
2) Associate the subnets in the install-config.yaml:
   networking:
     clusterNetwork:
     - cidr: 10.128.0.0/14
       hostPrefix: 23
     machineNetwork:
     - cidr: 172.16.0.0/26
     networkType: OVNKubernetes
     serviceNetwork:
     - 172.30.0.0/16
   platform:
     azure:
       baseDomainResourceGroupName: default_domain
       cloudName: AzurePublicCloud
       outboundType: Loadbalancer
       region: eastus
       computeSubnet: workers
       networkResourceGroupName: csaggin_ocp_sp_rg
       controlPlaneSubnet: controlplan
       virtualNetwork: csaggin-vnet
3) Create the manifests from the install-config.yaml, check the generated 02_azure-cluster.yaml, and note the wrong subnet association:
   frontendIPs:
   - name: csaggin-repro-0419050-hs6pc-internal-frontEnd
     privateIP: 172.16.0.52
   .....
   subnets:
   - cidrBlocks:
     - 172.16.0.0/27
     name: controlplan
   - cidrBlocks:
     - 172.16.0.32/27
     name: workers
Working in 4.16: https://github.com/openshift/installer/blob/release-4.16/pkg/asset/manifests/azure/cluster.go
Not working in 4.18: https://github.com/openshift/installer/blob/release-4.18/pkg/asset/manifests/azure/cluster.go
As far as I can see, this commit: https://github.com/openshift/installer/commit/9cc3209613f7b04a43b9d0686a17a517e1a42af8 changed how automatic VNet subnet creation is handled, forcing the machine network to be split into per-role subnets; however, when the subnets are pre-configured (existing VNet), it also introduces this bug/issue.
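As a supporting observation (a sketch only, not the installer's code): halving the machineNetwork 172.16.0.0/26 produces exactly the two /27 CIDRs that end up in the generated manifest, rather than the CIDRs the pre-existing subnets actually have.

package main

import (
	"fmt"
	"net/netip"
)

// Sketch only: splitting machineNetwork (172.16.0.0/26) in half yields the
// /27 CIDRs seen in the generated 02_azure-cluster.yaml above.
func main() {
	machineNet := netip.MustParsePrefix("172.16.0.0/26")

	firstHalf := netip.PrefixFrom(machineNet.Addr(), machineNet.Bits()+1)

	// The second half starts 2^(32-27) = 32 addresses after the first.
	a := machineNet.Addr().As4()
	a[3] += 32
	secondHalf := netip.PrefixFrom(netip.AddrFrom4(a), machineNet.Bits()+1)

	fmt.Println(firstHalf)  // 172.16.0.0/27  -> assigned to "controlplan" in the manifest
	fmt.Println(secondHalf) // 172.16.0.32/27 -> assigned to "workers" in the manifest
}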
For the moment, the workarounds found are:
-) Installing an older version and executing the upgrade.
-) Manually modifying the 02_azure-cluster.yaml file to match the correctly configured subnets.
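For the second workaround, and assuming the subnet layout above, the subnets block in 02_azure-cluster.yaml should be edited to reflect the real VNet CIDRs, for example:
   subnets:
   - cidrBlocks:
     - 172.16.0.48/28
     name: controlplan
   - cidrBlocks:
     - 172.16.0.0/27
     name: workers
With that association the internal LB frontend IP 172.16.0.52 falls inside the controlplan range and the admission webhook accepts the AzureCluster.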