Type: Bug
Resolution: Unresolved
Priority: Major
None
Affects Versions: 4.14.z, 4.15.z, 4.16.z
Description of problem:
Many customers deploy clusters that are not directly connected to the Internet, so they use a corporate proxy. Customers have been unable to understand how to correctly configure a cluster-wide proxy for a new HostedCluster, and they run into issues deploying the HostedCluster. For example, given the following configuration:

--
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  creationTimestamp: null
  name: cluster-hcp
  namespace: clusters
spec:
  configuration:
    proxy:
      httpProxy: http://proxy.testlab.local:80
      httpsProxy: http://proxy.testlab.local:80
      noProxy: testlab.local,192.168.0.0/16
--

A customer would normally add the MachineNetwork CIDR and the local domain to the noProxy variable. However, this causes a problem in OpenShift Virtualization: the Hosted Control Plane KAS cannot contact the nodes' kubelets, because pods try to reach tcp/10250 through the proxy, resulting in an error. In this scenario the hub cluster ClusterNetwork CIDR must also be added to the noProxy variable:

--
noProxy: testlab.local,192.168.0.0/16,10.128.0.0/14
--

However, I was unable to find this information in our documentation. There is also a known issue explained in the following KCS: https://access.redhat.com/solutions/7068827

The problem is that the Hosted Cluster deploys the control-plane-operator binary instead of the haproxy binary in the kube-apiserver-proxy pods under kube-system in the HostedCluster. The KCS explains that the problem is fixed, but it is not clear to customers which subnetwork should be added to noProxy to trigger the logic that deploys the haproxy image, so that the proxy is not used to expose the internal Kubernetes endpoint (172.20.0.1). The code seems to check whether the HostedCluster ClusterNetwork (10.132.0.0/14), the ServiceNetwork (172.31.0.0/16), or the internal Kubernetes address (172.20.0.1) is listed in the noProxy variable before it honors the noProxy setting and deploys the haproxy image. This left us to find, by trial and error, the correct way to honor noProxy and allow the HostedCluster to work: the kube-apiserver-proxy pods must be able to connect to the hosted KAS, and the hosted KAS must be able to connect to the kubelets, both bypassing the cluster-wide proxy.

The questions are:
1. Is it possible to add information to our documentation about the correct way to configure a HostedCluster using noProxy variables?
2. Which subnet needs to be added to the noProxy variable so that the haproxy image is deployed instead of the control-plane-operator and the kube-apiserver-proxy pods bypass the cluster-wide proxy?
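For reference only, the snippet below is a sketch of a noProxy list consistent with the findings above: it combines every value the description mentions (local domain, MachineNetwork, hub ClusterNetwork, HostedCluster ClusterNetwork, ServiceNetwork, and the internal Kubernetes endpoint). The CIDRs are specific to this lab, and whether all of these entries are actually required is exactly what question 2 asks, so this is not authoritative guidance:

--
spec:
  configuration:
    proxy:
      httpProxy: http://proxy.testlab.local:80
      httpsProxy: http://proxy.testlab.local:80
      # Lab-specific values: local domain, MachineNetwork (192.168.0.0/16),
      # hub ClusterNetwork (10.128.0.0/14), HostedCluster ClusterNetwork (10.132.0.0/14),
      # ServiceNetwork (172.31.0.0/16), and internal Kubernetes endpoint (172.20.0.1)
      noProxy: testlab.local,192.168.0.0/16,10.128.0.0/14,10.132.0.0/14,172.31.0.0/16,172.20.0.1
--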
Version-Release number of selected component (if applicable):
4.14.z, 4.15.z, 4.16.z
How reproducible:
Deploy a HostedCluster using noProxy variables
Steps to Reproduce:
1. Deploy a HostedCluster on OpenShift Virtualization with spec.configuration.proxy set, adding only the MachineNetwork CIDR and the local domain to noProxy.
2. Check the kube-apiserver-proxy pods under kube-system in the HostedCluster: they run the control-plane-operator binary instead of haproxy.
3. Observe that the hosted control plane KAS cannot reach the nodes' kubelets, because connections to tcp/10250 are sent through the proxy.
Actual results:
Components of the Hosted Cluster still send traffic through the proxy, not honoring the noProxy values that were set.
Expected results:
The Hosted Cluster should deploy correctly, with the noProxy settings honored.
Additional info:
- blocks: OCPBUGS-44114 HostedCluster failing when a cluster wide proxy is used in the HostedCluster manifest. (ON_QA)
- is cloned by: OCPBUGS-44114 HostedCluster failing when a cluster wide proxy is used in the HostedCluster manifest. (ON_QA)
- links to: RHEA-2024:6122 OpenShift Container Platform 4.18.z bug fix update