Feature
Resolution: Unresolved
Product / Portfolio Work
Feature Overview (aka. Goal Summary)
HostedControlPlane resources, mostly derived from HostedCluster, drive much of the configuration that eventually populates the deployments for the various control plane components (for example, kube-apiserver). One exception is the proxy settings: instead of being populated from the HostedControlPlane, the deployments use the hosting cluster's cluster-wide proxy. As a result, the `spec.configuration.proxy.noProxy` field in a HostedCluster manifest is not applied to the kube-apiserver.
It makes sense that an HCP's proxy settings should be driven by the HostedControlPlane resource as well.
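For illustration, a minimal sketch of the relevant HostedCluster fields (the cluster name, namespace, and proxy endpoints are hypothetical; only the `spec.configuration.proxy` field path comes from the discussion above):

```yaml
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example          # hypothetical cluster name
  namespace: clusters    # hypothetical namespace
spec:
  configuration:
    proxy:
      httpProxy: http://proxy.example.com:3128   # hypothetical proxy endpoint
      httpsProxy: http://proxy.example.com:3128
      # Entries here are currently not applied to the kube-apiserver
      # deployment; this Feature is about making them flow through
      # the HostedControlPlane resource.
      noProxy: .cluster.local,10.0.0.0/16
```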
Goals (aka. expected user outcomes)
Proxy settings for HCP clusters can be specified explicitly and kept isolated from the hosting cluster's proxy configuration. This also aligns with the other HostedCluster configuration fields that influence how these control plane deployments are defined.
Requirements (aka. Acceptance Criteria):
When the proxy configuration of the HostedCluster is changed (for example, an additional entry in the `noProxy` field), the change should be applied to the cluster's HostedControlPlane resource. The operator should then reconcile this into the kube-apiserver deployment, updating the deployment's NO_PROXY environment variables. Finally, a rollout restart should be applied to the deployment so it picks up the change. A sketch of the expected end state follows.
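A minimal sketch of the expected end state on the kube-apiserver Deployment, assuming the operator injects the standard proxy environment variables into the kube-apiserver container (the container and env layout here are assumptions, not the operator's confirmed output; only NO_PROXY is named in the criteria above):

```yaml
# Excerpt of the kube-apiserver Deployment in the hosted control plane
# namespace after reconciliation (sketch only).
spec:
  template:
    spec:
      containers:
      - name: kube-apiserver
        env:
        - name: HTTP_PROXY        # assumed alongside NO_PROXY
          value: http://proxy.example.com:3128
        - name: HTTPS_PROXY
          value: http://proxy.example.com:3128
        - name: NO_PROXY
          # Should now include the entry added to the HostedCluster's
          # spec.configuration.proxy.noProxy field.
          value: .cluster.local,10.0.0.0/16,new-entry.example.com
```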
Anyone reviewing this Feature needs to know which deployment configurations the Feature will apply to (or not) once it's been completed. Describe specific needs (or indicate N/A) for each of the following deployment scenarios. For specific configurations that are out of scope for a given release, also provide the OCPSTRAT for the configuration to be supported in the future.
Deployment considerations | List applicable specific needs (N/A = not applicable) |
Self-managed, managed, or both | |
Classic (standalone cluster) | |
Hosted control planes | Proxy configuration |
Multi node, Compact (three node), or Single node (SNO), or all | |
Connected / Restricted Network | |
Architectures, e.g. x86_64, ARM (aarch64), IBM Power (ppc64le), and IBM Z (s390x) | |
Operator compatibility | |
Backport needed (list applicable versions) | |
UI need (e.g. OpenShift Console, dynamic plugin, OCM) | |
Other (please specify) | |
Use Cases (Optional):
Include use case diagrams, main success scenarios, alternative flow scenarios. Initial completion during Refinement status.
<your text here>
Questions to Answer (Optional):
Include a list of refinement / architectural questions that may need to be answered before coding can begin. Initial completion during Refinement status.
<your text here>
Out of Scope
High-level list of items that are out of scope. Initial completion during Refinement status.
<your text here>
Background
This behavior was found in https://issues.redhat.com/browse/OCPBUGS-61240
Documentation Considerations
Interoperability Considerations
OpenShift Container Platform 4.18