Feature
Resolution: Unresolved
Critical
Feature Overview (aka. Goal Summary)
Introduce a proxy component that fronts the Kubernetes API server in Hosted Control Planes and dynamically adapts Cluster API (CAPI) custom resource group names based on the client requesting the resources. This enables seamless interoperability between management clusters and hosted clusters running different Kubernetes/CAPI versions.
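To make the request flow concrete, here is a minimal sketch (in Go, not a committed design) of a reverse proxy that rewrites the group/version segment of incoming CAPI request paths before forwarding them to the target kube-apiserver. The cluster.x-k8s.io group, the v1beta1/v1beta2 pairing, and the listen and target addresses are illustrative assumptions; response rewriting and field-level conversion are deliberately omitted.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// newCAPIProxy fronts the kube-apiserver and rewrites the CAPI group/version
// segment of incoming request paths. clientGV is the group/version the caller
// speaks; serverGV is what the target cluster actually serves.
func newCAPIProxy(apiServer *url.URL, clientGV, serverGV string) *httputil.ReverseProxy {
	proxy := httputil.NewSingleHostReverseProxy(apiServer)

	baseDirector := proxy.Director
	proxy.Director = func(req *http.Request) {
		baseDirector(req)
		// Example path: /apis/cluster.x-k8s.io/v1beta1/namespaces/ns/machines
		clientPrefix := "/apis/" + clientGV + "/"
		if strings.HasPrefix(req.URL.Path, clientPrefix) {
			req.URL.Path = "/apis/" + serverGV + "/" +
				strings.TrimPrefix(req.URL.Path, clientPrefix)
		}
		// Response bodies would need the inverse rewrite (plus any field-level
		// conversion) via ModifyResponse; omitted here for brevity.
	}
	return proxy
}

func main() {
	target, err := url.Parse("https://kube-apiserver:6443") // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	handler := newCAPIProxy(target, "cluster.x-k8s.io/v1beta1", "cluster.x-k8s.io/v1beta2")
	log.Fatal(http.ListenAndServe(":8443", handler))
}
```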
Goals (aka. expected user outcomes)
- Hosted Control Planes clients that use one CAPI API version can interact with management clusters running a different CAPI version without modification
- Prevent collisions between incompatible CAPI custom resources in the management cluster (MC) and hosted cluster (HC), which typically run different versions
- Zero client-side changes required when hosted clusters upgrade to newer Kubernetes/CAPI versions
- Reduced coordination burden between management cluster and hosted cluster upgrade cycles
- Seamless migration of existing management clusters to the new CAPI types
Requirements (aka. Acceptance Criteria)
Functional Requirements
- Proxy intercepts Cluster API custom resource requests (Machine, MachineSet, MachineDeployment, Cluster, etc.)
- Proxy detects client's expected API version from request headers or API path
- Proxy converts requests to the target cluster's supported CAPI version
- Proxy converts responses back to the client's expected version
- Supports bidirectional version conversion (upgrade and downgrade paths); see the conversion sketch after this list
- Preserves all semantic meaning during conversion with clear handling of version-specific fields
- Migration to the new CAPI types does not incur downtime or any other kind of disruption
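The conversion requirements above could be satisfied by per-resource mapping functions over unstructured objects, roughly as sketched below. The spec.failureDomain / spec.placement.failureDomain rename is an invented example used only to illustrate the upgrade and downgrade paths; the real conversion rules would follow the upstream CAPI conversions for the versions actually in scope.

```go
package convert

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// convertMachine is an illustrative, hand-written mapping between two
// hypothetical CAPI Machine versions. The field moves below are invented for
// the example, not real CAPI schema changes.
func convertMachine(obj *unstructured.Unstructured, targetVersion string) (*unstructured.Unstructured, error) {
	out := obj.DeepCopy()
	out.SetAPIVersion("cluster.x-k8s.io/" + targetVersion)

	switch targetVersion {
	case "v1beta2":
		// Upgrade path: move a field that (hypothetically) changed location.
		if v, found, _ := unstructured.NestedString(obj.Object, "spec", "failureDomain"); found {
			_ = unstructured.SetNestedField(out.Object, v, "spec", "placement", "failureDomain")
			unstructured.RemoveNestedField(out.Object, "spec", "failureDomain")
		}
	case "v1beta1":
		// Downgrade path: fields with no older equivalent need an explicit
		// policy (drop, preserve in an annotation, or reject the request).
		if v, found, _ := unstructured.NestedString(obj.Object, "spec", "placement", "failureDomain"); found {
			_ = unstructured.SetNestedField(out.Object, v, "spec", "failureDomain")
			unstructured.RemoveNestedField(out.Object, "spec", "placement")
		}
	default:
		return nil, fmt.Errorf("unsupported target version %q", targetVersion)
	}
	return out, nil
}
```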
Non-functional Requirements
- Observability: Metrics exposed for conversion operations, errors, and latency (see the sketch after this list)
- Security: Proxy does not bypass authentication/authorization; passes through credentials
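For the observability requirement, the proxy could expose Prometheus metrics along these lines; the metric names and labels are placeholders, not finalized names.

```go
package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// Hypothetical metric names; the final names and labels would be settled
// during design review.
var (
	conversionTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "capi_proxy_conversions_total",
			Help: "Number of CAPI version conversions performed by the proxy.",
		},
		[]string{"resource", "from_version", "to_version", "result"},
	)
	conversionDuration = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name:    "capi_proxy_conversion_duration_seconds",
			Help:    "Latency of CAPI version conversions.",
			Buckets: prometheus.DefBuckets,
		},
		[]string{"resource"},
	)
)

func init() {
	prometheus.MustRegister(conversionTotal, conversionDuration)
}

// observeConversion records one conversion attempt.
func observeConversion(resource, from, to, result string, start time.Time) {
	conversionTotal.WithLabelValues(resource, from, to, result).Inc()
	conversionDuration.WithLabelValues(resource).Observe(time.Since(start).Seconds())
}
```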
Deployment Considerations
| Consideration | Requirement |
|---|---|
| Self-managed, managed, or both | Both |
| Classic (standalone cluster) | N/A |
| Hosted control planes | Required |
| Multi node, Compact, or Single node | All |
| Connected / Restricted Network | Both |
| Architectures | x86_64, arm64 |
| Backport needed | TBD based on target release |
| UI need | No |
Use Cases (Optional)
- Management Cluster Upgrade Lag:
  - Management cluster runs CAPI v1beta1
  - Hosted cluster upgraded to Kubernetes version with CAPI v1beta2
  - Proxy converts v1beta1 requests to v1beta2, enabling continued management without upgrading management cluster
- Hosted Cluster Upgrade Lead:
  - Hosted cluster upgraded ahead of management tooling
  - Existing automation continues to work via proxy adaptation (client-side sketch after this list)
  - Gradual tooling migration to new API version at operator's pace
- Multi-Version Fleet Management:
  - Single management plane manages hosted clusters across multiple Kubernetes versions
  - Proxy handles version differences transparently
  - Operators use consistent API version regardless of target cluster version
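As a client-side illustration of these use cases, existing automation keeps requesting the CAPI version it already understands and simply points its kubeconfig at the proxy endpoint. The sketch below assumes a client-go dynamic client; the kubeconfig path and namespace are invented for the example.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The kubeconfig points at the proxy rather than directly at the hosted
	// cluster's kube-apiserver; the path is a placeholder.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/proxy-kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Existing tooling keeps asking for the version it already understands
	// (v1beta1 here); the proxy adapts to whatever the target cluster serves.
	machines := schema.GroupVersionResource{
		Group:    "cluster.x-k8s.io",
		Version:  "v1beta1",
		Resource: "machines",
	}
	list, err := client.Resource(machines).Namespace("clusters").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, m := range list.Items {
		fmt.Println(m.GetName())
	}
}
```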
Questions to Answer (Optional)
- Which CAPI versions will be supported for conversion (v1beta1, v1beta2)?
- How should fields that exist only in newer versions be handled when downgrading?
- Should the proxy support non-CAPI custom resources (e.g., infrastructure provider CRDs)?
- What is the deployment model: sidecar, standalone service, or integrated into an existing component?
Out of Scope
- Non-Cluster API custom resources (initial scope)
- Schema validation beyond what the target API server provides
- Client library changes or SDK modifications
Background
As OpenShift and Kubernetes release cycles continue, the Cluster API project evolves with breaking API version changes. Hosted Control Planes creates a unique challenge: the management cluster and hosted clusters may run different Kubernetes versions with incompatible CAPI API versions. Currently, this requires tight coupling between management tooling upgrades and hosted cluster upgrades, creating operational friction.
This proxy decouples these upgrade cycles by providing runtime API version adaptation, similar to how kube-apiserver handles core API version conversion but extended to CAPI custom resources.
Customer Considerations
- Preserves customers' ability to upgrade hosted clusters independently of management infrastructure
- Reduces maintenance windows by decoupling upgrade dependencies
- Simplifies multi-cluster fleet management across version boundaries
- Critical for large-scale managed service offerings (ROSA HCP, ARO HCP) where version heterogeneity is common
Documentation Considerations
- Architecture documentation explaining proxy placement and request flow
- Supported version matrix (which CAPI versions can convert to which)
- Operational guide for monitoring and troubleshooting conversion issues
- Migration guide for customers currently managing version alignment manually
Interoperability Considerations
- Primary Impact: HyperShift, ROSA HCP, ARO HCP
- Dependencies: Cluster API, HyperShift operator, kube-apiserver
- Test Scenarios:
  - Cross-version CRUD operations for all CAPI resources
  - Version upgrade/downgrade conversion accuracy (round-trip test sketch below)
  - Performance impact under load
  - Failure mode testing (proxy unavailable, conversion errors)
  - Integration with existing HyperShift controllers
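A round-trip test along these lines could back the conversion-accuracy scenario. It reuses the hypothetical convertMachine sketch from the Requirements section and asserts that no semantic information is lost across an upgrade/downgrade cycle; the field names and values are the same invented examples used there.

```go
package convert

import (
	"testing"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// TestRoundTripConversion converts a v1beta1 object up to v1beta2 and back,
// and requires that the example field survives the round trip.
func TestRoundTripConversion(t *testing.T) {
	original := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "cluster.x-k8s.io/v1beta1",
		"kind":       "Machine",
		"metadata":   map[string]interface{}{"name": "m-0", "namespace": "clusters"},
		"spec":       map[string]interface{}{"failureDomain": "us-east-1a"},
	}}

	upgraded, err := convertMachine(original, "v1beta2")
	if err != nil {
		t.Fatalf("upgrade: %v", err)
	}
	downgraded, err := convertMachine(upgraded, "v1beta1")
	if err != nil {
		t.Fatalf("downgrade: %v", err)
	}

	got, _, _ := unstructured.NestedString(downgraded.Object, "spec", "failureDomain")
	if got != "us-east-1a" {
		t.Errorf("failureDomain lost in round trip: got %q", got)
	}
}
```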