- Feature
- Resolution: Done
- Priority: Critical
- Upstream
- 0% To Do, 0% In Progress, 100% Done
Feature Overview (aka. Goal Summary)
Note: This feature tracks work focused on updating the upstream CAPI provider. It does not impact the 4.16 release.
Enable Service Consumer personas to provision and manage the lifecycle of managed OpenShift (ROSA with Hosted Control Planes) clusters via CAPI.
Goal
- As a Service Consumer, I want to be able to provision and manage the lifecycle of HCP clusters (on ROSA).
- As a Service Consumer, I want to CRUD clusters with configurable knobs (exact set TBD), upgrade the control plane, and CRUD machine pools.
- Infuse upstream CAPI with ROSA support.
The March 15 target (MVP-1) scope is listed in the MVP-1 document; related upstream issues: https://github.com/kubernetes-sigs/cluster-api-provider-aws/issues?q=is%3Aissue+is%3Aopen+rosa
Target:
- March 15, 2024
- March 20, 2024 - Demo without BYO-OIDC (demo-script)
- May 1, 2024 - Demo with BYO-OIDC
Considerations
Once the ROSACluster CRD is implemented upstream to manage ROSA clusters, downstream bits will be implemented to integrate with OpenShift/ROSA.
ROSA managed by CAPI = ROSA -> OCM API (ideally) -> HyperShift API -> CAPI -> nodes
The ROSA CAPI provider will speak to the OCM API via github.com/openshift-online/ocm-sdk-go. The ROSA CLI cannot communicate with OCM via CAPI directly because OCM does not expose a Kubernetes API server (that would not scale). Instead, users are expected to run a CAPI management environment in the computing environment from which they wish to reconcile.
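As a rough illustration of the user-facing objects such a CAPI management environment would reconcile, a hypothetical manifest pair is sketched below. API groups, versions, and field names are illustrative assumptions, not the final upstream schema:

```yaml
# Illustrative sketch only: a CAPI Cluster delegating its control plane and
# infrastructure to ROSA-specific resources provided by the upstream provider.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: rosa-hcp-demo
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta2
    kind: ROSAControlPlane        # hypothetical kind for the ROSA HCP control plane
    name: rosa-hcp-demo-cp
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: ROSACluster             # the CRD referenced in the Considerations above
    name: rosa-hcp-demo
```

The provider controller watching these objects would then translate them into OCM API calls via ocm-sdk-go, following the chain sketched above.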
The initial request from the customer was to use CAPI as the authoritative source of truth for their entire cluster fleet. This feature also covers other bits such as machine pools, auth providers, etc.
Acceptance criteria
- As a Service Consumer, I should be able to use upstream CAPI to provision a ROSA HCP cluster. As part of this, account-wide roles and OIDC configuration should be handled by CAPI.
- The following features should be supported by CAPI: adding security groups, 54-character cluster names, internal & BYO identity support, user-tags support, CRI-O logging passthrough, max node drain grace period of 1 week, private clusters, cluster status reporting, machine pool & control plane updates, and cluster deletion.
- No CNI mode with Cilium
- Additional AWS security groups
- AWS resource tags
- >15 character cluster name
- nodeDrainGracePeriod can be set up to 1 week
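Several of the items above (tags, extra security groups, drain grace period) are per-node-pool settings. A hypothetical machine pool manifest showing where they might surface; the kind and every field name here are illustrative assumptions, not the committed upstream API:

```yaml
# Illustrative sketch only: a ROSA machine pool carrying the acceptance-criteria
# knobs listed above (user tags, extra security groups, drain grace period).
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: ROSAMachinePool             # hypothetical kind
metadata:
  name: rosa-hcp-demo-workers
spec:
  nodePoolName: workers
  instanceType: m5.xlarge
  nodeDrainGracePeriod: 168h      # "up to 1 week" from the criteria above
  additionalTags:                 # AWS resource tags (user-tags support)
    team: service-consumer
  additionalSecurityGroups:       # additional AWS security groups
    - sg-0123456789abcdef0
```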
Issue links:
- clones: OCPSTRAT-759 [Upstream] CAPZ provider for ARO with HCP (Backlog)
- is cloned by: OCPSTRAT-985 [Upstream] CAPI provider for ROSA with HCP - Phase 0 (Closed)
- is cloned by: OCPSTRAT-1139 [Upstream] CAPI provider for ROSA with HCP - Phase 2 (MVP-2) (Closed)
- links to