OpenShift Container Platform (OCP) Strategy / OCPSTRAT-2818

Support for Provider-Supplied Initial Manifests in Hosted Clusters


      Feature Overview (aka. Goal Summary)

      Enable managed OpenShift providers (such as ROSA Cluster Service) to apply one-time initial configuration to Hosted Clusters during provisioning. These provider-supplied manifests are applied once when the cluster becomes available, after which full ownership transfers to the customer who can freely modify or delete the resources. This capability mirrors the --manifests directory functionality available in standalone OpenShift installations, adapted for the asynchronous, decoupled architecture of Hosted Control Planes.

      Goals (aka. expected user outcomes)

      Primary Persona: Managed OpenShift Provider (e.g., ROSA Cluster Service, ARO)

      Outcomes:

      • Providers can specify Kubernetes manifests to be applied to newly provisioned Hosted Clusters as part of the cluster creation workflow
      • Manifests are applied exactly once (one-shot semantics) after the guest cluster API becomes available
      • After application, customers have full ownership of the applied resources and can modify or delete them without interference
      • Providers have visibility into manifest application status via a condition on the HostedCluster resource
      • The mechanism prevents accidental re-application of manifests on subsequent reconciliations

      Expanded Features:

      • Extends the HostedClusterConfigOperator (HCCO) reconciliation to support provider-supplied initial configuration
      • Leverages the existing KubeAPIServerAvailable condition as a trigger point
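
      To make the intended trigger-and-gate behavior concrete, the sketch below models a single HCCO-style reconciliation pass in Python. The condition names KubeAPIServerAvailable and InitialManifestsApplied come from this document; the function names, the flat condition dictionary, and the applied-resource list are illustrative assumptions, not HyperShift code.

```python
# Hypothetical sketch of the one-shot gate described above. Condition names
# are from this feature; everything else is illustrative, not HyperShift API.

def should_apply_initial_manifests(conditions: dict) -> bool:
    """Apply only once the guest API is up and no prior application happened."""
    api_available = conditions.get("KubeAPIServerAvailable") == "True"
    already_applied = conditions.get("InitialManifestsApplied") == "True"
    return api_available and not already_applied

def reconcile(conditions: dict, manifests: list, applied: list) -> dict:
    """One reconciliation pass: apply exactly once, then record the condition."""
    if should_apply_initial_manifests(conditions):
        applied.extend(manifests)  # one-shot application to the guest cluster
        conditions["InitialManifestsApplied"] = "True"
    return conditions
```

      On every later pass the InitialManifestsApplied condition is already True, so the gate never fires again, which is what prevents re-application on subsequent reconciliations.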

      Requirements (aka. Acceptance Criteria):

      Functional Requirements:

      • Provider can specify one or more ConfigMaps containing manifests to apply at cluster creation
      • Manifests are applied to the guest cluster after KubeAPIServerAvailable condition becomes True
      • Each manifest is applied exactly once; subsequent reconciliations do not re-apply or overwrite customer modifications
      • Application status is reported via a condition (e.g., InitialManifestsApplied) on the HostedCluster/HostedControlPlane
      • (Tentative) Multiple ConfigMaps are supported with deterministic ordering
      • If no manifests are specified, the cluster provisions normally with no additional behavior
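
      The tentative "deterministic ordering" requirement can be satisfied with a very simple scheme: sort referenced ConfigMaps by name, then sort keys within each ConfigMap. The sketch below assumes that scheme purely for illustration; the actual ordering rule is an open design choice for this feature.

```python
# Illustrative ordering sketch for the tentative multi-ConfigMap requirement.
# Sorting by ConfigMap name, then data key, is one assumed deterministic
# scheme, not confirmed HyperShift behavior.

def ordered_manifests(configmaps: dict) -> list:
    """Flatten {configmap-name: {key: manifest}} into a stable apply order."""
    out = []
    for cm_name in sorted(configmaps):
        for key in sorted(configmaps[cm_name]):
            out.append((cm_name, key, configmaps[cm_name][key]))
    return out
```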

      Non-Functional Requirements:

      • Security: The mechanism must prevent day-2 abuse (e.g., applying arbitrary manifests after initial provisioning)
      • Security: Only explicitly referenced ConfigMaps are applied; discovery-based approaches are not used
      • Reliability: Partial failures are handled gracefully with clear error reporting
      • Maintainability: Implementation uses existing HyperShift patterns (annotations, conditions, HCCO reconciliation)
      • Testability: Behavior is deterministic and can be verified through automated testing

       

      Deployment considerations:

      (N/A = not applicable)

      • Self-managed, managed, or both: Primarily managed (ROSA, ARO), but available for self-managed HyperShift deployments
      • Classic (standalone cluster): N/A - this feature is specific to Hosted Control Planes
      • Hosted control planes: Yes - this is the target deployment model
      • Multi node, Compact (three node), or Single node (SNO): All node configurations supported
      • Connected / Restricted Network: Both supported; manifests are stored in ConfigMaps on the management cluster
      • Architectures (x86_64, ARM (aarch64), IBM Power (ppc64le), IBM Z (s390x)): All architectures supported by HyperShift
      • Operator compatibility: Requires coordination with ROSA Cluster Service and ARO for integration
      • Backport needed (list applicable versions): TBD based on managed service requirements
      • UI need (e.g. OpenShift Console, dynamic plugin, OCM): No UI required; API/annotation-based interface
      • Other: N/A

      Use Cases (Optional):

      Use Case 1: ROSA HCP Default CMO Configuration

      • Actor: ROSA Cluster Service
      • Scenario: ROSA requires Hosted Clusters to have production-ready Cluster Monitoring Operator (CMO) configuration by default, including persistent storage and topology-aware scheduling
      • Flow:
        a. ROSA CS creates a ConfigMap with the default CMO configuration in the HCP namespace
        b. ROSA CS creates the HostedCluster with an annotation referencing the ConfigMap
        c. HyperShift provisions the cluster and applies the CMO configuration when the API is available
        d. Customer receives a cluster with production-ready monitoring; they can customize it as needed
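
      The two objects ROSA CS would create in steps (a) and (b) might look like the Python dictionaries below. The HostedCluster and ConfigMap kinds are real Kubernetes/HyperShift resources, but the annotation key "hypershift.openshift.io/initial-manifests", the resource names, and the namespace layout are assumed placeholders; this feature has not yet fixed the annotation format.

```python
# Hypothetical shapes of the ConfigMap and annotated HostedCluster from
# Use Case 1. The annotation key and all names are illustrative assumptions.

cmo_configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "default-cmo-config", "namespace": "clusters-my-hcp"},
    "data": {
        # One-shot CMO configuration manifest content would go here.
        "cluster-monitoring-config.yaml": "# CMO config manifest content\n",
    },
}

hosted_cluster = {
    "apiVersion": "hypershift.openshift.io/v1beta1",
    "kind": "HostedCluster",
    "metadata": {
        "name": "my-hcp",
        "namespace": "clusters",
        "annotations": {
            # Assumed annotation key referencing the ConfigMap above.
            "hypershift.openshift.io/initial-manifests": "default-cmo-config",
        },
    },
}
```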

      Use Case 2: Provider-Specific Security Defaults

      • Actor: Managed service provider
      • Scenario: Provider wants to apply security-related defaults (e.g., NetworkPolicy, SecurityContextConstraints) that meet their compliance requirements
      • Flow: Same as Use Case 1, with security-related manifests instead of CMO configuration

      Use Case 3: No Initial Manifests

      • Actor: Self-managed HyperShift user
      • Scenario: User creates a HostedCluster without specifying initial manifests
      • Flow: Cluster provisions normally with no additional configuration applied

      Questions to Answer (Optional):

      • Should the feature support Secrets in addition to ConfigMaps for sensitive bootstrap data?
      • (If using a time restriction for the initial-manifests annotation) What is the appropriate time window for manifest application (proposed: 30 minutes from cluster creation)?
      • How should partial application failures be handled (retry, rollback, or continue with error reporting)?
      • Should manifests be validated before application to prevent invalid resources?
      • Should the InitialManifestsApplied condition be bubbled up from HostedControlPlane to HostedCluster for easier observability?
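
      If the time-restriction option is chosen, the check itself is trivial; the sketch below models the proposed 30-minute window from cluster creation. The window length and function name are taken from the open question above, so this is a sketch of a proposal, not decided behavior.

```python
from datetime import datetime, timedelta, timezone

# Sketch of the tentative time restriction: only honor the initial-manifests
# annotation within a window of cluster creation (proposed: 30 minutes).
APPLY_WINDOW = timedelta(minutes=30)

def within_apply_window(creation_ts: datetime, now: datetime) -> bool:
    """True while the cluster is still young enough for initial manifests."""
    return now - creation_ts <= APPLY_WINDOW
```

      A guard like this would also address the day-2 abuse concern in the non-functional requirements, since annotations added after the window would simply be ignored.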

      Out of Scope

      • Continuous reconciliation: This feature provides one-shot application only. Continuously reconciled configuration should use existing mechanisms (e.g., GitOps, operator-managed resources)
      • Manifest validation/admission: Manifests are applied as-provided; validation of manifest content is the provider's responsibility

      Background

      In standalone OpenShift, the installer provides a manifests/ directory where users can place custom Kubernetes manifests to be applied during cluster bootstrap. This is a synchronous, one-shot operation performed before the cluster is fully available.

      HyperShift's architecture differs fundamentally:

      • The control plane runs on a management cluster, separate from the data plane
      • There is no bootstrap node running the installer
      • The HostedClusterConfigOperator (HCCO) is responsible for applying configuration to the guest cluster

      This feature bridges the gap by providing equivalent functionality adapted for HyperShift's architecture, using the HCCO and existing conditions (KubeAPIServerAvailable) to determine when the guest cluster is ready to receive manifests.

      Related Tickets:

      • RFE-8036: Support for Default One-Shot Configuration in Hosted Clusters
      • ROSA-14: ROSA HCP: M3 Day-1 CMO Config with Day-2 Customization

      Customer Considerations

      Ownership: After initial application, customers have full ownership of the resources; providers do not re-apply or overwrite customer changes

      Discoverability: Customers should be able to identify which resources were applied by the provider (e.g., via labels or annotations on applied resources)

      Support: Support documentation should clarify the distinction between provider defaults and customer customizations

      Documentation Considerations

      Provider Documentation (ROSA/ARO internal):

      • How to create ConfigMaps with initial manifests
      • Annotation format for referencing manifests
      • How to verify successful application via conditions
      • Best practices for manifest content

      Customer Documentation:

      • Explanation that managed clusters may include provider-supplied default configuration
      • How to identify provider-applied resources
      • Confirmation that customers can modify or delete provider-applied resources
      • Guidance on when modifications might conflict with provider recommendations

      HyperShift Documentation:

      • Technical reference for the annotation and condition
      • Example ConfigMap format
      • Integration guidance for custom providers

      Interoperability Considerations

      Impacted Projects:

      • ROSA HCP: Primary consumer; will use this to apply default CMO configuration
      • ARO HCP: Potential consumer for Azure-specific defaults
      • ACM/MCE: May leverage this for multi-cluster management scenarios
      • HyperShift (upstream): Implementation location

      Interoperability Test Scenarios:

      • ROSA CS creates cluster with initial manifests → verify manifests applied and condition set
      • ROSA CS creates cluster without initial manifests → verify normal provisioning
      • Customer modifies provider-applied resource → verify modification persists across reconciliations
      • Customer deletes provider-applied resource → verify deletion is respected
      • Manifest application fails (invalid YAML, missing namespace) → verify error reported, cluster still functional
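
      The ownership scenarios (customer modifies or deletes a provider-applied resource) can be captured as a small deterministic simulation, in line with the testability requirement. Everything here is illustrative: it models the intended semantics only, not HyperShift code.

```python
# Toy model of the ownership test scenarios: after the one-shot apply,
# reconciliation must never re-apply, overwrite, or restore these resources.
# All names and structures are illustrative assumptions.

def reconcile_guest(cluster: dict, manifests: dict, state: dict) -> None:
    """One-shot apply of provider manifests; later passes leave them alone."""
    if not state.get("applied"):
        for name, obj in manifests.items():
            cluster[name] = dict(obj)  # initial application to the guest
        state["applied"] = True
    # After the one-shot, reconciliation intentionally does nothing here,
    # so customer edits and deletions persist.
```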

      Version Considerations:

      • Feature should be available in all HyperShift versions supporting ROSA HCP
      • Backward compatibility: Clusters without the annotation continue to work normally

              Assignee: Unassigned
              Reporter: Antoni Segura Puimedon (asegurap1@redhat.com)