RFE-8712: Support OCP on OCP Virt as a first-class citizen with standalone control plane

    • Type: Feature Request
    • Resolution: Unresolved

      The target of this RFE is to make OpenShift on OpenShift Virtualization with a standalone control plane a first-class citizen. OpenShift Virtualization should no longer be in the non-tested platform area. (Source: KCS - OpenShift on OpenShift Virtualization support statement)

      Expected outcome of this RFE

      • Clear support statement for a standalone control plane running on OpenShift Virtualization, including QE testing.
        OpenShift Virtualization should be listed under “Tested Infrastructure Layers” at KCS - OpenShift Container Platform 4.x Tested Integrations (for x86_64)
      • Available documentation on how to install OpenShift on OpenShift Virtualization, initially non-integrated, i.e. a UPI and/or Agent-based installation (a sketch follows below this list).
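      To make the non-integrated installation path concrete, here is a minimal sketch of an install-config.yaml for an Agent-based installation with platform “none”, where all nodes run as OpenShift Virtualization virtual machines. All values (cluster name, domain, networks, credentials) are hypothetical placeholders, not part of this RFE.

{code:yaml}
# Hypothetical sketch: Agent-based, non-integrated ("platform: none")
# installation with all nodes running as OpenShift Virtualization VMs.
# Every value below is a placeholder.
apiVersion: v1
metadata:
  name: ocp-on-ocpv              # hypothetical cluster name
baseDomain: example.com
controlPlane:
  name: master
  replicas: 3                    # control plane VMs on OpenShift Virtualization
compute:
- name: worker
  replicas: 2                    # worker VMs on OpenShift Virtualization
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.100.0/24       # network segment the VMs are attached to
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}                       # no cloud-provider integration yet
pullSecret: '<pull-secret>'
sshKey: '<ssh-public-key>'
{code}

      The matching agent-config.yaml (rendezvous IP, per-host network configuration) is omitted here for brevity.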

      Related RFEs

      • Support for KubeVirt CSI; RFE already created: RFE-5612 - Support for kubevirt-csi-driver on standalone RHOCP / non-HCP clusters (raised by a customer)
      • Cloud provider for an integrated installation via UPI and/or Agent-based installation. The RFE is not available yet, but a cloud provider is needed for the full IPI installer (next bullet point).
      • Full IPI installer capabilities. Available issue: CNV-75374 - OCPv provider for OCP (IPI installation on OCPv)
      • The Telco space is looking into a similar approach as well: TELCOSTRAT-325 - P6 Virtual CP on OCP-Virt

      The following are customer use cases where a hosted control plane is not the right choice:

      Customers want to replace a hypervisor (like VMware/vSphere) with OpenShift Virtualization

      Long-term OpenShift customers running OpenShift on any hypervisor have a number of automated cluster configurations via Ansible or a GitOps approach. 

      As a first step, they want to replace just the hypervisor layer: introduce bare-metal OpenShift to the operations team, adjust the automation to deploy clusters on the new hypervisor, and otherwise keep the existing cluster behavior and cluster configuration unchanged.

      Switching from a standalone control plane to a hosted control plane at that point imposes a high rate of change on the operations team and can destabilize the offered OpenShift service. In the mid to long term, a hosted control plane is a valid option.

      Additionally, a cluster installed without any integration can easily be migrated (online or offline) via the Migration Toolkit for Virtualization.

      It also helps customers optimize their costs by using OpenShift Virtualization Engine (OVE) for the bare-metal hypervisor layer.

      Customers want to place the entire OpenShift cluster into a separate network segment

      The customer would like to provide an OpenShift cluster in a separate network segment. Technically, this is possible with a hosted control plane, but it is far more complex than running control plane virtual machines in a separate network. A high-level technical comparison follows.

       

      Hosted control plane:
      • Configure MetalLB in the network segment for the API (Kubernetes Service of type LoadBalancer).
      • The ingress of the infra/bare-metal cluster has to be reachable from the particular network segment, because the workers fetch the ignition configuration from the infra/bare-metal cluster (which is a security risk for some customers).
      • The ingress of the hosted cluster has to be handled either via MetalLB in the hosted cluster, or via MetalLB / Ingress in the infra/bare-metal cluster (which is a security risk for some customers).
      • Overall: complex OpenShift configuration from an operations perspective; additionally, it is complex to debug network issues.

      Standalone control plane:
      • Attach the control plane and worker virtual machines to the network segment via Linux Bridge or Localnet (see the sketch below).
      • Overall: straightforward configuration and easy to understand.
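      As an illustration of the standalone approach, here is a minimal sketch of a NetworkAttachmentDefinition that attaches the control plane and worker virtual machines to the separate network segment via a Linux bridge; an OVN-Kubernetes Localnet definition follows the same pattern. The bridge device, namespace, and network names are hypothetical.

{code:yaml}
# Hypothetical sketch: secondary network for the guest cluster's VMs,
# using the Linux bridge CNI shipped with OpenShift Virtualization.
# The namespace, network name, and bridge device are placeholders.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: segment-a
  namespace: guest-cluster-vms
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "segment-a",
      "type": "cnv-bridge",
      "bridge": "br-segment-a"
    }
{code}

      Each VirtualMachine then references this network via a Multus interface (networkName: guest-cluster-vms/segment-a), so all cluster traffic stays inside the segment.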

       

       

      Customers want to separate cluster credentials / access between different ops teams

      In highly regulated environments, cluster credentials should not be shared between different teams. For example:

      • virt-team provides an infra/bare-metal OpenShift cluster.
      • ocp-team operates the clusters running on the infra/bare-metal OpenShift cluster.

      With a hosted control plane, the virt-team has easy access to the cluster-admin credentials of all hosted clusters.

      With a standalone control plane, the virt-team has access to the virtual machines, which is well accepted in the industry. Easy access to the OpenShift cluster running in those virtual machines is not possible, because the cluster credentials are NOT stored in the infra/bare-metal OpenShift cluster.
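      To illustrate this separation, here is a minimal, hypothetical RBAC sketch for the infra/bare-metal cluster: the virt-team gets VM lifecycle access in the namespace that hosts the guest cluster's virtual machines, while no guest-cluster credentials exist in that namespace. The namespace and group names are placeholders.

{code:yaml}
# Hypothetical sketch: the virt-team may manage the guest cluster's VMs
# on the infra cluster; the guest cluster's own credentials are not
# stored here, so this grants no access to the guest cluster itself.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vm-operators
  namespace: guest-cluster-vms       # placeholder namespace holding the VMs
rules:
- apiGroups: ["kubevirt.io"]
  resources: ["virtualmachines", "virtualmachineinstances"]
  verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: ["subresources.kubevirt.io"]
  resources: ["virtualmachines/start", "virtualmachines/stop", "virtualmachines/restart"]
  verbs: ["update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: virt-team-vm-operators
  namespace: guest-cluster-vms
subjects:
- kind: Group
  name: virt-team                    # placeholder group for the virt-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: vm-operators
  apiGroup: rbac.authorization.k8s.io
{code}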

      Customers regulated by a national security agency

      On the topic of sovereignty, some customers want to obtain specific sovereignty qualifications in order to host certain critical workloads. For example, in France, ANSSI (the French national IT security agency) can qualify a platform with the SecNumCloud qualification. We are working with ANSSI to present HCP and perform some penetration tests, but historically ANSSI does not trust Kubernetes isolation and prefers virtualization isolation.

      So these customers do not want to wait for HCP validation by ANSSI before deploying OCP on OCP; supportability without HCP would benefit them.

       

              Assignee: Stephanie Stout
              Reporter: Robert Bohne