  OpenShift Container Platform (OCP) Strategy / OCPSTRAT-264

Compute and control plane nodes on separate subnets for on-prem IPI [Phase 1]


      Feature Overview

      Allow configuring compute and control plane nodes across multiple subnets for on-premise IPI deployments. When nodes are separated into subnets, also allow using an external load balancer instead of the built-in one (keepalived/haproxy) that the IPI workflow installs, so that customers can configure their own load balancer with the Ingress and API VIPs pointing to nodes on the separate subnets.
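
      A minimal install-config.yaml sketch of this intent, as a rough illustration only: the field names and values below are assumptions, not a committed design.

          # Control plane and compute nodes drawn from two different subnets.
          # Other required install-config fields are omitted for brevity.
          networking:
            machineNetwork:          # one entry per subnet that hosts nodes
            - cidr: 10.0.10.0/24     # control plane subnet
            - cidr: 10.0.20.0/24     # compute subnet
          platform:
            baremetal:
              apiVIPs:
              - 10.0.10.5            # in the external LB scenario these addresses
              ingressVIPs:           # would instead be served by the customer's LB
              - 10.0.20.5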

      Goals

      I want to install OpenShift with IPI on an on-premise platform (high priority for bare metal and vSphere) and I need to distribute my control plane and compute nodes across multiple subnets.

      I want to use IPI automation, but I will configure an external load balancer for the API and Ingress VIPs instead of using the built-in keepalived/haproxy-based load balancer that comes with the on-prem platforms.

      Background and strategic fit

      Customers need to use multiple logical availability zones to define the architecture and topology of their datacenters, and OpenShift clusters are expected to fit into this architecture to support their high availability and disaster recovery plans.

      Customers want the benefits of IPI and automated installations (and to avoid UPI); at the same time, when they expect high traffic to their workloads, they design their clusters around external load balancers that host the VIPs of the OpenShift clusters.

      External load balancers can distribute incoming traffic across multiple subnets, which the built-in load balancers cannot do, and this is a major limitation for the topologies customers are designing.

      While this is possible with IPI on AWS, it isn't available for on-premise platforms installed with IPI (for the control plane nodes specifically), and customers see this as a gap in OpenShift for on-premise platforms.

      Functionalities per Epic

       

      Each epic is tracked against these capabilities: Control Plane with Multiple Subnets, Compute with Multiple Subnets, Doesn't need external LB / Built-in LB.

      • NE-1069 (all platforms)
      • NE-905 (all platforms)
      • NE-1086 (vSphere)
      • NE-1087 (Bare Metal)
      • OSASINFRA-2999 (OSP)
      • SPLAT-860 (vSphere)
      • OPNET-133 (vSphere/Bare Metal for AI/ZTP)
      • OSASINFRA-2087 (OSP)
      • KNIDEPLOY-4421 (Bare Metal workaround)
      • SPLAT-409 (vSphere)

      Previous Work

      Workers on separate subnets with IPI documentation

      We can already deploy compute nodes on separate subnets by preventing the built-in load balancers from running on the compute nodes. This is documented only for bare metal, for the Remote Worker Nodes use case: https://docs.openshift.com/container-platform/4.11/installing/installing_bare_metal_ipi/ipi-install-installation-workflow.html#configure-network-components-to-run-on-the-control-plane_ipi-install-installation-workflow

      This procedure works on vSphere too, although it has no QE coverage in CI and is not documented; a rough sketch of the approach follows.
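
      The idea is to keep the built-in load balancer components off the worker pool. The manifest below is only a sketch of that kind of change; the manifest name and static pod path are assumptions, and the linked documentation remains the authoritative procedure.

          # Hypothetical worker MachineConfig that blanks the keepalived static pod
          # manifest so the built-in VIP/LB stack only runs on the control plane.
          apiVersion: machineconfiguration.openshift.io/v1
          kind: MachineConfig
          metadata:
            name: 99-worker-disable-builtin-lb      # assumed name, for illustration
            labels:
              machineconfiguration.openshift.io/role: worker
          spec:
            config:
              ignition:
                version: 3.2.0
              storage:
                files:
                - path: /etc/kubernetes/manifests/keepalived.yaml   # assumed path
                  mode: 420
                  overwrite: true
                  contents:
                    source: data:,      # empty file, so the static pod never starts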

      External load balancer with IPI documentation

      1. Bare Metal: https://docs.openshift.com/container-platform/4.11/installing/installing_bare_metal_ipi/ipi-install-post-installation-configuration.html#nw-osp-configuring-external-load-balancer_ipi-install-post-installation-configuration
      2. vSphere: https://docs.openshift.com/container-platform/4.11/installing/installing_vsphere/installing-vsphere-installer-provisioned.html#nw-osp-configuring-external-load-balancer_installing-vsphere-installer-provisioned
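
      For orientation, a sketch of what an install-time variant of this could look like, together with the traffic an external load balancer has to carry. The loadBalancer field below is an assumption for this feature; the 4.11 documents above describe a post-installation procedure instead.

          platform:
            baremetal:
              loadBalancer:
                type: UserManaged      # assumed knob: skip keepalived/haproxy
              apiVIPs:
              - 192.168.10.10          # addresses answered by the external LB
              ingressVIPs:
              - 192.168.20.10
          # The external load balancer needs, at minimum, frontends for:
          #   6443/tcp    -> control plane nodes (Kubernetes API)
          #   22623/tcp   -> control plane nodes (machine config server)
          #   80,443/tcp  -> nodes running the ingress routers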

      Scenarios

      1. vSphere: I can define 3 or more networks in vSphere and distribute my masters and workers across them. I can configure an external load balancer for the VIPs.
      2. Bare metal: I can configure the IPI installer and the agent-based installer to place my control plane nodes and compute nodes on 3 or more subnets at installation time. I can configure an external load balancer for the VIPs. (See the sketch after this list.)
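
      A minimal sketch for the bare metal scenario, using the per-host networkConfig (nmstate) section of install-config.yaml to place hosts on different subnets. Addresses, interface names, and the hosts shown are illustrative assumptions.

          platform:
            baremetal:
              hosts:
              - name: master-0
                role: master
                # bmc and bootMACAddress omitted for brevity
                networkConfig:             # nmstate applied to this host
                  interfaces:
                  - name: eno1
                    type: ethernet
                    state: up
                    ipv4:
                      enabled: true
                      dhcp: false
                      address:
                      - ip: 10.0.10.20     # control plane subnet
                        prefix-length: 24
              - name: worker-0
                role: worker
                networkConfig:
                  interfaces:
                  - name: eno1
                    type: ethernet
                    state: up
                    ipv4:
                      enabled: true
                      dhcp: false
                      address:
                      - ip: 10.0.20.30     # compute subnet
                        prefix-length: 24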

      Acceptance Criteria

      • Can place compute nodes on multiple subnets with IPI installations
      • Can place control plane nodes on multiple subnets with IPI installations
      • Can configure external load balancers for clusters deployed with IPI with control plane and compute nodes on multiple subnets
      • Can configure VIPs in an external load balancer routed to nodes on separate subnets and VLANs
      • Documentation exists for all the above cases

       
