OpenShift Request For Enhancement: RFE-8440

Day 0 BGP Enablement for OpenShift Clusters


    • Type: Feature Request
    • Resolution: Unresolved
    • Priority: Normal
    • Component: Network - Core
    • Product / Portfolio Work

      Title:
      Day 0 BGP Enablement for OpenShift Clusters

      Description:
      Enable OpenShift clusters to be deployed on Day 0 using BGP, rather than depending on either external load balancers or L2 adjacency.

       

      The end goal is for a BGP configuration to be ingested at install time, such that two things happen:
      1. When the control plane finishes bootstrapping/installation, all of the control plane nodes are advertising an anycast address for the API VIP into BGP.

      2. When the worker nodes finish bootstrapping/installation, all (of the appropriate) worker nodes are advertising an anycast address for the *.apps VIP into BGP.
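
      To make the intent concrete, here is a purely hypothetical sketch of what such an install-time stanza could look like. None of these field names exist in install-config.yaml today; the structure, addresses, and ASNs are illustrative only:

        platform:
          baremetal:
            bgp:                       # hypothetical stanza, not an existing install-config API
              asn: 64512
              peers:
                - address: 10.0.0.1    # upstream router / ToR, illustrative
                  asn: 64500
              apiVIP: 192.0.2.10       # anycast, advertised by all control plane nodes
              ingressVIP: 192.0.2.20   # anycast, advertised by the appropriate worker nodes

      On each node, the resulting state would be roughly equivalent to placing the anycast VIP on a loopback and announcing it, for example with FRR (standard FRR syntax; the addresses and ASNs are again illustrative):

        ip addr add 192.0.2.10/32 dev lo

        router bgp 64512
         neighbor 10.0.0.1 remote-as 64500
         address-family ipv4 unicast
          network 192.0.2.10/32
         exit-address-family

      Because every control plane node originates the same /32, upstream routers can ECMP API traffic across all of them, and a node that fails (or withdraws its route) drops out of the forwarding path automatically.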

      Reasoning:
      As large customers increasingly look at OpenShift not just as a platform for containers or VMs, but as a holistic approach to building a robust, service-oriented private cloud, they want external dependencies reduced or eliminated. External load balancers (such as F5, Citrix, A10, etc.) represent a hurdle for OCP adoption in several regards:

      • Cost
        • Hardware load balancers (F5, Citrix, A10, etc.) represent significant infrastructure costs for customers
        • When a customer deploys tens or hundreds of OpenShift clusters, this places significant load on the hardware load balancers, forcing the purchase of larger (or more) devices
      • Choke-points
        • Hardware load balancers often funnel traffic for multiple OpenShift clusters through a single pair of hardware devices
        • A single configuration error across a pair of hardware load balancers can result in an outage that affects multiple OpenShift clusters
      • Inter-team friction
        • Network teams typically own the load-balancing equipment, so when application owners want things like additional VIPs (new clusters, endpoints, health checks, etc.), they need to reach out to the network team to get them created
        • Moving the load-balancing function into BGP allows application owners to use Kubernetes primitives for all of those things (readiness probes, blue-green deployments, scaling, etc.; see the sketch after this list)
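
      For reference, this pattern is available on Day 2 today via MetalLB in BGP mode; the ask here is to make equivalent behavior available for the API and *.apps VIPs at install time. A minimal Day-2 MetalLB sketch looks roughly like this (the pool address, ASNs, and peer address are illustrative):

        apiVersion: metallb.io/v1beta2
        kind: BGPPeer
        metadata:
          name: tor
          namespace: metallb-system
        spec:
          myASN: 64512
          peerASN: 64500
          peerAddress: 10.0.0.1
        ---
        apiVersion: metallb.io/v1beta1
        kind: IPAddressPool
        metadata:
          name: ingress-pool
          namespace: metallb-system
        spec:
          addresses:
            - 192.0.2.20/32
        ---
        apiVersion: metallb.io/v1beta1
        kind: BGPAdvertisement
        metadata:
          name: ingress-adv
          namespace: metallb-system
        spec:
          ipAddressPools:
            - ingress-pool

      With this in place, an application owner controls VIP lifecycle and health entirely through Kubernetes objects (Services of type LoadBalancer, readiness probes) rather than through tickets to the network team.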

       

      A primitive version of this has already been started at one FSI customer, and others have expressed interest as well. The initial early adopters would likely be larger customers with significant OpenShift deployments.

              mcurry@redhat.com Marc Curry
              bmarlow@redhat.com Brandon Marlow