Red Hat OpenStack Services on OpenShift / OSPRH-10766

Octavia without an OpenShift ovncontroller

    • Type: Epic
    • Resolution: Unresolved
    • Component: octavia-operator
    • Epic Name: Octavia without in cluster OVS
    • Status: To Do

      Octavia on OpenShift requires a network connection to the amphorae, a.k.a. the load balancer management network. In RHOSP, this was achieved by creating a virtual OVS interface on the OVS integration bridge on each controller or networker and configuring it with properties matching a neutron port for that interface, so neutron could bind it and allow network traffic between the agents running on the controller or networker and the amphorae running in the cluster. The approach had certain advantages: it did not require any additional network provisioning, which made it easy to deploy, and outside of some Octavia-specific complexity it didn't require special handling in the rest of the framework. It worked fine for composable roles because the ansible code that configured these things was portable across most role combinations. Still, while it worked, it was considered a bit of a hack as it was far outside the normal use of neutron.
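      For reference, the RHOSP-era plumbing looked roughly like the sketch below. This is an illustrative reconstruction, not the actual TripleO code: the interface name (o-hm0), network name (lb-mgmt-net) and addresses are assumptions. The key trick is setting external-ids:iface-id on the OVS interface so the local ovs/ovn agent matches it to the neutron port and binds it.

        # Create a neutron port on the management network for this host
        # (names and CIDRs here are illustrative).
        MGMT_PORT_ID=$(openstack port create octavia-health-manager-listen-port \
            --network lb-mgmt-net --device-owner Octavia:health-mgr \
            --host "$(hostname)" -f value -c id)
        MGMT_PORT_MAC=$(openstack port show "$MGMT_PORT_ID" -f value -c mac_address)

        # Plug a matching internal interface into the integration bridge; the
        # external-ids let the local agent recognize the neutron port and bind it.
        ovs-vsctl -- --may-exist add-port br-int o-hm0 \
            -- set Interface o-hm0 type=internal \
            -- set Interface o-hm0 external-ids:iface-status=active \
            -- set Interface o-hm0 external-ids:attached-mac="$MGMT_PORT_MAC" \
            -- set Interface o-hm0 external-ids:iface-id="$MGMT_PORT_ID"

        # Give the interface the port's MAC and an address on the lb-mgmt
        # network so the Octavia agents can reach the amphorae.
        ip link set dev o-hm0 address "$MGMT_PORT_MAC"
        ip addr add 172.24.0.2/24 dev o-hm0
        ip link set o-hm0 up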
       
      For RHOSO we have a different situation. We can no longer create a virtual interface to "tap into" the integration bridge - it would require octavia-specific support to be added to the ovn controller, and even if that were possible it would still leave the question of how to establish connectivity with the Octavia agents running in different pods - possibly on other worker nodes. However, OpenShift networking is more flexible than TripleO was, so we can instead use normal Neutron networking techniques to create the load balancer management network. See https://github.com/openstack-k8s-operators/octavia-operator/blob/main/MANAGEMENT_NETWORK.md for an overview of how it works. Our initial focus for RHOSO 18 was on deployments without dedicated networker nodes, as there was information that prospective customers at the time were strongly not in favour of having hardware dedicated to networker nodes. Consequently, all of our testing and development was focused on having a neutron-controlled Open vSwitch pod in the OpenShift cluster.
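      In standard Neutron terms the technique boils down to something like the sketch below. The names (the "octavia" physnet, lb-mgmt-net, the VLAN segment and CIDR) are illustrative assumptions; the operator's actual objects and defaults are described in the linked MANAGEMENT_NETWORK.md.

        # A provider network whose physical network name is bridge-mapped (today
        # in the in-cluster ovn/ovs pod, potentially on a networker node instead)
        # to an interface the Octavia service pods can reach.
        openstack network create lb-mgmt-net \
            --provider-network-type vlan \
            --provider-physical-network octavia \
            --provider-segment 100

        # Subnet the amphorae and the Octavia agents use to talk to each other.
        openstack subnet create lb-mgmt-subnet \
            --network lb-mgmt-net \
            --subnet-range 172.23.0.0/16 \
            --gateway none \
            --allocation-pool start=172.23.0.10,end=172.23.255.250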
       
      That being said, using standard neutron networking here might pay off in overall flexibility. While we didn't test it, it isn't immediately obvious that it cannot be made to work on dedicated networker nodes. These are the steps as I see them:
       

      • Make sure the octavia-operator doesn't have a hard dependency on the existence of OVN in the cluster.
      • Try it by:
        • extending the Octavia network onto a networker node,
        • configuring the networker node with the bridge mapping that connects the Octavia provider network to that network (see the previously linked document for how this is done in-cluster, and the sketch below),
        • enabling Octavia and running some tests.
      • If it actually works, adding real support would require:
        • modifying the relevant CI job definitions to allow the Octavia network to extend to the networker nodes,
        • confirming all is well and adding it to the CI matrix.
         
        I'm not certain it will work, but I can't think of a reason off the top of my head why it would not.
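        A rough, untested sketch of what the "try it" step might look like on a RHOSO deployment. The node set and control plane CR names (networkers, controlplane), the bridge name (br-octavia) and the assumption that the mapping is carried by the edpm_ovn_bridge_mappings ansible variable are all placeholders to be confirmed against the actual dataplane configuration.

          # Hypothetical: add the octavia physnet to the OVN bridge mappings on the
          # networker node set so the Octavia provider network can bind there.
          oc patch openstackdataplanenodeset/networkers --type merge -p \
              '{"spec": {"nodeTemplate": {"ansible": {"ansibleVars": {"edpm_ovn_bridge_mappings": ["datacentre:br-ex", "octavia:br-octavia"]}}}}}'

          # Enable the Octavia service in the control plane, then run the usual
          # Octavia scenario tests against the deployment.
          oc patch openstackcontrolplane/controlplane --type merge -p \
              '{"spec": {"octavia": {"enabled": true}}}'

        If that binds correctly, the Octavia agents should be able to reach amphorae over the lb-mgmt network without any in-cluster OVS pod.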

              Brent Eagles (rhn-engineering-beagles)