Project: Red Hat OpenStack Services on OpenShift
Issue: OSPRH-6932

Changes in initial rhoso18 BGP job required to run tests


    • Type: Story
    • Resolution: Done
    • Priority: Blocker
    • Fix Version: rhos-18.0.0

      OSPRH-2146 will be closed soon; once that story is complete, we can deploy rhoso18+bgp using the downstream rhoso18 Zuul CI.

      However, some changes are still necessary to properly test rhoso18+bgp, and this ticket addresses them:

      1. Add networkers to the setup: one limitation of rhoso18+bgp setups is that OVN GW ports cannot be scheduled on OCP workers (a verification sketch for this constraint follows this list).
        Why? ovn-bgp-agent does not run on OCP workers (MetalLB would need to support frr-k8s for that), so routes to GW ports could not be advertised from OCP workers and dataplane connectivity would be affected.
      2. Distribute OCP nodes across separate racks. With OSPRH-2146, we implemented a simpler case where the OCP nodes are all located on a common rack. Apparently, customers are more interested in setups where OCP workers are distributed among the same racks where the EDPM nodes are located.
      3. Create an extra OCP worker that only runs test pods. Without this, running scenario tests using test-operator is not possible. This OCP worker will have a special configuration (see the label/taint sketch after this list):
        1. test-operator pods will always be scheduled on this worker
        2. no other pods/services/resources will be scheduled on this worker
        3. it will be connected to the spine/leaf virtual infrastructure, routing traffic to the OSP provider network directly through the spines, i.e., it won't be included in any of the racks but will be connected directly to the spines
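
A minimal verification sketch for item 1, assuming ovn-sbctl can reach the OVN southbound DB from where it runs and that the OCP worker hostnames are known from the job inventory; the hostname set below is a placeholder, not the job's real inventory. It flags any chassis that belongs to an OCP worker and also advertises enable-chassis-as-gw (i.e. could host OVN GW ports).

{code:python}
#!/usr/bin/env python3
"""Check that no OCP worker is registered as an OVN gateway chassis."""
import json
import subprocess
import sys

OCP_WORKER_HOSTNAMES = {"worker-0", "worker-1", "worker-2"}  # placeholder values


def ovsdb_map(cell):
    """Decode an OVSDB JSON map cell, encoded as ["map", [[key, value], ...]]."""
    return dict(cell[1]) if isinstance(cell, list) and cell and cell[0] == "map" else {}


def gateway_chassis_hostnames():
    """Return hostnames of chassis that advertise enable-chassis-as-gw."""
    out = subprocess.run(
        ["ovn-sbctl", "--format=json", "list", "Chassis"],
        check=True, capture_output=True, text=True,
    ).stdout
    table = json.loads(out)
    headings = table["headings"]
    gw_hosts = set()
    for row in table["data"]:
        # ovn-cms-options may live in other_config or external_ids depending
        # on the OVN version, so merge both before checking.
        options = {}
        options.update(ovsdb_map(row[headings.index("external_ids")]))
        options.update(ovsdb_map(row[headings.index("other_config")]))
        if "enable-chassis-as-gw" in options.get("ovn-cms-options", ""):
            gw_hosts.add(row[headings.index("hostname")])
    return gw_hosts


def main():
    offenders = gateway_chassis_hostnames() & OCP_WORKER_HOSTNAMES
    if offenders:
        print(f"OCP workers registered as OVN gateway chassis: {sorted(offenders)}")
        return 1
    print("OK: GW ports can only be scheduled on networker/EDPM chassis.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
{code}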
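
For item 3, the underlying OCP mechanism for dedicating a worker to test pods is a node label plus a NoSchedule taint, which the test pods then target with a nodeSelector and a matching toleration. The sketch below only illustrates that mechanism with the kubernetes Python client; the node name, label and taint key are assumptions for illustration, not the names the job will actually use, and test-operator's own placement options should be used where available.

{code:python}
#!/usr/bin/env python3
"""Reserve one OCP worker for test pods via a label and a NoSchedule taint.

Assumes a cluster-admin kubeconfig; node/label/taint names are placeholders.
"""
from kubernetes import client, config

TEST_NODE = "worker-tests-0"                 # placeholder node name
TEST_LABEL = {"testpods": "true"}            # placeholder selector label
TEST_TAINT = {"key": "testpods", "value": "true", "effect": "NoSchedule"}


def reserve_test_worker():
    """Label and taint the dedicated worker."""
    config.load_kube_config()
    core = client.CoreV1Api()
    core.patch_node(TEST_NODE, {
        "metadata": {"labels": TEST_LABEL},   # lets test pods select this node
        "spec": {"taints": [TEST_TAINT]},     # repels pods without a matching toleration
    })


def test_pod_spec(image="quay.io/example/tempest:latest"):
    """Pod spec fragment a test pod needs: select the node, tolerate the taint."""
    return client.V1PodSpec(
        node_selector=TEST_LABEL,
        tolerations=[client.V1Toleration(
            key="testpods", operator="Equal", value="true", effect="NoSchedule")],
        containers=[client.V1Container(name="tests", image=image)],
    )


if __name__ == "__main__":
    reserve_test_worker()
{code}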

              Assignee: Eduardo Olivares Toledo (eolivare)
              Reporter: Eduardo Olivares Toledo (eolivare)
              rhos-dfg-networking-squad-bgp
