OpenStack as Infra / OSASINFRA-2804

Enable OVS Hardware offload - GA


    • OVS Hardware offload
    • Done
    • OCPPLAN-6495 - ShiftonStack Enable Telco/NFV 5G core and edge/RAN
    • Impediment
    • 0% To Do, 0% In Progress, 100% Done

      Background

      Containerized Network Functions (CNFs) are the evolution of network functions in the cloud, and they are very often deployed on the same hardware that is already used for Virtualized Network Functions (VNFs).

      In OpenShift 4.8, we started to support SR-IOV and DPDK by allowing users to attach additional networks to the workers and create Neutron ports of type direct.

      In 4.9, we will enable more advanced use cases, such as OVS hardware offload, which brings two benefits:

      • Provide greater throughput
      • Increase CPU core efficiency and scalability
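      On the OpenStack side, an offload-capable port is a direct (SR-IOV) port whose binding profile requests the switchdev capability. A minimal sketch of the workflow, assuming a cloud where the admin has already enabled OVS hardware offload; the network, port, image, and flavor names are hypothetical:

      ```shell
      # Create a direct Neutron port that requests OVS hardware offload
      # ("nfv-net" and "offload-port-0" are placeholder names).
      openstack port create \
        --network nfv-net \
        --vnic-type direct \
        --binding-profile '{"capabilities": ["switchdev"]}' \
        offload-port-0

      # Attach the port to a server ("rhcos" and "nfv.large" are placeholders).
      openstack server create \
        --image rhcos \
        --flavor nfv.large \
        --port offload-port-0 \
        offload-worker-0
      ```

      Without the switchdev capability in the binding profile, the port is a plain SR-IOV passthrough port and flows are not offloaded to the NIC.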

      Goals

      • (IPI) Implement the missing features in the installer and its dependencies (e.g. CAPO) to properly configure the Neutron ports
      • Enable developers to test the feature with a minimal viable environment
      • Define a Testing Strategy for OVS TC Flower Offload
      • Document how to use the feature and optionally provide UPI examples.
      • Run benchmarks and validate the results

      Summary

      OVS hardware offload is part of the technologies offered by SR-IOV, and most of the work is done by the system tools, OpenStack, and the SR-IOV Network Operator. Some integration, testing, and documentation has to happen on our side to enable our customers to use that feature in production.
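      On the OpenShift side, the SR-IOV Network Operator configures the NIC through a SriovNetworkNodePolicy custom resource; hardware offload requires the NIC e-switch to run in switchdev mode. A sketch of such a policy, assuming a Mellanox ConnectX-class NIC; the policy name, PF name, and VF count are illustrative only:

      ```shell
      # Generate a SriovNetworkNodePolicy manifest for OVS hardware offload.
      # The resource name, PF name ("ens1f0"), and numVfs are hypothetical.
      cat > sriov-offload-policy.yaml <<'EOF'
      apiVersion: sriovnetwork.openshift.io/v1
      kind: SriovNetworkNodePolicy
      metadata:
        name: ovs-hw-offload
        namespace: openshift-sriov-network-operator
      spec:
        deviceType: netdevice
        eSwitchMode: switchdev        # switch the NIC e-switch to switchdev mode
        nicSelector:
          vendor: "15b3"              # Mellanox PCI vendor ID (assumption)
          pfNames: ["ens1f0"]         # placeholder physical function name
        nodeSelector:
          feature.node.kubernetes.io/network-sriov.capable: "true"
        numVfs: 8
        isRdma: true
        resourceName: switchdev_nic
      EOF

      # Apply it on a cluster with the SR-IOV Network Operator installed:
      # oc apply -f sriov-offload-policy.yaml
      ```

      The exact spec fields should be checked against the operator version shipped with OCP 4.9; the sketch only illustrates where switchdev mode is declared.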

      Definition of Done

      • Feature is complete (code + doc + tests)
      • Developers can reproduce an environment with OVS Hardware offload and test the feature while developing it
      • QE has a testing strategy and can successfully run the tests on time for OCP 4.9
      • The feature is documented.
      • Blog post

      Notes from planning

      • Most of the configuration happens in OpenStack
      • QE has all the needed hardware (via the new Telco lab). Hardware needs to be configured to test any of the NFV scenarios.
      • QE's lab needs to add support for Mellanox 25G NICs; for traffic generation on TRex, we currently have Intel 10G NICs
      • QE has no CI jobs yet and is still working on the new Telco lab; automation is in progress
      • QE will do performance testing
      • UPI is the target now (and what's tested by QE)
      • Write a blog post

              emacchi@redhat.com Emilien Macchi
              mdemaced Maysa De Macedo Souza
              Ziv Greenberg Ziv Greenberg
