Fast Datapath Product / FDP-902

E830 card: OVS kernel PVP performance result is 0 because testpmd inside the guest does not forward packets


      Given a system is set up with OVS and DPDK configured on an E830 card on RHEL 9. A VM is set up to forward packets with testpmd in I/O forwarding mode,
      When a packet-forwarding test is initiated within the testpmd environment,

      Then the testpmd instance should successfully receive and forward packets.

    • rhel-sst-network-fastdatapath
    • ssg_networking

      Description of problem:
E830 card: OVS kernel PVP performance result is 0 because testpmd inside the guest does not forward packets

      Version-Release number of selected component (if applicable):
      [root@wsfd-advnetlab151 ~]# uname -r
      5.14.0-522.el9.x86_64
      [root@wsfd-advnetlab151 ~]# rpm -qa|grep openvs
      openvswitch-selinux-extra-policy-1.0-36.el9fdp.noarch
      openvswitch3.4-3.4.0-9.el9fdp.x86_64
      openvswitch3.3-3.3.0-49.el9fdp.x86_64

      How reproducible:
      Steps to Reproduce:
      # Build the OVS kernel PVP topology:
      [root@wsfd-advnetlab151 perf]# ovs-vsctl show
      b86057b0-ad36-475e-acb0-072b7e51dd70
          Bridge ovsbr0
              Port eno3np0
                  Interface eno3np0
              Port tap_vnet1
                  Interface tap_vnet1
              Port tap_vnet2
                  Interface tap_vnet2
              Port eno4np1
                  Interface eno4np1
              Port ovsbr0
                  Interface ovsbr0
                      type: internal
          ovs_version: "3.4.0-9.el9fdp"
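      The commands used to build this topology are not captured above; a minimal sketch that recreates the same bridge layout (interface names taken from the ovs-vsctl output; the tap devices are assumed to be pre-created on the host because the guest XML below uses managed='no') would be:

      # Sketch (assumption): recreate the kernel PVP bridge shown above.
      ovs-vsctl add-br ovsbr0
      ovs-vsctl add-port ovsbr0 eno3np0
      ovs-vsctl add-port ovsbr0 eno4np1
      ip tuntap add dev tap_vnet1 mode tap
      ip tuntap add dev tap_vnet2 mode tap
      ovs-vsctl add-port ovsbr0 tap_vnet1
      ovs-vsctl add-port ovsbr0 tap_vnet2
      ip link set tap_vnet1 up
      ip link set tap_vnet2 up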
      # Define the guest with the following XML:

      [root@wsfd-advnetlab151 perf]# virsh dumpxml g1
      <domain type='kvm' id='1'>
        <name>g1</name>
        <uuid>b7227f0a-3675-4ef8-9f69-fbb61d67c9cf</uuid>
        <memory unit='KiB'>8388608</memory>
        <currentMemory unit='KiB'>8388608</currentMemory>
        <memoryBacking>
          <hugepages>
            <page size='1048576' unit='KiB'/>
          </hugepages>
          <locked/>
          <access mode='shared'/>
        </memoryBacking>
        <vcpu placement='static'>3</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='4'/>
          <vcpupin vcpu='1' cpuset='58'/>
          <vcpupin vcpu='2' cpuset='2'/>
        </cputune>
        <numatune>
          <memory mode='strict' nodeset='0'/>
        </numatune>
        <resource>
          <partition>/machine</partition>
        </resource>
        <os>
          <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
          <boot dev='hd'/>
        </os>
        <features>
          <acpi/>
          <apic/>
          <pmu state='off'/>
          <vmport state='off'/>
          <ioapic driver='qemu'/>
        </features>
        <cpu mode='host-passthrough' check='none' migratable='on'>
          <feature policy='require' name='tsc-deadline'/>
          <numa>
            <cell id='0' cpus='0-2' memory='8388608' unit='KiB' memAccess='shared'/>
          </numa>
        </cpu>
        <clock offset='utc'>
          <timer name='rtc' tickpolicy='catchup'/>
          <timer name='pit' tickpolicy='delay'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <pm>
          <suspend-to-mem enabled='no'/>
          <suspend-to-disk enabled='no'/>
        </pm>
        <devices>
          <emulator>/usr/libexec/qemu-kvm</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2'/>
            <source file='/var/lib/libvirt/images/g1.qcow2' index='1'/>
            <backingStore/>
            <target dev='vda' bus='virtio'/>
            <alias name='virtio-disk0'/>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </disk>
          <controller type='usb' index='0' model='none'>
            <alias name='usb'/>
          </controller>
          <controller type='pci' index='0' model='pcie-root'>
            <alias name='pcie.0'/>
          </controller>
          <controller type='pci' index='1' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='1' port='0x10'/>
            <alias name='pci.1'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
          </controller>
          <controller type='pci' index='2' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='2' port='0x11'/>
            <alias name='pci.2'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
          </controller>
          <controller type='pci' index='3' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='3' port='0x8'/>
            <alias name='pci.3'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
          </controller>
          <controller type='pci' index='4' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='4' port='0x9'/>
            <alias name='pci.4'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
          </controller>
          <controller type='pci' index='5' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='5' port='0xa'/>
            <alias name='pci.5'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
          </controller>
          <controller type='pci' index='6' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='6' port='0xb'/>
            <alias name='pci.6'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
          </controller>
          <controller type='sata' index='0'>
            <alias name='ide'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:01:02:03'/>
            <source bridge='virbr0'/>
            <target dev='vnet0'/>
            <model type='virtio'/>
            <alias name='net0'/>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
          </interface>
          <interface type='ethernet'>
            <mac address='00:de:ad:00:00:01'/>
            <target dev='tap_vnet1' managed='no'/>
            <model type='virtio'/>
            <driver name='vhost' iommu='on' ats='on'/>
            <alias name='net1'/>
            <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </interface>
          <interface type='ethernet'>
            <mac address='00:de:ad:00:00:02'/>
            <target dev='tap_vnet2' managed='no'/>
            <model type='virtio'/>
            <driver name='vhost' iommu='on' ats='on'/>
            <alias name='net2'/>
            <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
          </interface>
          <serial type='pty'>
            <source path='/dev/pts/1'/>
            <target type='isa-serial' port='0'>
              <model name='isa-serial'/>
            </target>
            <alias name='serial0'/>
          </serial>
          <console type='pty' tty='/dev/pts/1'>
            <source path='/dev/pts/1'/>
            <target type='serial' port='0'/>
            <alias name='serial0'/>
          </console>
          <input type='mouse' bus='ps2'>
            <alias name='input0'/>
          </input>
          <input type='keyboard' bus='ps2'>
            <alias name='input1'/>
          </input>
          <graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0'>
            <listen type='address' address='0.0.0.0'/>
          </graphics>
          <audio id='1' type='none'/>
          <video>
            <model type='virtio' vram='16384' heads='1' primary='yes'/>
            <alias name='video0'/>
            <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
          </video>
          <watchdog model='itco' action='reset'>
            <alias name='watchdog0'/>
          </watchdog>
          <memballoon model='virtio'>
            <alias name='balloon0'/>
            <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
          </memballoon>
          <iommu model='intel'>
            <driver intremap='on' caching_mode='on' iotlb='on'/>
            <alias name='iommu0'/>
          </iommu>
        </devices>
        <seclabel type='dynamic' model='selinux' relabel='yes'>
          <label>system_u:system_r:svirt_t:s0:c774,c929</label>
          <imagelabel>system_u:object_r:svirt_image_t:s0:c774,c929</imagelabel>
        </seclabel>
        <seclabel type='dynamic' model='dac' relabel='yes'>
          <label>+107:+987</label>
          <imagelabel>+107:+987</imagelabel>
        </seclabel>
      </domain>
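      The guest is presumably defined and started with standard libvirt commands; a minimal sketch (the XML file name g1.xml is an assumption):

      # Sketch (assumption): define and start the guest from the XML above.
      virsh define /root/g1.xml
      virsh start g1
      virsh console g1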
      

      # Start testpmd inside the guest to forward packets:
      dpdk-testpmd -l 0-2 -n 1 -a 0000:03:00.0 -a 0000:04:00.0 --socket-mem 1024 --legacy-mem -- -i --forward-mode=io --burst=64 --rxd=512 --txd=512 --nb-cores=2 --rxq=1 --txq=1 --auto-start
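      Before dpdk-testpmd can claim the two virtio devices (the EAL log below shows "VFIO support initialized"), they normally have to be unbound from the kernel virtio-net driver and bound to vfio-pci inside the guest, and hugepages have to be available for --socket-mem. A minimal sketch of those guest-side steps (assumed, not captured in this report):

      # Sketch (assumption): guest-side preparation before launching dpdk-testpmd.
      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
      modprobe vfio-pci
      dpdk-devbind.py -b vfio-pci 0000:03:00.0 0000:04:00.0
      dpdk-devbind.py --status-dev net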
      # Start TRex traffic from the TRex sender:
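      The exact TRex invocation is not recorded here; a typical way to push bidirectional traffic for a PVP run is the stateful binary with a profile shipped with TRex (the install path, profile, rate multiplier -m and duration -d below are placeholders, not the values from this run):

      # Sketch (assumption): generate traffic from the TRex sender.
      cd /opt/trex/current
      ./t-rex-64 -f cap2/dns.yaml -c 2 -m 10 -d 60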

      Actual results:
      testpmd inside the guest does not receive and forward the packets.
      EAL: Detected CPU lcores: 3
      EAL: Detected NUMA nodes: 1
      EAL: Detected shared linkage of DPDK
      EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
      EAL: Selected IOVA mode 'VA'
      EAL: VFIO support initialized
      EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:03:00.0 (socket -1)
      EAL: Using IOMMU type 1 (Type 1)
      EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:04:00.0 (socket -1)
      TELEMETRY: No legacy callbacks, legacy socket not created
      Interactive-mode selected
      Set io packet forwarding mode
      Auto-start selected
      Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
      testpmd: create a new mbuf pool <mb_pool_0>: n=163456, size=2176, socket=0
      testpmd: preferred mempool ops selected: ring_mp_mc
      Configuring Port 0 (socket 0)
      EAL: Error disabling MSI-X interrupts for fd 24
      Port 0: 00:DE:AD:00:00:01
      Configuring Port 1 (socket 0)
      EAL: Error disabling MSI-X interrupts for fd 28
      Port 1: 00:DE:AD:00:00:02
      Checking link statuses...
      Done
      Start automatic packet forwarding
      io packet forwarding - ports=2 - cores=2 - streams=2 - NUMA support enabled, MP allocation mode: native
      Logical Core 1 (socket 0) forwards packets on 1 streams:
      RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
      Logical Core 2 (socket 0) forwards packets on 1 streams:
      RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

      io packet forwarding packets/burst=64
      nb forwarding cores=2 - nb forwarding ports=2
      port 0: RX queue number: 1 Tx queue number: 1
      Rx offloads=0x0 Tx offloads=0x0
      RX queue: 0
      RX desc=512 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0 wthresh=0
      RX Offloads=0x0
      TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0 wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
      port 1: RX queue number: 1 Tx queue number: 1
      Rx offloads=0x0 Tx offloads=0x0
      RX queue: 0
      RX desc=512 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0 wthresh=0
      RX Offloads=0x0
      TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0 wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
      testpmd> step
      Command not found
      testpmd> stop
      Telling cores to stop...
      Waiting for lcores to finish...

      ---------------------- Forward statistics for port 0 ----------------------
      RX-packets: 4096 RX-dropped: 0 RX-total: 4096
      TX-packets: 4096 TX-dropped: 0 TX-total: 4096
      ----------------------------------------------------------------------------

      ---------------------- Forward statistics for port 1 ----------------------
      RX-packets: 4096 RX-dropped: 0 RX-total: 4096
      TX-packets: 4096 TX-dropped: 0 TX-total: 4096
      ----------------------------------------------------------------------------

      +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
      RX-packets: 8192 RX-dropped: 0 RX-total: 8192
      TX-packets: 8192 TX-dropped: 0 TX-total: 8192
      ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

      Done.

      testpmd> set verbose 1
      The following is the testpmd verbose output:
      ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
      src=00:00:00:02:6F:01 - dst=FF:FF:FF:FF:FF:FF - pool=mb_pool_0 - type=0x0806 - length=60 - nb_segs=1 - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x0
      ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
      port 1/queue 0: received 2 packets
      src=00:00:00:02:6E:02 - dst=FF:FF:FF:FF:FF:FF - pool=mb_pool_0 - type=0x0806 - length=60 - nb_segs=1 - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x0
      ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
      src=00:00:00:02:6E:02 - dst=FF:FF:FF:FF:FF:FF - pool=mb_pool_0 - type=0x0806 - length=60 - nb_segs=1 - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x0
      ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
      testpmd>

      Expected results:
      testpmd receives the packets and forwards them correctly.

      Additional info:
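      The verbose output above only shows broadcast ARP frames (EtherType 0x0806) reaching the guest ports. Host-side checks that may help narrow down where the TRex traffic is lost (standard OVS diagnostics, not taken from this run):

      # Sketch (assumption): host-side checks for the kernel OVS datapath.
      ovs-appctl fdb/show ovsbr0      # MAC addresses learned on the bridge
      ovs-appctl dpctl/dump-flows     # datapath flows actually being hit
      ovs-ofctl dump-ports ovsbr0     # per-port rx/tx/drop counters
      ip -s link show tap_vnet1       # kernel counters on the tap device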

      ovsdpdk-triage
      Ting Li <tli@redhat.com>