Fast Datapath Product / FDP-317

[ice driver]: testpmd fails to start inside a container when running cross-NUMA with --cpuset-mems=1

    • Type: Bug
    • Resolution: Won't Do
    • Component: dpdk
    • rhel-9
    • rhel-net-ovs-dpdk
    • ssg_networking
    • OVS/DPDK - FDP-25.D

      Description of problem:

      Version-Release number of selected component (if applicable):

      [root@dell-per740-57 ~]# rpm -qa|grep dpdk
      dpdk-22.11-4.el9.x86_64
      dpdk-tools-22.11-4.el9.x86_64

      [root@dell-per740-57 ~]# uname -r
      5.14.0-284.39.1.el9_2.x86_64

       

      How reproducible:

      Steps to Reproduce:

      1. Create one VF on each of the two PFs

      [root@dell-per740-57 ~]# ip a

      10: ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
          link/ether b4:96:91:a5:d1:0c brd ff:ff:ff:ff:ff:ff
          vf 0     link/ether 6a:f5:25:db:33:d8 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
          altname enp59s0f0
      11: ens1f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
          link/ether b4:96:91:a5:d1:0d brd ff:ff:ff:ff:ff:ff
          vf 0     link/ether 8e:44:73:9b:01:ed brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
          altname enp59s0f1

      2. Start testpmd with the two VFs inside a container, with option --cpuset-mems=1

      [root@dell-per740-57 ~]# podman run -i -t --privileged --cpuset-mems=1 --cpuset-cpus=3,5,7 -v /dev/vfio/vfio:/dev/vfio/vfio -v /dev/hugepages:/dev/hugepages 7aa415186356 dpdk-testpmd -l 3,5,7 -n 4 -m 1024 -- -i --forward-mode=mac --eth-peer=0,00:00:00:00:00:01 --eth-peer=1,00:00:00:00:00:02 --burst=32 --rxd=4096 --txd=4096 --max-pkt-len=9200 --mbuf-size=9728 --nb-cores=2 --rxq=1 --txq=1 --mbcache=512 --auto-start

      3. Start testpmd with the two VFs, with --cpuset-mems=0,1

      [root@dell-per740-57 ~]# podman run -i -t --privileged --cpuset-mems=0,1 --cpuset-cpus=3,5,7 -v /dev/vfio/vfio:/dev/vfio/vfio -v /dev/hugepages:/dev/hugepages 7aa415186356 dpdk-testpmd -l 3,5,7 -n 4 -m 1024 -- -i --forward-mode=mac --eth-peer=0,00:00:00:00:00:01 --eth-peer=1,00:00:00:00:00:02 --burst=32 --rxd=4096 --txd=4096 --max-pkt-len=9200 --mbuf-size=9728 --nb-cores=2 --rxq=1 --txq=1 --mbcache=512 --auto-start
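
      The steps above pin container memory with --cpuset-mems while the VFs themselves sit on a fixed socket (the EAL log in the Actual results reports both devices on socket 0). A device's NUMA node can be read from sysfs; the minimal Python sketch below shows that check. The `pci_numa_node` helper is illustrative, and the PCI addresses are the ones from the EAL output:

      ```python
      from pathlib import Path

      def pci_numa_node(bdf: str):
          """Return the NUMA node of a PCI device as reported by the kernel,
          or None if the device is absent on this host."""
          node_file = Path(f"/sys/bus/pci/devices/{bdf}/numa_node")
          if not node_file.exists():
              return None
          return int(node_file.read_text())

      # VF addresses from the EAL log; -1 means the platform reports no affinity
      for bdf in ("0000:3b:01.0", "0000:3b:11.0"):
          print(bdf, "->", pci_numa_node(bdf))
      ```

      On hosts where firmware reports no affinity, numa_node reads -1; the interesting case for this bug is a device on node 0 combined with --cpuset-mems=1.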

      Actual results:

      Step 2 fails with "set_mempolicy: Invalid argument":

      [root@dell-per740-57 ~]# podman run -i -t --privileged --cpuset-mems=1 --cpuset-cpus=3,5,7 -v /dev/vfio/vfio:/dev/vfio/vfio -v /dev/hugepages:/dev/hugepages 7aa415186356 dpdk-testpmd -l 3,5,7 -n 4 -m 1024 -- -i --forward-mode=mac --eth-peer=0,00:00:00:00:00:01 --eth-peer=1,00:00:00:00:00:02 --burst=32 --rxd=4096 --txd=4096 --max-pkt-len=9200 --mbuf-size=9728 --nb-cores=2 --rxq=1 --txq=1 --mbcache=512 --auto-start
      EAL: Detected CPU lcores: 48
      EAL: Detected NUMA nodes: 2
      EAL: Detected shared linkage of DPDK
      EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
      EAL: Selected IOVA mode 'VA'
      EAL: No available 2048 kB hugepages reported
      EAL: VFIO support initialized
      EAL: Using IOMMU type 1 (Type 1)
      EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:3b:01.0 (socket 0)
      set_mempolicy: Invalid argument
      EAL: Releasing PCI mapped resource for 0000:3b:01.0
      EAL: Calling pci_unmap_resource for 0000:3b:01.0 at 0x2200000000
      EAL: Calling pci_unmap_resource for 0000:3b:01.0 at 0x2200020000
      EAL: Requested device 0000:3b:01.0 cannot be used
      EAL: Using IOMMU type 1 (Type 1)
      EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:3b:11.0 (socket 0)
      set_mempolicy: Invalid argument
      EAL: Releasing PCI mapped resource for 0000:3b:11.0
      EAL: Calling pci_unmap_resource for 0000:3b:11.0 at 0x2200024000
      EAL: Calling pci_unmap_resource for 0000:3b:11.0 at 0x2200044000
      EAL: Requested device 0000:3b:11.0 cannot be used
      TELEMETRY: No legacy callbacks, legacy socket not created
      testpmd: No probed ethernet devices
      Interactive-mode selected
      Set mac packet forwarding mode
      Fail: input rxq (1) can't be greater than max_rx_queues (0) of port 0
      EAL: Error - exiting with code: 1
        Cause: rxq 1 invalid - must be >= 0 && <= 0
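
      For context on the failure mode: the "set_mempolicy: Invalid argument" line is perror-style output from the set_mempolicy(2) syscall that DPDK's NUMA-aware allocator issues to bind memory to the probed device's socket (socket 0 here). The kernel rejects a policy naming a node outside the cgroup's cpuset.mems (only node 1 is allowed under --cpuset-mems=1) with EINVAL. Below is a small ctypes sketch of the syscall mechanics, assuming a Linux host on x86_64 or aarch64 (the syscall numbers are per-architecture assumptions); it triggers the EINVAL path with an invalid mode, since reproducing the real cross-cpuset bind needs a two-node machine:

      ```python
      import ctypes
      import ctypes.util
      import errno
      import platform

      # set_mempolicy(2) syscall numbers (architecture-specific; Linux only)
      SYS_SET_MEMPOLICY = (
          {"x86_64": 238, "aarch64": 237}.get(platform.machine())
          if platform.system() == "Linux"
          else None
      )

      MPOL_BIND = 2  # strict bind, the policy class checked against cpuset.mems

      def set_mempolicy(mode: int, nodemask: int, maxnode: int = 64):
          """Issue the raw syscall; return (ret, errno)."""
          libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
          mask = ctypes.c_ulong(nodemask)
          ctypes.set_errno(0)
          ret = libc.syscall(SYS_SET_MEMPOLICY, mode, ctypes.byref(mask), maxnode)
          return ret, ctypes.get_errno()

      if SYS_SET_MEMPOLICY is not None:
          # An out-of-range mode is rejected with EINVAL, the same errno behind
          # "set_mempolicy: Invalid argument". MPOL_BIND to a node outside the
          # container's cpuset.mems (node 0 under --cpuset-mems=1) fails the
          # same way.
          ret, err = set_mempolicy(999, 0b1)  # 999 is not a valid MPOL_* mode
          print(ret, errno.errorcode.get(err, err))
      ```

      This also explains the cascade in the log: once the bind fails, EAL releases the device, testpmd ends up with zero ports, and the rxq check exits with "max_rx_queues (0)".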

       

      Step 3 runs successfully (note the command below also passes --no-numa --socket-num=1):

      [root@dell-per740-57 ~]# podman run -i -t --privileged --cpuset-mems=0,1 --cpuset-cpus=3,5,7 -v /dev/vfio/vfio:/dev/vfio/vfio -v /dev/hugepages:/dev/hugepages 7aa415186356 dpdk-testpmd -l 3,5,7 -n 4 -m 1024 -- -i --no-numa --socket-num=1 --forward-mode=mac --eth-peer=0,00:00:00:00:00:01 --eth-peer=1,00:00:00:00:00:02 --burst=32 --rxd=4096 --txd=4096 --max-pkt-len=9200 --mbuf-size=9728 --nb-cores=2 --rxq=1 --txq=1 --mbcache=512 --auto-start
      EAL: Detected CPU lcores: 48
      EAL: Detected NUMA nodes: 2
      EAL: Detected shared linkage of DPDK
      EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
      EAL: Selected IOVA mode 'VA'
      EAL: No available 2048 kB hugepages reported
      EAL: VFIO support initialized
      EAL: Using IOMMU type 1 (Type 1)
      EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:3b:01.0 (socket 0)
      EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:3b:11.0 (socket 0)
      TELEMETRY: No legacy callbacks, legacy socket not created
      Interactive-mode selected
      Set mac packet forwarding mode
      Auto-start selected
      testpmd: create a new mbuf pool <mb_pool_1>: n=180224, size=9728, socket=1
      testpmd: preferred mempool ops selected: ring_mp_mc
      Configuring Port 0 (socket 1)
      iavf_configure_queues(): request RXDID[22] in Queue[0]

      Port 0: link state change event

      Port 0: link state change event
      Port 0: 6A:F5:25:DB:33:D8
      Configuring Port 1 (socket 1)
      iavf_configure_queues(): request RXDID[22] in Queue[0]

      Port 1: link state change event

      Port 1: link state change event
      Port 1: 8E:44:73:9B:01:ED
      Checking link statuses...
      Done
      Start automatic packet forwarding
      mac packet forwarding - ports=2 - cores=2 - streams=2 - NUMA support disabled, MP allocation mode: native
      Logical Core 5 (socket 1) forwards packets on 1 streams:
        RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=00:00:00:00:00:02
      Logical Core 7 (socket 1) forwards packets on 1 streams:
        RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=00:00:00:00:00:01

        mac packet forwarding packets/burst=32
        nb forwarding cores=2 - nb forwarding ports=2
        port 0: RX queue number: 1 Tx queue number: 1
          Rx offloads=0x0 Tx offloads=0x10000
          RX queue: 0
            RX desc=4096 - RX free threshold=32
            RX threshold registers: pthresh=0 hthresh=0  wthresh=0
            RX Offloads=0x0
          TX queue: 0
            TX desc=4096 - TX free threshold=32
            TX threshold registers: pthresh=0 hthresh=0  wthresh=0
            TX offloads=0x10000 - TX RS bit threshold=32
        port 1: RX queue number: 1 Tx queue number: 1
          Rx offloads=0x0 Tx offloads=0x10000
          RX queue: 0
            RX desc=4096 - RX free threshold=32
            RX threshold registers: pthresh=0 hthresh=0  wthresh=0
            RX Offloads=0x0
          TX queue: 0
            TX desc=4096 - TX free threshold=32
            TX threshold registers: pthresh=0 hthresh=0  wthresh=0
            TX offloads=0x10000 - TX RS bit threshold=32
      testpmd> 

       

      Expected results:

      testpmd starts successfully inside the container when running cross-NUMA with --cpuset-mems=1

       

      Additional info:

              rhn-support-ktraynor Kevin Traynor
              tli@redhat.com Ting Li