Bug
Resolution: Done
Description of problem:
Starting dpdk-testpmd with vhost-vdpa ports fails: the mlx5 PMD reports "DevX create TIS failed" (syndrome=0x671120) and aborts probing the PF ports with a TIS allocation failure. The failure occurs with firmware 22.39.1002; the same test passes on a card running firmware 22.36.1010.

Version-Release number of selected component (if applicable):
[root@netqe24 ~]# ethtool -i enp4s0f0np0
driver: mlx5_core
version: 5.14.0-480.el9.x86_64
firmware-version: 22.39.1002 (MT_0000000359)
expansion-rom-version:
bus-info: 0000:04:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes
[root@netqe24 ~]# uname -r
5.14.0-480.el9.x86_64
[root@netqe24 ~]# rpm -qa|grep dpdk
dpdk-22.11-4.el9.x86_64
dpdk-tools-22.11-4.el9.x86_64
How reproducible:
Steps to Reproduce:
echo 0 > /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.0/sriov_numvfs
echo 0 > /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.1/sriov_numvfs
devlink dev eswitch set pci/0000:04:00.0 mode legacy
devlink dev eswitch set pci/0000:04:00.1 mode legacy
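An optional sanity check (standard devlink syntax, not part of the captured steps) confirms both PFs are back in legacy mode:
devlink dev eswitch show pci/0000:04:00.0
devlink dev eswitch show pci/0000:04:00.1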
modprobe -r mlx5_vdpa
modprobe -r virtio_vdpa
modprobe -r vhost_vdpa
modprobe -r vdpa
modprobe vdpa
modprobe vhost_vdpa
modprobe mlx5_vdpa
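Optionally confirm the vdpa stack is loaded before recreating the VFs:
lsmod | grep vdpa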
echo 1 > /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.0/sriov_numvfs
echo 1 > /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.1/sriov_numvfs
echo 0000:04:00.2 >/sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:04:01.2 >/sys/bus/pci/drivers/mlx5_core/unbind
devlink dev eswitch set pci/0000:04:00.0 mode switchdev
devlink dev eswitch set pci/0000:04:00.1 mode switchdev
echo 0000:04:00.2 >/sys/bus/pci/drivers/mlx5_core/bind
echo 0000:04:01.2 >/sys/bus/pci/drivers/mlx5_core/bind
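Optionally confirm each VF is bound to mlx5_core again (lspci -k prints the kernel driver in use):
lspci -ks 0000:04:00.2
lspci -ks 0000:04:01.2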
# single queue
vdpa dev add name vdpa0 mgmtdev pci/0000:04:00.2 mac 52:54:00:11:8f:ea
vdpa dev add name vdpa1 mgmtdev pci/0000:04:01.2 mac 52:54:00:11:8f:eb
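The management devices and the two new vdpa devices can be verified with the iproute2 vdpa tool:
vdpa mgmtdev show
vdpa dev show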
ovs-vsctl add-br ovsbr1
ovs-vsctl add-port ovsbr1 ens5f0np0
ovs-vsctl add-port ovsbr1 ens5f0npf0vf0
ovs-vsctl add-br ovsbr2
ovs-vsctl add-port ovsbr2 ens5f1np1
ovs-vsctl add-port ovsbr2 ens5f1npf1vf0
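At this point ovs-vsctl show should list both bridges with their uplink and VF representor ports:
ovs-vsctl show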
ulimit -l unlimited
dpdk-testpmd --socket-mem=8192,0 -l 2,4,6 -d /usr/lib64/dpdk-pmds/librte_net_virtio.so.23 --vdev 'virtio_user0,path=/dev/vhost-vdpa-0' --vdev 'virtio_user1,path=/dev/vhost-vdpa-1' -b 0000:04:00.2 -b 0000:04:01.2 -b 0000:04:00.0 -b 0000:04:00.1 -- --disable-rss -i --rxq=1 --txq=1 --rxd=512 --txd=512
Actual results:
Starting testpmd fails with a TIS allocation failure:
[root@netqe24 ~]# dpdk-testpmd --socket-mem=8192,0 -l 2,4,6 -d /usr/lib64/dpdk-pmds/librte_net_virtio.so.23 --vdev 'virtio_user0,path=/dev/vhost-vdpa-0' --vdev 'virtio_user1,path=/dev/vhost-vdpa-1' -b 0000:04:00.2 -b 0000:04:01.2 -b 0000:04:00.0 -b 0000:04:00.1 -- --disable-rss -i --rxq=1 --txq=1 --rxd=512 --txd=512 --nb-cores=2 --burst=64 --auto-start
EAL: Detected CPU lcores: 48
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probe PCI driver: mlx5_pci (15b3:1013) device: 0000:06:00.0 (socket 0)
mlx5_common: DevX create TIS failed errno=121 status=0x5 syndrome=0x671120
mlx5_net: Failed to TIS 0 for bonding device mlx5_2.
mlx5_net: TIS allocation failure
mlx5_net: probe of PCI device 0000:06:00.0 aborted after encountering an error: Cannot allocate memory
mlx5_common: Failed to load driver mlx5_eth
EAL: Requested device 0000:06:00.0 cannot be used
EAL: Probe PCI driver: mlx5_pci (15b3:1013) device: 0000:06:00.1 (socket 0)
mlx5_common: DevX create TIS failed errno=121 status=0x5 syndrome=0x671120
mlx5_net: Failed to TIS 0 for bonding device mlx5_3.
mlx5_net: TIS allocation failure
mlx5_net: probe of PCI device 0000:06:00.1 aborted after encountering an error: Cannot allocate memory
mlx5_common: Failed to load driver mlx5_eth
EAL: Requested device 0000:06:00.1 cannot be used
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:82:00.0 (socket 1)
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:82:00.1 (socket 1)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Auto-start selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=163456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: EC:0D:9A:A0:1D:F4
Configuring Port 1 (socket 1)
Port 1: EC:0D:9A:A0:1D:F5
Configuring Port 2 (socket 0)
EAL: Registering with invalid input parameter
Port 2: 52:54:00:11:8F:EA
Configuring Port 3 (socket 0)
EAL: Registering with invalid input parameter
Port 3: 52:54:00:11:8F:EB
Checking link statuses...
Done
Error during enabling promiscuous mode for port 2: Operation not supported - ignore
Error during enabling promiscuous mode for port 3: Operation not supported - ignore
Start automatic packet forwarding
io packet forwarding - ports=4 - cores=2 - streams=4 - NUMA support enabled, MP allocation mode: native
Logical Core 4 (socket 0) forwards packets on 2 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
Logical Core 6 (socket 0) forwards packets on 2 streams:
RX P=2/Q=0 (socket 0) -> TX P=3/Q=0 (socket 0) peer=02:00:00:00:00:03
RX P=3/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02
io packet forwarding packets/burst=64
nb forwarding cores=2 - nb forwarding ports=4
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=64
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=0
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=512 - RX free threshold=64
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=0
port 2: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=512 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
port 3: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=512 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
testpmd>
Expected results:
testpmd starts successfully with the vhost-vdpa ports on dpdk-22.11-4.
On another CX6 card with firmware 22.36.1010, testpmd starts successfully.
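For comparing firmware across cards, devlink can also report the running version (on kernels that support dev info), in addition to ethtool -i:
devlink dev info pci/0000:04:00.0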