Bug
Resolution: Unresolved
Priority: Normal
Severity: Moderate
Target release: rhel-9.2.0
SST: rhel-sst-virtualization-networking
Sub-system group: ssg_virtualization
Flags: QE ack
Doc Type: Known Issue
+++ This bug was initially created as a clone of Bug #1792683 +++
Description of problem:
Boot a guest with vhost-user packed=on, then reconnect vhost-user by restarting dpdk's testpmd on the host; testpmd in the guest fails to recover receiving packets.
Version-Release number of selected component (if applicable):
5.12.0-0.rc5.180.el9.x86_64
qemu-kvm-5.2.0-11.el9.x86_64
How reproducible:
100%
Steps to Reproduce:
1. On the host, boot dpdk's testpmd as a vhost-user client
- cat pvp.sh
/usr/bin/dpdk-testpmd \
-l 2,4,6,8,10 \
--socket-mem 1024,1024 \
-n 4 \
--vdev net_vhost0,iface=/tmp/vhost-user1,queues=1,client=1,iommu-support=1 \
--vdev net_vhost1,iface=/tmp/vhost-user2,queues=1,client=1,iommu-support=1 \
--block 0000:3b:00.0 --block 0000:3b:00.1 \
-d /usr/lib64/librte_net_vhost.so \
-- \
--portmask=f \
-i \
--rxd=512 --txd=512 \
--rxq=1 --txq=1 \
--nb-cores=4 \
--forward-mode=io
- sh pvp.sh
testpmd> set portlist 0,2,1,3
testpmd>
Port 0: link state change event
Port 1: link state change event
testpmd>
testpmd> start
2. Boot the guest with vhost-user packed=on. The full XML is attached; the relevant parts are:
<domain type='kvm' id='1' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>rhel9.0</name>
...
<devices>
...
<interface type='bridge'>
<mac address='88:66:da:5f:dd:01'/>
<source bridge='switch'/>
<target dev='vnet0'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
<interface type='vhostuser'>
<mac address='88:66:da:5f:dd:12'/>
<source type='unix' path='/tmp/vhost-user1' mode='server'/>
<model type='virtio'/>
<driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
<alias name='net1'/>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</interface>
<interface type='vhostuser'>
<mac address='88:66:da:5f:dd:13'/>
<source type='unix' path='/tmp/vhost-user2' mode='server'/>
<model type='virtio'/>
<driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
<alias name='net2'/>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</interface>
...
</devices>
...
<qemu:commandline>
<qemu:arg value='-set'/>
<qemu:arg value='device.net1.packed=on'/>
<qemu:arg value='-set'/>
<qemu:arg value='device.net2.packed=on'/>
</qemu:commandline>
</domain>
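For reference, the <qemu:commandline> override above only flips the 'packed' property on the two vhost-user NICs. On a bare QEMU command line the same device would look roughly like the sketch below (one interface shown; the chardev/netdev ids are illustrative, libvirt generates its own):
-chardev socket,id=charnet1,path=/tmp/vhost-user1,server=on \
-netdev vhost-user,chardev=charnet1,id=hostnet1 \
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=88:66:da:5f:dd:12,rx_queue_size=1024,iommu_platform=on,ats=on,packed=on
The domain itself is defined and started as usual, e.g. with virsh define and virsh start.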
3. Start testpmd in the guest and MoonGen on another host; the guest receives packets (see the stats below).
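The guest-side testpmd invocation is not spelled out in this report; a minimal sketch, assuming the two vhost-user NICs appear in the guest at the PCI addresses from the XML above (0000:06:00.0 and 0000:07:00.0, verify with lspci), would be:
- modprobe vfio-pci
- dpdk-devbind.py --bind=vfio-pci 0000:06:00.0 0000:07:00.0
- dpdk-testpmd -l 1,2,3 -n 4 -- -i --rxd=512 --txd=512 --nb-cores=2
testpmd> start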
testpmd> show port stats all
NIC statistics for port 0
RX-packets: 2742484 RX-missed: 0 RX-bytes: 164549040
RX-errors: 0
RX-nombuf: 0
TX-packets: 2738731 TX-errors: 0 TX-bytes: 164323860
Throughput (since last show)
Rx-pps: 70846 Rx-bps: 34006104
Tx-pps: 70772 Tx-bps: 33971032
NIC statistics for port 1
RX-packets: 2740010 RX-missed: 0 RX-bytes: 164400600
RX-errors: 0
RX-nombuf: 0
TX-packets: 2741234 TX-errors: 0 TX-bytes: 164474040
Throughput (since last show)
Rx-pps: 70771 Rx-bps: 33970192
Tx-pps: 70853 Tx-bps: 34009640
4. Reconnect vhost-user by restarting dpdk's testpmd on the host
- pkill testpmd
- sh pvp.sh
5. Check testpmd in the guest; packet receiving does not recover.
testpmd> show port stats all
NIC statistics for port 0
RX-packets: 3034162 RX-missed: 0 RX-bytes: 182049720
RX-errors: 0
RX-nombuf: 0
TX-packets: 3030089 TX-errors: 0 TX-bytes: 181805340
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
NIC statistics for port 1
RX-packets: 3031364 RX-missed: 0 RX-bytes: 181881840
RX-errors: 0
RX-nombuf: 0
TX-packets: 3032908 TX-errors: 0 TX-bytes: 181974480
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
Actual results:
DPDK packet receiving does not recover after the vhost-user reconnect.
Expected results:
DPDK packet receiving should recover after the vhost-user reconnect.
Additional info:
1. Without packed=on, DPDK packet receiving recovers correctly after the same reconnect.
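2. To confirm that the packed ring layout is actually in effect on a given run (not part of the original report; the domain name and device alias below are taken from the XML above), the device property can be queried through the QEMU monitor:
- virsh qemu-monitor-command rhel9.0 --pretty '{"execute":"qom-get","arguments":{"path":"/machine/peripheral/net1","property":"packed"}}'
This should return true with the packed=on override applied and false on the split-ring comparison run.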