-
Bug
-
Resolution: Unresolved
-
Normal
-
rhel-9.5
-
libvirt-10.8.0-1.el9
-
No
-
Moderate
-
rhel-sst-virtualization-networking
-
ssg_virtualization
-
11
-
3
-
Dev ack
-
False
-
-
None
-
None
-
Pass
-
Manual
-
-
10.7.0
-
None
What were you trying to do that didn't work?
Attached a vdpa device with acpi index=1 to a VM, then attempted to attach a second vdpa device, also with acpi index=1. The second attach failed as expected, but the fd that had been opened on that vdpa device was not closed.
Please provide the package NVR for which the bug is seen:
libvirt-10.5.0-5.el9.x86_64
qemu-kvm-9.0.0-7.el9.x86_64
How reproducible:
100%
Steps to reproduce
- On a host with a Mellanox card, create vdpa devices:
# ll /dev/vhost-vdpa*
crw-------. 1 root root 235, 0 Jul 22 23:25 /dev/vhost-vdpa-0
crw-------. 1 root root 235, 1 Jul 22 23:25 /dev/vhost-vdpa-1
crw-------. 1 root root 235, 2 Jul 22 23:25 /dev/vhost-vdpa-2
crw-------. 1 root root 235, 3 Jul 22 23:25 /dev/vhost-vdpa-3
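(For reference: a typical way to create such vdpa devices with the iproute2 vdpa tool is sketched below. The management-device PCI address is illustrative, not taken from this host, and the /dev/vhost-vdpa-* nodes only appear once the vhost_vdpa module has bound the devices.)
# modprobe vhost_vdpa
# vdpa mgmtdev show
# vdpa dev add name vdpa-1 mgmtdev pci/0000:3b:00.2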
- Start a guest
- Attach a vdpa device with acpi index=1
# cat vdpa1.xml
<interface type='vdpa'>
  <mac address='52:54:00:cb:45:11'/>
  <source dev='/dev/vhost-vdpa-1'/>
  <acpi index='1'/>
</interface>
# virsh attach-device <guest> vdpa1.xml
- Attach another vdpa device with acpi index=1
# cat vdpa2.xml
<interface type='vdpa'>
  <mac address='52:54:00:cb:45:22'/>
  <source dev='/dev/vhost-vdpa-2'/>
  <acpi index='1'/>
</interface>
# virsh attach-device <guest> vdpa2.xml
error: Failed to attach device from vdpa2.xml
error: internal error: unable to execute QEMU command 'device_add': a PCI device with acpi-index = 1 already exist
- Change the acpi index to '2' and attach the second vdpa device again:
# virsh attach-device <guest> vdpa2.xml
error: Failed to attach device from vdpa2.xml
error: Unable to open '/dev/vhost-vdpa-2' for vdpa device: Device or resource busy
- Check the open fd on /dev/vhost-vdpa-2:
# lsof /dev/vhost-vdpa-2
COMMAND    PID USER  FD   TYPE DEVICE SIZE/OFF NODE NAME
qemu-kvm 57347 qemu 150u   CHR  235,2      0t0 1106 /dev/vhost-vdpa-2
- Relevant QEMU monitor commands from the libvirt debug log:
2024-07-24 07:39:27.584+0000: 56129: info : qemuMonitorSend:838 : QEMU_MONITOR_SEND_MSG: mon=0x7f38980672f0 msg={"execute":"query-fdsets","id":"libvirt-460"}
2024-07-24 07:39:27.585+0000: 60535: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7f38980672f0 reply={"return": [{"fds": [{"fd": 150, "opaque": "net1-vdpa"}], "fdset-id": 1}, {"fds": [{"fd": 6, "opaque": "net0-vdpa"}], "fdset-id": 0}], "id": "libvirt-460"}
2024-07-24 07:39:27.585+0000: 56129: info : qemuMonitorSend:838 : QEMU_MONITOR_SEND_MSG: mon=0x7f38980672f0 msg={"execute":"add-fd","arguments":{"opaque":"net2-vdpa","fdset-id":3},"id":"libvirt-461"}
2024-07-24 07:39:27.586+0000: 60535: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7f38980672f0 reply={"return": {"fd": 172, "fdset-id": 3}, "id": "libvirt-461"}
2024-07-24 07:39:27.586+0000: 56129: info : qemuMonitorSend:838 : QEMU_MONITOR_SEND_MSG: mon=0x7f38980672f0 msg={"execute":"netdev_add","arguments":{"type":"vhost-vdpa","vhostdev":"/dev/fdset/3","id":"hostnet2"},"id":"libvirt-462"}
2024-07-24 07:39:27.590+0000: 60535: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7f38980672f0 reply={"return": {}, "id": "libvirt-462"}
2024-07-24 07:39:27.590+0000: 56129: info : qemuMonitorSend:838 : QEMU_MONITOR_SEND_MSG: mon=0x7f38980672f0 msg={"execute":"device_add","arguments":{"driver":"virtio-net-pci","netdev":"hostnet2","id":"net2","mac":"52:54:00:cb:45:22","bus":"pci.8","addr":"0x0","acpi-index":1},"id":"libvirt-463"}
2024-07-24 07:39:27.645+0000: 60535: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7f38980672f0 reply={"id": "libvirt-463", "error": {"class": "GenericError", "desc": "a PCI device with acpi-index = 1 already exist"}}
2024-07-24 07:39:27.645+0000: 56129: info : qemuMonitorSend:838 : QEMU_MONITOR_SEND_MSG: mon=0x7f38980672f0 msg={"execute":"netdev_del","arguments":{"id":"hostnet2"},"id":"libvirt-464"}
2024-07-24 07:39:27.647+0000: 60535: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7f38980672f0 reply={"return": {}, "id": "libvirt-464"}
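From the log above, libvirt hands the new vdpa fd to QEMU with add-fd (fdset 3, fd 172 inside QEMU) and creates netdev hostnet2; device_add then fails because of the duplicate acpi-index. The rollback issues only netdev_del; no remove-fd for fdset 3 follows, so QEMU keeps the vdpa fd open. As a manual workaround (a sketch only, assuming the leaked fd still sits in fdset 3 as created by the add-fd call above; the fdset id may differ on another run), the stale fdset can be dropped through the monitor, after which attaching vdpa2.xml with acpi index '2' should succeed:
# virsh qemu-monitor-command <guest> --pretty '{"execute":"query-fdsets"}'
# virsh qemu-monitor-command <guest> '{"execute":"remove-fd","arguments":{"fdset-id":3}}'
# virsh attach-device <guest> vdpa2.xml
Note that issuing QMP commands via virsh qemu-monitor-command taints the domain; this is only meant to confirm that closing the leaked fd frees /dev/vhost-vdpa-2.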
Expected results
In step 4, when the attach fails, the fd that was opened on /dev/vhost-vdpa-2 should be closed.
Actual results
In step 4, when the attach fails, the fd that was opened on /dev/vhost-vdpa-2 is not closed, so the retry in step 5 fails with 'Device or resource busy'.
- is blocked by
-
RHEL-50574 Rebase libvirt in RHEL-9.6.0
- Integration
- links to
-
RHBA-2024:140248 libvirt bug fix and enhancement update