Please provide the package NVR for which bug is seen:
virtiofsd-1.11.1-1.el10.x86_64
qemu-kvm-9.1.0-3.el10.x86_64
libvirt-10.8.0-2.el10.x86_64
How reproducible:
100%
Steps to reproduce:
1. Prepare a guest with the following XML:
# virsh dumpxml lizhu --xpath //filesystem
<filesystem type="mount">
  <driver type="virtiofs" queue="1024"/>
  <source socket="/vm001-vhost-fs.sock"/>
  <target dir="mount_tag1"/>
  <address type="pci" domain="0x0000" bus="0x09" slot="0x00" function="0x0"/>
</filesystem>
2. Prepare the env on both source and target host
2.1. Set virtd_exec_t on the virtiofsd binary:
# chcon -t virtd_exec_t /usr/libexec/virtiofsd
2.2. Create the shared dir:
# mkdir -p /mnt/nfs/fs/vm001
2.3. Run virtiofsd using systemd-run:
# systemd-run /usr/libexec/virtiofsd --socket-path=/vm001-vhost-fs.sock -o source=/mnt/nfs/fs/vm001
2.4. Relabel the created socket:
# chcon -t svirt_image_t /vm001-vhost-fs.sock
2.5. Change ownership of the socket file:
# chown qemu:qemu /vm001-vhost-fs.sock
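The per-host setup in step 2 can be collapsed into one script. This is a hedged sketch: the paths, labels, and commands are the ones listed above, while the `setup_virtiofsd` wrapper and the `DRY_RUN` toggle are additions for review, not part of the original reproducer (everything here needs root when actually executed).

```shell
#!/bin/sh
# Hedged sketch of step 2: per-host setup for an externally launched
# virtiofsd. Paths match the report; DRY_RUN=1 prints the commands
# instead of executing them.
SHARED_DIR=${SHARED_DIR:-/mnt/nfs/fs/vm001}
SOCKET=${SOCKET:-/vm001-vhost-fs.sock}

run() {
    # Print instead of execute when DRY_RUN=1.
    if [ "${DRY_RUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi
}

setup_virtiofsd() {
    run chcon -t virtd_exec_t /usr/libexec/virtiofsd        # 2.1
    run mkdir -p "$SHARED_DIR"                              # 2.2
    run systemd-run /usr/libexec/virtiofsd \
        --socket-path="$SOCKET" -o source="$SHARED_DIR"     # 2.3
    run chcon -t svirt_image_t "$SOCKET"                    # 2.4
    run chown qemu:qemu "$SOCKET"                           # 2.5
}

# Review the commands without running them:
# DRY_RUN=1 setup_virtiofsd
```

Run the script on both the source and the target host before step 3.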
3. Start the guest
# virsh start lizhu
Domain 'lizhu' started
4. Check the guest qemu cmdline
# ps aux | grep qemu
... -chardev socket,id=chr-vu-fs0,path=/vm001-vhost-fs.sock -device {"driver":"vhost-user-fs-pci","id":"fs0","chardev":"chr-vu-fs0","queue-size":1024,"tag":"mount_tag1","bus":"pci.9","addr":"0x0"} ...
5. In terminal 1, start migrating the guest.
6. In terminal 2, kill virtiofsd immediately after the migration begins:
# pkill virtiofsd
7. Check migration state
# virsh migrate lizhu qemu+ssh://$target_hostname/system --verbose --live
Migration: [96.81 %]
error: operation failed: domain is not running
8. Check the guest state
# virsh domstate lizhu --reason
shut off (crashed)
9. Check the guest log
# cat /var/log/libvirt/qemu/lizhu.log
...
2024-09-27 07:34:24.953+0000: initiating migration
2024-09-27T07:34:26.995387Z qemu-kvm: Unexpected end-of-file before all data were read
2024-09-27T07:34:31.710454Z qemu-kvm: Failed to set msg fds.
2024-09-27T07:34:31.710472Z qemu-kvm: Failed to set msg fds.
2024-09-27T07:34:31.710478Z qemu-kvm: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
2024-09-27T07:34:31.710485Z qemu-kvm: Failed to set msg fds.
2024-09-27T07:34:31.710489Z qemu-kvm: vhost VQ 1 ring restore failed: -22: Invalid argument (22)
2024-09-27T07:34:31.710529Z qemu-kvm: Failed to set msg fds.
2024-09-27T07:34:31.710534Z qemu-kvm: vhost_set_vring_call failed 22
2024-09-27T07:34:31.710538Z qemu-kvm: Failed to set msg fds.
2024-09-27T07:34:31.710542Z qemu-kvm: vhost_set_vring_call failed 22
2024-09-27T07:34:31.996295Z qemu-kvm: Failed to set msg fds.
2024-09-27T07:34:31.996321Z qemu-kvm: Error saving back-end state of virtio-user-fs device /machine/peripheral/fs0/virtio-backend (tag: "mount_tag1"): Failed to initiate state transfer: Failed to send SET_DEVICE_STATE_FD message: Invalid argument
2024-09-27 07:34:32.304+0000: shutting down, reason=crashed
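Steps 5-7 are a timing race between two terminals, which can be scripted for repeated runs. A hedged sketch: the guest name and migration command are the report's, while the `race_kill` helper and the 1-second delay are assumptions (tune the delay so the kill lands while migration is in flight).

```shell
#!/bin/sh
# Hedged sketch of the race in steps 5-7: run the migration in one
# "terminal" (a background job) and kill virtiofsd from the other.
race_kill() {
    # $1: process name to pkill; remaining args: the long-running command
    victim=$1; shift
    "$@" &                  # terminal 1: e.g. virsh migrate ...
    cmd_pid=$!
    sleep 1                 # assumed delay: let the migration begin
    pkill "$victim"         # terminal 2: e.g. pkill virtiofsd
    status=0
    wait "$cmd_pid" || status=$?
    echo "command exit status: $status"
}

# Reproducer usage (command as in step 7 of the report):
# race_kill virtiofsd virsh migrate lizhu \
#     "qemu+ssh://$target_hostname/system" --verbose --live
# virsh domstate lizhu --reason    # step 8: expect "shut off (crashed)"
```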
Expected result:
QEMU should not crash.
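When re-running the reproducer, the outcome can be classified from the guest log by looking for the shutdown reason line seen in step 9. A hedged helper; `check_guest_log` is a name invented here, and the log path pattern is libvirt's default.

```shell
#!/bin/sh
# Hedged helper: classify a libvirt guest log as "crashed" or "ok"
# based on the "shutting down, reason=crashed" line from step 9.
check_guest_log() {
    # $1: path to /var/log/libvirt/qemu/<guest>.log
    if grep -q 'shutting down, reason=crashed' "$1"; then
        echo crashed
    else
        echo ok
    fi
}

# check_guest_log /var/log/libvirt/qemu/lizhu.log
```

Against the log in step 9 this prints "crashed"; in the kill-during-write scenario described under additional info, the guest keeps running and the line never appears.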
Additional info:
If I just start the guest and kill virtiofsd while the guest is writing to the virtiofs share, the guest does not crash and keeps working normally, except for the filesystem under the virtiofs mount point.
1. Prepare the guest with an externally launched virtiofs device
2. Mount the device in the guest
3. In terminal 1, write to the virtiofs filesystem:
[Guest OS]# dd if=/dev/zero of=/mnt/testfile count=10000000
4. On the host, kill virtiofsd immediately after starting step 3
5. Check the guest status
# virsh list --all
 Id   Name    State
--------------------------
 14   clone   running
6. Login the guest, write something to other filesystem files
[Guest OS]# echo hello > test
[Guest OS]# echo $?
0
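The step-6 check can be generalized into a small helper that reports whether a directory is still writable. A hedged sketch: `write_check` is a name invented here, and note that a write into the dead virtiofs mount itself may block indefinitely rather than fail, so run it against the virtiofs path with a timeout or in the background.

```shell
#!/bin/sh
# Hedged guest-side helper for step 6: report whether a directory is
# still writable after virtiofsd was killed on the host.
write_check() {
    # $1: directory to test; prints "writable" or "blocked"
    if (echo hello > "$1/virtiofs_write_check") 2>/dev/null; then
        echo writable
    else
        echo blocked
    fi
}

# In the guest, after pkill virtiofsd on the host:
# write_check /root   # any non-virtiofs filesystem: expect "writable"
# write_check /mnt    # the virtiofs mount: may fail or hang
```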
Clones: RHEL-63051 - qemu crashed after killed virtiofsd during migration (In Progress)