Issue Type: Bug
Resolution: Not a Bug
Affects Version: rhel-9.5
Severity: Important
Team: rhel-virt-storage (ssg_virtualization)
Sprint: virt-storage Sprint 4
Architecture: All
What were you trying to do that didn't work?
Start a VM after destroying it. The VM has a virtiofs filesystem attached whose virtiofsd instance was launched separately (not by libvirt) and is identified through its socket file.
What is the impact of this issue to you?
Not sure; the test case covering this scenario is marked as Important.
Please provide the package NVR for which the bug is seen:
virtiofsd-1.11.1-1.el9
How reproducible is this bug?:
100%
Steps to reproduce
- launch a virtiofs instance with socket file like
/usr/libexec/virtiofsd --socket-path=/var/tmp/vm001-vhost-fs.sock -o source=/var/tmp/mount_dir0
- Mount the directory in the VM
- Destroy the VM
- Start the VM
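As a sketch, the whole sequence on the host (domain name avocado-vt-vm1 and tag mount_tag0 are from this report; the guest mount point /mnt is assumed for illustration):
# /usr/libexec/virtiofsd --socket-path=/var/tmp/vm001-vhost-fs.sock -o source=/var/tmp/mount_dir0 &
# virsh start avocado-vt-vm1
# mount -t virtiofs mount_tag0 /mnt        (run inside the guest)
# virsh destroy avocado-vt-vm1
# virsh start avocado-vt-vm1               (fails as shown below)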
Expected results
The VM starts successfully and the directory can be mounted again.
Actual results
The VM doesn't start
error: Failed to start domain 'avocado-vt-vm1'
error: internal error: QEMU unexpectedly closed the monitor (vm='avocado-vt-vm1'): 2025-03-14T13:43:14.601450Z qemu-kvm: -chardev socket,id=chr-vu-fs0,path=/var/tmp/vm001-vhost-fs.sock: Failed to connect to '/var/tmp/vm001-vhost-fs.sock': Connection refused
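The refusal suggests nothing is listening on the socket anymore. A quick way to confirm on the host, using standard tools and the socket path from above (both should come back empty once virtiofsd has exited):
# pgrep -a virtiofsd
# ss -xl | grep vm001-vhost-fs.sock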
Additional information
- The virtiofsd process finishes gracefully, but the test requires it to keep running so the VM can reconnect on the next start. When libvirt manages the virtiofsd instance itself, the same steps pass.
# /usr/libexec/virtiofsd --socket-path=/var/tmp/vm001-vhost-fs.sock -o source=/var/tmp/mount_dir0
[2025-03-14T13:31:13Z WARN virtiofsd] Use of deprecated option format '-o': Please specify options without it (e.g., '--cache auto' instead of '-o cache=auto')
[2025-03-14T13:31:13Z INFO virtiofsd] Waiting for vhost-user socket connection...
[2025-03-14T13:31:55Z INFO virtiofsd] Client connected, servicing requests
[2025-03-14T13:32:59Z INFO virtiofsd] Client disconnected, shutting down
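For comparison, a sketch of the libvirt-managed equivalent mentioned above, with values taken from this report; in this mode libvirt spawns virtiofsd itself instead of connecting to a pre-existing socket:
<filesystem type="mount">
  <driver type="virtiofs" queue="1024"/>
  <binary path="/usr/libexec/virtiofsd"/>
  <source dir="/var/tmp/mount_dir0"/>
  <target dir="mount_tag0"/>
</filesystem>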
- Reproduces on x86_64 with RHEL 9.6.
- The same behavior reproduces when shutting the guest down gracefully (but not when rebooting) or with managedsave; in the latter case there is additional output:
[2025-03-22T15:39:02Z WARN virtiofsd::vhost_user] Front-end did not announce migration to begin, so we failed to prepare for it; collecting data now. If you are doing a snapshot, that is OK; otherwise, migration downtime may be prolonged.
[2025-03-22T15:39:06Z INFO virtiofsd] Client disconnected, shutting down
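For reference, the managedsave variant of the last two reproduction steps (same domain):
# virsh managedsave avocado-vt-vm1
# virsh start avocado-vt-vm1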
- Filesystem XML:
<filesystem type="mount">
  <driver type="virtiofs" queue="1024"/>
  <source socket="/var/tmp/vm001-vhost-fs.sock"/>
  <target dir="mount_tag0"/>
  <alias name="fs0"/>
</filesystem>
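To double-check what the running domain actually carries, assuming a libvirt new enough to support the --xpath option:
# virsh dumpxml avocado-vt-vm1 --xpath '//filesystem'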
- qemu cmdline:
/usr/libexec/qemu-kvm -name guest=avocado-vt-vm1,debug-threads=on -S -object {"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-2-avocado-vt-vm1/master-key.aes"} -machine s390-ccw-virtio-rhel9.4.0,usb=off,dump-guest-core=off,memory-backend=s390.ram -accel kvm -cpu gen15a-base,aen=on,vxpdeh=on,aefsi=on,diag318=on,csske=on,mepoch=on,msa9=on,msa8=on,msa7=on,msa6=on,msa5=on,msa4=on,msa3=on,msa2=on,msa1=on,sthyi=on,edat=on,ri=on,deflate=on,edat2=on,etoken=on,vx=on,ipter=on,mepochptff=on,ap=on,vxeh=on,vxpd=on,esop=on,msa9_pckmo=on,vxeh2=on,esort=on,apft=on,els=on,iep=on,apqci=on,cte=on,ais=on,bpb=on,gs=on,ppa15=on,zpci=on,sea_esop2=on,te=on -m size=1048576k -object {"qom-type":"memory-backend-file","id":"s390.ram","mem-path":"/var/lib/libvirt/qemu/ram/2-avocado-vt-vm1/s390.ram","share":true,"x-use-canonical-path-for-ramblock-id":false,"size":1073741824} -overcommit mem-lock=off -smp 2,sockets=2,cores=1,threads=1 -uuid a291f472-e02d-4a16-80f7-31681dc3b421 -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=24,server=on,wait=off -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device {"driver":"virtio-scsi-ccw","id":"scsi0","devno":"fe.0.0003"} -device {"driver":"virtio-serial-ccw","id":"virtio-serial0","devno":"fe.0.0004"} -blockdev {"driver":"file","filename":"/var/lib/avocado/data/avocado-vt/images/jeos-27-s390x.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null} -device {"driver":"virtio-blk-ccw","devno":"fe.0.0000","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":1} -chardev socket,id=chr-vu-fs0,path=/var/tmp/vm001-vhost-fs.sock -device {"driver":"vhost-user-fs-ccw","id":"fs0","chardev":"chr-vu-fs0","queue-size":1024,"tag":"mount_tag0","devno":"fe.0.0009"} -netdev {"type":"tap","fd":"25","vhost":true,"vhostfd":"27","id":"hostnet0"} -device {"driver":"virtio-net-ccw","netdev":"hostnet0","id":"net0","mac":"52:54:00:22:6b:be","devno":"fe.0.0001"} -chardev pty,id=charserial0 -device {"driver":"sclpconsole","chardev":"charserial0","id":"serial0"} -chardev socket,id=charchannel0,fd=23,server=on,wait=off -device {"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"} -chardev pty,id=charconsole1 -device {"driver":"virtconsole","chardev":"charconsole1","id":"console1"} -device {"driver":"virtio-keyboard-ccw","id":"input0","devno":"fe.0.0005"} -device {"driver":"virtio-mouse-ccw","id":"input1","devno":"fe.0.0006"} -audiodev {"id":"audio1","driver":"none"} -vnc 127.0.0.1:0,audiodev=audio1 -device {"driver":"virtio-gpu-ccw","id":"video0","max_outputs":1,"devno":"fe.0.0002"} -device {"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0007"} -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -device {"driver":"vhost-vsock-ccw","id":"vsock0","guest-cid":3,"vhostfd":"20","devno":"fe.0.0008"} -msg timestamp=on
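The virtiofs-related pieces of that command line, pulled out for readability:
-chardev socket,id=chr-vu-fs0,path=/var/tmp/vm001-vhost-fs.sock
-device {"driver":"vhost-user-fs-ccw","id":"fs0","chardev":"chr-vu-fs0","queue-size":1024,"tag":"mount_tag0","devno":"fe.0.0009"}
Since the chardev has no server=on, qemu-kvm acts as the client on this socket, which is why a missing listener surfaces as the "Connection refused" above.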