RHEL-44644: I/O error on a vhost-vdpa blk device using directsync cache mode [rhel-10.0.beta]

    • libblkio-1.5.0-2.el10
    • Moderate
    • rhel-sst-virtualization-storage
    • ssg_virtualization
      (clone from RHEL-32878)

       

      Description of problem:
      The guest reports 'Buffer I/O error' messages after running an I/O workload such as 'mkfs.ext4' against a vhost-vdpa blk device that uses the directsync cache mode.

      Version-Release number of selected component (if applicable):
      Red Hat Enterprise Linux release 10.0 Beta (Coughlan)
      6.9.0-0.rc2.1.el10.x86_64
      qemu-kvm-9.0.0-2.el10.x86_64
      seabios-bin-1.16.3-3.el10.noarch
      edk2-ovmf-20240214-1.el10.noarch
      libvirt-10.0.0-3.el10.x86_64
      virtio-win-prewhql-0.1-256.iso

      How reproducible:
      100%

      Steps to Reproduce:
      1. Prepare a simulated vhost-vdpa block device on the host:
      modprobe vhost-vdpa
      modprobe vdpa-sim-blk
      vdpa dev add mgmtdev vdpasim_blk name blk0
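
      As an optional sanity check (not part of the original steps), the simulated device and its vhost-vdpa character device can be verified on the host before defining the guest:

      vdpa dev show blk0
      ls -l /dev/vhost-vdpa-0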

      2. Define a guest with a vhostvdpa disk and shared memory backing:

        <memoryBacking>
          <access mode='shared'/>
        </memoryBacking>
      
      ... ...
       <disk type='vhostvdpa' device='disk'>
            <driver name='qemu' type='raw' cache='directsync' io='threads' copy_on_read='on' discard='unmap' detect_zeroes='on'/>
            <source dev='/dev/vhost-vdpa-0'/>
            <target dev='vdb' bus='virtio'/>
            <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
          </disk>
        </devices>
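
      One way to apply these fragments (the exact workflow is not spelled out in the report; any equivalent method works) is to edit the existing domain definition:

      virsh edit <vm_name>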
      

      3. Start the VM:
      virsh start <vm_name>

      4. Check the QEMU command line:

       -add-fd set=0,fd=20,opaque=libvirt-1-storage-vdpa
       -blockdev {"driver":"virtio-blk-vhost-vdpa","path":"/dev/fdset/0","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap","cache":{"direct":true,"no-flush":false}}
       -blockdev {"node-name":"libvirt-1-format","read-only":false,"discard":"unmap","detect-zeroes":"on","cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}
       -blockdev {"driver":"copy-on-read","node-name":"libvirt-CoR-vdb","file":"libvirt-1-format","discard":"unmap"}
       -device {"driver":"virtio-blk-pci","bus":"pci.7","addr":"0x0","drive":"libvirt-CoR-vdb","id":"virtio-disk1","write-cache":"off"}
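
      How this line is captured is not specified in the steps; one common option on a libvirt-managed host is to read /var/log/libvirt/qemu/<vm_name>.log or simply:

      ps -ef | grep qemu-kvm

      Note that cache='directsync' shows up here as "cache":{"direct":true,"no-flush":false} on the blockdev nodes together with "write-cache":"off" on the virtio-blk-pci device; the working cache='none' case mentioned under Additional info differs in that the device keeps write-cache "on".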
      

      5. Log in to the guest console.
      6. Identify the disk name inside the VM and run 'mkfs.ext4 /dev/vdb'.
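
      A minimal sketch of steps 5-6 (assuming the disk shows up as /dev/vdb, as in the output below):

      virsh console <vm_name>
      # inside the guest:
      lsblk
      mkfs.ext4 /dev/vdb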

      Actual results:
      The guest reports errors like the following.

      [root@localhost ~]# lsblk 
      NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
      vda           252:0    0   10G  0 disk 
      ├─vda1        252:1    0  600M  0 part /boot/efi
      ├─vda2        252:2    0    1G  0 part /boot
      └─vda3        252:3    0  8.4G  0 part 
        ├─rhel-root 253:0    0  7.4G  0 lvm  /
        └─rhel-swap 253:1    0    1G  0 lvm  [SWAP]
      vdb           252:16   0  128M  0 disk 
      [root@localhost ~]# mkfs.ext4 /dev/vdb
      mke2fs 1.46.5 (30-Dec-2021)
      Discarding device blocks: done                            
      Creating filesystem with 131072 1k blocks and 32768 inodes
      Filesystem UUID: f7840b22-1e7a-4ca2-88c5-dbdf65add12d
      Superblock backups stored on blocks: 
      	8193, 24577, 40961, 57345, 73729
      
      Allocating group tables: done                            
      [   86.788732] I/O error, dev vdb, sector 262016 op 0x9:(WRITE_ZEROES) flags 0x8000800 phys_seg 0 prio class 2
      [   86.788953] I/O error, dev vdb, sector 262016 op 0x1:(WRITE) flags 0x800 phys_seg 16 prio class 2
      Writing inode tables: [   86.789104] I/O error, dev vdb, sector 262016 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 2
      [   86.793178] I/O error, dev vdb, sector 582 op 0x9:(WRITE_ZEROES) flags 0x8000800 phys_seg 0 prio class 2
      done                            
      [   86.794952] I/O error, dev vdb, sector 582 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 2
      [   86.795114] I/O error, dev vdb, sector 582 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 2
      [   86.800133] I/O error, dev vdb, sector 15968 op 0x9:(WRITE_ZEROES) flags 0x8000800 phys_seg 0 prio class 2
      [   86.800324] I/O error, dev vdb, sector 15968 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 2
      [   86.800637] I/O error, dev vdb, sector 15968 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 2
      Creating journal (4096 blocks): [   86.811485] I/O error, dev vdb, sector 98306 op 0x9:(WRITE_ZEROES) flags 0x8000800 phys_seg 0 prio class 2
      done
      Writing superblocks and filesystem accounting information: [   86.826349] Buffer I/O error on dev vdb, logical block 12288, lost async page write
      [   86.826354] Buffer I/O error on dev vdb, logical block 12289, lost async page write
      [   86.826355] Buffer I/O error on dev vdb, logical block 12290, lost async page write
      [   86.826356] Buffer I/O error on dev vdb, logical block 12291, lost async page write
      [   86.826358] Buffer I/O error on dev vdb, logical block 12292, lost async page write
      [   86.826358] Buffer I/O error on dev vdb, logical block 12293, lost async page write
      [   86.826359] Buffer I/O error on dev vdb, logical block 12294, lost async page write
      mkfs.ext4: [   86.826360] Buffer I/O error on dev vdb, logical block 12295, lost async page write
      [   86.826361] Buffer I/O error on dev vdb, logical block 12296, lost async page write
      Input/output error while writing out and closing file system
      [   86.826362] Buffer I/O error on dev vdb, logical block 12297, lost async page write
      [root@localhost ~]# 
      

      Expected results:
      The command should complete without any errors.

      Additional info:
      The problem does not occur when the vhostvdpa disk is configured with cache='none'.
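
      For comparison, a working configuration would differ only in the driver cache attribute; a sketch based on the disk definition in step 2:

           <driver name='qemu' type='raw' cache='none' io='threads' copy_on_read='on' discard='unmap' detect_zeroes='on'/>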

              Assignee: Stefan Hajnoczi (shajnocz@redhat.com)
              Reporter: qing wang (qingwangrh)