RHEL-26629

Disk performance with iothread multiqueue is worse than without

    • Major
    • sst_virtualization_storage
    • ssg_virtualization
    • QE ack
    • False
    • Yes
    • Red Hat Enterprise Linux
    • Known Issue
    • Cause (the user action or circumstances that trigger the bug):
      Consequence (what the user experience is when the bug occurs):
      Workaround (if available):
      Result (mandatory if the workaround does not solve the problem completely):
    • Proposed
    • x86_64
    • Linux

      What were you trying to do that didn't work?
      When using multiple iothreads (iothread-vq-mapping) on a disk, I/O performance is worse than without an iothread, and adding more iothreads does not improve performance.

      Please provide the package NVR for which the bug is seen:
      Red Hat Enterprise Linux release 9.4 Beta (Plow)
      5.14.0-416.el9.x86_64
      qemu-kvm-8.2.0-6.el9.x86_64
      seabios-bin-1.16.3-2.el9.noarch
      edk2-ovmf-20231122-5.el9.noarch

      How reproducible:
      100%

      Steps to reproduce
      1. Boot a VM with 4 data disks, configured with 0 iothreads, 1 iothread, 2 iothreads, and 4 iothreads respectively:
      
      /usr/libexec/qemu-kvm \
      -S  \
      -name 'avocado-vt-vm1'  \
      -sandbox on \
      -machine pc,memory-backend=mem-machine_mem  \
      -nodefaults \
      -device '{"driver": "VGA", "bus": "pci.0", "addr": "0x2"}' \
      -m 8192 \
      -object '{"size": 8589934592, "id": "mem-machine_mem", "qom-type": "memory-backend-ram"}'  \
      -smp 16,maxcpus=16,cores=8,threads=1,dies=1,sockets=2  \
      -cpu 'Icelake-Server-noTSX',+kvm_pv_unhalt \
      \
      -device '{"driver": "ich9-usb-ehci1", "id": "usb1", "addr": "0x1d.0x7", "multifunction": true, "bus": "pci.0"}' \
      -device '{"driver": "ich9-usb-uhci1", "id": "usb1.0", "multifunction": true, "masterbus": "usb1.0", "addr": "0x1d.0x0", "firstport": 0, "bus": "pci.0"}' \
      -device '{"driver": "ich9-usb-uhci2", "id": "usb1.1", "multifunction": true, "masterbus": "usb1.0", "addr": "0x1d.0x2", "firstport": 2, "bus": "pci.0"}' \
      -device '{"driver": "ich9-usb-uhci3", "id": "usb1.2", "multifunction": true, "masterbus": "usb1.0", "addr": "0x1d.0x4", "firstport": 4, "bus": "pci.0"}' \
      -device '{"driver": "usb-tablet", "id": "usb-tablet1", "bus": "usb1.0", "port": "1"}' \
      -object '{"qom-type": "iothread", "id": "t1"}' \
      -object '{"qom-type": "iothread", "id": "t2"}' \
      -object '{"qom-type": "iothread", "id": "t3"}' \
      -object '{"qom-type": "iothread", "id": "t4"}' \
      -blockdev '{"node-name": "file_image1", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "threads", "filename": "/home/kvm_autotest_root/images/rhel940-64-virtio.qcow2", "cache": {"direct": true, "no-flush": false}}' \
      -blockdev '{"node-name": "drive_image1", "driver": "qcow2", "read-only": false, "cache": {"direct": true, "no-flush": false}, "file": "file_image1"}' \
      -device '{"driver": "virtio-blk-pci", "id": "image1", "drive": "drive_image1", "bootindex": 0, "write-cache": "on", "bus": "pci.0", "addr": "0x3"}' \
      -blockdev '{"node-name": "file_stg0", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "native", "filename": "/home/fio/stg0.qcow2", "cache": {"direct": true, "no-flush": false}}' \
      -blockdev '{"node-name": "drive_stg0", "driver": "qcow2", "read-only": false, "cache": {"direct": true, "no-flush": false}, "file": "file_stg0"}' \
      -device '{"driver": "virtio-blk-pci", "id": "stg0", "drive": "drive_stg0", "bootindex": 1, "write-cache": "on", "serial": "stg0", "bus": "pci.0", "addr": "0x4"}' \
      -blockdev '{"node-name": "file_stg1", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "native", "filename": "/home/fio/stg1.qcow2", "cache": {"direct": true, "no-flush": false}}' \
      -blockdev '{"node-name": "drive_stg1", "driver": "qcow2", "read-only": false, "cache": {"direct": true, "no-flush": false}, "file": "file_stg1"}' \
      -device '{"driver": "virtio-blk-pci", "id": "stg1", "drive": "drive_stg1", "bootindex": 2, "write-cache": "on", "serial": "stg1", "bus": "pci.0", "addr": "0x5", "iothread-vq-mapping": [{"iothread": "t1"}]}' \
      -blockdev '{"node-name": "file_stg2", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "native", "filename": "/home/fio/stg2.qcow2", "cache": {"direct": true, "no-flush": false}}' \
      -blockdev '{"node-name": "drive_stg2", "driver": "qcow2", "read-only": false, "cache": {"direct": true, "no-flush": false}, "file": "file_stg2"}' \
      -device '{"driver": "virtio-blk-pci", "id": "stg2", "drive": "drive_stg2", "bootindex": 3, "write-cache": "on", "serial": "stg2", "bus": "pci.0", "addr": "0x6", "iothread-vq-mapping": [{"iothread": "t1"}, {"iothread": "t2"}]}' \
      -blockdev '{"node-name": "file_stg3", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "native", "filename": "/home/fio/stg3.qcow2", "cache": {"direct": true, "no-flush": false}}' \
      -blockdev '{"node-name": "drive_stg3", "driver": "qcow2", "read-only": false, "cache": {"direct": true, "no-flush": false}, "file": "file_stg3"}' \
      -device '{"driver": "virtio-blk-pci", "id": "stg3", "drive": "drive_stg3", "bootindex": 4, "write-cache": "on", "serial": "stg3", "bus": "pci.0", "addr": "0x7", "iothread-vq-mapping": [{"iothread": "t1"}, {"iothread": "t2"}, {"iothread": "t3"}, {"iothread": "t4"}]}' \
      -device '{"driver": "virtio-net-pci", "mac": "9a:1f:5a:98:ab:db", "id": "idSJRi2K", "netdev": "idHugO6c", "bus": "pci.0", "addr": "0x8"}' \
      -netdev '{"id": "idHugO6c", "type": "tap", "vhost": true}'  \
      -vnc :0  \
      -rtc base=utc,clock=host,driftfix=slew  \
      -boot menu=off,order=cdn,once=c,strict=off \
      -enable-kvm
      
      2. Execute fio on each disk in turn:
      fio --runtime=30 --group_reporting --cpus_allowed=0-7 --cpus_allowed_policy=split --numjobs=8 --direct=1 --filename=/dev/vdb --ioengine=libaio --size=3G --randrepeat=1 --bs=4k --output-format=json --stonewall --name=randrw-4k-8 --rw=randrw --iodepth=8
      
      (repeat with --filename=/dev/vdc, /dev/vdd, and /dev/vde)
      
      3. Repeat step 2 ten times.
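
      The per-run IOPS compared below are read from fio's JSON reports. A minimal sketch of extracting them in Python (the embedded sample is a hand-made stand-in, not real fio output; the "jobs"/"read"/"write"/"iops" layout follows fio's --output-format=json report structure):

      ```python
      import json

      def total_iops(fio_json: str) -> float:
          """Sum read + write IOPS across all jobs in a fio JSON report."""
          report = json.loads(fio_json)
          return sum(job["read"]["iops"] + job["write"]["iops"]
                     for job in report["jobs"])

      # Hand-made stand-in for a `fio ... --output-format=json` report:
      sample = """
      {
        "jobs": [
          {"jobname": "randrw-4k-8",
           "read":  {"iops": 85000.0},
           "write": {"iops": 85085.0}}
        ]
      }
      """

      print(total_iops(sample))  # 170085.0
      ```

      For a randrw workload, summing the read and write directions gives the single IOPS figure used in the comparison table.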
      
      Expected results
      Disks with iothreads should achieve better performance than the disk without an iothread.
      
      Actual results
      Performance with an iothread is worse than without, and additional iothreads do not help.
      
      Test result
      
      Disk compare   Workload      IOPS (1st)  IOPS (2nd)  Ratio     Gap
      -----------------------------------------------------------------------------------
      stg0-stg1      randrw-4k-8   170085      150955      112.7%    -11.2%
      stg1-stg2      randrw-4k-8   150955      150424      100.4%    -0.4%
      stg2-stg3      randrw-4k-8   150424      151783      99.1%     0.9%
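
      The ratio and gap figures are consistent with ratio = 1st/2nd and gap = (2nd - 1st)/1st; a quick Python check of the reported IOPS pairs:

      ```python
      # IOPS pairs copied from the results table above.
      pairs = {
          "stg0-stg1": (170085, 150955),
          "stg1-stg2": (150955, 150424),
          "stg2-stg3": (150424, 151783),
      }

      for name, (first, second) in pairs.items():
          ratio = 100.0 * first / second          # >100% means 1st disk was faster
          gap = 100.0 * (second - first) / first  # negative = regression vs 1st disk
          print(f"{name}: ratio {ratio:.1f}% gap {gap:.1f}%")
      ```

      The stg0-stg1 row is the core of the report: the disk with no iothread beats the one-iothread disk by about 11%, while extra iothreads (stg2, stg3) change nothing.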
      
       
      
      

            Stefan Hajnoczi (shajnocz@redhat.com)
            qing wang (qingwangrh)
            virt-maint