RHEL-116966

Optimize QEMU ioeventfd POLL_ADD + read(2)


    • Type: Story
    • Resolution: Unresolved
    • Priority: Major
    • Component: qemu-kvm / Storage
    • rhel-virt-storage
    • ssg_virtualization
    • Story Points: 5
    • Sprint: virt-storage Sprint 6, Virt storage Sprint8 2025-08, Planning backlog, VirtStorage Sprint9 2025-08,09

      Goal

      • I want to take advantage of io_uring performance optimizations for storage in QEMU. Modify QEMU's ioeventfd code to use IORING_OP_READ instead of IORING_OP_POLL_ADD + read(2), because completing the read in a single io_uring operation is faster than polling and then issuing a separate read (see the sketch after this list).
      • A prototype demonstrated that there is a modest improvement in IOPS on <10 microsecond NVMe drives:
      Operation BS QD IOPS   Change
      randread  4k  1 112739 +1.7%
      randread  4k 64 504866 +0.068%
      randwrite 4k  1 110171 +5.7%
      randwrite 4k 64 516398 +5.3%
      
      (Full data here: https://gitlab.com/stefanha/virt-playbooks/-/commit/46161cd6c877759bd8ade826f953b7fd520abab7)
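
      The intended change is illustrated by the minimal liburing sketch below (not QEMU code; the eventfd setup, ring handling, and error handling are simplified assumptions). It contrasts the current IORING_OP_POLL_ADD + read(2) pattern with a single IORING_OP_READ of the 8-byte eventfd counter:

          #include <liburing.h>
          #include <poll.h>
          #include <stdint.h>
          #include <sys/eventfd.h>
          #include <unistd.h>

          /* Old pattern: POLL_ADD only reports that the eventfd is readable,
           * so a separate read(2) system call is needed to fetch the counter. */
          static uint64_t wait_poll_then_read(struct io_uring *ring, int efd)
          {
              struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
              struct io_uring_cqe *cqe;
              uint64_t count = 0;

              io_uring_prep_poll_add(sqe, efd, POLLIN);
              io_uring_submit(ring);
              io_uring_wait_cqe(ring, &cqe);
              io_uring_cqe_seen(ring, cqe);
              read(efd, &count, sizeof(count));   /* extra syscall per notification */
              return count;
          }

          /* New pattern: IORING_OP_READ performs the read itself, so the counter
           * value arrives with the completion and no extra read(2) is needed. */
          static uint64_t wait_read(struct io_uring *ring, int efd)
          {
              struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
              struct io_uring_cqe *cqe;
              uint64_t count = 0;

              io_uring_prep_read(sqe, efd, &count, sizeof(count), 0);
              io_uring_submit(ring);
              io_uring_wait_cqe(ring, &cqe);
              io_uring_cqe_seen(ring, cqe);
              return count;
          }

      The second variant saves one read(2) system call per ioeventfd notification, which is where the modest IOPS gain measured on very fast NVMe devices is expected to come from.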

      Acceptance criteria

      A list of verification conditions, successful functional tests, or expected outcomes required to declare this story/task successfully completed.

      • fio randread and randwrite performance with a 4 KB block size and io depths of 1 and 64 does not degrade on a virtio-blk disk backed by a host NVMe block device. The guest has 4 vCPUs and the virtio-blk disk is configured with 4 IOThreads (an example fio invocation is sketched below).
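
      As a sketch of how this could be verified from inside the guest (the device path /dev/vdb, runtime, and reporting options below are illustrative assumptions, not part of the acceptance criteria):

          # hypothetical guest-side run; repeat with --rw=randwrite and --iodepth=64
          fio --name=randread-qd1 --filename=/dev/vdb --direct=1 \
              --ioengine=io_uring --rw=randread --bs=4k --iodepth=1 \
              --runtime=60 --time_based --group_reporting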

              virt-maint
              Stefan Hajnoczi (shajnocz@redhat.com)
              virt-maint
              Tingting Mao
              Votes: 0
              Watchers: 4
