Story
Resolution: Unresolved
Major
rhel-virt-storage
ssg_virtualization
Sprints: virt-storage Sprint 6, Virt storage Sprint8 2025-08, Planning backlog, VirtStorage Sprint9 2025-08,09
Goal
- I want to take advantage of io_uring performance optimizations for storage in QEMU. Modify the ioeventfd code in QEMU to use IORING_OP_READ instead of IORING_OP_POLL_ADD + read(2), because waiting for the notification and draining the eventfd in a single io_uring read operation is faster than completing a poll and then issuing a separate read(2) system call (see the sketch after this list).
- A prototype demonstrated that there is a modest improvement in IOPS on <10 microsecond NVMe drives:
  Operation  BS  QD  IOPS    Change
  randread   4k   1  112739  +1.7%
  randread   4k  64  504866  +0.068%
  randwrite  4k   1  110171  +5.7%
  randwrite  4k  64  516398  +5.3%

  (Full data here: https://gitlab.com/stefanha/virt-playbooks/-/commit/46161cd6c877759bd8ade826f953b7fd520abab7)
- Code for a prototype is available here:
https://gitlab.com/stefanha/qemu/-/commits/io_uring-eventnotifier-reads?ref_type=heads
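- To illustrate the intended change, here is a minimal standalone sketch (not the QEMU prototype; it assumes liburing and a plain eventfd, and the helper names are illustrative, not QEMU's) contrasting the two strategies. Error checking is omitted for brevity.

/*
 * Old approach: IORING_OP_POLL_ADD completes when the eventfd becomes
 * readable, then a separate read(2) syscall drains the counter.
 *
 * New approach: a single IORING_OP_READ on the eventfd both waits for
 * the notification and drains the counter in one operation.
 */
#include <liburing.h>
#include <sys/eventfd.h>
#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static uint64_t counter;

/* Old style: poll for readability, then read(2) once the poll completes. */
static void wait_with_poll_add(struct io_uring *ring, int efd)
{
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
    struct io_uring_cqe *cqe;

    io_uring_prep_poll_add(sqe, efd, POLLIN);
    io_uring_submit(ring);
    io_uring_wait_cqe(ring, &cqe);
    io_uring_cqe_seen(ring, cqe);

    /* Extra syscall to drain the eventfd counter */
    read(efd, &counter, sizeof(counter));
}

/* New style: one IORING_OP_READ waits and drains in a single operation. */
static void wait_with_read(struct io_uring *ring, int efd)
{
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
    struct io_uring_cqe *cqe;

    io_uring_prep_read(sqe, efd, &counter, sizeof(counter), 0);
    io_uring_submit(ring);
    io_uring_wait_cqe(ring, &cqe);
    io_uring_cqe_seen(ring, cqe);
}

int main(void)
{
    struct io_uring ring;
    int efd = eventfd(0, 0);
    uint64_t one = 1;

    io_uring_queue_init(8, &ring, 0);

    write(efd, &one, sizeof(one));      /* signal the eventfd */
    wait_with_poll_add(&ring, efd);

    write(efd, &one, sizeof(one));      /* signal again */
    wait_with_read(&ring, efd);

    printf("both completions received\n");
    io_uring_queue_exit(&ring);
    close(efd);
    return 0;
}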
Acceptance criteria
A list of verification conditions, successful functional tests, or expected outcomes that must be met to declare this story/task successfully completed.
- fio randread and randwrite performance with a 4 KB block size and iodepths of 1 and 64 does not degrade on a virtio-blk disk backed by a host NVMe block device. The guest has 4 vCPUs and the virtio-blk disk is configured with 4 IOThreads. An illustrative fio job file follows below.
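- For illustration only, a hypothetical fio job file matching these criteria; the guest device path, runtime, and ioengine are assumptions, not values taken from this story:

[global]
# Device path, runtime, and ioengine below are assumptions, not from the story
filename=/dev/vdb
ioengine=libaio
direct=1
bs=4k
runtime=60
time_based=1

[randread-qd1]
rw=randread
iodepth=1

# stonewall makes each subsequent job wait for the previous one to finish
[randread-qd64]
stonewall
rw=randread
iodepth=64

[randwrite-qd1]
stonewall
rw=randwrite
iodepth=1

[randwrite-qd64]
stonewall
rw=randwrite
iodepth=64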
- is depended on by: RHEL-71996 Explore performance benefits of io_uring based event loop (Closed)