RHEL / RHEL-71996

Explore performance benefits of io_uring based event loop


    • Type: Task
    • Resolution: Done
    • Priority: Major
    • Component: qemu-kvm / Storage
    • rhel-virt-storage
    • ssg_virtualization
    • Sprints: virt-storage Sprint 6, Virt storage Sprint8 2025-08, VirtStorage Sprint9 2025-08,09, VirtStorage Sprint10 CY250924
    • Story Points: 5


      Tracing of the qemu-kvm process in the customer case has shown that ppoll() is a relatively expensive system call, especially with many file descriptors.

      We could probably reduce latencies by using io_uring. QEMU has fdmon-io_uring.c, but it is effectively dead code today because it automatically disables itself in every code path.

The existing implementation is missing the optimisation that would make this most interesting, though: instead of using IORING_OP_POLL_ADD to poll eventfds and then clearing them with separate read() syscalls, we can let io_uring read from the eventfd directly. When the read completes, we know an event has happened, and no further work is needed to clear the eventfd.

Another related question is whether it is optimal to have one eventfd per virtqueue, or whether one eventfd per device and iothread would be enough.

      The goal of this task is to explore the options that io_uring could give us to improve performance and possibly implement a solution upstream.

              shajnocz@redhat.com Stefan Hajnoczi
              kwolf@redhat.com Kevin Wolf
Votes: 0
              Watchers: 6
