RHEL-73842

Guest with a multiqueue-enabled interface crashes on the target host when migrated while the guest OS is rebooting

    • Issue type: Bug
    • Resolution: Unresolved
    • rhel-9.6
    • qemu-kvm / Networking
    • Moderate
    • rhel-sst-virtualization-networking
    • ssg_virtualization
    • x86_64, aarch64

      What were you trying to do that didn't work?

      Guest with a multiqueue-enabled interface crashed on the target host during migration while the guest OS was rebooting.

      What is the impact of this issue to you?

      Please provide the package NVR for which the bug is seen:

      qemu-kvm-9.1.0-9.el9.aarch64

      libvirt-10.10.0-3.el9.aarch64

      How reproducible is this bug?:

      50%

      Steps to reproduce

      1. Start a guest with a multiqueue-enabled interface:
      # virsh dumpxml vm3 --xpath //interface
       <interface type="network">
        <mac address="52:54:00:c7:3e:6d"/>
        <source network="default" portid="ad2d6879-7cf0-4c44-9e39-9316d5397350" bridge="virbr0"/>
        <target dev="vnet27"/>
        <model type="virtio"/>
        <driver queues="2"/>
        <alias name="net0"/>
        <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
      </interface>
      

          2. Migrate the guest while it is rebooting:

      # virsh reboot vm3; sleep 1; virsh migrate vm3 qemu+ssh://*.test.com/system --live --verbose --persistent
      Domain 'vm3' is being rebooted
      Migration: [100.00 %]error: operation failed: domain is no longer running
       

         3. Check the coredump on the target host:

       # coredumpctl list
      TIME                          PID UID GID SIG     COREFILE EXE                   SIZE  
      Tue 2025-01-14 03:57:01 EST  8776 107 107 SIGSEGV present  /usr/libexec/qemu-kvm 3.2M

      4. The backtrace is as follows:

      Core was generated by `/usr/libexec/qemu-kvm -name guest=vm3,debug-threads=on -S -object {"qom-type":"'.
      Program terminated with signal SIGSEGV, Segmentation fault.
      #0  aio_bh_enqueue (bh=0x0, new_flags=4) at ../util/async.c:74
      warning: 74    ../util/async.c: No such file or directory
      [Current thread is 1 (Thread 0xffffa73b7160 (LWP 16296))]
      (gdb) t a a bt
      Thread 7 (Thread 0xfffe8ebfe900 (LWP 16325)):
      #0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0xaaaaffdbd218) at futex-internal.c:57
      #1  __futex_abstimed_wait_common (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0xaaaaffdbd218) at futex-internal.c:87
      #2  __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0xaaaaffdbd218, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at futex-internal.c:139
      #3  0x0000ffffa7f4ff60 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0xaaaaffdbd228, cond=0xaaaaffdbd1f0) at pthread_cond_wait.c:504
      #4  ___pthread_cond_wait (cond=0xaaaaffdbd1f0, mutex=0xaaaaffdbd228) at pthread_cond_wait.c:619
      #5  0x0000aaaad9292efc in qemu_cond_wait_impl (cond=0xaaaaffdbd218, mutex=0xaaaaffdbd228, file=0xaaaad92d5b6c "../ui/vnc-jobs.c", line=248) at ../util/qemu-thread-posix.c:225
      #6  0x0000aaaad8c70aa0 in vnc_worker_thread_loop (queue=0xaaaaffdbd1f0) at ../ui/vnc-jobs.c:248
      #7  vnc_worker_thread (arg=arg@entry=0xaaaaffdbd1f0) at ../ui/vnc-jobs.c:362
      #8  0x0000aaaad9293ac4 in qemu_thread_start (args=0xaaaaffdbd290) at ../util/qemu-thread-posix.c:541
      #9  0x0000ffffa7f50c28 in start_thread (arg=0x80e140) at pthread_create.c:443
      #10 0x0000ffffa7fbb21c in thread_start () at ../sysdeps/unix/sysv/linux/aarch64/clone.S:79
      Thread 6 (Thread 0xfffe8fbfe900 (LWP 16324)):
      #0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0xaaaaff8666e8) at futex-internal.c:57
      #1  __futex_abstimed_wait_common (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0xaaaaff8666e8) at futex-internal.c:87
      #2  __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0xaaaaff8666e8, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at futex-internal.c:139
      #3  0x0000ffffa7f4ff60 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0xaaaada08bf40 <bql>, cond=0xaaaaff8666c0) at pthread_cond_wait.c:504
      #4  ___pthread_cond_wait (cond=0xaaaaff8666c0, mutex=0xaaaada08bf40 <bql>) at pthread_cond_wait.c:619
      #5  0x0000aaaad9292efc in qemu_cond_wait_impl (cond=0xaaaaff8666e8, mutex=0xaaaada08bf40 <bql>, file=0xaaaad92f2280 "../system/cpus.c", line=462) at ../util/qemu-thread-posix.c:225
      #6  0x0000aaaad8d309c4 in qemu_wait_io_event (cpu=cpu@entry=0xaaaaff874fc0) at ../system/cpus.c:462
      #7  0x0000aaaad90e52cc in kvm_vcpu_thread_fn (arg=arg@entry=0xaaaaff874fc0) at ../accel/kvm/kvm-accel-ops.c:55
      #8  0x0000aaaad9293ac4 in qemu_thread_start (args=0xaaaaff88d150) at ../util/qemu-thread-posix.c:541
      #9  0x0000ffffa7f50c28 in start_thread (arg=0x80e140) at pthread_create.c:443
      #10 0x0000ffffa7fbb21c in thread_start () at ../sysdeps/unix/sysv/linux/aarch64/clone.S:79
      Thread 5 (Thread 0xffffa4e4e900 (LWP 16323)):
      #0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0xaaaaff7df238) at futex-internal.c:57
      #1  __futex_abstimed_wait_common (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0xaaaaff7df238) at futex-internal.c:87
      #2  __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0xaaaaff7df238, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at futex-internal.c:139
      #3  0x0000ffffa7f4ff60 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0xaaaada08bf40 <bql>, cond=0xaaaaff7df210) at pthread_cond_wait.c:504
      #4  ___pthread_cond_wait (cond=0xaaaaff7df210, mutex=0xaaaada08bf40 <bql>) at pthread_cond_wait.c:619
      #5  0x0000aaaad9292efc in qemu_cond_wait_impl (cond=0xaaaaff7df238, mutex=0xaaaada08bf40 <bql>, file=0xaaaad92f2280 "../system/cpus.c", line=462) at ../util/qemu-thread-posix.c:225
      #6  0x0000aaaad8d309c4 in qemu_wait_io_event (cpu=cpu@entry=0xaaaaff829d60) at ../system/cpus.c:462
      #7  0x0000aaaad90e52cc in kvm_vcpu_thread_fn (arg=arg@entry=0xaaaaff829d60) at ../accel/kvm/kvm-accel-ops.c:55
      #8  0x0000aaaad9293ac4 in qemu_thread_start (args=0xaaaaff86cae0) at ../util/qemu-thread-posix.c:541
      #9  0x0000ffffa7f50c28 in start_thread (arg=0x80e140) at pthread_create.c:443
      #10 0x0000ffffa7fbb21c in thread_start () at ../sysdeps/unix/sysv/linux/aarch64/clone.S:79
      Thread 4 (Thread 0xffffa576e900 (LWP 16322)):
      #0  0x0000ffffa7fb0fa0 in __GI___poll (fds=0xffff9c0135e0, nfds=3, timeout=<optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:41
      #1  0x0000ffffa842af20 in g_main_context_poll (priority=<optimized out>, n_fds=3, fds=0xffff9c0135e0, timeout=<optimized out>, context=0xaaaaff6c52d0) at ../glib/gmain.c:4458
      #2  g_main_context_iterate.constprop.0 (context=0xaaaaff6c52d0, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4150
      #3  0x0000ffffa83d471c in g_main_loop_run (loop=0xaaaaff6c5390) at ../glib/gmain.c:4353
      #4  0x0000aaaad915a374 in iothread_run (opaque=opaque@entry=0xaaaaff6c9450) at ../iothread.c:70
      #5  0x0000aaaad9293ac4 in qemu_thread_start (args=0xaaaaff6c53b0) at ../util/qemu-thread-posix.c:541
      #6  0x0000ffffa7f50c28 in start_thread (arg=0x80e140) at pthread_create.c:443
      #7  0x0000ffffa7fbb21c in thread_start () at ../sysdeps/unix/sysv/linux/aarch64/clone.S:79
      Thread 3 (Thread 0xffffa62ce900 (LWP 16319)):
      #0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0xffffa62cdf38, op=393, expected=0, futex_word=0xaaaaff693d68) at futex-internal.c:57
      #1  __futex_abstimed_wait_common (cancel=true, private=0, abstime=0xffffa62cdf38, clockid=0, expected=0, futex_word=0xaaaaff693d68) at futex-internal.c:87
      #2  __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0xaaaaff693d68, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0xffffa62cdf38, private=private@entry=0) at futex-internal.c:139
      #3  0x0000ffffa7f50270 in __pthread_cond_wait_common (abstime=0xffffa62cdf38, clockid=0, mutex=0xaaaaff693cd0, cond=0xaaaaff693d40) at pthread_cond_wait.c:504
      #4  ___pthread_cond_timedwait64 (cond=0xaaaaff693d40, mutex=0xaaaaff693cd0, abstime=0xffffa62cdf38) at pthread_cond_wait.c:644
      #5  0x0000aaaad9293114 in qemu_cond_timedwait_ts (cond=0xaaaaff693d68, cond@entry=0xaaaaff693d40, mutex=mutex@entry=0xaaaaff693cd0, ts=0x0, ts@entry=0xffffa62cdf38, file=file@entry=0xaaaad9386811 "../util/thread-pool.c", line=line@entry=91) at ../util/qemu-thread-posix.c:239
      #6  0x0000aaaad929306c in qemu_cond_timedwait_impl (cond=0xaaaaff693d40, mutex=0xaaaaff693cd0, ms=10000, file=0xaaaad9386811 "../util/thread-pool.c", line=91) at ../util/qemu-thread-posix.c:253
      #7  0x0000aaaad92acfb4 in worker_thread (opaque=opaque@entry=0xaaaaff693cc0) at ../util/thread-pool.c:91
      #8  0x0000aaaad9293ac4 in qemu_thread_start (args=0xaaaaff791960) at ../util/qemu-thread-posix.c:541
      #9  0x0000ffffa7f50c28 in start_thread (arg=0x80e140) at pthread_create.c:443
      #10 0x0000ffffa7fbb21c in thread_start () at ../sysdeps/unix/sysv/linux/aarch64/clone.S:79
      Thread 2 (Thread 0xffffa6cfe900 (LWP 16318)):
      #0  0x0000ffffa7f85b64 in __GI___clock_nanosleep (clock_id=<optimized out>, clock_id@entry=0, flags=flags@entry=0, req=req@entry=0xffffa6cfdf58, rem=rem@entry=0xffffa6cfdf48) at ../sysdeps/unix/sysv/linux/clock_nanosleep.c:48
      #1  0x0000ffffa7f8ac8c in __GI___nanosleep (req=req@entry=0xffffa6cfdf58, rem=rem@entry=0xffffa6cfdf48) at ../sysdeps/unix/sysv/linux/nanosleep.c:25
      #2  0x0000ffffa83ff140 in g_usleep (microseconds=microseconds@entry=10000) at ../glib/gtimer.c:277
      #3  0x0000aaaad92a0028 in call_rcu_thread (opaque=<optimized out>) at ../util/rcu.c:270
      #4  0x0000aaaad9293ac4 in qemu_thread_start (args=0xaaaaff6559e0) at ../util/qemu-thread-posix.c:541
      #5  0x0000ffffa7f50c28 in start_thread (arg=0x80e140) at pthread_create.c:443
      #6  0x0000ffffa7fbb21c in thread_start () at ../sysdeps/unix/sysv/linux/aarch64/clone.S:79
      Thread 1 (Thread 0xffffa73b7160 (LWP 16296)):
      #0  aio_bh_enqueue (bh=0x0, new_flags=4) at ../util/async.c:74
      #1  qemu_bh_delete (bh=0x0) at ../util/async.c:250
      #2  0x0000aaaad9044168 in virtio_net_del_queue (n=<optimized out>, index=<optimized out>) at ../hw/net/virtio-net.c:2983
      #3  0x0000aaaad904424c in virtio_net_change_num_queue_pairs (n=0xaaab017590e0, new_max_queue_pairs=2) at ../hw/net/virtio-net.c:3013
      #4  virtio_net_set_multiqueue (n=n@entry=0xaaab017590e0, multiqueue=<optimized out>) at ../hw/net/virtio-net.c:3030
      #5  0x0000aaaad9040188 in virtio_net_set_features (vdev=0xaaab017590e0, features=0) at ../hw/net/virtio-net.c:961
      #6  0x0000aaaad9064410 in virtio_set_features_nocheck (vdev=0xaaab017590e0, val=0) at ../hw/virtio/virtio.c:3093
      #7  virtio_set_features_nocheck_bh (opaque=0xffffa63df938) at ../hw/virtio/virtio.c:3110
      #8  0x0000aaaad92a8b28 in aio_bh_call (bh=0xffff9c01c9c0) at ../util/async.c:171
      #9  aio_bh_poll (ctx=ctx@entry=0xaaaaff6c1b20) at ../util/async.c:218
      #10 0x0000aaaad928f340 in aio_dispatch (ctx=0xaaaaff6c1b20) at ../util/aio-posix.c:423
      #11 0x0000aaaad92a98b4 in aio_ctx_dispatch (source=0x5, callback=0x0, user_data=<optimized out>) at ../util/async.c:360
      #12 0x0000ffffa83d50c0 in g_main_dispatch (context=0xaaaaff65d100) at ../glib/gmain.c:3364
      #13 g_main_context_dispatch (context=0xaaaaff65d100) at ../glib/gmain.c:4079
      #14 0x0000aaaad92aa0bc in glib_pollfds_poll () at ../util/main-loop.c:287
      #15 os_host_main_loop_wait (timeout=<optimized out>) at ../util/main-loop.c:310
      #16 main_loop_wait (nonblocking=<optimized out>, nonblocking@entry=-495482768) at ../util/main-loop.c:589
      #17 0x0000aaaad8d3c074 in qemu_main_loop () at ../system/runstate.c:826
      #18 0x0000aaaad91f71e0 in qemu_default_main () at ../system/main.c:37
      #19 0x0000ffffa7ef7280 in __libc_start_call_main (main=main@entry=0xaaaad91f71f8 <main>, argc=argc@entry=119, argv=argv@entry=0xffffe2778a78) at ../sysdeps/nptl/libc_start_call_main.h:58
      #20 0x0000ffffa7ef7358 in __libc_start_main_impl (main=0xaaaad91f71f8 <main>, argc=119, argv=0xffffe2778a78, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=<optimized out>) at ../csu/libc-start.c:389
      #21 0x0000aaaad8c435b0 in _start ()
       

      Expected results

      Migration completes successfully.

      Actual results

      The guest with a multiqueue-enabled interface crashed on the target host during migration while the guest OS was rebooting.

      Additional info:

      Please see the QEMU log and guest XML in the attachments:

        1. vm3.log
          6 kB
        2. vm3.xml
          8 kB

              aodaki Akihiko Odaki
              rhn-support-yafu Yan Fu
              virt-maint virt-maint
              virt-bugs virt-bugs