RHEL-145291

No response when execute "query-migrate" after cancel and retry multifd migration


    • Moderate
    • rhel-virt-core-live-migration
    • QE ack
    • Red Hat Enterprise Linux

      What were you trying to do that didn't work?
      Do a multifd migration and cancel it while the migration is active;
      restart the multifd migration; sometimes "query-migrate" then gets no response.

      Please provide the package NVR for which the bug is seen:
      hosts info: kernel-6.12.0-191.el10.s390x && qemu-kvm-10.1.0-11.el10.s390x
      guest info: kernel-6.12.0-191.el10.s390x

      How reproducible:
      3/100

      Steps to reproduce
      1. Boot a VM on the src host
      2. Boot a VM on the dst host, appending "-incoming defer"
      3. Enable the multifd capability on the src and dst hosts, set the migration speed to 1M and multifd-channels to 4
      4. Start migration; while migration is active, cancel it
      5. Restart the VM on the dst host, appending "-incoming defer"
      6. Set multifd-channels to 2 in the src qemu
      7. Enable the multifd capability on the dst host
      8. Change the migration speed to 100M
      9. Migrate the VM from the src to the dst host
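
      The steps above can be sketched as the QMP command sequence sent to the src qemu. This is a minimal illustration, not the reproducer script itself: the destination URI is a placeholder, and the helper names (`set_multifd`, `set_params`) are hypothetical. The QMP commands themselves (`migrate-set-capabilities`, `migrate-set-parameters`, `migrate`, `migrate_cancel`) are the standard QEMU ones; `max-bandwidth` is in bytes/s.

      ```python
      import json

      DST_URI = "tcp:DST_HOST:4000"  # placeholder, not from the report

      def set_multifd(enabled=True):
          # Steps 3/7: toggle the multifd migration capability.
          return {"execute": "migrate-set-capabilities",
                  "arguments": {"capabilities": [
                      {"capability": "multifd", "state": enabled}]}}

      def set_params(bandwidth_bytes, channels):
          # Steps 3/6/8: migration speed and multifd channel count.
          return {"execute": "migrate-set-parameters",
                  "arguments": {"max-bandwidth": bandwidth_bytes,
                                "multifd-channels": channels}}

      # First attempt: 1M speed, 4 channels, then cancel mid-migration (step 4).
      first_attempt = [
          set_multifd(True),
          set_params(1 * 1024 * 1024, 4),
          {"execute": "migrate", "arguments": {"uri": DST_URI}},
          {"execute": "migrate_cancel"},
      ]

      # Retry: 2 channels, 100M speed (steps 6/8/9).
      retry = [
          set_params(100 * 1024 * 1024, 2),
          {"execute": "migrate", "arguments": {"uri": DST_URI}},
      ]

      for cmd in first_attempt + retry:
          print(json.dumps(cmd))
      ```

      On the dst side the second qemu instance, started with "-incoming defer", would additionally receive "migrate-incoming" after its capability is enabled.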

      Expected results
      Migration finishes, VM works well after migration

      Actual results
      Check migration statistics via the "query-migrate" QMP command after step 9.
      Sometimes there is no response to "query-migrate":

      2026-01-28-07:26:46: Host(10.0.160.47) Sending qmp command :{"execute": "query-migrate", "id": "Y8SA9zVU"}
      2026-01-28-07:26:46: Host(10.0.160.47) Responding qmp command: {"return": {"expected-downtime": 300, "status": "active", "setup-time": 26, "total-time": 523, "ram": {"total": 4294967296, "postcopy-requests": 0, "dirty-sync-count": 1, "multifd-bytes": 68123328, "pages-per-second": 28160, "downtime-bytes": 0, "page-size": 4096, "remaining": 4226281472, "postcopy-bytes": 0, "mbps": 925.11375999999996, "transferred": 68123540, "dirty-sync-missed-zero-copy": 0, "precopy-bytes": 48, "duplicate": 180, "dirty-pages-rate": 0, "normal-bytes": 67944448, "normal": 16588}}, "id": "Y8SA9zVU"}
      ...
      2026-01-28-07:26:48: Host(10.0.160.47) Sending qmp command :{"execute": "query-migrate", "id": "quitSb2U"}
      Traceback (most recent call last):
        File "/home/ipa/runner.py", line 216, in _run
          getattr(self._case_dict[case], "run_case")(self._params)
        File "/home/ipa/virtkvmqe/migration_test_plan/multiple_fds/test_scenarios/RHEL_186122.py", line 218, in run_case
          run_sub_case()
        File "/home/ipa/virtkvmqe/migration_test_plan/multiple_fds/test_scenarios/RHEL_186122.py", line 161, in run_sub_case
          do_migration(src_remote_qmp, incoming_port, dst_host_ip, src_qemu_pid,
        File "/home/ipa/virtkvmqe/migration_test_plan/multiple_fds/test_scenarios/RHEL_186122.py", line 44, in do_migration
          if output['status'] == 'completed':
             ~~~~~~^^^^^^^^^^
      TypeError: 'NoneType' object is not subscriptable
      Note:

      1. Sometimes this issue is hit after the multifd migration has completed. In this scenario the VM works well after migration; only the above error is seen.
      2. Sometimes it is hit while the multifd migration is still active. In this scenario the VM keeps working on the source side, but the dst qemu has quit.
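
      The crash in the test script itself comes from subscripting a None response. Independent of the underlying qemu bug, the polling loop can be hardened against a missing "query-migrate" reply. This is a hypothetical sketch (the helper name `wait_for_migration` and the callable interface are assumptions, not the actual test-suite API):

      ```python
      import time

      def wait_for_migration(query_migrate, timeout=300, interval=2):
          """Poll until migration reaches a terminal status.

          query_migrate is assumed to return the parsed 'return' dict of
          a "query-migrate" QMP command, or None when no reply arrives
          (the failure seen in the traceback above).
          """
          deadline = time.monotonic() + timeout
          while time.monotonic() < deadline:
              output = query_migrate()
              # Guard against the None reply that crashed the script with
              # "TypeError: 'NoneType' object is not subscriptable".
              if output is None:
                  time.sleep(interval)
                  continue
              status = output.get("status")
              if status in ("completed", "failed", "cancelled"):
                  return status
              time.sleep(interval)
          raise TimeoutError("query-migrate never reached a terminal status")
      ```

      With this guard the test would report a timeout (or the qemu-side failure) instead of dying on the TypeError, which makes the underlying no-response bug easier to triage.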

              virt-maint
              Xiaohui Li (rhn-support-xiaohli)