RHEL / RHEL-102925

The second migration fails with shared vTPM state

    • Type: Bug
    • Resolution: Unresolved
    • Affects Versions: rhel-10.1, rhel-10.2
    • Component: libvirt
    • Fixed in Build: libvirt-11.10.0-2.el10
    • Severity: Important
    • Sprint: Libvirt Bugs already in Sprint
    • Architecture: aarch64

      What were you trying to do that didn't work?

      Migrating a VM twice with a shared vTPM state directory; the second migration fails.

      Please provide the package NVR for which the bug is seen:

      kernel-6.12.0-103.el10.aarch64+64k

      libvirt-11.5.0-1.el10.aarch64
      qemu-kvm-10.0.0-7.el10.aarch64
      edk2-aarch64-20250523-2.el10.noarch
      swtpm-0.9.0-5.el10.aarch64
      libtpms-0.9.6-11.el10.aarch64

      How reproducible is this bug?:

      100%

      Steps to reproduce

      1. Set up the NFS migration environment.
      2. Also set up a shared vTPM state directory:

        # on the source host
        exportfs -o rw,no_root_squash *:/var/tmp
        mount 127.0.0.1:/var/tmp/ /var/lib/libvirt/swtpm
        # on the target host
        mount <source_ip>:/var/tmp/ /var/lib/libvirt/swtpm


      3. Start the VM and migrate it:


        [root@ampere-mtsnow-altramax-16 ~]# virsh start vm1
        Domain 'vm1' started
        [root@ampere-mtsnow-altramax-16 ~]# virsh -c 'qemu:///system' migrate --live --p2p --verbose --domain vm1 --desturi qemu+ssh://10.6.8.55/system
        Migration: [100.00 %] 
        virsh list --all
         Id   Name             State
        ---------------------------------
         -    vm1              shut off

        TPM XML:
        <tpm model='tpm-tis'>
          <backend type='emulator' version='2.0'/>
        </tpm>

      4. On the target host, destroy the VM:
         

        [root@ampere-mtsnow-altramax-47 ~]# virsh destroy vm1
        Domain 'vm1' destroyed

      5. On the source host, start and migrate the VM again:

        [root@ampere-mtsnow-altramax-16 ~]# virsh start vm1
        Domain 'vm1' started

        [root@ampere-mtsnow-altramax-16 ~]# virsh -c 'qemu:///system' migrate --live --p2p --verbose --domain vm1 --desturi qemu+ssh://10.6.8.55/system  
        error: Operation not supported: the running swtpm does not support migration with shared storage
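
      For convenience, the shared-storage setup from step 2 can be consolidated into one small script. It is a sketch, not the exact setup used here: the functions only print the commands (running them needs root and a reachable NFS server), and the `SOURCE_IP` default is taken from the hosts quoted in this report.

        ```shell
        #!/bin/sh
        # Sketch of the shared vTPM state setup from step 2.
        # The functions print the commands instead of executing them;
        # SOURCE_IP matches the source host used in this report.
        SOURCE_IP=${SOURCE_IP:-10.6.8.42}

        source_host_setup() {
            echo "exportfs -o rw,no_root_squash '*:/var/tmp'"
            echo "mount 127.0.0.1:/var/tmp/ /var/lib/libvirt/swtpm"
        }

        target_host_setup() {
            echo "mount ${SOURCE_IP}:/var/tmp/ /var/lib/libvirt/swtpm"
        }

        source_host_setup    # run (or pipe to sh) on the source host
        target_host_setup    # run (or pipe to sh) on the target host
        ```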

      Expected results

      The migration should succeed repeatedly.

      Actual results

      The second migration fails with the "Operation not supported" error shown above.

      Other info:
      On source host:

      [root@ampere-mtsnow-altramax-16 ~]# df -m
      127.0.0.1:/var/lib/avocado/data/avocado-vt/images 70832 12821 58011 19% /var/lib/libvirt/migrate
      127.0.0.1:/var/tmp 70832 12821 58011 19% /var/lib/libvirt/swtpm

      tss 172611 1 0 01:28 ? 00:00:00 /usr/bin/swtpm socket --ctrl type=unixio,path=/run/libvirt/qemu/swtpm/28-vm1-swtpm.sock,mode=0600 --tpmstate dir=/var/lib/libvirt/swtpm/2e436b81-51e8-495a-b2b5-2186a0cd6ca8/tpm2,mode=0600 --log file=/var/log/swtpm/libvirt/qemu/vm1-swtpm.log --terminate --tpm2

      On target host:

      [root@ampere-mtsnow-altramax-47 ~]# df -m
      10.6.8.42:/var/lib/avocado/data/avocado-vt/images 70832 12821 58011 19% /var/lib/libvirt/migrate
      10.6.8.42:/var/tmp 70832 12821 58011 19% /var/lib/libvirt/swtpm
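
      A quick way to confirm that /var/lib/libvirt/swtpm really is NFS-backed on a host is to filter `df` output for a remote (host:path) filesystem on that mountpoint. The helper name below is ours; the sample line is the source host's output quoted above.

        ```shell
        # Print the remote (host:path) filesystem backing the swtpm state
        # directory, or nothing if it is a local mount.
        swtpm_state_backing() {
            awk '$NF == "/var/lib/libvirt/swtpm" && $1 ~ /:/ { print $1 }'
        }

        # Sample df line from the source host; on a real host use:
        #   df | swtpm_state_backing
        SAMPLE='127.0.0.1:/var/tmp 70832 12821 58011 19% /var/lib/libvirt/swtpm'
        printf '%s\n' "$SAMPLE" | swtpm_state_backing   # -> 127.0.0.1:/var/tmp
        ```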

      If the shared vTPM state directory is not configured, the problem does not occur.
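
      The error text suggests libvirt gates shared-storage migration on what the running swtpm instance advertises. As a diagnostic, swtpm can print its feature list as JSON (`swtpm socket --print-capabilities`); the sketch below greps a sample of such output for the `--migration` command-line feature. Both the sample JSON and the assumption that this is the feature libvirt keys on are ours.

        ```shell
        # Check whether a swtpm capabilities JSON advertises the --migration
        # option (feature name "cmdarg-migration"; whether this is exactly
        # what libvirt checks is an assumption).
        caps_have_migration() {
            grep -q '"cmdarg-migration"'
        }

        # Hypothetical sample of `swtpm socket --print-capabilities` output:
        SAMPLE_CAPS='{"type":"swtpm","features":["tpm-2.0","cmdarg-migration"]}'

        if printf '%s' "$SAMPLE_CAPS" | caps_have_migration; then
            echo "swtpm advertises the --migration option"
        fi
        # on a real host: swtpm socket --print-capabilities | caps_have_migration
        ```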

              Jiri Denemark (jdenemar@redhat.com)
              Dan Zheng (rhn-support-dzheng)
              Badriprasad Varadaraj