Bug / Resolution: Unresolved / Major / rhel-10.1 / aarch64
rhel-virt-hwe-arm-1 / ssg_virtualization
What were you trying to do that didn't work?
Migrating a VM back from the target host to the source host fails when the vTPM device uses shared TPM state.
Please provide the package NVR for which the bug is seen:
Both source and target hosts run the following packages:
libvirt libvirt-11.3.0-1.el10.aarch64
qemu-kvm qemu-kvm-10.0.0-4.el10.aarch64
kernel-6.12.0-89.el10.aarch64+64k
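For reference, the same versions can be re-checked on each host with standard tooling (querying swtpm as well is an extra check; its NVR was not captured above):
$ rpm -q libvirt qemu-kvm swtpm    # package NVRs on this host
$ uname -r                         # running kernel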
How reproducible is this bug?:
100%
Steps to reproduce
- Set up the migration environment:
- setsebool virt_use_nfs=on on both hosts
- systemctl restart nfs-server on source host
- firewall-cmd --add-port=49152-49216/tcp --permanent --zone=public on both hosts
- firewall-cmd --reload on both hosts
- exportfs -o rw,no_root_squash *:/var/lib/avocado/data/avocado-vt/images; mkdir /var/lib/libvirt/migrate on source host
- mount 127.0.0.1:/var/lib/avocado/data/avocado-vt/images /var/lib/libvirt/migrate on source host
- mount <source_ip>:/var/lib/avocado/data/avocado-vt/images /var/lib/libvirt/migrate on target host
- Set up the shared TPM environment (a quick verification sketch follows the reproduction steps):
- exportfs -o rw,no_root_squash *:/var/tmp on source host
- mount 127.0.0.1:/var/tmp/ /var/lib/libvirt/swtpm on source host
- mount <source_ip>:/var/tmp/ /var/lib/libvirt/swtpm on target host
- Start the VM
- virsh dumpxml vm1|grep disk -A3 -B3
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none' io='native' discard='unmap'/>
<source file='/var/lib/libvirt/migrate/jeos-27-aarch64-clone.qcow2' index='1'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</disk>
- virsh dumpxml vm1|grep tpm -A3 -B3
<tpm model='tpm-tis'>
<backend type='emulator' version='2.0'>
<encryption secret='173152ea-65c3-4e15-bfa8-8f40c5fdd985'/>
<active_pcr_banks>
<sha256/>
</active_pcr_banks>
</backend>
<alias name='tpm0'/>
</tpm>
Secret xml:
<secret ephemeral='no' private='yes'>
<description>sample vTPM secret</description>
<usage type='vtpm'>
<name>VTPM_example</name>
</usage>
</secret>
- Migrate the VM from the source host to the target host; this succeeds
- virsh -c 'qemu:///system' migrate --live --verbose --domain avocado-vt-vm1 --desturi qemu+ssh://10.6.8.56/system
Migration: [100.00 %]
- Migrate the VM back from the target host to the source host
This fails; see the actual results below.
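The shared-storage and vTPM setup used in the steps above can be sanity-checked before starting the VM with something like the following (the secret UUID is the one referenced in the TPM XML above; vtpm_secret.xml and the passphrase are illustrative placeholders):
# on both hosts: NFS mounts and SELinux boolean used by the reproduction steps
$ getsebool virt_use_nfs
$ findmnt /var/lib/libvirt/migrate
$ findmnt /var/lib/libvirt/swtpm
# vTPM secret: define it from the secret XML above, give it a value, then confirm it exists
$ virsh secret-define vtpm_secret.xml
$ virsh secret-set-value 173152ea-65c3-4e15-bfa8-8f40c5fdd985 --base64 "$(printf '%s' 'example-passphrase' | base64)"
$ virsh secret-list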
Expected results
Migration back to the source host should succeed.
Actual results
$ virsh migrate avocado-vt-vm1 --live --verbose qemu+ssh://10.6.8.51/system
error: operation failed: migration failed. Message from the source host: operation failed: job 'migration out' failed: Sibling indicated error 1. Message from the destination host: operation failed: job 'migration in' failed: load of migration failed: Input/output error\n"
Source host qemu log:
2025-06-03T03:16:51.032498Z qemu-kvm: tpm-emulator: Setting the stateblob (type 1) failed with a TPM error 0x1f
2025-06-03T03:16:51.032597Z qemu-kvm: error while loading state for instance 0x0 of device 'tpm-emulator'
2025-06-03 03:16:51.053+0000: shutting down, reason=failed
2025-06-03T03:16:51.054311Z qemu-kvm: terminating on signal 15 from pid 227320 (/usr/sbin/virtqemud)
2025-06-03T03:16:51.054871Z qemu-kvm: tpm-emulator: Could not cleanly shutdown the TPM: Input/output error
Target host qemu log:
2025-06-03 03:16:43.473+0000: initiating migration
2025-06-03T03:16:51.043113Z qemu-kvm: Unable to shutdown socket: Transport endpoint is not connected
2025-06-03T03:16:51.043252Z qemu-kvm: Sibling indicated error 1
2025-06-03T03:16:59.165421Z qemu-kvm: terminating on signal 15 from pid 211180 (/usr/sbin/virtqemud)
2025-06-03T03:16:59.168893Z qemu-kvm: tpm-emulator: Could not cleanly shutdown the TPM: Input/output error
2025-06-03 03:16:59.766+0000: shutting down, reason=destroyed
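Given the "Setting the stateblob ... failed with a TPM error 0x1f" and Input/output error messages, the state of the shared swtpm directory on both hosts may be worth capturing; a rough sketch (the per-domain tpm2 subdirectory layout under /var/lib/libvirt/swtpm is assumed from the default libvirt/swtpm layout):
$ virsh domuuid avocado-vt-vm1                         # UUID of the domain
$ ls -ldZ /var/lib/libvirt/swtpm                       # ownership and SELinux label of the shared mount
$ ls -lZ /var/lib/libvirt/swtpm/<domain-uuid>/tpm2/    # TPM state files as seen from this host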
Additional info
x86_64 does not fail.
Without shared TPM state, the migration back succeeds (see the control check sketched below).
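A minimal sketch of that control check, assuming the same mount points as in the reproduction steps (with the VM shut off, unmounting leaves each host with local TPM state, which libvirt re-provisions on the next start):
# on both hosts, with the VM shut off
$ umount /var/lib/libvirt/swtpm
# start the VM again and repeat the round trip
$ virsh start avocado-vt-vm1
$ virsh migrate avocado-vt-vm1 --live --verbose qemu+ssh://10.6.8.56/system    # source -> target
$ virsh migrate avocado-vt-vm1 --live --verbose qemu+ssh://10.6.8.51/system    # run on the target host: back to source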