Bug
Resolution: Done-Errata
Normal
CNV v4.18.0
Quality / Stability / Reliability
3
CNV v4.18.5.rhel9-8
CNV Storage Sprint 279
Moderate
Description of problem:
When attempting to storage live migrate a VM with one hotplugged disk from HPP to ceph-rbd, the migration starts, but the target virt-launcher pod fails with the following error:
error encountered while generating migration parameters: failed to invoke qemu-img: exit status 1
The virt-launcher pod goes into an Error state, and another is eventually created with the same result.
Version-Release number of selected component (if applicable):
4.18.0
How reproducible:
100%
Steps to Reproduce:
1. Create a VM from a template with the standard 30G boot disk. In my cluster the default storage class is HPP, so the boot disk is on HPP as well.
2. Start the VM.
3. Add a disk to the running VM, causing a hotplug to happen. I created a 1G disk for testing purposes; this disk is also on HPP.
4. Create sufficiently large target disks on another storage class, such as ceph-rbd. I used a DataVolume to ensure I got the proper size.
5. Modify the VM spec to point at the new volumes and add the updateVolumesStrategy: Migration field to the spec.
6. This triggers the migration attempt. Note that the target virt-launcher is created, and I can also see it create the target hotplug pod.
7. The virt-launcher goes into Error state after a few seconds. I suspect the disk is not yet fully hotplugged into the target virt-launcher pod, which causes the qemu-img info command to fail.
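For reference, steps 4-5 would look roughly like the manifests below. This is a minimal sketch, not the exact manifests used in the report; the VM, DataVolume, and volume names (`my-vm`, `dv-hotplug-rbd`, `hotplug-disk`) are hypothetical, and only the fields relevant to the volume migration are shown:

```yaml
# Step 4: target DataVolume on the ceph-rbd storage class (hypothetical names/sizes)
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: dv-hotplug-rbd
spec:
  storage:
    storageClassName: ceph-rbd
    resources:
      requests:
        storage: 1Gi
  source:
    blank: {}
---
# Step 5: point the VM volume at the new DataVolume and request a volume migration
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-vm
spec:
  updateVolumesStrategy: Migration
  template:
    spec:
      volumes:
        - name: hotplug-disk
          dataVolume:
            name: dv-hotplug-rbd   # was the HPP-backed DataVolume before the edit
```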
Actual results:
Failed live migration with virt-launcher pod in error state
Expected results:
Completed live migration
Additional info:
A possible fix: assume the volume may take a short time to become visible inside the target virt-launcher, and retry the qemu-img command a few times with a backoff before failing the migration.
links to:
RHEA-2025:155423 (OpenShift Virtualization 4.18.21 Images)