-
Bug
-
Resolution: Done-Errata
-
Critical
-
CNV v4.14.0
-
None
-
8
-
False
-
-
False
-
CNV v4.16.0.rhel9-1312
-
---
-
---
-
-
CNV Virtualization Sprint 249, CNV Virtualization Sprint 250
-
No
When a live migration of a VM fails, the VirtualMachineInstanceMigration (VMIM) custom resource does not state the reason for the failure, which makes debugging hard.
Currently the VMIM looks like this:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1
    kubevirt.io/storage-observed-api-version: v1
    kubevirt.io/workloadUpdateMigration: ""
  creationTimestamp: "2023-11-27T05:55:26Z"
  generateName: kubevirt-workload-update-
  generation: 1
  labels:
    kubevirt.io/vmi-name: rhel9-colonial-shark
  name: kubevirt-workload-update-4tnj9
  namespace: orenc
  resourceVersion: "2205490428"
  uid: 2556fd1e-7f0a-493a-b2af-c2698f4ae564
spec:
  vmiName: rhel9-colonial-shark
status:
  phase: Failed
  phaseTransitionTimestamps:
  - phase: Pending
    phaseTransitionTimestamp: "2023-11-27T05:55:26Z"
  - phase: Scheduling
    phaseTransitionTimestamp: "2023-11-27T07:23:54Z"
  - phase: Failed
    phaseTransitionTimestamp: "2023-11-27T07:28:53Z"
This is not very helpful: it only tells us that the migration failed after 4 minutes and 59 seconds, without any indication of why.
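Until the failure reason is surfaced on the VMIM itself, one way to gather more context is to cross-check the VMI's status.migrationState block and the events recorded around the failure. This is only a hedged inspection sketch: the namespace and VMI name are taken from the example above, the jsonpath assumes the upstream KubeVirt v1 API, and oc can be used in place of kubectl:

# Inspect the migration state tracked on the VMI itself (KubeVirt v1 API)
kubectl get vmi rhel9-colonial-shark -n orenc -o jsonpath='{.status.migrationState}'
# Look for migration-related events emitted around the failure time
kubectl get events -n orenc --field-selector involvedObject.name=rhel9-colonial-shark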
If the VMIM at least reported the progress percentage, we could tell that the failure is a performance issue and tune the LiveMigration settings accordingly (see the sketch below).
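For reference, a minimal sketch of where those settings live, assuming the standard HyperConverged CR layout used by OpenShift Virtualization; the resource name, namespace, and the values shown are illustrative, not recommendations:

# Hedged sketch: field names follow the HyperConverged liveMigrationConfig API,
# values below are examples only and should be tuned per cluster.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    completionTimeoutPerGiB: 800        # allow slower migrations more time per GiB of memory
    progressTimeout: 150                # seconds without measurable progress before aborting
    bandwidthPerMigration: 64Mi         # per-migration bandwidth cap
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2

Raising completionTimeoutPerGiB or progressTimeout gives slow but progressing migrations more headroom, which is exactly the kind of tuning a progress percentage on the VMIM would let us justify.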
- links to
-
RHEA-2023:122979 OpenShift Virtualization 4.16.0 Images