- Bug
- Resolution: Unresolved
- Major
- CNV v4.21.0
- Product / Portfolio Work
- 8
- CNV Storage Sprint 284, CNV Storage Sprint 285
Description of problem:
Storage migration on a running VM fails when migrating from gcnv-flex to gcnv-flex (the same storage class). The storage data copy succeeds, but the subsequent migration fails with "Permission denied" when QEMU on the target node attempts to access the disk.
Although the issue is triggered by a storage migration, the failure occurs in the live migration that the storage migration implicitly triggers when the VM is running (see Additional info section below).
virError(Code=1, Domain=10, Message='internal error: process exited while connecting to monitor:
qemu-kvm: -blockdev {"driver":"file","filename":"/var/run/kubevirt-private/vmi-disks/rootdisk/disk.img",...}:
Could not open '/var/run/kubevirt-private/vmi-disks/rootdisk/disk.img': Permission denied')
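A quick way to confirm the symptom is to compare the UID QEMU runs as with the ownership and mode of the disk image inside the virt-launcher pod. This is a sketch: the pod name placeholder must be replaced with your actual target virt-launcher pod, and the `compute` container name and `kubevirt.io=virt-launcher` label are KubeVirt defaults that may differ in your deployment.

```shell
# List the virt-launcher pods for the VM (pick the target pod from the output)
oc get pods -n storage-mig-test -l kubevirt.io=virt-launcher

# Show the UID the compute container runs as
oc exec -n storage-mig-test <virt-launcher-pod> -c compute -- id

# Show numeric owner/group and mode of the disk image QEMU fails to open
oc exec -n storage-mig-test <virt-launcher-pod> -c compute -- \
  ls -ln /var/run/kubevirt-private/vmi-disks/rootdisk/
```

If the UID reported by `id` has no read/write bit on `disk.img`, that matches the EACCES QEMU reports.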
Version-Release number of selected component (if applicable):
cnv-4.21 with the gcnv-flex storage class, migrating to the same storage class (current test cluster: test-gcnv6)
How reproducible:
100% of the time when performing a storage migration on gcnv-flex.
Steps to Reproduce:
1. Create namespace and running VM
# Create test namespaces
oc create namespace storage-mig-test
oc create namespace test-mig-ns
# Create a running VM using the GCNV storage class (gcnv-flex)
oc apply -f - <<EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: test-vm-for-migration
  namespace: storage-mig-test
spec:
  runStrategy: Always
  instancetype:
    kind: VirtualMachineClusterInstancetype
    name: u1.small
  preference:
    kind: VirtualMachineClusterPreference
    name: fedora
  template:
    spec:
      domain:
        devices: {}
      volumes:
      - name: rootdisk
        dataVolume:
          name: test-vm-dv
  dataVolumeTemplates:
  - metadata:
      name: test-vm-dv
    spec:
      sourceRef:
        kind: DataSource
        name: fedora
        namespace: openshift-virtualization-os-images
      storage:
        storageClassName: gcnv-flex
EOF
# Wait for VM to be running
oc get vm -n storage-mig-test -w
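Instead of watching interactively, the wait can be scripted. A sketch using `oc wait` on the VirtualMachine's Ready condition:

```shell
# Block until the VirtualMachine reports Ready, up to 5 minutes
oc wait vm/test-vm-for-migration -n storage-mig-test \
  --for=condition=Ready --timeout=5m
```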
2. Create storage migration plan
oc apply -f - <<EOF
apiVersion: migrations.kubevirt.io/v1alpha1
kind: MultiNamespaceVirtualMachineStorageMigrationPlan
metadata:
  name: test-mig-plan
  namespace: test-mig-ns
spec:
  namespaces:
  - name: storage-mig-test
    virtualMachines:
    - name: test-vm-for-migration
      targetMigrationPVCs:
      - volumeName: rootdisk
        destinationPVC:
          name: test-vm-dv-mig
          storageClassName: gcnv-flex
EOF
3. Trigger migration
oc apply -f - <<EOF
apiVersion: migrations.kubevirt.io/v1alpha1
kind: MultiNamespaceVirtualMachineStorageMigration
metadata:
  name: test-migration
  namespace: test-mig-ns
spec:
  multiNamespaceVirtualMachineStorageMigrationPlanRef:
    name: test-mig-plan
EOF
Wait around 5 minutes, then observe the following:
# Watch migration status - it gets stuck in WaitForLiveMigrationToComplete
oc get MultiNamespaceVirtualMachineStorageMigration -n test-mig-ns -w
# Check live migration failures
oc get virtualmachineinstancemigration -n storage-mig-test
# See the Permission denied error
oc get vmi test-vm-for-migration -n storage-mig-test -o yaml | grep -A30 "migrationState:"
# Check events
oc get events -n storage-mig-test --sort-by='.lastTimestamp' | tail -20
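To pull only the failure details instead of grepping the full YAML, a jsonpath query against the VMI's standard `status.migrationState` stanza also works (a sketch):

```shell
# Print just the migrationState of the VMI, which carries the failure reason
oc get vmi test-vm-for-migration -n storage-mig-test \
  -o jsonpath='{.status.migrationState}'
```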
Could not open '/var/run/kubevirt-private/vmi-disks/rootdisk/disk.img': Permission denied
Actual results:
Storage migration gets stuck in WaitForLiveMigrationToComplete phase. The data copy to the new PVC succeeds, but live migration fails repeatedly with "Permission denied" error. QEMU on the target node cannot access the disk file. The VM keeps running on the original storage but migration never completes.
oc get events -n storage-mig-test --sort-by='.lastTimestamp' | tail -10
29s Normal Started pod/virt-launcher-test-vm-for-migration-bp5hs Started container compute
28s Normal PreparingTarget virtualmachineinstance/test-vm-for-migration VirtualMachineInstance Migration Target Prepared.
28s Normal SuccessfulHandOver virtualmachineinstancemigration/kubevirt-workload-update-kd7vt Migration target pod is ready for preparation by virt-handler.
28s Normal PreparingTarget virtualmachineinstance/test-vm-for-migration Migration Target is listening at 10.128.3.150, on ports: 38211,45823,40727
28s Normal Migrating virtualmachineinstance/test-vm-for-migration VirtualMachineInstance is migrating.
27s Warning Migrated virtualmachineinstance/test-vm-for-migration VirtualMachineInstance migration uid 7dd08cad-364b-48b6-9cc9-10eaf435f172 failed. reason:virError(Code=1, Domain=10, Message='internal error: process exited while connecting to monitor: 2026-02-08T19:20:25.713554Z qemu-kvm: -blockdev {"driver":"file","filename":"/var/run/kubevirt-private/vmi-disks/rootdisk/disk.img","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap","cache":{"direct":true,"no-flush":false}}: Could not open '/var/run/kubevirt-private/vmi-disks/rootdisk/disk.img': Permission denied')
27s Warning FailedMigration virtualmachineinstancemigration/kubevirt-workload-update-kd7vt source node reported migration failed
26s Warning MigrationBackoff virtualmachineinstance/test-vm-for-migration backoff migrating vmi storage-mig-test/test-vm-for-migration
26s Normal SuccessfulCreate virtualmachineinstance/test-vm-for-migration Created Migration kubevirt-workload-update-f4xfg for automated workload update
24s Normal Killing pod/virt-launcher-test-vm-for-migration-bp5hs Stopping container guest-console-log
oc describe MultiNamespaceVirtualMachineStorageMigration test-migration -n test-mig-ns
Name:         test-migration
Namespace:    test-mig-ns
Labels:       <none>
Annotations:  <none>
API Version:  migrations.kubevirt.io/v1alpha1
Kind:         MultiNamespaceVirtualMachineStorageMigration
Metadata:
  Creation Timestamp:  2026-02-08T19:01:40Z
  Generation:          1
  Resource Version:    5651187
  UID:                 5ce9ca0c-168e-4f4c-b0a7-9044f01ff536
Spec:
  Multi Namespace Virtual Machine Storage Migration Plan Ref:
    Name:  test-mig-plan
Status:
  Namespaces:
    Name:   storage-mig-test
    Phase:  WaitForLiveMigrationToComplete
    Running Migrations:
      Name:  test-vm-for-migration
Events:  <none>
Expected results:
Storage migration completes successfully. The VM's disk is migrated to the new PVC and the VM continues running without interruption.
Additional info:
When the VM is running, the storage migration controller follows this logic:
1. Copy the disk data to the destination PVC (✅ this part succeeded).
2. Switch the running VM to the new disk. Because the VM is running, this is done via live migration: a VirtualMachineInstanceMigration CR is created, which is where the live migration starts.
3. KubeVirt tries to live-migrate the VM (❌ fails due to the disk permission error).
4. The storage migration waits forever in WaitForLiveMigrationToComplete.
All migration attempts basically fail:
oc get pods -n storage-mig-test
NAME                                        READY   STATUS      RESTARTS   AGE
virt-launcher-test-vm-for-migration-6z8fs   0/2     Error       0          68m
virt-launcher-test-vm-for-migration-bp5hs   0/2     Error       0          63m
virt-launcher-test-vm-for-migration-c79bn   2/2     Running     0          87m
virt-launcher-test-vm-for-migration-cm6zg   0/2     Error       0          20m
virt-launcher-test-vm-for-migration-dq846   0/2     Error       0          25m
virt-launcher-test-vm-for-migration-dsqp6   0/2     Error       0          52m
virt-launcher-test-vm-for-migration-ffp78   0/2     Error       0          58m
virt-launcher-test-vm-for-migration-ftjp7   0/2     Error       0          47m
virt-launcher-test-vm-for-migration-hm9jq   0/2     Error       0          14m
virt-launcher-test-vm-for-migration-lgp67   0/2     Error       0          9m29s
virt-launcher-test-vm-for-migration-ln945   0/2     Error       0          4m5s
virt-launcher-test-vm-for-migration-lxtnc   0/2     Error       0          73m
virt-launcher-test-vm-for-migration-r6wn9   0/2     Error       0          71m
virt-launcher-test-vm-for-migration-tbcrd   0/2     Error       0          41m
virt-launcher-test-vm-for-migration-ww7pp   0/2     Error       0          36m
virt-launcher-test-vm-for-migration-z2l99   0/2     Error       0          73m
virt-launcher-test-vm-for-migration-zhwgn   0/2     Completed   0          81m
virt-launcher-test-vm-for-migration-zxxzr   0/2     Error       0          31m
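To reset the environment between reproduction attempts, deleting the migration CRs stops the retry loop. A sketch (assumes `vmim` as the short name for virtualmachineinstancemigration, and that deleting the storage migration CR is sufficient cleanup for your setup):

```shell
# Stop the stuck storage migration
oc delete multinamespacevirtualmachinestoragemigration test-migration -n test-mig-ns

# Remove the retrying live-migration objects; failed launcher pods are
# garbage-collected with their migrations
oc delete vmim --all -n storage-mig-test
```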