Bug
Resolution: Done-Errata
Blocker
CNV v4.19.0
None
Quality / Stability / Reliability
8
False
False
CNV v4.99.0.rhel9-2360, CNV v4.19.3.rhel9-10
Release Notes
Known Issue
Done
Yes
CNV Storage 274, CNV Storage 275
None
Description of problem:
1. Storage migrate a VM with MTC - succeeded
2. Take a VMSnapshot - succeeded
3. Try to restore from this snapshot - failed
Version-Release number of selected component (if applicable):
4.19
How reproducible:
Always
Steps to Reproduce:
1. Create a VM
2. Storage migrate with MTC
3. Snapshot the VM, then try to restore
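Step 3 corresponds to snapshot/restore objects like the following (a sketch reconstructed from the object names that appear in this report, not the exact manifests used by the test):

```yaml
apiVersion: snapshot.kubevirt.io/v1beta1
kind: VirtualMachineSnapshot
metadata:
  name: vm-for-test-1746621333-8673966-snapshot-1
  namespace: storage-migration-test-mtc-storage-class-migration
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: vm-for-test-1746621333-8673966
---
apiVersion: snapshot.kubevirt.io/v1beta1
kind: VirtualMachineRestore
metadata:
  name: restore-vm-for-test-1746621333-8673966-snapshot-1-1746623659661
  namespace: storage-migration-test-mtc-storage-class-migration
spec:
  target:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: vm-for-test-1746621333-8673966
  virtualMachineSnapshotName: vm-for-test-1746621333-8673966-snapshot-1
```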
Actual results:
$ oc get vmrestore -n storage-migration-test-mtc-storage-class-migration restore-vm-for-test-1746621333-8673966-snapshot-1-1746623659661
NAME                                                              TARGETKIND       TARGETNAME                       COMPLETE   RESTORETIME   ERROR
restore-vm-for-test-1746621333-8673966-snapshot-1-1746623659661   VirtualMachine   vm-for-test-1746621333-8673966   false

$ oc get vmrestore -n storage-migration-test-mtc-storage-class-migration restore-vm-for-test-1746621333-8673966-snapshot-1-1746623659661 -oyaml
apiVersion: snapshot.kubevirt.io/v1beta1
kind: VirtualMachineRestore
metadata:
  creationTimestamp: "2025-05-07T13:14:21Z"
  finalizers:
  - snapshot.kubevirt.io/vmrestore-protection
  generation: 1
  name: restore-vm-for-test-1746621333-8673966-snapshot-1-1746623659661
  namespace: storage-migration-test-mtc-storage-class-migration
  ownerReferences:
  - apiVersion: kubevirt.io/v1
    blockOwnerDeletion: false
    kind: VirtualMachine
    name: vm-for-test-1746621333-8673966
    uid: 75333f01-d5e7-4e46-a973-c857ca23dce1
  resourceVersion: "1458486"
  uid: 536f96b0-fa42-415f-bdb5-f16df9dcc3c2
spec:
  target:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: vm-for-test-1746621333-8673966
  virtualMachineSnapshotName: vm-for-test-1746621333-8673966-snapshot-1
status:
  complete: false
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-05-07T13:14:22Z"
    reason: 'admission webhook "virtualmachine-validator.kubevirt.io" denied the request: DataVolumeTemplate entry spec.dataVolumeTemplate[0] must be referenced in the VMI template''s ''volumes'' list'
    status: "False"
    type: Progressing
  - lastProbeTime: null
    lastTransitionTime: "2025-05-07T13:14:22Z"
    reason: 'admission webhook "virtualmachine-validator.kubevirt.io" denied the request: DataVolumeTemplate entry spec.dataVolumeTemplate[0] must be referenced in the VMI template''s ''volumes'' list'
    status: "False"
    type: Ready
  deletedDataVolumes:
  - fedora-volume-mig-fmnp
  restores:
  - persistentVolumeClaim: restore-536f96b0-fa42-415f-bdb5-f16df9dcc3c2-dv-disk
    volumeName: dv-disk
    volumeSnapshotName: vmsnapshot-3e4a9c11-0926-4551-b6d3-f28d621431b0-volume-dv-disk
Restore DV was not created:
$ oc get dv -A -w
NAMESPACE                                            NAME                     PHASE       PROGRESS   RESTARTS   AGE
storage-migration-test-mtc-storage-class-migration   fedora-volume            Succeeded   100.0%                38m
storage-migration-test-mtc-storage-class-migration   fedora-volume-mig-fmnp   Succeeded   100.0%                35m
Restore PVC was created:
$ oc get pvc -A
NAMESPACE                                            NAME                                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                                 VOLUMEATTRIBUTESCLASS   AGE
...
storage-migration-test-mtc-storage-class-migration   fedora-volume                                          Bound    pvc-9b545dae-1d4c-4190-b4c4-792e609b6ecb   149Gi      RWO            hostpath-csi-basic                           <unset>                 39m
storage-migration-test-mtc-storage-class-migration   fedora-volume-mig-fmnp                                 Bound    pvc-795420ea-65f0-446d-9973-923fd5ab2ddc   30Gi       RWX            ocs-storagecluster-ceph-rbd-virtualization   <unset>                 36m
storage-migration-test-mtc-storage-class-migration   restore-536f96b0-fa42-415f-bdb5-f16df9dcc3c2-dv-disk   Bound    pvc-23c99e3d-6fec-47b8-addd-b8db7a56b3e3   30Gi       RWX            ocs-storagecluster-ceph-rbd-virtualization   <unset>                 58s
Expected results:
Restore succeeded, restore DV created
Additional info:
VM -oyaml after trying to restore a snapshot:
[cloud-user@ocp-psi-executor-xl ~]$ oc get vm -n storage-migration-test-mtc-storage-class-migration vm-for-test-1746621333-8673966 -oyaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
    kubemacpool.io/transaction-timestamp: "2025-05-07T13:13:58.656911403Z"
    kubevirt.io/latest-observed-api-version: v1
    kubevirt.io/storage-observed-api-version: v1
  creationTimestamp: "2025-05-07T12:35:33Z"
  finalizers:
  - kubevirt.io/virtualMachineControllerFinalize
  generation: 4
  name: vm-for-test-1746621333-8673966
  namespace: storage-migration-test-mtc-storage-class-migration
  resourceVersion: "1458465"
  uid: 75333f01-d5e7-4e46-a973-c857ca23dce1
spec:
  dataVolumeTemplates:
  - apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      creationTimestamp: null
      name: fedora-volume-mig-fmnp
    spec:
      sourceRef:
        kind: DataSource
        name: fedora
        namespace: openshift-virtualization-os-images
      storage:
        resources:
          requests:
            storage: 30Gi
        storageClassName: hostpath-csi-basic
  instancetype:
    kind: VirtualMachineClusterInstancetype
    name: u1.small
  preference:
    kind: VirtualMachineClusterPreference
    name: fedora
  runStrategy: Halted
  template:
    metadata:
      creationTimestamp: null
      labels:
        debugLogs: "true"
        kubevirt.io/domain: vm-for-test-1746621333-8673966
        kubevirt.io/vm: vm-for-test-1746621333-8673966
    spec:
      architecture: amd64
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: dv-disk
          - disk:
              bus: virtio
            name: cloudinitdisk
          rng: {}
        machine:
          type: pc-q35-rhel9.4.0
        resources: {}
      evictionStrategy: None
      volumes:
      - dataVolume:
          name: fedora-volume-mig-fmnp
        name: dv-disk
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            chpasswd:
              expire: false
              password: password
              user: fedora
            ssh_pwauth: true
            ssh_authorized_keys:
              [ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDOOxfMHCrEFjexQa0EqkugXGOlIFV9r0nkz9bS3b22bvBoHuMh8Oj7ePSC1cM2fZJcuJpjjyfcHYQl3l8tv9LC7uE/6MvyPiZPcNYzC/y9rl34Ey1k3GgLqTZk89n2BL7uROrwUqC6je6YhwmIeC+r/P1NeO5vp8JZI2IvBp+1vQVZNw9wR2YzDo/FgU17PiVIFiUN367zhfJnqa/HhFvxDoFLoLrcleFLVjibhCTrYYP7WHVFmE53YZhFHWtxLbYx5uw4+zoxfbNgSogO0b8iRtU5jlh8eFwt/qDYtNsTRULN37qEvWhvT6LTlMPlhm0ggYUimi8Avf28vMSoqivp root@exec1.rdocloud]
            runcmd: ['grep ssh-rsa /etc/crypto-policies/back-ends/opensshserver.config || sudo update-crypto-policies --set LEGACY || true', "sudo sed -i 's/^#\\?PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config", 'sudo systemctl enable sshd', 'sudo systemctl restart sshd']
        name: cloudinitdisk
  updateVolumesStrategy: Migration
status:
  conditions:
  - lastProbeTime: "2025-05-07T13:14:05Z"
    lastTransitionTime: "2025-05-07T13:14:05Z"
    message: VMI does not exist
    reason: VMINotExists
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    message: All of the VMI's DVs are bound and not running
    reason: AllDVsReady
    status: "True"
    type: DataVolumesReady
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: LiveMigratable
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: StorageLiveMigratable
  desiredGeneration: 4
  instancetypeRef:
    controllerRevisionRef:
      name: vm-for-test-1746621333-8673966-u1.small-v1beta1-ee7b007b-5adf-41df-a150-fe7990d21d80-1
    kind: VirtualMachineClusterInstancetype
    name: u1.small
  observedGeneration: 2
  preferenceRef:
    controllerRevisionRef:
      name: vm-for-test-1746621333-8673966-fedora-v1beta1-6b942c70-acdf-4b66-b115-cb2dde251878-1
    kind: VirtualMachineClusterPreference
    name: fedora
  printableStatus: Stopped
  restoreInProgress: restore-vm-for-test-1746621333-8673966-snapshot-1-1746623659661
  runStrategy: Halted
  volumeSnapshotStatuses:
  - enabled: true
    name: dv-disk
  - enabled: false
    name: cloudinitdisk
    reason: Snapshot is not supported for this volumeSource type [cloudinitdisk]
  volumeUpdateState:
    volumeMigrationState:
      migratedVolumes:
      - destinationPVCInfo:
          claimName: fedora-volume-mig-fmnp
          volumeMode: Block
        sourcePVCInfo:
          claimName: fedora-volume
          volumeMode: Filesystem
        volumeName: dv-disk
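The webhook denial reported above comes from a VM-level consistency rule: every entry in spec.dataVolumeTemplates must be referenced by name in the VMI template's volumes list. A minimal sketch of that rule (not the actual KubeVirt validator code; the restored-spec names below are illustrative, shortened from the restore PVC name in this report) shows how a restore that renames the data volume template without rewriting the matching volume reference trips it:

```python
# Sketch of the consistency rule enforced by the
# virtualmachine-validator.kubevirt.io admission webhook (illustrative only).

def unreferenced_templates(vm_spec):
    """Return names of dataVolumeTemplates not referenced in the
    VMI template's volumes list."""
    volume_dv_names = {
        v["dataVolume"]["name"]
        for v in vm_spec["template"]["spec"]["volumes"]
        if "dataVolume" in v
    }
    return [
        t["metadata"]["name"]
        for t in vm_spec.get("dataVolumeTemplates", [])
        if t["metadata"]["name"] not in volume_dv_names
    ]

# The migrated VM from this report passes the check: the template entry and
# the volume both use the post-migration name.
migrated = {
    "dataVolumeTemplates": [{"metadata": {"name": "fedora-volume-mig-fmnp"}}],
    "template": {"spec": {"volumes": [
        {"name": "dv-disk", "dataVolume": {"name": "fedora-volume-mig-fmnp"}},
        {"name": "cloudinitdisk", "cloudInitNoCloud": {}},
    ]}},
}
assert unreferenced_templates(migrated) == []

# A restored spec whose template was renamed (hypothetical name) while the
# volume still points at the old data volume is rejected by the rule.
restored = {
    "dataVolumeTemplates": [{"metadata": {"name": "restore-536f96b0-dv-disk"}}],
    "template": {"spec": {"volumes": [
        {"name": "dv-disk", "dataVolume": {"name": "fedora-volume-mig-fmnp"}},
        {"name": "cloudinitdisk", "cloudInitNoCloud": {}},
    ]}},
}
assert unreferenced_templates(restored) == ["restore-536f96b0-dv-disk"]
```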