Sub-task
Resolution: Unresolved
Component: Quality / Stability / Reliability
Description of problem:
The VM had the following disk:
% oc get vm rhel9-azure-gerbil-11 -o yaml | yq '.spec.template.spec.volumes,.spec.dataVolumeTemplates'
- dataVolume:
    name: rhel9-azure-gerbil-11
  name: rootdisk
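For reference, the dataVolumeTemplates entry for such a VM typically looks roughly like the sketch below. The sourceRef and storage size are assumptions for illustration; only the template name matches the output above. This is the template the new DV gets created from when the volume reference is rewritten.
dataVolumeTemplates:
- metadata:
    name: rhel9-azure-gerbil-11
  spec:
    sourceRef:
      kind: DataSource
      name: rhel9            # assumed boot source
      namespace: openshift-virtualization-os-images
    storage:
      resources:
        requests:
          storage: 30Gi      # assumed size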
I tried to change the disk interface from virtio to scsi. Doing this provisioned a new volume and mapped it to the VM:
% oc get vm rhel9-azure-gerbil-11 -o yaml | yq '.spec.template.spec.volumes'
- dataVolume:
    name: dv-rhel9-azure-gerbil-11-rootdisk-68bwex
  name: rootdisk
The new DV is provisioned from the data in dataVolumeTemplates, so all data on the VM's original disk is lost and the old DV has to be remapped manually to recover the VM; a recovery sketch follows.
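A minimal recovery sketch, assuming rootdisk is the first entry in .spec.template.spec.volumes and using the names from the outputs above (verify both before applying):
# Point the rootdisk volume back at the original DV
% oc patch vm rhel9-azure-gerbil-11 --type=json \
  -p '[{"op": "replace", "path": "/spec/template/spec/volumes/0/dataVolume/name", "value": "rhel9-azure-gerbil-11"}]'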
I cannot reproduce this issue in 4.19; it looks like it may have been fixed here: https://github.com/kubevirt-ui/kubevirt-plugin/pull/2737/files#diff-863d46976b39e1069b689f10d5c5136755bd1664066c802c599922c4bebe3a56L135
Version-Release number of selected component (if applicable):
OpenShift Virtualization 4.18.17
How reproducible:
100%
Steps to Reproduce:
1. Create a new VM and note down the DV name attached to the VM.
2. In Configuration => Storage, edit the "disk interface" of the VM and save (only the disk bus should change, as sketched after these steps).
3. The status of the VM changes to Provisioning and a new disk is attached to the VM.
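For clarity, the only field the edit is expected to touch is the disk bus in the VM template; a sketch of the relevant part of the spec (field names per the KubeVirt VM API, values assumed from this report):
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio   # the edit should change only this value to scsi
            name: rootdisk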
Actual results:
Editing the disk driver/interface of a VM provisions a new data volume.
Expected results:
It should only change the disk interface of the VM and should not provision a new DV.
Additional info: