Bug
Resolution: Unresolved
Blocker
CNV v4.15.8
CNV Storage 265
Critical
Customer Escalated
Description of problem:
VM snapshot creation fails when several VolumeSnapshotClasses match the CSI driver used by CDI, even if the desired VolumeSnapshotClass is set in the StorageProfile's .spec.snapshotClass field.
Version-Release number of selected component (if applicable):
kubevirt-hyperconverged-operator.v4.15.8
How reproducible:
Always
Steps to Reproduce:
1. Create an additional VolumeSnapshotClass for your CSI driver. None of them are annotated with "snapshot.storage.kubernetes.io/is-default-class":

   NAME                                            DRIVER                               DELETIONPOLICY   AGE
   ocs-storagecluster-rbdplugin-retain-snapclass   openshift-storage.rbd.csi.ceph.com   Retain           73m
   ocs-storagecluster-rbdplugin-snapclass          openshift-storage.rbd.csi.ceph.com   Delete           50d

2. Configure the StorageProfile to use one of the VolumeSnapshotClasses:

   apiVersion: cdi.kubevirt.io/v1beta1
   kind: StorageProfile
   metadata:
     name: ocs-storagecluster-ceph-rbd-virtualization
   spec:
     snapshotClass: ocs-storagecluster-rbdplugin-snapclass

3. Create a VM snapshot.
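As a possible mitigation sketch (an assumption, not verified against this bug): since the reproduction steps note that neither class carries the "snapshot.storage.kubernetes.io/is-default-class" annotation, marking the desired class as the default for its driver might leave only one candidate and sidestep the ambiguity. A manifest for that workaround could look like:

```yaml
# Hypothetical mitigation sketch: annotate the desired VolumeSnapshotClass
# as the default for its driver. Field values are taken from the
# reproduction steps above; applying this is untested against this bug.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ocs-storagecluster-rbdplugin-snapclass
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: openshift-storage.rbd.csi.ceph.com
deletionPolicy: Delete
```

Even if this workaround helps, the StorageProfile's explicit .spec.snapshotClass should still be honored on its own.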
Actual results:
VM snapshot creation fails. In the VM status we can see:

   status:
     volumeSnapshotStatuses:
     - enabled: false
       name: rootdisk
       reason: 2 matching VolumeSnapshotClasses for ocs-storagecluster-ceph-rbd-virtualization
Expected results:
The VolumeSnapshotClass selected in the StorageProfile's .spec.snapshotClass is honored.
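The expected behavior can be sketched as a small selection routine. This is an illustration only, not CDI's actual code: the function name, data shapes, and error text are assumptions, with the driver and class names taken from the reproduction steps above.

```python
def select_snapshot_class(classes, driver, profile_snapshot_class=None):
    """Sketch of expected VolumeSnapshotClass selection (not CDI's real code).

    An explicit StorageProfile spec.snapshotClass should win even when
    several classes match the driver; ambiguity is only an error when no
    explicit choice was made.
    """
    matching = [c for c in classes if c["driver"] == driver]

    # Expected behavior: honor the explicitly configured snapshotClass.
    if profile_snapshot_class is not None:
        for c in matching:
            if c["name"] == profile_snapshot_class:
                return c["name"]
        raise ValueError(
            f"snapshotClass {profile_snapshot_class!r} does not match driver {driver}"
        )

    # Without an explicit choice, multiple candidates are genuinely ambiguous,
    # which is the failure reported in the VM status above.
    if len(matching) != 1:
        raise ValueError(f"{len(matching)} matching VolumeSnapshotClasses")
    return matching[0]["name"]


# The two classes from the reproduction steps.
classes = [
    {"name": "ocs-storagecluster-rbdplugin-retain-snapclass",
     "driver": "openshift-storage.rbd.csi.ceph.com"},
    {"name": "ocs-storagecluster-rbdplugin-snapclass",
     "driver": "openshift-storage.rbd.csi.ceph.com"},
]

# With spec.snapshotClass set, selection should succeed despite two matches.
print(select_snapshot_class(classes, "openshift-storage.rbd.csi.ceph.com",
                            "ocs-storagecluster-rbdplugin-snapclass"))
```

Under this sketch, omitting the third argument reproduces the "2 matching VolumeSnapshotClasses" failure, while passing it returns the configured class, which is the behavior this report expects.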
Additional info: