Type: Bug
Resolution: Unresolved
Priority: Normal
Severity: Important
Affects Versions: CNV v4.16.0, CNV v4.15.3
Description of problem:
When installing OpenShift on top of another OpenShift cluster using CNV, with ODF configured against an external Ceph, and then installing KubeVirt on top of that cluster, the following happens: when OS images are imported, they are imported as VolumeSnapshots. It is unclear whether actual PVCs (and PVs) exist during the import, but once the import has completed, the virtualization-os-images namespace contains a number of dangling VolumeSnapshots without associated PVCs or PVs, even though each VolumeSnapshot lists a PVC as its owner.
This becomes a problem when tearing down that cluster: Ceph cannot clean up the PVCs because of these dangling VolumeSnapshots, which leaves wasted space in Ceph that requires manual cleanup.
Workaround we found (courtesy of [~alitke@redhat.com]), shown as a command sketch after this list:
- Set .spec.featureGates.enableCommonBootImageImport: false on the HyperConverged custom resource during deployment.
- Wait for the HyperConverged CR to become available.
- Find the default storage class on the cluster.
- Find the associated StorageProfile and patch .spec.dataImportCronSourceFormat: pvc.
- Patch the HyperConverged CR to import the images again: .spec.featureGates.enableCommonBootImageImport: true.
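A minimal command sketch of the workaround above. It assumes the default HyperConverged CR name kubevirt-hyperconverged in the openshift-cnv namespace and that the CR exposes an Available condition; <default-storage-class> is a placeholder for the storage class name found in the third step (adjust all names to your deployment):

  # Step 1: disable the common boot image import during deployment
  oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type merge \
    -p '{"spec":{"featureGates":{"enableCommonBootImageImport":false}}}'

  # Step 2: wait for the HyperConverged CR to become available
  oc wait hyperconverged kubevirt-hyperconverged -n openshift-cnv --for=condition=Available --timeout=10m

  # Step 3: find the default storage class on the cluster
  oc get storageclass -o jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}'

  # Step 4: patch the associated StorageProfile (named after the storage class) to use PVC sources
  oc patch storageprofile <default-storage-class> --type merge \
    -p '{"spec":{"dataImportCronSourceFormat":"pvc"}}'

  # Step 5: re-enable the common boot image import so the images are imported again
  oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type merge \
    -p '{"spec":{"featureGates":{"enableCommonBootImageImport":true}}}'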
Additionally, I added logic to delete all VMs on the cluster before we destroy it, so that VolumeSnapshots that may have been created by a user are also cleaned up; a command sketch follows.
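A sketch of that pre-teardown cleanup, assuming cluster-admin access and the KubeVirt vm shortname for VirtualMachine resources:

  # Delete all VirtualMachines in every namespace and wait for them to be gone
  oc delete vm --all --all-namespaces --wait=true

  # Verify that no user-created VolumeSnapshots remain before destroying the cluster
  oc get volumesnapshot --all-namespaces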
Version-Release number of selected component (if applicable):
4.15.3, 4.16.0
How reproducible:
Every time
Steps to Reproduce:
See above
Actual results:
Dangling VolumeSnapshots without associated PVCs or PVs remain in the virtualization-os-images namespace.
Expected results:
No dangling VolumeSnapshots.