-
Bug
-
Resolution: Done-Errata
-
Normal
-
2.6.0
-
None
-
False
-
None
-
False
-
-
-
Critical
Description of problem:
When performing a cold migration, a prime PVC is left over in the "Lost" state and is not deleted even after the migration plan is archived and deleted.
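For reference, a minimal way to spot PVCs stuck in the Lost phase across all namespaces (this is just one possible check; the jsonpath filter can be adapted as needed):

$ oc get pvc -A -o jsonpath='{range .items[?(@.status.phase=="Lost")]}{.metadata.namespace}{"\t"}{.metadata.name}{"\n"}{end}'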
Version-Release number of selected component (if applicable):
OCP 4.14.21
OCP Virt 4.14.5
MTV 2.6.0
How reproducible:
Always
Steps to Reproduce:
- Perform a cold migration from VMware to OCP. A VDDK image is provided.
Actual results:
There's a PVC in Lost state:
$ oc get pvc
NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                                  AGE
prime-6c399c50-55d9-4db7-bfa1-2fece2993dfb   Lost     pvc-3ab33ddc-eaa8-45e7-b82a-96874663c434   0                         ocs-storagecluster-ceph-rbd-virtualization   18m
vmware-test-vm-4804-9jd56                    Bound    pvc-3ab33ddc-eaa8-45e7-b82a-96874663c434   16Gi       RWX            ocs-storagecluster-ceph-rbd-virtualization   18m
The PVC is annotated with cdi.kubevirt.io/storage.pod.retainAfterCompletion: "true", as shown in the full manifest below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: ""
    cdi.kubevirt.io/storage.condition.running: "false"
    cdi.kubevirt.io/storage.condition.running.message: ""
    cdi.kubevirt.io/storage.condition.running.reason: Completed
    cdi.kubevirt.io/storage.contentType: kubevirt
    cdi.kubevirt.io/storage.import.importPodName: importer-prime-6c399c50-55d9-4db7-bfa1-2fece2993dfb
    cdi.kubevirt.io/storage.import.source: none
    cdi.kubevirt.io/storage.pod.phase: Succeeded
    cdi.kubevirt.io/storage.pod.restarts: "0"
    cdi.kubevirt.io/storage.pod.retainAfterCompletion: "true"   # <---
    cdi.kubevirt.io/storage.populator.kind: VolumeImportSource
    cdi.kubevirt.io/storage.preallocation.requested: "false"
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    sidecar.istio.io/inject: "false"
    volume.beta.kubernetes.io/storage-provisioner: openshift-storage.rbd.csi.ceph.com
    volume.kubernetes.io/storage-provisioner: openshift-storage.rbd.csi.ceph.com
  creationTimestamp: "2024-04-29T09:47:59Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: containerized-data-importer
    app.kubernetes.io/component: storage
    app.kubernetes.io/managed-by: cdi-controller
    app.kubernetes.io/part-of: hyperconverged-cluster
    app.kubernetes.io/version: 4.14.5
  name: prime-6c399c50-55d9-4db7-bfa1-2fece2993dfb
  namespace: case-03802286
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    controller: true
    kind: PersistentVolumeClaim
    name: vmware-test-vm-4804-9jd56
    uid: 6c399c50-55d9-4db7-bfa1-2fece2993dfb
  resourceVersion: "58593199"
  uid: 3ab33ddc-eaa8-45e7-b82a-96874663c434
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: "17179869184"
  storageClassName: ocs-storagecluster-ceph-rbd-virtualization
  volumeMode: Block
  volumeName: pvc-3ab33ddc-eaa8-45e7-b82a-96874663c434
status:
  phase: Lost   # <----
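A quick way to confirm the annotation on the prime PVC (the PVC name and namespace below are the ones from this reproduction; substitute your own):

$ oc get pvc prime-6c399c50-55d9-4db7-bfa1-2fece2993dfb -n case-03802286 -o yaml | grep 'storage.pod.retainAfterCompletion'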
Expected results:
No PVCs in Lost state
Additional info:
I see there is a pull request to disable FEATURE_RETAIN_PRECOPY_IMPORTER_PODS by default; I think that should fix this problem.
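As a sanity check, and assuming the feature gate is surfaced as an environment variable on the controller deployment (the deployment name forklift-controller and namespace openshift-mtv are the usual defaults, not confirmed here), its current value can be inspected with:

$ oc set env deployment/forklift-controller -n openshift-mtv --list | grep FEATURE_RETAIN_PRECOPY_IMPORTER_PODS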
- is duplicated by
-
CNV-42509 MTV - imported VMWare fails to cleanup Lost pvc's and delete vm fails to delete pvc's
- Closed
-
MTV-1238 Prime PVCs in Lost state stay after VM migration
- Closed
-
MTV-1239 importer pods are not deleted after warm VM migration from RHV
- Closed
- relates to
-
MTV-1293 populator pod not cleaned up after successful migration and pvc left in lost
- Closed
- links to
-
RHBA-2024:132884 MTV 2.6.3 Images