Migration Toolkit for Virtualization / MTV-1095

Leftover PVCs in Lost state after cold migrations


    • Type: Bug
    • Resolution: Done-Errata
    • Priority: Normal
    • Fix Version/s: 2.6.3
    • Affects Version/s: 2.6.0
    • Component/s: Controller
    • Labels: None
    • Severity: Critical

      Description of problem:

      When performing a cold migration, a PVC is left behind in the "Lost" state and is not deleted even after the migration plan is archived and deleted.
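
      For reference, the archive-and-delete step can also be done with oc instead of the console. A minimal sketch, assuming a plan named vmware-test-plan in this namespace (placeholder name) and that archiving sets the Plan's spec.archived field:

      $ oc patch plans.forklift.konveyor.io vmware-test-plan -n case-03802286 --type merge -p '{"spec":{"archived":true}}'
      $ oc delete plans.forklift.konveyor.io vmware-test-plan -n case-03802286

      The leftover prime-* PVC described below remains in Lost state even after both commands complete.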

       

      Version-Release number of selected component (if applicable):

      OCP 4.14.21

      OCP Virt 4.14.5

      MTV 2.6.0

       

      How reproducible:

      Always

       

      Steps to Reproduce:

      1. Perform a cold migration from VMware to OCP Virtualization; the VDDK image is provided. (A minimal Plan sketch follows below.)
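
      For context, a cold migration corresponds to a Plan CR with warm: false. A minimal sketch, with placeholder plan, provider, and map names that are not taken from this case (the VDDK image itself is configured on the source VMware provider):

      apiVersion: forklift.konveyor.io/v1beta1
      kind: Plan
      metadata:
        name: vmware-test-plan            # placeholder
        namespace: case-03802286
      spec:
        warm: false                       # cold migration
        provider:
          source:
            name: vmware-provider         # placeholder VMware provider
            namespace: openshift-mtv
          destination:
            name: host
            namespace: openshift-mtv
        map:
          network:
            name: vmware-network-map      # placeholder NetworkMap
            namespace: openshift-mtv
          storage:
            name: vmware-storage-map      # placeholder StorageMap
            namespace: openshift-mtv
        targetNamespace: case-03802286
        vms:
          - name: vmware-test-vm-4804     # placeholder VM name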

       

      Actual results:

      There's a PVC in Lost state:

       

      $ oc get pvc
      NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                                 AGE
      prime-6c399c50-55d9-4db7-bfa1-2fece2993dfb   Lost     pvc-3ab33ddc-eaa8-45e7-b82a-96874663c434   0                         ocs-storagecluster-ceph-rbd-virtualization   18m
      vmware-test-vm-4804-9jd56                    Bound    pvc-3ab33ddc-eaa8-45e7-b82a-96874663c434   16Gi       RWX            ocs-storagecluster-ceph-rbd-virtualization   18m

      The prime PVC is annotated with cdi.kubevirt.io/storage.pod.retainAfterCompletion: "true", as seen in its manifest:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        annotations:
          cdi.kubevirt.io/storage.bind.immediate.requested: ""
          cdi.kubevirt.io/storage.condition.running: "false"
          cdi.kubevirt.io/storage.condition.running.message: ""
          cdi.kubevirt.io/storage.condition.running.reason: Completed
          cdi.kubevirt.io/storage.contentType: kubevirt
          cdi.kubevirt.io/storage.import.importPodName: importer-prime-6c399c50-55d9-4db7-bfa1-2fece2993dfb
          cdi.kubevirt.io/storage.import.source: none
          cdi.kubevirt.io/storage.pod.phase: Succeeded
          cdi.kubevirt.io/storage.pod.restarts: "0"
          cdi.kubevirt.io/storage.pod.retainAfterCompletion: "true"  <---
          cdi.kubevirt.io/storage.populator.kind: VolumeImportSource
          cdi.kubevirt.io/storage.preallocation.requested: "false"
          pv.kubernetes.io/bind-completed: "yes"
          pv.kubernetes.io/bound-by-controller: "yes"
          sidecar.istio.io/inject: "false"
          volume.beta.kubernetes.io/storage-provisioner: openshift-storage.rbd.csi.ceph.com
          volume.kubernetes.io/storage-provisioner: openshift-storage.rbd.csi.ceph.com
        creationTimestamp: "2024-04-29T09:47:59Z"
        finalizers:
        - kubernetes.io/pvc-protection
        labels:
          app: containerized-data-importer
          app.kubernetes.io/component: storage
          app.kubernetes.io/managed-by: cdi-controller
          app.kubernetes.io/part-of: hyperconverged-cluster
          app.kubernetes.io/version: 4.14.5
        name: prime-6c399c50-55d9-4db7-bfa1-2fece2993dfb
        namespace: case-03802286
        ownerReferences:
        - apiVersion: v1
          blockOwnerDeletion: true
          controller: true
          kind: PersistentVolumeClaim
          name: vmware-test-vm-4804-9jd56
          uid: 6c399c50-55d9-4db7-bfa1-2fece2993dfb
        resourceVersion: "58593199"
        uid: 3ab33ddc-eaa8-45e7-b82a-96874663c434
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: "17179869184"
        storageClassName: ocs-storagecluster-ceph-rbd-virtualization
        volumeMode: Block
        volumeName: pvc-3ab33ddc-eaa8-45e7-b82a-96874663c434
      status:
        phase: Lost <----
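
      The Lost phase matches the oc get pvc output above: consistent with the CDI volume-populator flow, the PV has been rebound from the prime PVC to the target PVC, leaving the prime PVC without a backing volume. This can be confirmed by checking the PV's claimRef, for example:

      $ oc get pv pvc-3ab33ddc-eaa8-45e7-b82a-96874663c434 -o jsonpath='{.spec.claimRef.name}{"\n"}'

      which is expected to print vmware-test-vm-4804-9jd56 rather than the prime PVC name.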

       

      Expected results:

      No PVCs in Lost state

       

      Additional info:

      I see this pull request to make FEATURE_RETAIN_PRECOPY_IMPORTER_PODS disabled by default. I think that should fix this problem.
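
      A quick way to check whether the feature is currently enabled on a cluster is to inspect the controller's environment. A sketch, assuming the default openshift-mtv namespace and the forklift-controller deployment name:

      $ oc get deployment forklift-controller -n openshift-mtv -o yaml | grep -A1 FEATURE_RETAIN_PRECOPY_IMPORTER_PODS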

       

            Assignee: Liran Rotenberg (lrotenbe)
            Reporter: Juan Orti (rhn-support-jortialc)