Migration Toolkit for Virtualization
MTV-1238

Prime PVCs in Lost state remain after VM migration


    • Type: Bug
    • Resolution: Duplicate

      Component versions:

      • OCP version 4.14.11
      • MTV operator version 2.6.2
      • OpenShift Virtualization version 4.14.6

      After migrating a virtual machine from VMware to OpenShift, a prime PVC remains in the target namespace. This PVC is in a Lost state:

      $ oc get pvc
      NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                                 AGE
      migration3-vm-18928-jzjrz                    Bound    pvc-44ba8d33-6370-4aa3-bccb-005f9c8ec2d0   10Gi       RWX            ocs-storagecluster-ceph-rbd-virtualization   18h
      prime-9b7dbbd7-7893-4088-86b6-0ccbbaa4403c   Lost     pvc-44ba8d33-6370-4aa3-bccb-005f9c8ec2d0   0                         ocs-storagecluster-ceph-rbd-virtualization   18h 
      $ oc get pvc prime-9b7dbbd7-7893-4088-86b6-0ccbbaa4403c -o yaml 
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        annotations:
          cdi.kubevirt.io/storage.bind.immediate.requested: ""
          cdi.kubevirt.io/storage.condition.running: "false"
          cdi.kubevirt.io/storage.condition.running.message: ""
          cdi.kubevirt.io/storage.condition.running.reason: Completed
          cdi.kubevirt.io/storage.contentType: kubevirt
          cdi.kubevirt.io/storage.import.importPodName: importer-prime-9b7dbbd7-7893-4088-86b6-0ccbbaa4403c
          cdi.kubevirt.io/storage.import.source: none
          cdi.kubevirt.io/storage.pod.phase: Succeeded
          cdi.kubevirt.io/storage.pod.restarts: "0"
          cdi.kubevirt.io/storage.pod.retainAfterCompletion: "true"
          cdi.kubevirt.io/storage.populator.kind: VolumeImportSource
          cdi.kubevirt.io/storage.preallocation.requested: "false"
          pv.kubernetes.io/bind-completed: "yes"
          pv.kubernetes.io/bound-by-controller: "yes"
          sidecar.istio.io/inject: "false"
          volume.beta.kubernetes.io/storage-provisioner: openshift-storage.rbd.csi.ceph.com
          volume.kubernetes.io/storage-provisioner: openshift-storage.rbd.csi.ceph.com
        creationTimestamp: "2024-07-02T21:47:58Z"
        finalizers:
        - kubernetes.io/pvc-protection
        labels:
          app: containerized-data-importer
          app.kubernetes.io/component: storage
          app.kubernetes.io/managed-by: cdi-controller
          app.kubernetes.io/part-of: hyperconverged-cluster
          app.kubernetes.io/version: 4.14.6
        name: prime-9b7dbbd7-7893-4088-86b6-0ccbbaa4403c
        namespace: anosek-migration3
        ownerReferences:
        - apiVersion: v1
          blockOwnerDeletion: true
          controller: true
          kind: PersistentVolumeClaim
          name: migration3-vm-18928-jzjrz
          uid: 9b7dbbd7-7893-4088-86b6-0ccbbaa4403c
        resourceVersion: "36051219"
        uid: 44ba8d33-6370-4aa3-bccb-005f9c8ec2d0
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: "10737418240"
        storageClassName: ocs-storagecluster-ceph-rbd-virtualization
        volumeMode: Block
        volumeName: pvc-44ba8d33-6370-4aa3-bccb-005f9c8ec2d0
      status:
        phase: Lost 

      I would like the MTV tool to clean up the Lost PVCs. The closed issue CNV-31071 describes the same problem.

      A PVC in a Lost state is confusing: users may suspect an issue with the storage provider, since volumes are normally in the Bound state. Moreover, the OADP backup tool does not handle Lost PVCs gracefully; backups of namespaces containing Lost PVCs end in PartiallyFailed status.
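Until MTV cleans these up automatically, the leftovers can be found by filtering the output of oc get pvc -o json on the Lost phase and the prime- name prefix, and then removed with oc delete pvc. A minimal Go sketch of that filter (the helper name lostPrimePVCs and the sample data are made up for illustration; this is not MTV code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// pvcList is a minimal subset of the PVC list returned by `oc get pvc -o json`.
type pvcList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase string `json:"phase"`
		} `json:"status"`
	} `json:"items"`
}

// lostPrimePVCs returns the names of prime PVCs stuck in the Lost phase.
func lostPrimePVCs(raw []byte) ([]string, error) {
	var list pvcList
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, err
	}
	var names []string
	for _, item := range list.Items {
		if item.Status.Phase == "Lost" && strings.HasPrefix(item.Metadata.Name, "prime-") {
			names = append(names, item.Metadata.Name)
		}
	}
	return names, nil
}

func main() {
	// Sample data mirroring the `oc get pvc` output above.
	raw := []byte(`{"items":[
		{"metadata":{"name":"migration3-vm-18928-jzjrz"},"status":{"phase":"Bound"}},
		{"metadata":{"name":"prime-9b7dbbd7-7893-4088-86b6-0ccbbaa4403c"},"status":{"phase":"Lost"}}]}`)
	names, err := lostPrimePVCs(raw)
	if err != nil {
		panic(err)
	}
	for _, n := range names {
		fmt.Println(n) // each of these names can be passed to `oc delete pvc`
	}
}
```

This is only a workaround sketch; deleting the prime PVC is safe here because the data has already been copied to the target PVC (migration3-vm-18928-jzjrz above), which owns the prime PVC via ownerReferences.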

      There is a variable FEATURE_RETAIN_PRECOPY_IMPORTER_PODS which, when set to false, prevents the annotation cdi.kubevirt.io/storage.pod.retainAfterCompletion from being added to the DataVolume object. This variable is set to true by default. Unfortunately, the variable has no effect, most likely because of an incorrect conditional expression:

      if !r.Plan.Spec.Warm || Settings.RetainPrecopyImporterPods { 

      should probably be corrected to:

      if !r.Plan.Spec.Warm && Settings.RetainPrecopyImporterPods { 

      The code above was introduced by commit 25f6228, which fixed Bug 2016290.
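The difference between the two expressions can be seen by tabulating them. With the current ||, a cold plan (Warm == false) makes the whole condition true regardless of the feature flag, so the retain annotation is always added; with the proposed &&, the flag is honored. A small standalone sketch (the function names current and proposed are mine, mirroring the snippet above):

```go
package main

import "fmt"

// current mirrors `!r.Plan.Spec.Warm || Settings.RetainPrecopyImporterPods`.
func current(warm, retain bool) bool { return !warm || retain }

// proposed mirrors the suggested `!r.Plan.Spec.Warm && Settings.RetainPrecopyImporterPods`.
func proposed(warm, retain bool) bool { return !warm && retain }

func main() {
	for _, warm := range []bool{false, true} {
		for _, retain := range []bool{false, true} {
			fmt.Printf("warm=%-5v retain=%-5v current=%-5v proposed=%v\n",
				warm, retain, current(warm, retain), proposed(warm, retain))
		}
	}
}
```

In particular, for a cold plan with the feature disabled (warm=false, retain=false), the current expression still evaluates to true, which is exactly the case that leaves the annotation, and ultimately the Lost prime PVC, behind.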

      In summary, the issue with the Lost PVCs can probably be solved in two steps:

      1. Fix the above conditional expression.
      2. Set FEATURE_RETAIN_PRECOPY_IMPORTER_PODS = false by default.

            Arik Hadas (ahadas@redhat.com)
            Ales Nosek (anosek@redhat.com)