
MTV-3709: [TP] Provide an option to customize the ephemeral storage used during migration


    • MTV-3695 [RFE] Provide an option to customize the ephemeral storage used during migration
    • 33% To Do, 0% In Progress, 67% Done
      Technology Preview: Customizing ephemeral storage for improved migration stability::

      In {project-short} 2.11.0, you can customize the ephemeral storage used during migration. You can define a `storageClass` for temporary storage, enabling `virt-v2v` pods to mount a temporary Persistent Volume Claim (PVC) on the defined `storageClass`. This improves migration stability and reduces the likelihood of failures caused by insufficient node storage, for example, during large-scale migrations or Open Virtualization Format (OVA) imports, or on nodes with limited storage.
      +
      link:https://issues.redhat.com/browse/MTV-3709[MTV-3709]
    • Enhancement
    • Done

      As a migration admin, I want to be able to configure the ephemeral storage used during a migration.

      Motivation

      Currently, the `virt-v2v` pods use the ephemeral storage available on the node during the conversion. There is no way to configure the pods to use another storage provider when the nodes have limited storage. Once the node storage fills up, the pods are evicted due to storage pressure on the node, causing the migration to fail.

      A common scenario is an XLarge VM migration, that is, a VM with 10+ TB of disk:

      • `virt-v2v` starts running `fstrim -v /sysroot/` on the guest disk.
      • By default, this happens on an overlay, which is created on the temporary storage.
      • This overlay can grow very large for very large disks; the final size depends on the actual structure of the partitions and data.

      Another likely scenario is the OVA import feature:

      • For compressed images, `virt-v2v -i ova` requires a full copy of the uncompressed source disks in the temporary storage. Pods converting such a large OVA also have a high chance of being evicted on a storage-limited node.

      Proposed Solution

      Provide an option to define a `StorageClass` to be used as the temporary storage. This option can be either:

      • a global option in the `ForkliftController` CR, for example:

      virt_v2v_container_temp_storageclass: lvms-vg1

      virt_v2v_container_temp_storagesize: 30Gi

      • or, preferably, a per-plan option in the `planSpec`, for example (both variants are sketched in full after this list):

      convertorTempStorageClass: lvms-vg1

      convertorTempStorageSize: 30Gi
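
      To make the proposal concrete, the following is a minimal sketch of both variants, assuming the field names above are adopted as written; the API versions and the exact placement of the fields within each CR spec are illustrative assumptions, not the final design:

          # Global option: ForkliftController CR (sketch; field placement is an assumption)
          apiVersion: forklift.konveyor.io/v1beta1
          kind: ForkliftController
          metadata:
            name: forklift-controller
            namespace: openshift-mtv
          spec:
            virt_v2v_container_temp_storageclass: lvms-vg1  # StorageClass for the temporary PVC
            virt_v2v_container_temp_storagesize: 30Gi       # requested size of the temporary PVC
          ---
          # Per-plan option: Plan CR (sketch; field names taken from this proposal)
          apiVersion: forklift.konveyor.io/v1beta1
          kind: Plan
          metadata:
            name: my-migration-plan
            namespace: openshift-mtv
          spec:
            convertorTempStorageClass: lvms-vg1  # overrides the global setting for this plan
            convertorTempStorageSize: 30Gi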

      These settings force the `virt-v2v` pods to mount a temporary PVC on the defined `storageClass` as the pod's ephemeral volume.

      One option for these temporary PVCs would be to use Kubernetes Generic Ephemeral Volumes, as sketched below.
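
      A generic ephemeral volume lets the pod declare an inline PVC template; the PVC is created when the pod is scheduled and deleted together with the pod, which matches the lifetime of the conversion scratch space. A minimal sketch of a `virt-v2v` pod using one, where the pod name, image, and mount path are illustrative assumptions:

          apiVersion: v1
          kind: Pod
          metadata:
            name: virt-v2v-conversion             # illustrative name
          spec:
            containers:
            - name: virt-v2v
              image: quay.io/example/virt-v2v:latest  # illustrative image
              volumeMounts:
              - name: v2v-scratch
                mountPath: /var/tmp               # assumed virt-v2v temporary directory
            volumes:
            - name: v2v-scratch
              ephemeral:                          # generic ephemeral volume
                volumeClaimTemplate:
                  spec:
                    accessModes: ["ReadWriteOnce"]
                    storageClassName: lvms-vg1    # from the settings above
                    resources:
                      requests:
                        storage: 30Gi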
