Migration Toolkit for Virtualization / MTV-2243

DP: Add storage offload to MTV UI


    • Add xcopy populator to MTV UI
    • Product / Portfolio Work
    • Done
    • VIRTSTRAT-49 - Storage offloaded VM migration for block
    • 60% To Do, 20% In Progress, 20% Done

      vSphere's xcopy volume populator is a new, experimental copy-optimization method.

      It should sit behind a feature gate and be opt-in, probably at the plan level. It is also not supported in all cases (see the caveat below).

       

      At the moment, the way to trigger xcopy volume population (on the devel branch https://github.com/rgolangh/forklift/tree/vsphere-xcopy-volume-populator) is to annotate the target plan's storage class with "copy-offload: true". Obviously this is not enough; we need far more detail, such as which datastores are supported (iSCSI or FC) and which storage vendors back them.
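
      For reference, a minimal sketch of what the opt-in annotation might look like on the target storage class, written as a TypeScript object literal. The annotation key "copy-offload" is taken from the description above; the storage class name and provisioner are hypothetical placeholders, not values from the devel branch.

      {code:typescript}
      // Minimal sketch, assuming the devel-branch behaviour described above:
      // the populator is triggered when the target plan's storage class carries
      // the "copy-offload: true" annotation. Names below are placeholders.
      const annotatedStorageClass = {
        apiVersion: 'storage.k8s.io/v1',
        kind: 'StorageClass',
        metadata: {
          name: 'san-block',                // hypothetical target storage class
          annotations: {
            'copy-offload': 'true',         // opt-in flag from the description above
          },
        },
        provisioner: 'csi.example.com',     // hypothetical CSI driver
      };

      console.log(annotatedStorageClass.metadata.annotations['copy-offload']); // "true"
      {code}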

      The UI needs to query some kind of mapping object (a ConfigMap?) to determine whether a given storage mapping can be selected to use "xcopy", and then present that choice to the user.
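
      A hedged sketch of what that lookup could look like on the UI side. The capability shape, the helper name, and the protocol values are assumptions for illustration, not an existing MTV or forklift API.

      {code:typescript}
      // Sketch only: one possible shape for the mapping object (e.g. data loaded
      // from a ConfigMap) that tells the UI which target storage classes sit on a
      // storage system capable of offloaded copies.
      interface OffloadCapability {
        storageVendor: string;              // array vendor behind the datastore
        protocols: Array<'iscsi' | 'fc'>;   // block protocols the vendor supports
        storageClasses: string[];           // target storage classes on that array
      }

      // Decide whether a given datastore -> storage class pair may offer "xcopy"
      // in the plan wizard. NFS datastores never qualify.
      function canOfferXcopy(
        capabilities: OffloadCapability[],
        datastoreProtocol: 'iscsi' | 'fc' | 'nfs',
        targetStorageClass: string,
      ): boolean {
        if (datastoreProtocol === 'nfs') {
          return false;
        }
        return capabilities.some(
          (cap) =>
            cap.protocols.includes(datastoreProtocol) &&
            cap.storageClasses.includes(targetStorageClass),
        );
      }
      {code}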

      Caveat with storage mapping: xcopy depends on the disk residing on an iSCSI or FC datastore in vSphere, and the target storage class must be connected to the same storage system. The mapping itself is not per-disk, so there is no control over which disks should or should not use the method when it is available. In other words, if selected at the plan level, all disks that can be xcopied will be xcopied.
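
      To make that all-or-nothing behaviour concrete, a small sketch of how the UI could at least report which disks in a plan would be offloaded. The types and the capability check are simplified placeholders rather than real MTV structures.

      {code:typescript}
      // Sketch, assuming plan-level opt-in: there is no per-disk toggle, so the
      // most the UI can do is show which disks would use xcopy and which fall
      // back to the regular copy path.
      interface PlanDisk {
        name: string;
        datastoreProtocol: 'iscsi' | 'fc' | 'nfs';
        targetStorageClass: string;
      }

      // Placeholder for the capability check sketched earlier in this ticket.
      type XcopyCheck = (
        protocol: PlanDisk['datastoreProtocol'],
        storageClass: string,
      ) => boolean;

      function splitByCopyMethod(disks: PlanDisk[], canOfferXcopy: XcopyCheck) {
        const offloaded = disks.filter((d) =>
          canOfferXcopy(d.datastoreProtocol, d.targetStorageClass),
        );
        const fallback = disks.filter(
          (d) => !offloaded.some((o) => o.name === d.name),
        );
        return { offloaded, fallback };
      }
      {code}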

              mschatzm@redhat.com Matan Schatzman
              rgolan1@redhat.com Roy Golan