Migration Toolkit for Virtualization
MTV-2970

xcopy migration of a dual-disk VM fails if the disks reside on different datastores



      Description of problem:

      When I try to migrate a VM with copy offload (XCOPY) and the VM has two disks on the DS3 datastore plus another disk on the DS2 datastore, the migration fails. I suspect the issue is caused by mixing source datastores.
      
      The fact that the two datastores point to separate NetApps could also be related, but that seems less likely: I have managed to migrate other multi-disk VMs from either datastore/NetApp, and as long as all the disks resided on the same datastore, migration worked.

      Version-Release number of selected component (if applicable):

      2.9.0

      How reproducible:

      Always 
      

      Steps to Reproduce:

      1. Create a VM with two (or more) disks, spread across separate VAAI-supported datastores. In my case: two disks on eco-iscsi-ds3 plus one more disk on eco-iscsi-ds2.
      
      2. Try to migrate the VM using copy offload; the migration fails.
      
      Both populator pods for the disks on eco-iscsi-ds3 failed; the populator for the third disk, on DS2, did reach 100%. Log of the first failing populator:
      
      W0720 09:23:16.073229       1 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
      I0720 09:23:16.216938       1 vsphere-xcopy-volume-populator.go:163] the volume name is not found on the claim "test500mixedda-tshefi500gb-8d35d7ca". Trying the prime pvc "prime-9312804a-7f9c-4a4c-986c-5f1f2d012022"
      I0720 09:23:16.220779       1 vsphere-xcopy-volume-populator.go:180] target volume pvc-694ca4bf-8678-4564-b8b0-abd625de9bac volumeHandle pvc-694ca4bf-8678-4564-b8b0-abd625de9bac
      I0720 09:23:16.635200       1 remote_esxcli.go:70] Starting to populate using remote esxcli vmkfstools, source vmdk [eco-iscsi-ds3] tshefi500gb/tshefi500gb_1.vmdk target LUN trident_edge115_pvc_694ca4bf_8678_4564_b8b0_abd625de9bac
      I0720 09:23:16.635775       1 vsphere-xcopy-volume-populator.go:284] Staring metrics server
      found vm VirtualMachine:vm-118057 @ /Eco-Datacenter/vm/tshefi-vms/tshefi500gb
      I0720 09:23:16.654793       1 remote_esxcli.go:78] Got ESXi host: HostSystem:host-3078
      I0720 09:23:16.654886       1 vib.go:27] ensuring vib version on ESXi : 0.1.1
      I0720 09:23:16.660924       1 client.go:54] about to run esxcli command [software vib get -n vmkfstools-wrapper]
      I0720 09:23:20.323812       1 client.go:67] esxcli result message [], status []
      I0720 09:23:20.323843       1 vib.go:115] reply from get vib [map[AcceptanceLevel:[CommunitySupported] CreationDate:[2025-06-23] Description:[Custom VIB to wrap vmkfstools as esxcli plugin] ID:[REDHAT_bootbank_vmkfstools-wrapper_0.1.1] InstallDate:[2025-06-23] LiveInstallAllowed:[True] LiveRemoveAllowed:[True] MaintenanceModeRequired:[False] Name:[vmkfstools-wrapper] Overlay:[False] Payloads:[payload1] Platforms:[host] ReferenceURLs:[website|https://redhat.com] StatelessReady:[True] Status:[] Summary:[Custom VIB to wrap vmkfstools as esxcli plugin] Type:[bootbank] Vendor:[REDHAT] Version:[0.1.1]]]
      I0720 09:23:20.323886       1 vib.go:34] current vib version on ESXi : 0.1.1
      I0720 09:23:20.330556       1 client.go:54] about to run esxcli command [storage core adapter list]
      I0720 09:23:20.361153       1 client.go:67] esxcli result message [], status []
      I0720 09:23:20.361172       1 client.go:67] esxcli result message [], status []
      I0720 09:23:20.361175       1 client.go:67] esxcli result message [], status []
      I0720 09:23:20.361187       1 remote_esxcli.go:94] Adapter [0]: map[Description:[(0000:5c:00.0) Microsemi HPE E208i-a SR Gen10] Driver:[smartpqi] HBAName:[vmhba0] LinkState:[link-n/a] UID:[sas.51402ec014734880]]
      I0720 09:23:20.361207       1 remote_esxcli.go:96]   Description: [(0000:5c:00.0) Microsemi HPE E208i-a SR Gen10]
      I0720 09:23:20.361215       1 remote_esxcli.go:96]   Driver: [smartpqi]
      I0720 09:23:20.361220       1 remote_esxcli.go:96]   HBAName: [vmhba0]
      I0720 09:23:20.361225       1 remote_esxcli.go:96]   LinkState: [link-n/a]
      I0720 09:23:20.361230       1 remote_esxcli.go:96]   UID: [sas.51402ec014734880]
      I0720 09:23:20.361234       1 remote_esxcli.go:94] Adapter [1]: map[Capabilities:[Second Level Lun ID] Description:[iSCSI Software Adapter] Driver:[iscsi_vmk] HBAName:[vmhba64] LinkState:[online] UID:[iqn.1998-01.com.vmware:ecoesxi07.lab.eng.tlv2.redhat.com:397421518:64]]
      I0720 09:23:20.361250       1 remote_esxcli.go:96]   Capabilities: [Second Level Lun ID]
      I0720 09:23:20.361259       1 remote_esxcli.go:96]   Description: [iSCSI Software Adapter]
      I0720 09:23:20.361263       1 remote_esxcli.go:96]   Driver: [iscsi_vmk]
      I0720 09:23:20.361267       1 remote_esxcli.go:96]   HBAName: [vmhba64]
      I0720 09:23:20.361272       1 remote_esxcli.go:96]   LinkState: [online]
      I0720 09:23:20.361276       1 remote_esxcli.go:96]   UID: [iqn.1998-01.com.vmware:ecoesxi07.lab.eng.tlv2.redhat.com:397421518:64]
      I0720 09:23:20.361279       1 remote_esxcli.go:94] Adapter [2]: map[Capabilities:[Second Level Lun ID] Description:[() FC(virtual)] Driver:[scini] HBAName:[vmhba65] LinkState:[link-up] UID:[fc.9078563412efcdab:90786f5e4d3c2b1a]]
      I0720 09:23:20.361290       1 remote_esxcli.go:96]   Capabilities: [Second Level Lun ID]
      I0720 09:23:20.361297       1 remote_esxcli.go:96]   Description: [() FC(virtual)]
      I0720 09:23:20.361301       1 remote_esxcli.go:96]   Driver: [scini]
      I0720 09:23:20.361306       1 remote_esxcli.go:96]   HBAName: [vmhba65]
      I0720 09:23:20.361323       1 remote_esxcli.go:96]   LinkState: [link-up]
      I0720 09:23:20.361335       1 remote_esxcli.go:96]   UID: [fc.9078563412efcdab:90786f5e4d3c2b1a]
      I0720 09:23:20.361341       1 remote_esxcli.go:122] Storage Adapter UID: iqn.1998-01.com.vmware:ecoesxi07.lab.eng.tlv2.redhat.com:397421518:64 (Driver: iscsi_vmk)
      I0720 09:23:20.361345       1 remote_esxcli.go:122] Storage Adapter UID: fc.9078563412efcdab:90786f5e4d3c2b1a (Driver: scini)
      W0720 09:23:20.648288       1 ontap.go:46] failed adding host to igroup [POST /protocols/san/igroups/{igroup.uuid}/initiators][400] igroup_initiator_create default  &{Error:0xc000698ea0}
      I0720 09:23:20.748280       1 vsphere-xcopy-volume-populator.go:128] channel quit lun response is nil
      F0720 09:23:20.748319       1 vsphere-xcopy-volume-populator.go:130] lun response is nil
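      
      Reading the tail of this log: the igroup-add call at ontap.go:46 fails with an HTTP 400 but is only logged as a warning, and the populator then dies at vsphere-xcopy-volume-populator.go:130 on a nil LUN response. A minimal Go sketch of that pattern, as I read it from the log (this is not the actual MTV source; all names below are invented):
      
      // Sketch only: a warning-swallowed igroup error that surfaces later as
      // a fatal nil LUN response, mirroring the W -> F sequence above.
      package main
      
      import (
          "errors"
          "log"
      )
      
      type lun struct{ path string }
      
      // addInitiatorToIgroup stands in for the ONTAP REST call
      // POST /protocols/san/igroups/{igroup.uuid}/initiators that returned 400.
      func addInitiatorToIgroup(igroup, initiator string) error {
          return errors.New("igroup_initiator_create: 400")
      }
      
      // ensureMappedLUN demotes the igroup failure to a warning and returns a
      // nil LUN instead of propagating an error.
      func ensureMappedLUN(igroup, initiator string) *lun {
          if err := addInitiatorToIgroup(igroup, initiator); err != nil {
              log.Printf("W failed adding host to igroup: %v", err)
              return nil
          }
          return &lun{path: "/vol/..."}
      }
      
      func main() {
          l := ensureMappedLUN("igroup-1", "iqn.1998-01.com.vmware:ecoesxi07")
          if l == nil {
              log.Fatal("lun response is nil") // matches the fatal line above
          }
      }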
      
      The second failing populator's log looks similar, ending with the same error:
      W0720 09:23:11.992795       1 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
      I0720 09:23:12.141989       1 vsphere-xcopy-volume-populator.go:163] the volume name is not found on the claim "test500mixedda-tshefi500gb-e9f678ef". Trying the prime pvc "prime-dae1443f-04e8-4cbe-a49e-6045a9ea252e"
      I0720 09:23:12.146778       1 vsphere-xcopy-volume-populator.go:180] target volume pvc-8db81c71-b620-4fdd-a9ae-dd0539f2e59e volumeHandle pvc-8db81c71-b620-4fdd-a9ae-dd0539f2e59e
      I0720 09:23:12.566599       1 remote_esxcli.go:70] Starting to populate using remote esxcli vmkfstools, source vmdk [eco-iscsi-ds3] tshefi500gb/tshefi500gb.vmdk target LUN trident_edge115_pvc_8db81c71_b620_4fdd_a9ae_dd0539f2e59e
      I0720 09:23:12.566749       1 vsphere-xcopy-volume-populator.go:284] Staring metrics server
      found vm VirtualMachine:vm-118057 @ /Eco-Datacenter/vm/tshefi-vms/tshefi500gb
      I0720 09:23:12.583273       1 remote_esxcli.go:78] Got ESXi host: HostSystem:host-3078
      I0720 09:23:12.583309       1 vib.go:27] ensuring vib version on ESXi : 0.1.1
      I0720 09:23:12.589324       1 client.go:54] about to run esxcli command [software vib get -n vmkfstools-wrapper]
      I0720 09:23:15.403957       1 client.go:67] esxcli result message [], status []
      I0720 09:23:15.403988       1 vib.go:115] reply from get vib [map[AcceptanceLevel:[CommunitySupported] CreationDate:[2025-06-23] Description:[Custom VIB to wrap vmkfstools as esxcli plugin] ID:[REDHAT_bootbank_vmkfstools-wrapper_0.1.1] InstallDate:[2025-06-23] LiveInstallAllowed:[True] LiveRemoveAllowed:[True] MaintenanceModeRequired:[False] Name:[vmkfstools-wrapper] Overlay:[False] Payloads:[payload1] Platforms:[host] ReferenceURLs:[website|https://redhat.com] StatelessReady:[True] Status:[] Summary:[Custom VIB to wrap vmkfstools as esxcli plugin] Type:[bootbank] Vendor:[REDHAT] Version:[0.1.1]]]
      I0720 09:23:15.404056       1 vib.go:34] current vib version on ESXi : 0.1.1
      I0720 09:23:15.415748       1 client.go:54] about to run esxcli command [storage core adapter list]
      I0720 09:23:15.444284       1 client.go:67] esxcli result message [], status []
      I0720 09:23:15.444313       1 client.go:67] esxcli result message [], status []
      I0720 09:23:15.444317       1 client.go:67] esxcli result message [], status []
      I0720 09:23:15.444350       1 remote_esxcli.go:94] Adapter [0]: map[Description:[(0000:5c:00.0) Microsemi HPE E208i-a SR Gen10] Driver:[smartpqi] HBAName:[vmhba0] LinkState:[link-n/a] UID:[sas.51402ec014734880]]
      I0720 09:23:15.444377       1 remote_esxcli.go:96]   Description: [(0000:5c:00.0) Microsemi HPE E208i-a SR Gen10]
      I0720 09:23:15.444382       1 remote_esxcli.go:96]   Driver: [smartpqi]
      I0720 09:23:15.444386       1 remote_esxcli.go:96]   HBAName: [vmhba0]
      I0720 09:23:15.444390       1 remote_esxcli.go:96]   LinkState: [link-n/a]
      I0720 09:23:15.444396       1 remote_esxcli.go:96]   UID: [sas.51402ec014734880]
      I0720 09:23:15.444400       1 remote_esxcli.go:94] Adapter [1]: map[Capabilities:[Second Level Lun ID] Description:[iSCSI Software Adapter] Driver:[iscsi_vmk] HBAName:[vmhba64] LinkState:[online] UID:[iqn.1998-01.com.vmware:ecoesxi07.lab.eng.tlv2.redhat.com:397421518:64]]
      I0720 09:23:15.444432       1 remote_esxcli.go:96]   Description: [iSCSI Software Adapter]
      I0720 09:23:15.444517       1 remote_esxcli.go:96]   Driver: [iscsi_vmk]
      I0720 09:23:15.444530       1 remote_esxcli.go:96]   HBAName: [vmhba64]
      I0720 09:23:15.444535       1 remote_esxcli.go:96]   LinkState: [online]
      I0720 09:23:15.444543       1 remote_esxcli.go:96]   UID: [iqn.1998-01.com.vmware:ecoesxi07.lab.eng.tlv2.redhat.com:397421518:64]
      I0720 09:23:15.444547       1 remote_esxcli.go:96]   Capabilities: [Second Level Lun ID]
      I0720 09:23:15.444551       1 remote_esxcli.go:94] Adapter [2]: map[Capabilities:[Second Level Lun ID] Description:[() FC(virtual)] Driver:[scini] HBAName:[vmhba65] LinkState:[link-up] UID:[fc.9078563412efcdab:90786f5e4d3c2b1a]]
      I0720 09:23:15.444559       1 remote_esxcli.go:96]   HBAName: [vmhba65]
      I0720 09:23:15.444564       1 remote_esxcli.go:96]   LinkState: [link-up]
      I0720 09:23:15.444569       1 remote_esxcli.go:96]   UID: [fc.9078563412efcdab:90786f5e4d3c2b1a]
      I0720 09:23:15.444574       1 remote_esxcli.go:96]   Capabilities: [Second Level Lun ID]
      I0720 09:23:15.444588       1 remote_esxcli.go:96]   Description: [() FC(virtual)]
      I0720 09:23:15.444596       1 remote_esxcli.go:96]   Driver: [scini]
      I0720 09:23:15.444603       1 remote_esxcli.go:122] Storage Adapter UID: iqn.1998-01.com.vmware:ecoesxi07.lab.eng.tlv2.redhat.com:397421518:64 (Driver: iscsi_vmk)
      I0720 09:23:15.444608       1 remote_esxcli.go:122] Storage Adapter UID: fc.9078563412efcdab:90786f5e4d3c2b1a (Driver: scini)
      W0720 09:23:15.753538       1 ontap.go:46] failed adding host to igroup [POST /protocols/san/igroups/{igroup.uuid}/initiators][400] igroup_initiator_create default  &{Error:0xc00084fb60}
      I0720 09:23:15.924057       1 vsphere-xcopy-volume-populator.go:128] channel quit lun response is nil
      F0720 09:23:15.924146       1 vsphere-xcopy-volume-populator.go:130] lun response is nil 
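      
      One detail that makes triage harder: the igroup warning prints the ONTAP error as &{Error:0xc000698ea0}, i.e. a struct whose Error field is a pointer, formatted with %+v, so the actual 400 payload (the reason igroup_initiator_create rejected the request) never reaches the log. A minimal reproduction of that formatting pitfall (the types here are invented for illustration; the real payload type is whatever the ONTAP client generates):
      
      package main
      
      import "fmt"
      
      // Shapes mimicking a generated REST error payload (names invented).
      type ErrorResponse struct {
          Error *ErrorBody
      }
      
      type ErrorBody struct {
          Code    string
          Message string
      }
      
      func main() {
          resp := &ErrorResponse{Error: &ErrorBody{
              Code:    "400",
              Message: "the reason the initiator add failed",
          }}
      
          // What the populator log shows today: %+v on a struct holding a
          // pointer prints the pointer address, e.g. &{Error:0xc000698ea0}.
          fmt.Printf("failed adding host to igroup %+v\n", resp)
      
          // Dereferencing the field keeps the message visible.
          fmt.Printf("failed adding host to igroup %+v\n", *resp.Error)
      }
      
      Capturing that body would tell us whether the 400 is a rejected initiator, a missing igroup, or something datastore-specific.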

      Actual results:

      Migration fails at the Disk Allocation step.

      Expected results:

      The migration should complete.

      Additional info:

      With more than a single disk, migration works fine as long as all the disks use the same datastore, even when a disk resides under a different VM folder/path.
      
      In the attached screenshot, the right side shows the tshefi500gb VM in VMware: note that it has 3 disks, spread across two datastores, eco-iscsi-ds2 and eco-iscsi-ds3. The left side shows the failed migration. Notice that the DS2-based disk did migrate; only the DS3-based disks failed.
      
      

       

      Assignee: Roy Golan <rgolan1@redhat.com>
      Reporter: Tzach Shefi <tshefi@redhat.com>