RHEL-734

[RFE]Virt-v2v supports converting guest with RDM disk from VMware

    • Medium
    • rhel-sst-virtualization
    • ssg_virtualization
    • Enhancement

      Description of problem:
      [RFE]Virt-v2v supports converting guest with RDM disk from VMware

      Version-Release number of selected component (if applicable):
      virt-v2v-2.0.7-6.el9.x86_64
      libguestfs-1.48.4-3.el9.x86_64
      guestfs-tools-1.48.2-7.el9.x86_64
      libvirt-libs-8.9.0-2.el9.x86_64
      qemu-img-7.1.0-5.el9.x86_64
      nbdkit-server-1.30.8-1.el9.x86_64
      libnbd-1.12.6-1.el9.x86_64

      How reproducible:
      90%

      Steps to Reproduce:
      1. Prepare a guest on an ESXi 7.0.3 host and add an RDM disk to the guest; please refer to the screenshot 'vmware-guest-with-RDM-disk'

      2. Use virsh to dump the guest libvirt XML from VMware; the RDM disk info can be seen in the guest libvirt XML

      # virsh -c vpx://root@10.73.227.27/data/10.73.225.34/?no_verify=1 dumpxml esx7.0-rhel9.2-x86_64-with-RDM-disk
        Enter root's password for 10.73.227.27:
        <domain type='vmware' xmlns:vmware='http://libvirt.org/schemas/domain/vmware/1.0'>
        <name>esx7.0-rhel9.2-x86_64-with-RDM-disk</name>
        .....
        <devices>
        <disk type='file' device='disk'>
        <source file='[datastore2] esx7.0-rhel9.2-x86_64/esx7.0-rhel9.2-x86_64.vmdk'/>
        <target dev='sda' bus='scsi'/>
        <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <disk type='file' device='disk'>
        <source file='[datastore2] esx7.0-rhel9.2-x86_64/esx7.0-rhel9.2-x86_64_2.vmdk'/>
        <target dev='sdb' bus='scsi'/>
        <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        ....
        </domain>
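Note that in the dump above the RDM disk (sdb, `esx7.0-rhel9.2-x86_64_2.vmdk`) appears as an ordinary file-backed disk, indistinguishable from the boot disk. A minimal sketch (not part of the report) of inspecting such a dump programmatically; the XML string is a trimmed, hypothetical stand-in for the real `virsh dumpxml` output:

```python
# Sketch: list (target dev, source file) pairs from a libvirt XML dump.
# DUMPXML is a trimmed stand-in for the real output shown above.
import xml.etree.ElementTree as ET

DUMPXML = """\
<domain type='vmware'>
  <name>esx7.0-rhel9.2-x86_64-with-RDM-disk</name>
  <devices>
    <disk type='file' device='disk'>
      <source file='[datastore2] esx7.0-rhel9.2-x86_64/esx7.0-rhel9.2-x86_64.vmdk'/>
      <target dev='sda' bus='scsi'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[datastore2] esx7.0-rhel9.2-x86_64/esx7.0-rhel9.2-x86_64_2.vmdk'/>
      <target dev='sdb' bus='scsi'/>
    </disk>
  </devices>
</domain>
"""

def disk_sources(xml_text):
    """Return (target dev, source file) pairs for every disk in the XML."""
    root = ET.fromstring(xml_text)
    pairs = []
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        pairs.append((tgt.get("dev"), src.get("file")))
    return pairs

for dev, path in disk_sources(DUMPXML):
    print(dev, path)
```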

      3. Convert the guest from ESXi 7.0.3 with virt-v2v

      # virt-v2v -ic vpx://root@10.73.227.27/data/10.73.225.34/?no_verify=1 -it vddk -io vddk-libdir=/home/vddk7.0.3 -io vddk-thumbprint=76:75:59:0E:32:F5:1E:58:69:93:75:5A:7B:51:32:C5:D1:6D:F1:21 -ip /home/passwd esx7.0-rhel9.2-x86_64-with-RDM-disk
        [ 0.2] Setting up the source: -i libvirt -ic vpx://root@10.73.227.27/data/10.73.225.34/?no_verify=1 -it vddk esx7.0-rhel9.2-x86_64-with-RDM-disk
        [ 3.1] Opening the source
        nbdkit: vddk[1]: error: VixDiskLib_Open: [datastore2] esx7.0-rhel9.2-x86_64/esx7.0-rhel9.2-x86_64_2.vmdk: Unknown error
        nbdkit: vddk[1]: error: Please verify whether the "thumbprint" parameter (76:75:59:0E:32:F5:1E:58:69:93:75:5A:7B:51:32:C5:D1:6D:F1:21) matches the SHA1 fingerprint of the remote VMware server. Refer to nbdkit-vddk-plugin(1) section "THUMBPRINTS" for details.
        virt-v2v: error: libguestfs error: could not create appliance through
        libvirt.

      Try running qemu directly without libvirt using this environment variable:
      export LIBGUESTFS_BACKEND=direct

      Original error from libvirt: internal error: process exited while
      connecting to monitor: 2022-11-24T07:50:08.231660Z qemu-kvm: -blockdev
      {"driver":"nbd","server":{"type":"unix","path":"/tmp/v2v.ofSMud/in1"},"node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":true},"auto-read-only":true,"discard":"unmap"}:
      Requested export not available [code=1 int1=-1]

      If reporting bugs, run virt-v2v with debugging enabled and include the
      complete output:

      virt-v2v -v -x [...]
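The nbdkit error above suggests checking the "thumbprint" parameter, although the first disk opens fine with the same thumbprint, so that hint is likely incidental to the RDM failure. For reference, a minimal sketch (not part of the report) of producing a SHA1 thumbprint in the colon-separated form that the vddk-thumbprint option expects; `fetch_der_cert()` is a hypothetical illustration of obtaining the server certificate and is not exercised here:

```python
# Sketch: format a SHA1 thumbprint as AA:BB:...:ZZ, the form expected by
# the -io vddk-thumbprint option. fetch_der_cert() is a hypothetical
# helper showing one way to obtain the DER certificate bytes.
import hashlib
import ssl

def sha1_thumbprint(der_bytes: bytes) -> str:
    """SHA1 of the DER-encoded certificate, colon-separated, uppercase."""
    digest = hashlib.sha1(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def fetch_der_cert(host: str, port: int = 443) -> bytes:
    """Hypothetical: fetch the server certificate and convert PEM to DER."""
    pem = ssl.get_server_certificate((host, port))
    return ssl.PEM_cert_to_DER_cert(pem)

# Offline example: thumbprint of an arbitrary byte string, to show the format.
print(sha1_thumbprint(b"test"))
```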

      Actual results:
      Virt-v2v can't convert a guest with an RDM disk from VMware

      Expected results:
      Virt-v2v can convert a guest with an RDM disk from VMware

      Additional info:

              Assignee: virt-maint
              Reporter: Ming Xie (mxie@redhat.com)