Task
Resolution: Won't Do
Minor
In BZ#1957985, the observed performance is counterintuitive. When transferring disks over the VMware management network, throughput is far better when the endpoint is vCenter rather than ESXi, even though one would expect connecting to ESXi directly to be faster.
In both cases, nbdkit ultimately connects to the ESXi host for the transfer. When vCenter is involved, nbdkit requests a disk access session from vCenter and uses it to connect to the ESXi host. This could mean that the parameters of that session allow better performance.
So, it would be interesting to set up a test running nbdkit directly on a bare metal host connected to the VMware management network. The test would use nbdkit to export a given disk and copy it with qemu-img.
The following example should open the session with the vCenter server.
$ LD_LIBRARY_PATH=/opt/vmware-vix-disklib-distrib/lib64:$LD_LIBRARY_PATH \
  nbdkit --unix /tmp/my_vm.socket vddk \
         libdir=/opt/vmware-vix-disklib-distrib \
         server=vcenter.example.com \
         user=administrator@vsphere.local \
         password=+/tmp/vcenter_password \
         thumbprint='01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67' \
         file='[Datastore] my_vm/my_vm.vmdk' \
         vm=moref=312
$ qemu-img create -f qcow2 -b nbd:unix:/tmp/my_vm.socket -F raw /tmp/my_vm.qcow2
$ qemu-img dd if=/tmp/my_vm.qcow2 of=/dev/null
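The thumbprint parameter is the SHA-1 fingerprint of the server's TLS certificate. If it is not already known, it can be retrieved with openssl (the hostname below matches the example above; substitute the real server):

$ openssl s_client -connect vcenter.example.com:443 </dev/null 2>/dev/null | \
  openssl x509 -noout -fingerprint -sha1

The colon-separated hex string after "SHA1 Fingerprint=" is the value to pass as thumbprint.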
The nbdkit command can then be changed to connect directly to the ESXi host, and the two copies compared.
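A sketch of the direct-to-ESXi variant is below. The hostname, credentials, thumbprint, and moref are placeholders: in particular, when talking to an ESXi host directly, the VM's moref is the host-local one (a small integer such as 1), which differs from the moref the same VM has in the vCenter inventory.

$ LD_LIBRARY_PATH=/opt/vmware-vix-disklib-distrib/lib64:$LD_LIBRARY_PATH \
  nbdkit --unix /tmp/my_vm.socket vddk \
         libdir=/opt/vmware-vix-disklib-distrib \
         server=esxi1.example.com \
         user=root \
         password=+/tmp/esxi_password \
         thumbprint='<SHA-1 thumbprint of the ESXi host certificate>' \
         file='[Datastore] my_vm/my_vm.vmdk' \
         vm=moref=1

The qemu-img steps are unchanged, so timing both runs (e.g. with time(1)) should isolate the effect of the endpoint choice.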
- links to: MTV-128 Identify VMware disk transfer best practices (Closed)