Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Migration Toolkit for Virtualization 2.5
You can use the Migration Toolkit for Virtualization (MTV) to migrate virtual machines from the following source providers to OpenShift Virtualization destination providers:
- VMware vSphere
- Red Hat Virtualization (RHV)
- OpenStack
- Open Virtual Appliances (OVAs) that were created by VMware vSphere
- Remote OpenShift Virtualization clusters
The release notes describe technical changes, new features and enhancements, and known issues for Migration Toolkit for Virtualization.
Technical changes
This release has the following technical changes:
In this version of MTV, migration using OpenStack source providers graduated from a Technology Preview feature to a fully supported feature.
Extended Master Secret (EMS) enforcement is disabled for migrations with VMware vSphere source providers to enable migrations from versions of vSphere that are supported by MTV but do not comply with the 2023 FIPS requirements.
The user interface for creating and updating providers now aligns with the look and feel of the Red Hat OpenShift web console and displays up-to-date data.
The old UI of MTV 2.3 can no longer be enabled by setting feature_ui: true in the ForkliftController CR.
In previous releases of MTV 2.5, populator pods were always restarted on failure, which made it difficult to gather the logs from the failed pods. In MTV 2.5.3, the number of restarts of populator pods is limited to three. On the third and final failure, the populator pod remains in the failed state so that its logs can be gathered by must-gather, and the forklift-controller knows this step has failed. (MTV-818)
New features and enhancements
This release has the following features and improvements:
In MTV 2.5, you can migrate using Open Virtual Appliance (OVA) files that were created by VMware vSphere as source providers. (MTV-336)
Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by MTV. MTV supports only OVA files created by VMware vSphere.
Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
In MTV 2.5, you can now use a Red Hat OpenShift Virtualization provider as both a source provider and a destination provider. You can migrate VMs from the cluster that MTV is deployed on to another cluster, or from a remote cluster to the cluster that MTV is deployed on. (MTV-571)
During the migration from Red Hat Virtualization (RHV), direct Logical Units (LUNs) are detached from the source virtual machines and attached to the target virtual machines. Note that this mechanism does not work yet for Fibre Channel. (MTV-329)
In addition to standard password authentication, MTV supports the following authentication methods: Token authentication and Application credential authentication. (MTV-539)
The validation service includes default validation rules for virtual machines from OpenStack. (MTV-508)
You can now create the VMware vSphere source provider without specifying a VMware Virtual Disk Development Kit (VDDK) init image. It is strongly recommended to create a VDDK init image to accelerate migrations.
In MTV 2.5.3, deployment on OpenShift Kubernetes Engine (OKE) has been enabled. For more information, see About OpenShift Kubernetes Engine. (MTV-803)
In MTV 2.5.4, migration of VMs to destination storage classes that have encrypted RADOS Block Devices (RBD) volumes is now supported.
To make use of this new feature, set the value of the parameter controller_block_overhead to 1Gi, following the procedure in Configuring the MTV Operator. (MTV-851)
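For illustration, a minimal sketch of this setting in the ForkliftController custom resource; the API version, CR name, and namespace are assumptions that depend on your installation, and the field placement follows the controller_filesystem_overhead example shown later in these notes:

  apiVersion: forklift.konveyor.io/v1beta1
  kind: ForkliftController
  metadata:
    name: forklift-controller
    namespace: openshift-mtv        # namespace where MTV is installed (assumed)
  spec:
    controller_block_overhead: 1Gi  # extra space reserved for encrypted RBD block volumes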
Known issues
This release has the following known issues:
Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)
The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)
vSphere only: Migrations from RHV and OpenStack do not fail, but the encryption key may be missing on the target Red Hat OpenShift cluster.
Warm migration from RHV fails if the user performs a snapshot operation on the source VM. The migration does not wait for a snapshot operation on the source VM to finish when it is scheduled at the same time as the migration. (MTV-456)
When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might fail to be scheduled. Workaround: Use shared storage on the target Red Hat OpenShift cluster.
Warm migrations and migrations to remote Red Hat OpenShift clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local Red Hat OpenShift cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.
When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in OpenShift Virtualization. (MTV-491)
When adding an OVA provider, the error message ConnectionTestFailed can appear immediately, although the provider is created successfully. If the message does not disappear after a few minutes and the provider status does not move to Ready, this means that the creation of the OVA server pod has failed. (MTV-671)
An outdated ovirtvolumepopulator in the namespace, left over from an earlier failed migration, blocks a new plan of the same VM when the plan transitions to the CopyDisks phase. The plan remains in that phase indefinitely. (MTV-929)
The migration fails to build the Persistent Volume Claim (PVC) if the destination storage class does not have a configured storage profile. The forklift-controller raises an error message that does not give a clear reason for not creating the PVC. (MTV-928)
For a complete list of all known issues in this release, see the list of Known Issues in Jira.
Resolved issues
This release has the following resolved issues:
Versions of the package jsrsasign before 11.0.0, used in earlier releases of MTV, are vulnerable to Observable Discrepancy in the RSA PKCS#1 v1.5 or RSA-OAEP decryption process. This discrepancy means an attacker could decrypt ciphertexts by exploiting this vulnerability. However, exploiting this vulnerability requires the attacker to have access to a large number of ciphertexts encrypted with the same key. This issue has been resolved in MTV 2.5.5 by upgrading the package jsrsasign to version 11.0.0.
For more information, see CVE-2024-21484.
A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.
This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.
For more information, see CVE-2023-44487 (Rapid Reset Attack) and CVE-2023-39325 (Rapid Reset Attack).
A flaw was found in the Gin-Gonic Gin Web Framework, used by MTV. The filename parameter of the Context.FileAttachment function was not properly sanitized. This flaw could allow a remote attacker to bypass security restrictions caused by improper input validation of the filename parameter of the Context.FileAttachment function. A maliciously created filename could cause the Content-Disposition header to be sent with an unexpected filename value, or otherwise modify the Content-Disposition header.
This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.
For more information, see CVE-2023-29401 (Gin-Gonic Gin Web Framework) and CVE-2023-26125.
A flaw was found in the package GraphQL from 16.3.0 and before 16.8.1. This flaw means MTV versions before MTV 2.5.2 are vulnerable to Denial of Service (DoS) due to insufficient checks in the OverlappingFieldsCanBeMergedRule.ts file when parsing large queries. This issue may allow an attacker to degrade system performance. (MTV-712)
This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.
For more information, see CVE-2023-26144.
A flaw was found in the otelhttp handler of OpenTelemetry-Go. This flaw means MTV versions before MTV 2.5.3 are vulnerable to a memory leak caused by http.user_agent and http.method having unbound cardinality, which could allow a remote, unauthenticated attacker to exhaust the server’s memory by sending many malicious requests, affecting availability. (MTV-795)
This issue has been resolved in MTV 2.5.3. It is advised to update to this version of MTV or later.
For more information, see CVE-2023-45142.
A flaw was found in Golang. This flaw means MTV versions before MTV 2.5.3 are vulnerable to QUIC connections not setting an upper bound on the amount of data buffered when reading post-handshake messages, allowing a malicious QUIC connection to cause unbounded memory growth. With the fix, connections now consistently reject messages larger than 65KiB in size. (MTV-708)
This issue has been resolved in MTV 2.5.3. It is advised to update to this version of MTV or later.
For more information, see CVE-2023-39322.
A flaw was found in Golang. This flaw means MTV versions before MTV 2.5.3 are vulnerable to processing an incomplete post-handshake message for a QUIC connection, which causes a panic. (MTV-693)
This issue has been resolved in MTV 2.5.3. It is advised to update to this version of MTV or later.
For more information, see CVE-2023-39321.
A flaw was found in the Golang html/template package used in MTV. This flaw means MTV versions before MTV 2.5.3 are vulnerable, as the html/template package did not properly handle occurrences of <script, <!--, and </script within JavaScript literals in <script> contexts. This flaw could cause the template parser to improperly consider script contexts to be terminated early, causing actions to be improperly escaped, which could be leveraged to perform an XSS attack. (MTV-693)
This issue has been resolved in MTV 2.5.3. It is advised to update to this version of MTV or later.
For more information, see CVE-2023-39319.
A flaw was found in the Golang html/template package used in MTV. This flaw means MTV versions before MTV 2.5.3 are vulnerable, as the html/template package did not properly handle HTML-like <!-- and --> comment tokens, or hashbang #! comment tokens. This flaw could cause the template parser to improperly interpret the contents of <script> contexts, causing actions to be improperly escaped, which could be leveraged to perform an XSS attack. (MTV-693)
This issue has been resolved in MTV 2.5.3. It is advised to update to this version of MTV or later.
For more information, see CVE-2023-39318.
In earlier releases of MTV 2.5, the log files downloaded from the UI could contain logs that were related to an earlier migration plan. (MTV-783)
This issue has been resolved in MTV 2.5.3.
In earlier releases of MTV 2.5, the size of disks that are extended in RHV was not adequately monitored. This resulted in the inability to migrate virtual machines with extended disks from a RHV provider. (MTV-830)
This issue has been resolved in MTV 2.5.3.
In earlier releases of MTV 2.5, the filesystem overhead for new persistent volumes was hard-coded to 10%. The overhead was insufficient for certain filesystem types, resulting in failures during cold migrations from RHV and OSP to the cluster where MTV is deployed. For other filesystem types, the hard-coded overhead was too high, resulting in excessive storage consumption.
In MTV 2.5.3, the filesystem overhead can be configured and is no longer hard-coded. If your migration allocates persistent volumes without CDI, you can adjust the file system overhead by adding the following label and value to the spec portion of the forklift-controller CR:
  spec:
    controller_filesystem_overhead: <percentage> (1)

(1) The percentage of overhead. If this label is not added, the default value of 10% is used. This setting is valid only if the storageclass is filesystem. (MTV-699)
In earlier releases of MTV, the create and update provider forms could have presented stale data. This issue is resolved in MTV 2.5: the new create and update provider forms display up-to-date properties of the provider. (MTV-603)
In earlier releases of MTV, the Migration Controller service did not automatically delete snapshots that were created during a migration of source virtual machines in OpenStack. This issue is resolved in MTV 2.5: all snapshots created during the migration are removed after the migration has been completed. (MTV-620)
In earlier releases of MTV, the Migration Controller service did not delete snapshots automatically after a successful warm migration of a VM from RHV. This issue is resolved in MTV 2.5: the snapshots generated during the migration are removed after a successful migration, while the original snapshots are not removed. (MTV-349)
In earlier releases of MTV, the cutover operation failed when it was triggered while precopy was being performed. The VM was locked in RHV, and therefore the ovirt-engine rejected the snapshot creation or disk transfer operation. This issue is resolved in MTV 2.5: if the cutover operation is triggered while the VM is locked, it is not performed at that time; once the precopy operation completes, the cutover operation is triggered. (MTV-686)
In earlier releases of MTV, triggering a warm migration while there was an ongoing operation in RHV that locked the VM caused the migration to fail, because the snapshot creation could not be triggered. This issue is resolved in MTV 2.5: warm migration does not fail when an operation that locks the VM is performed in RHV; instead, the migration starts when the VM is unlocked. (MTV-687)
In earlier releases of MTV, when removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) were not deleted. This issue is resolved in MTV 2.5: PVCs and PVs are deleted when deleting a migrated VM. (MTV-492)
In earlier releases of MTV, when a migration failed, its PVCs and PVs were not deleted as expected when its migration plan was archived and deleted. This issue is resolved in MTV 2.5: PVCs and PVs are deleted when archiving and deleting a migration plan. (MTV-493)
In earlier releases of MTV, a VM with multiple disks that was migrated might not have been able to boot on the target Red Hat OpenShift cluster. This issue is resolved in MTV 2.5: VMs with multiple disks that are migrated are able to boot on the target Red Hat OpenShift cluster. (MTV-433)
In MTV releases 2.4.0-2.5.3, cold migrations from vSphere to the local cluster on which MTV was deployed did not take a specified transfer network into account. This issue is resolved in MTV 2.5.4. (MTV-846)
For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.
Upgrade notes
It is recommended to upgrade from MTV 2.4.2 to MTV 2.5.
When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field spec.selector of the deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the Red Hat OpenShift console once the forklift-console-plugin pod runs to load the upgraded MTV web console. (MTV-518)
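For example, a hedged sketch of this workaround using oc; the CR name and the openshift-mtv namespace are assumptions that depend on your installation:

  $ oc get forkliftcontroller forklift-controller -n openshift-mtv -o yaml > forkliftcontroller.yaml   # save a copy first
  $ oc delete forkliftcontroller forklift-controller -n openshift-mtv
  $ oc apply -f forkliftcontroller.yaml   # remove the status and metadata.resourceVersion fields before reapplying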
Migration Toolkit for Virtualization 2.4
Migrate virtual machines (VMs) from VMware vSphere, Red Hat Virtualization, or OpenStack to OpenShift Virtualization with the Migration Toolkit for Virtualization (MTV).
The release notes describe technical changes, new features and enhancements, and known issues.
Technical changes
This release has the following technical changes:
Disk images are no longer converted using virt-v2v when migrating from RHV. This change speeds up migrations and also allows migration of guest operating systems that are not supported by virt-v2v. (forklift-controller#403)
Disk transfers use the ovirt-imageio client (ovirt-img) instead of Containerized Data Importer (CDI) when migrating from RHV to the local OpenShift Container Platform cluster, accelerating the migration.
When migrating from vSphere to the local OpenShift Container Platform cluster, the conversion pod transfers the disk data instead of Containerized Data Importer (CDI), accelerating the migration.
The migrated virtual machines are no longer scheduled on the target OpenShift Container Platform cluster. This enables migrating VMs that cannot start due to limit constraints on the target at migration time.
You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS.
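A minimal sketch of such an update, assuming the CDI StorageProfile API and an NFS storage class named nfs; verify the access mode and volume mode against your storage:

  apiVersion: cdi.kubevirt.io/v1beta1
  kind: StorageProfile
  metadata:
    name: nfs                      # must match the storage class name
  spec:
    claimPropertySets:
    - accessModes:
      - ReadWriteMany              # access mode supported by the NFS share
      volumeMode: Filesystem       # NFS provides filesystem-mode volumes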
Previous versions of MTV supported only using VDDK version 7 for the VDDK image. MTV supports both versions 7 and 8, as follows:
- If you are migrating to OCP 4.12 or earlier, use VDDK version 7.
- If you are migrating to OCP 4.13 or later, use VDDK version 8.
New features and enhancements
This release has the following features and improvements:
MTV now supports migrations with OpenStack as a source provider. This feature is provided as a Technology Preview and supports only cold migrations.
The Migration Toolkit for Virtualization Operator now integrates the MTV web console into the Red Hat OpenShift web console. The new UI operates as an OCP Console plugin that adds the sub-menu Migration to the navigation bar. It is implemented in version 2.4, disabling the old UI. You can enable the old UI by setting feature_ui: true in the ForkliftController CR. (MTV-427)
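For illustration, a minimal sketch of that setting in the ForkliftController custom resource spec; the exact CR layout depends on your installation:

  spec:
    feature_ui: true   # re-enables the old MTV web console in MTV 2.4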
A Skip certificate validation option was added to the VMware and RHV providers. If selected, the provider’s certificate is not validated and the UI does not ask for a CA certificate. Only the third-party certificate needs to be specified when defining an RHV provider that is set with the Manager CA certificate.
Cold migrations from vSphere to a local Red Hat OpenShift cluster use virt-v2v on RHEL 9. (MTV-332)
Known issues
This release has the following known issues:
Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)
The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)
If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the MTV web console might include the logs of the deleted migration plan or VM. (BZ#2023764)
vSphere only: Migrations from RHV and OpenStack don’t fail, but the encryption key may be missing on the target OCP cluster.
The Migration Controller service does not delete snapshots that are created during the migration for source virtual machines in OpenStack automatically. Workaround: the snapshots can be removed manually on OpenStack.
The Migration Controller service does not delete snapshots automatically after a successful warm migration of a RHV VM. Workaround: Snapshots can be removed from RHV instead. (MTV-349)
Some warm migrations from RHV might fail. When running a migration plan for warm migration of multiple VMs from RHV, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run.
Warm migration from RHV fails if a snapshot operation is performed on the source VM. If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)
When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be able to be scheduled. Workaround: Use shared storage on the target OCP cluster.
When removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) are not deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-492)
When a migration fails, its PVCs and PVs are not deleted as expected when its migration plan is archived and deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-493)
A VM with multiple disks that was migrated might not be able to boot on the target OCP cluster. Workaround: Set the boot order appropriately to boot from the bootable disk. (MTV-433)
Warm migrations and migrations to remote OCP clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OCP cluster. It is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.
When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, their network interfaces could be disabled when they start in OpenShift Virtualization. (MTV-491)
When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field spec.selector of the deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OCP Console once the forklift-console-plugin pod runs to load the upgraded MTV web console. (MTV-518)
Resolved issues
This release has the following resolved issues:
A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.
This issue has been resolved in MTV 2.4.3 and 2.5.2. It is advised to update to one of these versions of MTV or later.
For more information, see CVE-2023-44487 (Rapid Reset Attack) and CVE-2023-39325 (Rapid Reset Attack).
The automatic renaming of VMs during migration to fit RFC 1123 has been improved. This feature, introduced in MTV 2.3.4, is enhanced to cover more special cases. (MTV-212)
If a user specifies an incorrect password for an RHV provider, the account is no longer locked in RHV. If the RHV Manager is accessible, an error is returned when the provider is added. If the RHV Manager is inaccessible, the provider is added, but no further attempts are made after the failure due to the incorrect credentials. (MTV-324)
Previously, the cluster-admin role was required to browse and create providers. In this release, users with sufficient permissions on MTV resources (providers, plans, migrations, NetworkMaps, StorageMaps, hooks) can operate MTV without cluster-admin permissions. (MTV-334)
Migration of virtual machines with i440fx chipset is now supported. The chipset is converted to q35 during the migration. (MTV-430)
The Universal Unique ID (UUID) number within the System Management BIOS (SMBIOS) no longer changes for VMs that are migrated from RHV. This enhancement enables applications that operate within the guest operating system and rely on this setting, such as for licensing purposes, to operate on the target OCP cluster in a manner similar to that of RHV. (MTV-597)
Previously, the password that was specified for RHV manager appeared in error messages that were displayed in the web console and logs when failing to connect to RHV. In this release, error messages that are generated when failing to connect to RHV do not reveal the password for RHV manager.
The QEMU guest agent is installed on VMs during cold migration from vSphere. (BZ#2018062)
Migration Toolkit for Virtualization 2.3
You can migrate virtual machines (VMs) from VMware vSphere or Red Hat Virtualization to OpenShift Virtualization with the Migration Toolkit for Virtualization (MTV).
The release notes describe technical changes, new features and enhancements, and known issues.
Technical changes
This release has the following technical changes:
In the web console, you enter the VddkInitImage path when adding a VMware provider. Alternatively, from the CLI, you add the VddkInitImage path to the Provider CR for VMware migrations.
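A minimal sketch of a Provider CR with that setting; the API version, namespace, secret name, and image location are assumptions based on the upstream Forklift project and should be verified against your installed version:

  apiVersion: forklift.konveyor.io/v1beta1
  kind: Provider
  metadata:
    name: vsphere-provider
    namespace: openshift-mtv
  spec:
    type: vsphere
    url: https://vcenter.example.com/sdk           # vCenter SDK endpoint (example value)
    secret:
      name: vsphere-credentials                    # secret holding user, password, and thumbprint
      namespace: openshift-mtv
    settings:
      vddkInitImage: quay.io/example/vddk:latest   # the VddkInitImage path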
You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS. The documentation includes a link to the relevant procedure.
New features and enhancements
This release has the following features and improvements:
You can use warm migration to migrate VMs from both VMware and RHV.
VMware users do not need full cluster-admin privileges to perform a VM migration. The minimal sufficient set of user privileges is established and documented.
MTV documentation includes instructions on adding hooks to migration plans and running hooks on VMs.
Known issues
This release has the following known issues:
When you run a migration plan for warm migration of multiple VMs from RHV, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run. (BZ#2063531)
The Migration Controller service does not delete snapshots automatically after a successful warm migration of a RHV VM. You can delete the snapshots manually. (BZ#2053183)
If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (BZ#2057459)
The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)
Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.
The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)
If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the MTV web console might include the logs of the deleted migration plan or VM. (BZ#2023764)
The problem occurs for both vSphere and RHV migrations.
Possible workaround: Delete cache in the browser or restart the browser. (BZ#2143191)
Migration Toolkit for Virtualization 2.2
You can migrate virtual machines (VMs) from VMware vSphere or Red Hat Virtualization to OpenShift Virtualization with the Migration Toolkit for Virtualization (MTV).
The release notes describe technical changes, new features and enhancements, and known issues.
Technical changes
This release has the following technical changes:
You can set the time interval between snapshots taken during the precopy stage of warm migration.
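For illustration, a hedged sketch of that setting; the controller_precopy_interval parameter name and its unit of minutes are assumptions based on the MTV Operator configuration and should be verified for your version:

  spec:
    controller_precopy_interval: 60   # minutes between precopy snapshots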
New features and enhancements
This release has the following features and improvements:
You can create custom validation rules to check the suitability of VMs for migration. Validation rules are based on the VM attributes collected by the Provider Inventory service and written in Rego, the Open Policy Agent native query language.
You can download logs for a migration plan or a migrated VM by using the MTV web console.
You can duplicate a migration plan by using the web console, including its VMs, mappings, and hooks, in order to edit the copy and run it as a new migration plan.
You can archive a migration plan by using the MTV web console. Archived plans can be viewed or duplicated. They cannot be run, edited, or unarchived.
Known issues
This release has the following known issues:
Certain Validation service issues, which are marked as Critical and display the assessment text The VM will not be migrated, do not block migration. (BZ#2025977)
The following Validation service assessments do not block migration:
Assessment | Result
---|---
The disk interface type is not supported by OpenShift Virtualization (only sata, virtio_scsi and virtio interface types are currently supported). | The migrated VM will have a virtio disk if the source interface is not recognized.
The NIC interface type is not supported by OpenShift Virtualization (only e1000, rtl8139 and virtio interface types are currently supported). | The migrated VM will have a virtio NIC if the source interface is not recognized.
The VM is using a vNIC profile configured for host device passthrough, which is not currently supported by OpenShift Virtualization. | The migrated VM will have an SR-IOV NIC. The destination network must be set up correctly.
One or more of the VM’s disks has an illegal or locked status condition. | The migration will proceed but the disk transfer is likely to fail.
The VM has a disk with a storage type other than image. | The migration will proceed but the disk transfer is likely to fail.
The VM has one or more snapshots with disks in ILLEGAL state. This is not currently supported by OpenShift Virtualization. | The migration will proceed but the disk transfer is likely to fail.
The VM has USB support enabled, but USB devices are not currently supported by OpenShift Virtualization. | The migrated VM will not have USB devices.
The VM is configured with a watchdog device, which is not currently supported by OpenShift Virtualization. | The migrated VM will not have a watchdog device.
The VM’s status is not up or down. | The migration will proceed but it might hang if the VM cannot be powered off.
The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)
If a resource does not exist, for example, if the virt-launcher pod does not exist because the migrated VM is powered off, its log is unavailable. The following error appears in the missing resource’s current.log file when it is downloaded from the web console or created with the must-gather tool: error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'. (BZ#2023260)
Retaining the importer pod for debug purposes causes warm migration to hang during the precopy stage. (BZ#2016290)
As a temporary workaround, the importer pod is removed at the end of the precopy stage so that the precopy succeeds. However, this means that the importer pod log is not retained after warm migration is complete. You can only view the importer pod log by using the oc logs -f <cdi-importer_pod> command during the precopy stage.
This issue only affects the importer pod log and warm migration. Cold migration and the virt-v2v logs are not affected.
Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.
The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)
If a Plan CR references storage, network, or VMs by name instead of by ID, the resources do not appear in the MTV web console, and the migration plan cannot be edited or duplicated. (BZ#1986020)
If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the MTV web console might include the logs of the deleted migration plan or VM. (BZ#2023764)
If you delete a target VirtualMachine CR during the Convert image to kubevirt step of the migration, the Migration details page of the web console displays the state of the step as VirtualMachine CR not found. However, the status of the VM migration is Succeeded in the Plan CR file and in the web console. (BZ#2031529)
Migration Toolkit for Virtualization 2.1
You can migrate virtual machines (VMs) from VMware vSphere or Red Hat Virtualization to OpenShift Virtualization with the Migration Toolkit for Virtualization (MTV).
The release notes describe new features and enhancements, known issues, and technical changes.
Technical changes
The VMware Virtual Disk Development Kit (VDDK) SDK image must be added to the HyperConverged custom resource. Before this release, it was referenced in the v2v-vmware config map.
New features and enhancements
This release adds the following features and improvements.
You can perform a cold migration of VMs from Red Hat Virtualization.
You can create migration hooks to run Ansible playbooks or custom code before or after migration.
You can specify options for the must-gather tool that enable you to filter the data by namespace, migration plan, or VMs.
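For example, a hedged sketch of a filtered collection; the image path, tag, and filter variables are assumptions based on the MTV must-gather documentation and should be checked against your version:

  $ oc adm must-gather \
    --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.1 \
    -- PLAN=<migration_plan> /usr/bin/targeted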
You can migrate VMs with a single root I/O virtualization (SR-IOV) network interface if the OpenShift Virtualization environment has an SR-IOV network.
Known issues
The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)
The disk copy stage of a RHV VM does not progress and the MTV web console does not display an error message. (BZ#1990596)
The cause of this problem might be one of the following conditions:
- The storage class does not exist on the target cluster.
- The VDDK image has not been added to the HyperConverged custom resource.
- The VM does not have a disk.
- The VM disk is locked.
- The VM time zone is not set to UTC.
- The VM is configured for a USB device.
To disable USB devices, see Configuring USB Devices in the Red Hat Virtualization documentation.
To determine the cause:
- Click Workloads → Virtualization in the Red Hat OpenShift web console.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click Status to view the status of the virtual machine.
The time zone of the source VMs must be UTC with no offset. You can set the time zone to GMT Standard Time after first assessing the potential impact on the workload. (BZ#1993259)
If a RHV resource UUID is used in a Host, NetworkMap, StorageMap, or Plan custom resource (CR), a "Provider not found" error is displayed. You must use the resource name. (BZ#1994037)
If a RHV resource name is used in a NetworkMap, StorageMap, or Plan custom resource (CR) and if the same resource name exists in another data center, the Plan CR displays a critical "Ambiguous reference" condition. You must rename the resource or use the resource UUID in the CR.
In the web console, the resource name appears twice in the same list without a data center reference to distinguish them. You must rename the resource. (BZ#1993089)
Snapshots are not deleted automatically after a successful warm migration of a VMware VM. You must delete the snapshots manually in VMware vSphere. (BZ#2001270)
Migration Toolkit for Virtualization 2.0
You can migrate virtual machines (VMs) from VMware vSphere with the Migration Toolkit for Virtualization (MTV).
The release notes describe new features and enhancements, known issues, and technical changes.
New features and enhancements
This release adds the following features and improvements.
Warm migration reduces downtime by copying most of the VM data during a precopy stage while the VMs are running. During the cutover stage, the VMs are stopped and the rest of the data is copied.
You can cancel an entire migration plan or individual VMs while a migration is in progress. A canceled migration plan can be restarted in order to migrate the remaining VMs.
You can select a migration network for the source and target providers for improved performance. By default, data is copied using the VMware administration network and the Red Hat OpenShift pod network.
The validation service checks source VMs for issues that might affect migration and flags the VMs with concerns in the migration plan.
The validation service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
Known issues
This section describes known issues and mitigations.
The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)
If the network map remains in a NotReady state and the NetworkMap manifest displays a Destination network not found error, the cause is a missing network attachment definition. You must create a network attachment definition for each additional destination network before you create the network map. (BZ#1971259)
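A minimal sketch of a network attachment definition for a bridge network; the name, namespace, and bridge device are assumptions that depend on your environment:

  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: example-network          # referenced as a destination network in the network map
    namespace: openshift-mtv
  spec:
    config: '{
      "cniVersion": "0.3.1",
      "name": "example-network",
      "type": "bridge",
      "bridge": "br1"
    }'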
Warm migration uses changed block tracking snapshots to copy data during the precopy stage. The snapshots are created at one-hour intervals by default. When a snapshot is created, its contents are copied to the destination cluster. However, when the third snapshot is created, the first snapshot is deleted and the block tracking is lost. (BZ#1969894)
You can do one of the following to mitigate this issue:
- Start the cutover stage no more than one hour after the precopy stage begins so that only one internal snapshot is created.
- Increase the snapshot interval in the vm-import-controller-config config map to 720 minutes:

  $ oc patch configmap/vm-import-controller-config -n openshift-cnv \
    -p '{"data": {"warmImport.intervalMinutes": "720"}}'