OpenShift Virtualization / CNV-33710

[2242306] [AWS] Unable to migrate VMs using px-csi-db-shared storage - Unsafe migration: Migration without shared storage is unsafe


    • Priority: Urgent

      Description of problem:
      A VM using a PVC on the px-csi-db-shared storage class cannot live migrate. The migration fails with the error: "Unsafe migration: Migration without shared storage is unsafe".

      Version-Release number of selected component (if applicable):
      4.14

      How reproducible:
      100%

      Steps to Reproduce:
      1. Create a VM with a DV/PVC on px-csi-db-shared storage
      2. Start the VM
      3. Migrate the VM
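
      The steps above can be sketched as follows. All names, sizes, and the container-disk image are illustrative, not from the report; only the px-csi-db-shared storage class comes from the bug.

      ```shell
      # Step 1: create a VM backed by a DataVolume on the affected storage class.
      # VM/DV names and the Fedora container-disk image are hypothetical examples.
      oc apply -f - <<'EOF'
      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        name: fedora-migration-test
      spec:
        running: false
        dataVolumeTemplates:
          - metadata:
              name: fedora-migration-test-dv
            spec:
              storage:
                storageClassName: px-csi-db-shared
                resources:
                  requests:
                    storage: 10Gi
              source:
                registry:
                  url: docker://quay.io/containerdisks/fedora:latest
        template:
          spec:
            domain:
              devices:
                disks:
                  - name: rootdisk
                    disk:
                      bus: virtio
              resources:
                requests:
                  memory: 2Gi
            volumes:
              - name: rootdisk
                dataVolume:
                  name: fedora-migration-test-dv
      EOF

      virtctl start fedora-migration-test    # step 2: start the VM
      virtctl migrate fedora-migration-test  # step 3: trigger live migration
      ```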

      Actual results:
      Live Migration fails

      Expected results:
      Live Migration succeeds

      Additional info:
      Events:
      Type Reason Age From Message
      ---- ------ ---- ---- -------
      Normal SuccessfulCreate 65m disruptionbudget-controller Created PodDisruptionBudget kubevirt-disruption-budget-jqlcx
      Normal SuccessfulCreate 65m virtualmachine-controller Created virtual machine pod virt-launcher-fedora-continuous-gopher-c2xtk
      Normal Created 65m virt-handler VirtualMachineInstance defined.
      Normal Started 65m virt-handler VirtualMachineInstance started.
      Normal PreparingTarget 64m virt-handler Migration Target is listening at 10.129.2.49, on ports: 41433,38365,33377
      Warning Migrated 64m virt-handler VirtualMachineInstance migration uid 884b7c08-b51b-4516-ae96-21198fb15a79 failed. reason:Live migration failed error encountered during MigrateToURI3 libvirt api call: virError(Code=81, Domain=10, Message='Unsafe migration: Migration without shared storage is unsafe')

      The workaround (W/A) is to allow unsafe migrations by setting unsafeMigrationOverride in the KubeVirt (KV) configuration via an HCO jsonpatch annotation:
      $ oc -n openshift-cnv get hco kubevirt-hyperconverged -o json | jq .metadata.annotations
      {
      "kubevirt.kubevirt.io/jsonpatch": "[{ \"op\": \"add\",\"path\": \"/spec/configuration/migrations\",\"value\": {\"unsafeMigrationOverride\": true}}]"
      }
      $ oc -n openshift-cnv get kv kubevirt-kubevirt-hyperconverged -o json | jq .spec.configuration.migrations.unsafeMigrationOverride
      true
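
      A sketch of applying (rather than just inspecting) the same annotation with oc annotate; note this disables a libvirt safety check and should be used with care.

      ```shell
      # Apply the unsafe-migration override via the HCO jsonpatch annotation
      # (same patch as shown above).
      oc -n openshift-cnv annotate --overwrite hco kubevirt-hyperconverged \
        'kubevirt.kubevirt.io/jsonpatch=[{ "op": "add", "path": "/spec/configuration/migrations", "value": {"unsafeMigrationOverride": true} }]'

      # Confirm the override propagated to the KubeVirt CR
      oc -n openshift-cnv get kv kubevirt-kubevirt-hyperconverged \
        -o jsonpath='{.spec.configuration.migrations.unsafeMigrationOverride}'
      ```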

      Events after applying the workaround (the migration now succeeds):
      Type Reason Age From Message
      ---- ------ ---- ---- -------
      Normal SuccessfulCreate 65m disruptionbudget-controller Created PodDisruptionBudget kubevirt-disruption-budget-jqlcx
      Normal SuccessfulCreate 65m virtualmachine-controller Created virtual machine pod virt-launcher-fedora-continuous-gopher-c2xtk
      Normal Created 65m virt-handler VirtualMachineInstance defined.
      Normal Started 65m virt-handler VirtualMachineInstance started.
      Normal PreparingTarget 64m virt-handler Migration Target is listening at 10.129.2.49, on ports: 41433,38365,33377
      Warning Migrated 64m virt-handler VirtualMachineInstance migration uid 884b7c08-b51b-4516-ae96-21198fb15a79 failed. reason:Live migration failed error encountered during MigrateToURI3 libvirt api call: virError(Code=81, Domain=10, Message='Unsafe migration: Migration without shared storage is unsafe')
      Normal SuccessfulUpdate 22s (x2 over 64m) virtualmachine-controller Expanded PodDisruptionBudget kubevirt-disruption-budget-jqlcx
      Normal PreparingTarget 18s (x4 over 64m) virt-handler VirtualMachineInstance Migration Target Prepared.
      Normal Migrating 18s (x2 over 64m) virt-handler VirtualMachineInstance is migrating.
      Normal PreparingTarget 18s virt-handler Migration Target is listening at 10.129.2.49, on ports: 36541,33947,44747
      Normal Migrated 13s virt-handler The VirtualMachineInstance migrated to node ip-10-0-74-151.us-east-2.compute.internal.
      Normal Deleted 13s virt-handler Signaled Deletion
      Normal SuccessfulUpdate 9s (x2 over 64m) disruptionbudget-controller shrank PodDisruptionBudget kubevirt-disruption-budget-jqlcx

            alitke@redhat.com Adam Litke
            vsibirsk Vasiliy Sibirskiy
            Kedar Bidarkar Kedar Bidarkar
            Votes: 0
            Watchers: 1
