OpenShift Migration Toolkit for Containers / MIG-1749

MTC: VM restarts migration after MigPlan deletion


    • Type: Bug
    • Resolution: Done-Errata
    • Priority: Blocker
    • Fix Version/s: MTC 1.8.9, MTC 1.8.7
    • Component: pvc-migrate
    • Severity: Critical

      Description of problem:

      The VM starts migrating again when the succeeded MigPlan is deleted.

      Version-Release number of selected component (if applicable):

      CNV 4.19, MTC 1.8.6

      How reproducible:

      Always

      Steps to Reproduce:

      1. Create a VM, a MigPlan, and a MigMigration -> live-migrate the VM
      2. Delete the MigPlan
      3. Look at the VMIM (a command-level sketch of these steps follows)
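
      For reference, a minimal command-level sketch of the steps above. This is an illustration, not the exact manifests used here: it assumes MTC's usual openshift-migration namespace and placeholder file/plan names.

      $ oc create -f migplan.yaml -n openshift-migration        # MigPlan covering the VM's namespace
      $ oc create -f migmigration.yaml -n openshift-migration   # MigMigration referencing the plan; the VM live-migrates
      $ oc delete migplan <plan-name> -n openshift-migration    # step 2: delete the succeeded plan
      $ oc get vmim -A -w                                       # step 3: watch for new VMIMs appearing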
      

      Actual results:

      I have two VMs, both of which initially migrated successfully (Succeeded).
      
      For the VM in test-ns: deleted the MigPlan -> a new VMIM started, Succeeded.
      
      For the VM in test-ns-2: deleted the MigMigration and the MigPlan -> a new VMIM started, fails (I deleted all VMIMs in test-ns-2 at some point, but they keep being recreated).
      
      $ oc get vmim -A
      NAMESPACE   NAME                             PHASE       VMI
      test-ns-2   kubevirt-workload-update-7qfwd   Failed      rhel-9-cyan-clam-69
      test-ns-2   kubevirt-workload-update-drc8z   Failed      rhel-9-cyan-clam-69
      test-ns-2   kubevirt-workload-update-m5hrx   Pending     rhel-9-cyan-clam-69
      test-ns-2   kubevirt-workload-update-mn7kn   Failed      rhel-9-cyan-clam-69
      test-ns-2   kubevirt-workload-update-smfl9   Failed      rhel-9-cyan-clam-69
      test-ns-2   kubevirt-workload-update-w4zns   Failed      rhel-9-cyan-clam-69
      test-ns     kubevirt-workload-update-4tlbd   Succeeded   fedora-scarlet-moose-83
      test-ns     kubevirt-workload-update-l6xq2   Succeeded   fedora-scarlet-moose-83
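
      The recreated VMIMs are all named kubevirt-workload-update-* and, per the YAML in Additional info, carry the kubevirt.io/volume-update-migration label, which suggests they are being created by KubeVirt's workload/volume-update controller rather than by MTC. One way to confirm, reusing the label value from the dump below:

      $ oc get vmim -n test-ns-2 -l kubevirt.io/volume-update-migration=rhel-9-cyan-clam-69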
      

      Expected results:

      The VM does not migrate again after the MigPlan is deleted.

      Additional info:

      VMIM and VM objects after the migration restarted:

      $ oc get vmim -n test-ns-2 kubevirt-workload-update-7qfwd -oyaml
      apiVersion: kubevirt.io/v1
      kind: VirtualMachineInstanceMigration
      metadata:
        annotations:
          kubevirt.io/latest-observed-api-version: v1
          kubevirt.io/storage-observed-api-version: v1
          kubevirt.io/workloadUpdateMigration: ""
        creationTimestamp: "2025-04-23T06:07:47Z"
        generateName: kubevirt-workload-update-
        generation: 1
        labels:
          kubevirt.io/vmi-name: rhel-9-cyan-clam-69
          kubevirt.io/volume-update-migration: rhel-9-cyan-clam-69
        name: kubevirt-workload-update-7qfwd
        namespace: test-ns-2
        resourceVersion: "6381696"
        uid: 92112ace-667a-4b8d-96ae-2d977f9347f1
      spec:
        vmiName: rhel-9-cyan-clam-69
      status:
        migrationState:
          abortStatus: Succeeded
          completed: true
          endTimestamp: "2025-04-23T06:13:14Z"
          failed: true
          failureReason: 'Live migration aborted '
          migrationConfiguration:
            allowAutoConverge: false
            allowPostCopy: false
            allowWorkloadDisruption: false
            bandwidthPerMigration: "0"
            completionTimeoutPerGiB: 150
            nodeDrainTaintKey: kubevirt.io/drain
            parallelMigrationsPerCluster: 5
            parallelOutboundMigrationsPerNode: 2
            progressTimeout: 150
            unsafeMigrationOverride: false
          migrationUid: 92112ace-667a-4b8d-96ae-2d977f9347f1
          mode: PreCopy
          sourceNode: c01-jp419-6-r4qrr-worker-0-xfmgp
          sourcePod: virt-launcher-rhel-9-cyan-clam-69-92chz
          startTimestamp: "2025-04-23T06:13:14Z"
          targetDirectMigrationNodePorts:
            "38715": 0
            "40337": 49152
            "45217": 49153
          targetNode: c01-jp419-6-r4qrr-worker-0-5c4kh
          targetNodeAddress: 10.129.2.69
          targetPod: virt-launcher-rhel-9-cyan-clam-69-qnhgs
        phase: Failed
        phaseTransitionTimestamps:
        - phase: Pending
          phaseTransitionTimestamp: "2025-04-23T06:07:47Z"
        - phase: Scheduling
          phaseTransitionTimestamp: "2025-04-23T06:13:06Z"
        - phase: Scheduled
          phaseTransitionTimestamp: "2025-04-23T06:13:14Z"
        - phase: PreparingTarget
          phaseTransitionTimestamp: "2025-04-23T06:13:14Z"
        - phase: TargetReady
          phaseTransitionTimestamp: "2025-04-23T06:13:14Z"
        - phase: Running
          phaseTransitionTimestamp: "2025-04-23T06:13:14Z"
        - phase: Failed
          phaseTransitionTimestamp: "2025-04-23T06:13:14Z"
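
      The failed VMIM above shows abortStatus: Succeeded with failureReason 'Live migration aborted', consistent with the manual VMIM deletions described in Actual results. To pull just the failure reason, for example:

      $ oc get vmim -n test-ns-2 kubevirt-workload-update-7qfwd \
          -o jsonpath='{.status.migrationState.failureReason}'
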
      $ oc get vm -n test-ns-2 rhel-9-cyan-clam-69 -oyaml
      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        annotations:
          kubemacpool.io/transaction-timestamp: "2025-04-23T05:46:07.291170076Z"
          kubevirt.io/latest-observed-api-version: v1
          kubevirt.io/storage-observed-api-version: v1
        creationTimestamp: "2025-04-22T11:45:11Z"
        finalizers:
        - kubevirt.io/virtualMachineControllerFinalize
        generation: 3
        name: rhel-9-cyan-clam-69
        namespace: test-ns-2
        resourceVersion: "6381694"
        uid: 46247639-e007-4ef1-91c8-f7c7e35f8f6d
      spec:
        dataVolumeTemplates:
        - metadata:
            creationTimestamp: null
            name: rhel-9-cyan-clam-69-volume
          spec:
            sourceRef:
              kind: DataSource
              name: rhel9
              namespace: openshift-virtualization-os-images
            storage:
              resources:
                requests:
                  storage: 30Gi
        instancetype:
          kind: virtualmachineclusterinstancetype
          name: u1.small
        preference:
          kind: virtualmachineclusterpreference
          name: rhel.9
        runStrategy: Always
        template:
          metadata:
            creationTimestamp: null
            labels:
              network.kubevirt.io/headlessService: headless
          spec:
            architecture: amd64
            domain:
              devices:
                autoattachPodInterface: false
                interfaces:
                - macAddress: 02:a2:9f:00:00:09
                  masquerade: {}
                  name: default
              machine:
                type: pc-q35-rhel9.4.0
              resources: {}
            networks:
            - name: default
              pod: {}
            subdomain: headless
            volumes:
            - dataVolume:
                name: rhel-9-cyan-clam-69-volume
              name: rootdisk
            - cloudInitNoCloud:
                userData: |
                  #cloud-config
                  chpasswd:
                    expire: false
                  password: bj6x-s2zt-v3wt
                  user: rhel
              name: cloudinitdisk
        updateVolumesStrategy: Migration
      status:
        conditions:
        - lastProbeTime: null
          lastTransitionTime: "2025-04-22T13:09:02Z"
          status: "True"
          type: Ready
        - lastProbeTime: null
          lastTransitionTime: null
          message: All of the VMI's DVs are bound and not running
          reason: AllDVsReady
          status: "True"
          type: DataVolumesReady
        - lastProbeTime: null
          lastTransitionTime: null
          status: "True"
          type: LiveMigratable
        - lastProbeTime: null
          lastTransitionTime: null
          status: "True"
          type: StorageLiveMigratable
        - lastProbeTime: "2025-04-22T11:46:13Z"
          lastTransitionTime: null
          status: "True"
          type: AgentConnected
        - lastProbeTime: null
          lastTransitionTime: "2025-04-23T05:46:07Z"
          message: migrate volumes
          status: "True"
          type: VolumesChange
        created: true
        desiredGeneration: 3
        instancetypeRef:
          controllerRevisionRef:
            name: rhel-9-cyan-clam-69-u1.small-v1beta1-f26bbe8f-d791-4229-9011-ea3ccea531b5-1
          kind: virtualmachineclusterinstancetype
          name: u1.small
        observedGeneration: 3
        preferenceRef:
          controllerRevisionRef:
            name: rhel-9-cyan-clam-69-rhel.9-v1beta1-b1007ee5-1cf4-4aca-b2db-acd15de1a70a-1
          kind: virtualmachineclusterpreference
          name: rhel.9
        printableStatus: Running
        ready: true
        runStrategy: Always
        volumeSnapshotStatuses:
        - enabled: true
          name: rootdisk
        - enabled: false
          name: cloudinitdisk
          reason: Snapshot is not supported for this volumeSource type [cloudinitdisk]
        volumeUpdateState:
          volumeMigrationState:
            migratedVolumes:
            - destinationPVCInfo:
                claimName: rhel-9-cyan-clam-69-volume
                volumeMode: Block
              sourcePVCInfo:
                claimName: rhel-9-cyan-clam-69-volume-mig-lswk
                volumeMode: Block
              volumeName: rootdisk
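
      Note that even after the MigPlan deletion the VM spec still carries updateVolumesStrategy: Migration, the VolumesChange condition is still True ("migrate volumes"), and status.volumeUpdateState still references the -mig- source PVC; presumably this leftover state is what keeps the controller recreating VMIMs. As a hedged cleanup sketch only (assumption: this leftover state is the trigger; verify before applying to a real VM):

      $ oc patch vm rhel-9-cyan-clam-69 -n test-ns-2 --type=json \
          -p '[{"op":"remove","path":"/spec/updateVolumesStrategy"}]'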
      

              Assignee: Alexander Wels (rhn-support-awels)
              Reporter: Jenia Peimer (jpeimer@redhat.com)
              QA Contact: Jenia Peimer