OpenShift Virtualization / CNV-61741

[4.19] Storage Migrate VM with MTC - can't VMClone

    • Type: Bug
    • Resolution: Done-Errata
    • Priority: Blocker
    • CNV v4.19.4
    • CNV v4.19.0
    • Storage Platform
    • Quality / Stability / Reliability
    • 8
    • CNV v4.99.0.rhel9-2360, CNV v4.19.3.rhel9-10
    • CNV Storage 274, CNV Storage 275

      Description of problem:

      Cannot clone a VM whose storage was migrated with MTC.
      The VMClone stays in RestoreInProgress because the underlying VMRestore never completes.

      Version-Release number of selected component (if applicable):

      4.19

      How reproducible:

      Always

      Steps to Reproduce:

      1. Storage Migrate VM with MTC -> succeeded
      2. Delete the old virt-launcher pod and the old DV/PVC -> succeeded
      3. Create a VMClone -> the underlying restore never completes (an example clone manifest follows the outputs below)
      
      $ oc get vmclone -A
      NAMESPACE   NAME                                              PHASE               SOURCEVIRTUALMACHINE       TARGETVIRTUALMACHINE
      test-ns1    rhel-9-rose-barracuda-54-clone-tbvz3r-dvm5jw-cr   RestoreInProgress   rhel-9-rose-barracuda-54   rhel-9-rose-barracuda-54-clone-tbvz3r
      
      $ oc get vmrestore -A
      NAMESPACE   NAME                                               TARGETKIND       TARGETNAME                              COMPLETE   RESTORETIME   ERROR
      test-ns1    tmp-restore-7ed74739-df7e-4356-af25-12dd5a60fe8a   VirtualMachine   rhel-9-rose-barracuda-54-clone-tbvz3r   false      
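
      For reference, a minimal VirtualMachineClone manifest of the kind created in step 3 (the exact manifest used in this run was not captured, so the metadata.name below is hypothetical; the other values are taken from the outputs above):

      apiVersion: clone.kubevirt.io/v1beta1
      kind: VirtualMachineClone
      metadata:
        name: rhel-9-rose-barracuda-54-clone
        namespace: test-ns1
      spec:
        source:
          apiGroup: kubevirt.io
          kind: VirtualMachine
          name: rhel-9-rose-barracuda-54
        target:
          apiGroup: kubevirt.io
          kind: VirtualMachine
          name: rhel-9-rose-barracuda-54-clone-tbvz3r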

      Actual results:

      The VMRestore is trying to find the original PVC that was used before the storage migration:
      
      $ oc get vmrestore -n test-ns1 tmp-restore-7ed74739-df7e-4356-af25-12dd5a60fe8a -oyaml
      apiVersion: snapshot.kubevirt.io/v1beta1
      kind: VirtualMachineRestore
      metadata:
        creationTimestamp: "2025-05-14T08:12:19Z"
        finalizers:
        - snapshot.kubevirt.io/vmrestore-protection
        generation: 1
        name: tmp-restore-7ed74739-df7e-4356-af25-12dd5a60fe8a
        namespace: test-ns1
        ownerReferences:
        - apiVersion: clone.kubevirt.io/v1beta1
          blockOwnerDeletion: true
          controller: true
          kind: VirtualMachineClone
          name: rhel-9-rose-barracuda-54-clone-tbvz3r-dvm5jw-cr
          uid: 7ed74739-df7e-4356-af25-12dd5a60fe8a
        resourceVersion: "844643"
        uid: 4889c8f6-9913-44cf-be91-f7b637489ed7
      spec:
        patches:
        - '{"op":"replace","path":"/spec/template/spec/domain/devices/interfaces/0/macAddress","value":""}'
        target:
          apiGroup: kubevirt.io
          kind: VirtualMachine
          name: rhel-9-rose-barracuda-54-clone-tbvz3r
        virtualMachineSnapshotName: tmp-snapshot-7ed74739-df7e-4356-af25-12dd5a60fe8a
      status:
        complete: false
        conditions:
        - lastProbeTime: null
          lastTransitionTime: "2025-05-14T08:12:19Z"
          reason: when creating restore dv pvc test-ns1/rhel-9-rose-barracuda-54-volume
            does not exist and should
          status: "False"
          type: Progressing
        - lastProbeTime: null
          lastTransitionTime: "2025-05-14T08:12:19Z"
          reason: when creating restore dv pvc test-ns1/rhel-9-rose-barracuda-54-volume
            does not exist and should
          status: "False"
          type: Ready
        restores:
        - persistentVolumeClaim: restore-4889c8f6-9913-44cf-be91-f7b637489ed7-rootdisk
          volumeName: rootdisk
          volumeSnapshotName: vmsnapshot-e22a2d7e-74dc-4694-89fe-1b30d75820e0-volume-rootdisk
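
      The Progressing and Ready condition reasons reference the pre-migration PVC (test-ns1/rhel-9-rose-barracuda-54-volume), which was deleted in step 2, while the VM itself now points at the migrated DataVolume (rhel-9-rose-barracuda-54-volume-mig-zj59, see the VM YAML under Additional info). A couple of hypothetical follow-up checks (outputs were not captured in this report):

      # Expected to return NotFound after step 2
      $ oc get pvc -n test-ns1 rhel-9-rose-barracuda-54-volume

      # Surface only the Progressing condition reason of the stuck restore
      $ oc get vmrestore -n test-ns1 tmp-restore-7ed74739-df7e-4356-af25-12dd5a60fe8a \
          -o jsonpath='{.status.conditions[?(@.type=="Progressing")].reason}'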
      

      Expected results:

      VMClone and VMRestore succeed.

      Additional info:

      VM YAML while the VMClone is failing:
      
      $ oc get vm -n test-ns1 rhel-9-rose-barracuda-54 -oyaml
      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        annotations:
          kubevirt.io/latest-observed-api-version: v1
          kubevirt.io/storage-observed-api-version: v1
        creationTimestamp: "2025-05-14T06:35:35Z"
        finalizers:
        - kubevirt.io/virtualMachineControllerFinalize
        generation: 2
        name: rhel-9-rose-barracuda-54
        namespace: test-ns1
        resourceVersion: "844629"
        uid: 617f4176-6c9d-4da4-bacd-1af5d19a4737
      spec:
        dataVolumeTemplates:
        - metadata:
            creationTimestamp: null
            name: rhel-9-rose-barracuda-54-volume-mig-zj59
          spec:
            sourceRef:
              kind: DataSource
              name: rhel9
              namespace: openshift-virtualization-os-images
            storage:
              resources:
                requests:
                  storage: "34087042032"
              storageClassName: hostpath-csi-basic
        instancetype:
          kind: virtualmachineclusterinstancetype
          name: u1.small
        preference:
          kind: virtualmachineclusterpreference
          name: rhel.9
        runStrategy: Always
        template:
          metadata:
            creationTimestamp: null
            labels:
              network.kubevirt.io/headlessService: headless
          spec:
            architecture: amd64
            domain:
              devices:
                autoattachPodInterface: false
                interfaces:
                - macAddress: 02:64:4c:00:00:01
                  masquerade: {}
                  name: default
              machine:
                type: pc-q35-rhel9.4.0
              resources: {}
            networks:
            - name: default
              pod: {}
            subdomain: headless
            volumes:
            - dataVolume:
                name: rhel-9-rose-barracuda-54-volume-mig-zj59
              name: rootdisk
            - cloudInitNoCloud:
                userData: |
                  #cloud-config
                  chpasswd:
                    expire: false
                  password: 6trh-ypcf-ua7i
                  user: rhel
              name: cloudinitdisk
        updateVolumesStrategy: Migration
      status:
        conditions:
        - lastProbeTime: null
          lastTransitionTime: "2025-05-14T07:15:22Z"
          status: "True"
          type: Ready
        - lastProbeTime: null
          lastTransitionTime: null
          message: All of the VMI's DVs are bound and not running
          reason: AllDVsReady
          status: "True"
          type: DataVolumesReady
        - lastProbeTime: null
          lastTransitionTime: null
          status: "True"
          type: LiveMigratable
        - lastProbeTime: null
          lastTransitionTime: null
          status: "True"
          type: StorageLiveMigratable
        - lastProbeTime: "2025-05-14T06:36:30Z"
          lastTransitionTime: null
          status: "True"
          type: AgentConnected
        created: true
        desiredGeneration: 2
        instancetypeRef:
          controllerRevisionRef:
            name: rhel-9-rose-barracuda-54-u1.small-v1beta1-80691f2e-7484-4d65-9a2a-19e510d0227d-1
          kind: virtualmachineclusterinstancetype
          name: u1.small
        observedGeneration: 1
        preferenceRef:
          controllerRevisionRef:
            name: rhel-9-rose-barracuda-54-rhel.9-v1beta1-c11d50b4-24c7-4ce4-aaae-da54d27811f4-1
          kind: virtualmachineclusterpreference
          name: rhel.9
        printableStatus: Running
        ready: true
        runStrategy: Always
        volumeSnapshotStatuses:
        - enabled: true
          name: rootdisk
        - enabled: false
          name: cloudinitdisk
          reason: Snapshot is not supported for this volumeSource type [cloudinitdisk]
        volumeUpdateState:
          volumeMigrationState:
            migratedVolumes:
            - destinationPVCInfo:
                claimName: rhel-9-rose-barracuda-54-volume-mig-zj59
                volumeMode: Block
              sourcePVCInfo:
                claimName: rhel-9-rose-barracuda-54-volume
                volumeMode: Filesystem
              volumeName: rootdisk
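
      The migratedVolumes entry above still records the pre-migration PVC (rhel-9-rose-barracuda-54-volume) as sourcePVCInfo, which is exactly the PVC name the stuck VMRestore is looking for. A hypothetical command to dump just that state (not part of the original report):

      $ oc get vm -n test-ns1 rhel-9-rose-barracuda-54 \
          -o jsonpath='{.status.volumeUpdateState.volumeMigrationState.migratedVolumes}'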
      
