Type: Bug
Resolution: Unresolved
Priority: Critical
Status: ToDo
-
Description of problem:
dafrank@redhat.com was running the tests with GPFS and hit this failure: a VM with a standalone DV failed to migrate.
Storage class: ibm-spectrum-scale-sample
Version-Release number of selected component (if applicable):
4.19
How reproducible:
Steps to Reproduce:
1. Create a DV.
2. Create a VM that boots from this DV.
3. Create a MigPlan and a MigMigration to migrate this VM (a sketch of these two resources follows).
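For context, a minimal sketch of the MigPlan/MigMigration pair used for such a storage-class conversion; the field values are taken from the resources dumped under "Actual results" below, so treat this as illustrative rather than the exact test fixtures:

apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: storage-mig-plan
  namespace: openshift-migration
spec:
  # source and destination are the same cluster: this is an in-place
  # storage-class conversion, not a cluster-to-cluster migration
  srcMigClusterRef:
    name: host
    namespace: openshift-migration
  destMigClusterRef:
    name: host
    namespace: openshift-migration
  liveMigrate: true
  namespaces:
  - storage-migration-test-mtc-storage-class-migration
---
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: mig-migration-storage
  namespace: openshift-migration
spec:
  migPlanRef:
    name: storage-mig-plan
    namespace: openshift-migration
  migrateState: true
  quiescePods: true
  stage: false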
Actual results:
$ oc get MigPlan -A
NAMESPACE             NAME               READY   SOURCE   TARGET   STORAGE   AGE
openshift-migration   storage-mig-plan   True    host     host               46m

$ oc get migmigration -A
NAMESPACE             NAME                    READY   PLAN               STAGE   ROLLBACK   ITINERARY   PHASE       AGE
openshift-migration   mig-migration-storage           storage-mig-plan   false              Stage       Completed   46m

$ oc get migmigration -n openshift-migration mig-migration-storage -oyaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  annotations:
    openshift.io/touch: f451c7d9-500d-11f0-a17a-0a580a810307
  creationTimestamp: "2025-06-23T08:35:46Z"
  generation: 43
  labels:
    migration.openshift.io/migplan-name: storage-mig-plan
    migration.openshift.io/migration-uid: c232c366-3094-41ef-b02a-dc4c10672f17
  name: mig-migration-storage
  namespace: openshift-migration
  ownerReferences:
  - apiVersion: migration.openshift.io/v1alpha1
    kind: MigPlan
    name: storage-mig-plan
    uid: 8799ecf1-e435-4c7d-a7ca-e7b032f3334d
  resourceVersion: "8683782"
  uid: c232c366-3094-41ef-b02a-dc4c10672f17
spec:
  migPlanRef:
    name: storage-mig-plan
    namespace: openshift-migration
  migrateState: true
  quiescePods: true
  stage: false
status:
  conditions:
  - category: Advisory
    durable: true
    lastTransitionTime: "2025-06-23T08:42:11Z"
    message: The migration has completed successfully.
    reason: Completed
    status: "True"
    type: Succeeded
  itinerary: Stage
  observedDigest: 458005d0e00f41ae25c448db4999139312412b60b9781415cdfb5e88fba3dd04
  phase: Completed
  pipeline:
  - completed: "2025-06-23T08:35:55Z"
    message: Completed
    name: Prepare
    started: "2025-06-23T08:35:46Z"
  - completed: "2025-06-23T08:35:56Z"
    message: Completed
    name: StageBackup
    started: "2025-06-23T08:35:55Z"
  - completed: "2025-06-23T08:42:10Z"
    message: Completed
    name: DirectVolume
    progress:
    - '[fedora] storage-migration-test-mtc-storage-class-migration/blockrsync-nwwr7: Completed 100% (30s)'
    - '[standalone-dv-fedora] Live Migration storage-migration-test-mtc-storage-class-migration/fedora-vm-with-existing-dv-1750667610-3898022: Failed (0s)'
    started: "2025-06-23T08:35:56Z"
  - completed: "2025-06-23T08:42:11Z"
    message: Completed
    name: Cleanup
    started: "2025-06-23T08:42:10Z"
  startTimestamp: "2025-06-23T08:35:46Z"

$ oc get MigPlan -n openshift-migration storage-mig-plan -oyaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  annotations:
    openshift.io/touch: f462808a-500d-11f0-a17a-0a580a810307
  creationTimestamp: "2025-06-23T08:35:42Z"
  generation: 8
  name: storage-mig-plan
  namespace: openshift-migration
  resourceVersion: "8683783"
  uid: 8799ecf1-e435-4c7d-a7ca-e7b032f3334d
spec:
  destMigClusterRef:
    name: host
    namespace: openshift-migration
  liveMigrate: true
  namespaces:
  - storage-migration-test-mtc-storage-class-migration
  persistentVolumes:
  - capacity: 34400Mi
    name: pvc-146d0cbc-e799-4b67-a0a9-1854e70d71ac
    proposedCapacity: "0"
    pvc:
      accessModes:
      - auto
      hasReference: true
      name: fedora:fedora-mig-5tn8
      namespace: storage-migration-test-mtc-storage-class-migration
      ownerType: VirtualMachine
      volumeMode: auto
    selection:
      action: copy
      copyMethod: filesystem
      storageClass: ibm-spectrum-scale-sample
    storageClass: ibm-spectrum-scale-sample
    supported:
      actions:
      - skip
      - copy
      copyMethods:
      - filesystem
      - block
      - snapshot
  - capacity: "11362347344"
    name: pvc-d766bddb-64c0-49c7-8b64-cb796029f8ee
    proposedCapacity: 11363M
    pvc:
      accessModes:
      - auto
      hasReference: true
      name: standalone-dv-fedora:standalone-dv-fedora-mig-5tn8
      namespace: storage-migration-test-mtc-storage-class-migration
      ownerType: VirtualMachine
      volumeMode: auto
    selection:
      action: copy
      copyMethod: filesystem
      storageClass: ibm-spectrum-scale-sample
    storageClass: ibm-spectrum-scale-sample
    supported:
      actions:
      - skip
      - copy
      copyMethods:
      - filesystem
      - block
      - snapshot
  srcMigClusterRef:
    name: host
    namespace: openshift-migration
status:
  conditions:
  - category: Advisory
    durable: true
    lastTransitionTime: "2025-06-23T08:35:50Z"
    message: The migration plan was previously used for Storage Conversion. It can only be used for further Storage Conversions. Other migrations will be possible only after a successful rollback is performed.
    reason: StorageConversionPlan
    status: "True"
    type: MigrationTypeIdentified
  - category: Warn
    durable: true
    lastTransitionTime: "2025-06-23T08:35:50Z"
    message: 'Found Pods with non-default `Spec.NodeSelector` set in namespaces: [storage-migration-test-mtc-storage-class-migration]. This field will be cleared on Pods restored into the target cluster.'
    reason: NodeSelectorsDetected
    status: "True"
    type: NamespacesHaveNodeSelectors
  - category: Required
    lastTransitionTime: "2025-06-23T08:35:45Z"
    message: The `persistentVolumes` list has been updated with discovered PVs.
    reason: Done
    status: "True"
    type: PvsDiscovered
  - category: Warn
    durable: true
    lastTransitionTime: "2025-06-23T08:42:11Z"
    message: 'Migrating data of following volumes may result in a failure either due to mismatch in their requested and actual capacities or disk usage being close to 100%: [standalone-dv-fedora]'
    reason: NotDone
    status: "True"
    type: PvCapacityAdjustmentRequired
  - category: Warn
    lastTransitionTime: "2025-06-23T08:42:11Z"
    message: 'Failed to compute PV resizing data for the following volumes. PV resizing will be disabled for these volumes and the migration may fail if the volumes are full or their requested and actual capacities differ in the source cluster. Please ensure that the volumes are attached to one or more running Pods for PV resizing to work correctly: [fedora]'
    reason: NotDone
    status: "True"
    type: PvUsageAnalysisFailed
  - category: Required
    lastTransitionTime: "2025-06-23T08:35:45Z"
    message: The migration plan is ready.
    status: "True"
    type: Ready
  - category: Advisory
    lastTransitionTime: "2025-06-23T08:35:59Z"
    message: The migrations plan is in suspended state; Limited validation enforced; PV discovery and resource reconciliation suspended.
    status: "True"
    type: Suspended
  destStorageClasses:
  - name: ibm-spectrum-scale-internal
    provisioner: kubernetes.io/no-provisioner
    volumeAccessModes:
    - accessModes:
      - ReadWriteOnce
      volumeMode: Filesystem
  - default: true
    name: ibm-spectrum-scale-sample
    provisioner: spectrumscale.csi.ibm.com
    volumeAccessModes:
    - accessModes:
      - ReadWriteOnce
      volumeMode: Filesystem
  excludedResources:
  - imagetags
  - templateinstances
  - clusterserviceversions
  - packagemanifests
  - subscriptions
  - servicebrokers
  - servicebindings
  - serviceclasses
  - serviceinstances
  - serviceplans
  - operatorgroups
  - events
  - events.events.k8s.io
  - rolebindings.authorization.openshift.io
  observedDigest: 1c5d782a79dde16e7dd9ace591c9d51b1ea8eef29ee1c24dc3b2ba06e6e5ee8f
  srcStorageClasses:
  - name: ibm-spectrum-scale-internal
    provisioner: kubernetes.io/no-provisioner
    volumeAccessModes:
    - accessModes:
      - ReadWriteOnce
      volumeMode: Filesystem
  - default: true
    name: ibm-spectrum-scale-sample
    provisioner: spectrumscale.csi.ibm.com
    volumeAccessModes:
    - accessModes:
      - ReadWriteOnce
      volumeMode: Filesystem
  suffix: 5tn8

$ oc get vmim -A
NAMESPACE                                             NAME                             PHASE     VMI
storage-migration-test-mtc-storage-class-migration   kubevirt-workload-update-55sg8   Failed    fedora-vm-with-existing-dv-1750667610-3898022
storage-migration-test-mtc-storage-class-migration   kubevirt-workload-update-ml4wd   Failed    fedora-vm-with-existing-dv-1750667610-3898022
storage-migration-test-mtc-storage-class-migration   kubevirt-workload-update-rvn65   Failed    fedora-vm-with-existing-dv-1750667610-3898022
storage-migration-test-mtc-storage-class-migration   kubevirt-workload-update-sfdwf   Failed    fedora-vm-with-existing-dv-1750667610-3898022
storage-migration-test-mtc-storage-class-migration   kubevirt-workload-update-vd52g   Pending   fedora-vm-with-existing-dv-1750667610-3898022
storage-migration-test-mtc-storage-class-migration   kubevirt-workload-update-vzcl9   Failed    fedora-vm-with-existing-dv-1750667610-3898022
Expected results:
The MigMigration either reports Failed, or it succeeds in migrating the VM; it should not report Succeeded while the volume live migration failed.
Additional info:
$ oc get vm -n storage-migration-test-mtc-storage-class-migration fedora-vm-with-existing-dv-1750667610-3898022 -oyaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
    kubemacpool.io/transaction-timestamp: "2025-06-23T08:37:02.782553243Z"
    kubevirt.io/latest-observed-api-version: v1
    kubevirt.io/storage-observed-api-version: v1
    vm.kubevirt.io/validations: |
      [
        {
          "name": "minimal-required-memory",
          "path": "jsonpath::.spec.domain.memory.guest",
          "rule": "integer",
          "message": "This VM requires more memory.",
          "min": 2147483648
        }
      ]
  creationTimestamp: "2025-06-23T08:33:30Z"
  finalizers:
  - kubevirt.io/virtualMachineControllerFinalize
  generation: 3
  labels:
    app: fedora-vm-with-existing-dv-1750667610-3898022
    kubevirt.io/dynamic-credentials-support: "true"
    vm.kubevirt.io/template: fedora-server-small
    vm.kubevirt.io/template.namespace: openshift
    vm.kubevirt.io/template.revision: "1"
    vm.kubevirt.io/template.version: v0.34.0
  name: fedora-vm-with-existing-dv-1750667610-3898022
  namespace: storage-migration-test-mtc-storage-class-migration
  resourceVersion: "8678079"
  uid: 2c2097f4-9516-4254-b52b-d8d19e185e28
spec:
  runStrategy: Always
  template:
    metadata:
      annotations:
        vm.kubevirt.io/flavor: small
        vm.kubevirt.io/os: fedora
        vm.kubevirt.io/workload: server
      creationTimestamp: null
      labels:
        debugLogs: "true"
        kubevirt.io/domain: fedora-vm-with-existing-dv-1750667610-3898022
        kubevirt.io/size: small
        kubevirt.io/vm: fedora-vm-with-existing-dv-1750667610-3898022
    spec:
      architecture: amd64
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        devices:
          disks:
          - disk:
              bus: virtio
            name: rootdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - macAddress: 02:2b:2a:00:00:0e
            masquerade: {}
            model: virtio
            name: default
          rng: {}
        features:
          acpi: {}
          smm:
            enabled: true
        firmware:
          bootloader:
            efi: {}
        machine:
          type: pc-q35-rhel9.6.0
        memory:
          guest: 2Gi
        resources: {}
      networks:
      - name: default
        pod: {}
      terminationGracePeriodSeconds: 180
      volumes:
      - dataVolume:
          name: standalone-dv-fedora-mig-5tn8
        name: rootdisk
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            user: fedora
            password: password
            chpasswd: { expire: False }
            ssh_authorized_keys: [ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC3BntwJs36IxYz7tX1KRlecInd20xoeZzjY5bI2TNZxIpXeANp5vz+BGZQc6co1E9Jt8mi1N0Kf/iPMfhexNV8is2kmaKYPbAnKGGYcHUG6wWEJmLhhcExs6T4OpIB4R4tzre5HS12x/BzG/3eMoMev3yzfzfcFgD3+8j/K9y5r6V6niq98Ckso9/ZFEL/ZqhX6NEIHy3nSVKGkA3JA2R5USMkWtoUj8FmoHe3oTt4cSLGtUKaY9SJzDKx7x1CCN3dsP2wkfF0iky3DcwRt2MLML8owqHXfV8H922ATHAQcIPLjgvvJL9esdgMCTH7GHMCvCFiVKLON3bQjTv/br05 root@exec1.rdocloud]
            runcmd: ['grep ssh-rsa /etc/crypto-policies/back-ends/opensshserver.config || sudo update-crypto-policies --set LEGACY || true', "sudo sed -i 's/^#\\?PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config", 'sudo systemctl enable sshd', 'sudo systemctl restart sshd']
        name: cloudinitdisk
  updateVolumesStrategy: Migration
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-06-23T08:33:44Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    message: All of the VMI's DVs are bound and ready
    reason: AllDVsReady
    status: "True"
    type: DataVolumesReady
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: LiveMigratable
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: StorageLiveMigratable
  - lastProbeTime: "2025-06-23T08:34:08Z"
    lastTransitionTime: null
    status: "True"
    type: AgentConnected
  - lastProbeTime: null
    lastTransitionTime: "2025-06-23T08:37:02Z"
    message: migrate volumes
    status: "True"
    type: VolumesChange
  created: true
  desiredGeneration: 3
  observedGeneration: 2
  printableStatus: Running
  ready: true
  runStrategy: Always
  volumeSnapshotStatuses:
  - enabled: true
    name: rootdisk
  - enabled: false
    name: cloudinitdisk
    reason: Snapshot is not supported for this volumeSource type [cloudinitdisk]
  volumeUpdateState:
    volumeMigrationState:
      migratedVolumes:
      - destinationPVCInfo:
          claimName: standalone-dv-fedora-mig-5tn8
          volumeMode: Filesystem
        sourcePVCInfo:
          claimName: standalone-dv-fedora
          volumeMode: Filesystem
        volumeName: rootdisk
$ oc get dv -n storage-migration-test-mtc-storage-class-migration standalone-dv-fedora -oyaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  annotations:
    cdi.kubevirt.io/storage.usePopulator: "true"
  creationTimestamp: "2025-06-23T08:31:58Z"
  generation: 1
  name: standalone-dv-fedora
  namespace: storage-migration-test-mtc-storage-class-migration
  resourceVersion: "8673832"
  uid: b1cfc7df-95da-4111-8a9f-1e640d6e0fb7
spec:
  contentType: kubevirt
  source:
    http:
      certConfigMap: artifactory-configmap
      secretRef: cnv-tests-artifactory-secret
      url: <cnv-tests/fedora-images/Fedora-Cloud-Base-Generic-41-1.4.x86_64.qcow2>
  storage:
    resources:
      requests:
        storage: 10Gi
    storageClassName: ibm-spectrum-scale-sample
status:
  claimName: standalone-dv-fedora
  conditions:
  - lastHeartbeatTime: "2025-06-23T08:33:30Z"
    lastTransitionTime: "2025-06-23T08:33:30Z"
    message: PVC standalone-dv-fedora Bound
    reason: Bound
    status: "True"
    type: Bound
  - lastHeartbeatTime: "2025-06-23T08:33:30Z"
    lastTransitionTime: "2025-06-23T08:33:30Z"
    status: "True"
    type: Ready
  - lastHeartbeatTime: "2025-06-23T08:33:29Z"
    lastTransitionTime: "2025-06-23T08:33:29Z"
    message: Import Complete
    reason: Completed
    status: "False"
    type: Running
  phase: Succeeded
  progress: 100.0%
$ oc get dv -n storage-migration-test-mtc-storage-class-migration | grep standalone
standalone-dv-fedora            Succeeded   100.0%   43m
standalone-dv-fedora-mig-5tn8   Succeeded   100.0%   39m

$ oc get pvc -n storage-migration-test-mtc-storage-class-migration | grep standalone
standalone-dv-fedora            Bound   pvc-d766bddb-64c0-49c7-8b64-cb796029f8ee   11362347344   RWX   ibm-spectrum-scale-sample   <unset>   43m
standalone-dv-fedora-mig-5tn8   Bound   pvc-a148c099-b79d-41c4-a98d-e62b7918eb43   11362347344   RWX   ibm-spectrum-scale-sample   <unset>   39m
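Both DVs report Succeeded while every VMIM fails, so the failure detail most likely sits on the KubeVirt side rather than in CDI. A speculative example of where to look next; the virt-launcher pod name is hypothetical, substitute the real one from `oc get pods`:

$ oc get vmim -n storage-migration-test-mtc-storage-class-migration kubevirt-workload-update-vzcl9 -oyaml
$ oc logs -n storage-migration-test-mtc-storage-class-migration virt-launcher-fedora-vm-with-existing-dv-1750667610-3898022-xxxxx  # hypothetical pod name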