Bug
Resolution: Done
Critical
2.9.5
None
Quality / Stability / Reliability
False
True
MTV Sprint 4
Important
Customer Reported
Description of problem:
When migrating a VM from one OpenShift cluster to another (both on the same, latest versions), the VirtualMachineCreation step fails with:
"admission webhook "virtualmachine-validator.kubevirt.io" denied the request: spec.template.spec.domain.devices.disks[1].Name 'cloudinitdisk' not found."
The source VM YAML definition uses cloud-init:
~~~
devices:
  disks:
  - disk:
      bus: virtio
    name: rootdisk
  - disk:
      bus: virtio
    name: cloudinitdisk
  interfaces:
  - macAddress: 02:90:f6:00:00:00
    masquerade: {}
    model: virtio
    name: default
volumes:
- dataVolume:
    name: fedora-fuchsia-hare-67
  name: rootdisk
- cloudInitNoCloud:
    userData: |-
      #cloud-config
      user: fedora
      password: ujux-x6ax-jas3
      chpasswd: { expire: False }
  name: cloudinitdisk
~~~
It appears the VM YAML is not being applied correctly on the destination side; the same error can be reproduced whenever there is a mismatch between devices.disks[].name and the names listed under volumes.
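For illustration, a manifest shaped like the following (a hypothetical excerpt, not taken from the failing cluster) triggers the same webhook rejection, because disks[1].name refers to 'cloudinitdisk' but no volume with that name exists:
~~~
# Hypothetical excerpt: the disk list references 'cloudinitdisk',
# but volumes has no entry with that name, so the
# virtualmachine-validator webhook rejects the VirtualMachine.
devices:
  disks:
  - disk:
      bus: virtio
    name: rootdisk
  - disk:
      bus: virtio
    name: cloudinitdisk
volumes:
- dataVolume:
    name: fedora-fuchsia-hare-67
  name: rootdisk
# note: no 'cloudinitdisk' volume here
~~~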
Version-Release number of selected component (if applicable):
Source and destination clusters are running the latest versions: OCP 4.18.26, CNV 4.18.17, MTV 2.9.5.
How reproducible:
Always
Steps to Reproduce:
1. Create two OCP clusters running 4.18.X.
2. Create a VM that uses cloudInitNoCloud.
3. Attempt to migrate the VM (a rough Plan sketch follows below).
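For step 3, the migration can be driven by an MTV Plan CR roughly like the sketch below. This is an assumption-laden sketch, not the exact manifest used: the plan name, namespace, and VM name are taken from the controller log further down, while the Provider and map names are placeholders, and the field layout follows the forklift.konveyor.io/v1beta1 API as best recalled and may differ from the installed CRD.
~~~
# Rough sketch of an MTV Plan for an OCP-to-OCP cold migration.
# Provider and map names are placeholders; verify the field
# layout against the MTV documentation for your release.
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: test
  namespace: shaggy
spec:
  warm: false
  provider:
    source:
      name: source-ocp        # placeholder Provider for the source cluster
      namespace: openshift-mtv
    destination:
      name: host              # placeholder Provider for the destination cluster
      namespace: openshift-mtv
  map:
    network:
      name: network-map       # placeholder NetworkMap
      namespace: openshift-mtv
    storage:
      name: storage-map       # placeholder StorageMap
      namespace: openshift-mtv
  targetNamespace: default
  vms:
    - name: fedora-fuchsia-hare-67
~~~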
Actual results:
"admission webhook "virtualmachine-validator.kubevirt.io" denied the request: spec.template.spec.domain.devices.disks[1].Name 'cloudinitdisk' not found."
Expected results:
VM is created and starts up without issue.
Additional info:
You can work around this by detaching the cloud-init disk on the source VM, but you then have to re-add it after migration, which is not ideal.
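Concretely, the workaround amounts to removing the cloudinitdisk entries from the source VM before migration and restoring them afterwards; a sketch based on the manifest above:
~~~
# Source VM before migration, with the cloud-init disk detached:
# the cloudinitdisk entry is removed from devices.disks and the
# cloudInitNoCloud entry from volumes, then both are re-added
# after the migration completes.
devices:
  disks:
  - disk:
      bus: virtio
    name: rootdisk
volumes:
- dataVolume:
    name: fedora-fuchsia-hare-67
  name: rootdisk
~~~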
I can confirm the VirtualMachineExport that gets created on the source cluster contains the correct manifest:
oc logs forklift-controller-7f4c746865-kqb6x
{"level":"info","ts":"2025-10-14 15:11:04.978","logger":"plan|766cx","msg":"Found vm in manifest","plan":{"name":"test","namespace":"shaggy"},"migration":"shaggy/test-qdckx","vm":{"kind":"VirtualMachine","apiVersion":"kubevirt.io/v1","metadata":{"name":"fedora-fuchsia-hare-67","namespace":"default","creationTimestamp":null,"labels":{"app":"fedora-fuchsia-hare-67","kubevirt.io/dynamic-credentials-support":"true","vm.kubevirt.io/template":"fedora-server-small","vm.kubevirt.io/template.namespace":"openshift","vm.kubevirt.io/template.revision":"1","vm.kubevirt.io/template.version":"v0.32.2"},"annotations":{"kubemacpool.io/transaction-timestamp":"2025-10-14T15:09:08.519933018Z","kubevirt.io/latest-observed-api-version":"v1","kubevirt.io/storage-observed-api-version":"v1","vm.kubevirt.io/validations":"[\n {\n \"name\": \"minimal-required-memory\",\n \"path\": \"jsonpath::.spec.domain.memory.guest\",\n \"rule\": \"integer\",\n \"message\": \"This VM requires more memory.\",\n \"min\": 2147483648\n }\n]\n"}},"spec":{"runStrategy":"Halted","template":{"metadata":{"creationTimestamp":null,"labels":{"kubevirt.io/domain":"fedora-fuchsia-hare-67","kubevirt.io/size":"small","network.kubevirt.io/headlessService":"headless"},"annotations":{"vm.kubevirt.io/flavor":"small","vm.kubevirt.io/os":"fedora","vm.kubevirt.io/workload":"server"}},"spec":{"domain":{"resources":{},"cpu":{"cores":1,"sockets":1,"threads":1},"memory":{"guest":"2Gi"},"machine":{"type":"pc-q35-rhel9.4.0"},"firmware":{"bootloader":{"efi":{}}},"features":{"acpi":{},"smm":{"enabled":true}},"devices":{"disks":[{"name":"rootdisk","disk":{"bus":"virtio"}},{"name":"cloudinitdisk","disk":{"bus":"virtio"}}],"interfaces":[{"name":"default","model":"virtio","masquerade":{},"macAddress":"02:90:f6:00:00:00"}],"rng":{}}},"terminationGracePeriodSeconds":180,"volumes":[{"name":"rootdisk","dataVolume":{"name":"fedora-fuchsia-hare-67"}},{"name":"cloudinitdisk","cloudInitNoCloud":{"userData":"#cloud-config\nuser: fedora\npassword: ujux-x6ax-jas3\nchpasswd: { expire: False }"}}],"networks":[{"name":"default","pod":{}}],"architecture":"amd64"}},"dataVolumeTemplates":[{"kind":"DataVolume","apiVersion":"cdi.kubevirt.io/v1beta1","metadata":{"name":"fedora-fuchsia-hare-67","creationTimestamp":null,"annotations":{"cdi.kubevirt.io/storage.bind.immediate.requested":"true"}},"spec":{"source":{"http":{"url":"https://virt-exportproxy-openshift-cnv.apps.ci-ln-82sy5n2-72292.origin-ci-int-gce.dev.rhcloud.com/api/export.kubevirt.io/v1beta1/namespaces/default/virtualmachineexports/fedora-fuchsia-hare-67/volumes/fedora-fuchsia-hare-67/disk.img.gz","certConfigMap":"export-ca-cm-fedora-fuchsia-hare-67","secretExtraHeaders":["header-secret-fedora-fuchsia-hare-67"]}},"storage":{"resources":{"requests":{"storage":"1Gi"}}}}}]},"status":{}}}
is related to: MTV-2490 When using MTV cold migration from OCP cluster to OCP clusters, the resulting VM storage is not configured correctly (POST)