Bug
Resolution: Unresolved
Affects Versions: CNV v4.19.12, CNV v4.20.1
Quality / Stability / Reliability
Critical
Description of problem:
The virtual machine had the spec.preference set as shown below:
# oc get vm rhel9-crimson-swordtail-83 -o yaml|yq '.spec.preference'
kind: VirtualMachineClusterPreference
name: rhel.9
And it has a "preferenceRef" populated in its status:
# oc get vm rhel9-crimson-swordtail-83 -o yaml|yq '.status.preferenceRef'
controllerRevisionRef:
name: rhel9-crimson-swordtail-83-rhel.9-v1beta1-b01bc5bd-3dd5-4aa0-8da7-68c3e64241f5-1
kind: VirtualMachineClusterPreference
name: rhel.9
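For context, the controllerRevisionRef above points to a ControllerRevision object that KubeVirt creates in the VM's namespace to snapshot the applied preference; it should be possible to inspect it with something like the following (namespace taken from the controller log further below):
# oc get controllerrevision rhel9-crimson-swordtail-83-rhel.9-v1beta1-b01bc5bd-3dd5-4aa0-8da7-68c3e64241f5-1 -n nijin-cnv -o yaml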
When I removed the preference from the VM spec and stopped and started the VM again, both virt-controller pods crashed:
% oc get pod -n openshift-cnv |grep virt-controller
virt-controller-5ffc7678b9-2q2r8   0/1   Error              0             4d21h
virt-controller-5ffc7678b9-kh89z   0/1   CrashLoopBackOff   1 (18s ago)   4d21h
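For reference, the panic shown next can be pulled from the previously terminated container instance with something like the following (pod name taken from the listing above):
% oc logs -n openshift-cnv virt-controller-5ffc7678b9-kh89z --previous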
The virt-controller logs show the following panic while starting the VM:
{"component":"virt-controller","kind":"VirtualMachine","level":"info","msg":"Starting VM due to runStrategy: RerunOnFailure","name":"rhel9-crimson-swordtail-83","namespace":"nijin-cnv","pos":"vm.go:1034","timestamp":"2025-11-30T09:52:21.207960Z","uid":"ada33e02-abf7-4199-9455-1536cfc60a71"} E1130 09:52:21.209795 1 panic.go:262] "Observed a panic" panic="runtime error: invalid memory address or nil pointer dereference" panicGoValue="\"invalid memory address or nil pointer dereference\"" stacktrace=< goroutine 1460 [running]: k8s.io/apimachinery/pkg/util/runtime.logPanic({0x2a92a20, 0x414e600}, {0x22b3cc0, 0x3fcdba0}) /remote-source/app/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:107 +0xbc k8s.io/apimachinery/pkg/util/runtime.handleCrash({0x2a92a20, 0x414e600}, {0x22b3cc0, 0x3fcdba0}, {0x414e600, 0x0, 0x10000c004e405a0?}) /remote-source/app/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:82 +0x5a k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x24?}) /remote-source/app/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:59 +0x105 panic({0x22b3cc0?, 0x3fcdba0?}) /usr/lib/golang/src/runtime/panic.go:792 +0x132 kubevirt.io/kubevirt/pkg/virt-controller/watch/vm.patchVMRevision(0xc004ec2008?) /remote-source/app/pkg/virt-controller/watch/vm/vm.go:1670 +0xc1 kubevirt.io/kubevirt/pkg/virt-controller/watch/vm.(*Controller).createVMRevision(0xc004018b00, 0xc004ec2008) /remote-source/app/pkg/virt-controller/watch/vm/vm.go:1837 +0x99 kubevirt.io/kubevirt/pkg/virt-controller/watch/vm.(*Controller).startVMI(0xc004018b00, 0xc004ec2008) /remote-source/app/pkg/virt-controller/watch/vm/vm.go:1241 +0xb9c kubevirt.io/kubevirt/pkg/virt-controller/watch/vm.(*Controller).syncRunStrategy(0xc004018b00, 0xc004ec2008, 0x0, {0xc004b071c0, 0xe}) /remote-source/app/pkg/virt-controller/watch/vm/vm.go:1035 +0x3f4f kubevirt.io/kubevirt/pkg/virt-controller/watch/vm.(*Controller).sync(0xc004018b00, 0xc004ec2008, 0x0, {0xc004e405a0, 0x24}) /remote-source/app/pkg/virt-controller/watch/vm/vm.go:3171 +0xbbf kubevirt.io/kubevirt/pkg/virt-controller/watch/vm.(*Controller).execute(0xc004018b00, {0xc004e405a0, 0x24})
The issue is that the VM status still contains the preferenceRef:
# oc get vm rhel9-crimson-swordtail-83 -o yaml|yq '.status.preferenceRef'
controllerRevisionRef:
name: rhel9-crimson-swordtail-83-rhel.9-v1beta1-b01bc5bd-3dd5-4aa0-8da7-68c3e64241f5-1
kind: VirtualMachineClusterPreference
name: rhel.9
This stale reference causes a nil pointer dereference at "vmCopy.Spec.Preference.RevisionName", because spec.preference has been removed while status.preferenceRef still carries a controllerRevisionRef:
func patchVMRevision(vm *virtv1.VirtualMachine) ([]byte, error) {
	vmCopy := vm.DeepCopy()
	if revision.HasControllerRevisionRef(vmCopy.Status.InstancetypeRef) {
		vmCopy.Spec.Instancetype.RevisionName = vmCopy.Status.InstancetypeRef.ControllerRevisionRef.Name
	}
	if revision.HasControllerRevisionRef(vm.Status.PreferenceRef) {
		vmCopy.Spec.Preference.RevisionName = vm.Status.PreferenceRef.ControllerRevisionRef.Name // <===
	}
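For illustration only, a minimal defensive sketch of that second branch which would avoid the panic when spec.preference has been removed but status.preferenceRef is still set; this is an assumed guard, not the actual upstream fix:
	// Sketch (assumption): only patch the revision name when the spec still
	// carries a preference matcher, so the stale status ref cannot trigger
	// a nil pointer dereference on vmCopy.Spec.Preference.
	if revision.HasControllerRevisionRef(vm.Status.PreferenceRef) && vmCopy.Spec.Preference != nil {
		vmCopy.Spec.Preference.RevisionName = vm.Status.PreferenceRef.ControllerRevisionRef.Name
	}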
Version-Release number of selected component (if applicable):
OpenShift Virtualization 4.20.1
Also reproducible on 4.19.12
How reproducible:
100%
Steps to Reproduce:
1. Create a VM with a "preference" and start it.
2. Remove the preference from the VM spec.
3. Stop and start the VM again.
4. Both virt-controller pods crash when the VM is started.
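A possible command-level reproduction sketch, assuming the VM name and namespace from this report and that virtctl is available (the manifest for step 1 is omitted):
% oc patch vm rhel9-crimson-swordtail-83 -n nijin-cnv --type=json -p '[{"op":"remove","path":"/spec/preference"}]'
% virtctl stop rhel9-crimson-swordtail-83 -n nijin-cnv
% virtctl start rhel9-crimson-swordtail-83 -n nijin-cnv
% oc get pod -n openshift-cnv | grep virt-controller    # both pods go to Error/CrashLoopBackOff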
Actual results:
The virt-controller pods crash if "spec.preference" is removed from a VM that still has a preferenceRef in its status.
Expected results:
The virt-controller should handle the removed "spec.preference" gracefully and start the VM without crashing.
Additional info: