-
Bug
-
Resolution: Done-Errata
-
Normal
-
None
-
False
-
-
False
-
MODIFIED
-
Release Notes
-
-
Known Issue
-
Done
-
---
-
---
-
-
Storage Core Sprint 240, Storage Core Sprint 241, Storage Core Sprint 242, Storage Core Sprint 243, Storage Core Sprint 246, Storage Core Sprint 247
-
High
-
No
Description of problem:
Frequently, when creating virtual machines, we see that they are stuck in Pending forever. During investigation we saw datavolumes that fail to be created and are stuck forever with no status update. Looking at the logs of the cdi-deployment pod in openshift-cnv, we see entries like this:
```
```
After some investigation we inspected the clone token in the DV annotations:
```
metadata:
  annotations:
    cdi.kubevirt.io/storage.clone.token: <jwt>
```
The JWT is in fact expired; it is only valid for 5 minutes.
Meanwhile, the status of the DV remains empty:
```
status: {}
```
This does not reproduce every time.
In the initial set of logs, a little more than 8 minutes passed from when the DV was created (as can be seen in the creationTimestamp) until the first error log: 2023-06-05T06:44:58Z -> 1685948026.3881886 (2023-06-05T06:53:46Z).
Regarding the suggested time-sync issue: I verified there is no time difference between the nodes in the cluster, and they are all connected to the same NTP server.
The logs from this time are already attached, including the datavolume yaml.
dv-expire.tar.gz/dv.yaml: creationTimestamp: "2023-06-05T06:44:58Z"
===
cdi-extended.tar.gz/cdi-deployment.log
===
{"level":"error","ts":1685948026.3953767,"logger":"controller.datavolume-controller","msg":"Reconciler error","name":"affected-vm-1-rootdisk","namespace":"mongodb","error":"error verifying token: square/go-jose/jwt: validation failed, token is expired (exp)","errorVerbose":"square/go-jose/jwt: validation failed, token is expired (exp)\nerror verifying token\nkubevirt.io/containerized-data-importer/pkg/controller.validateCloneTokenDV\n\t/remote-source/app/pkg/controller/util.go:876\nkubevirt.io/containerized-data-importer/pkg/controller.(*DatavolumeReconciler).initTransfer\n\t/remote-source/app/pkg/controller/datavolume-controller.go:1156\nkubevirt.io/containerized-data-importer/pkg/controller.(*DatavolumeReconciler).doCrossNamespaceClone\n\t/remote-source/app/pkg/controller/datavolume-controller.go:896\nkubevirt.io/containerized-data-importer/pkg/controller.(*DatavolumeReconciler).reconcileSmartClonePvc\n\t/remote-source/app/pkg/
The customer then managed to reproduce this issue in a pre-production online environment by creating a few hundred VMs. 100 of the VMs have a datavolume configuration that does not work: it tries to clone the PVC from a different storage class. The rest of the VMs are completely regular and should be created normally, yet 292 datavolumes took more than 5 minutes to be acknowledged and are stuck in limbo since their JWT expired.
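The stuck clones can be picked out of a cluster dump because their status never gets populated. A small illustrative helper, assuming the JSON list shape produced by `oc get dv -A -o json` (the `stuck_datavolumes` name is hypothetical):

```python
import json

def stuck_datavolumes(dv_list: dict) -> list[str]:
    """Names of DataVolumes whose .status is still empty (never acknowledged)."""
    return [
        f"{item['metadata']['namespace']}/{item['metadata']['name']}"
        for item in dv_list.get("items", [])
        if not item.get("status")  # {} or missing: controller never updated it
    ]

# Example usage against a saved dump:
# stuck = stuck_datavolumes(json.load(open("dv-list.json")))
```

Running this over the must-gather data is one way to arrive at the 292 figure above.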
Attached to the case are a CNV must-gather as well as an OpenShift must-gather from the reproduction.
- external trackers
- links to
-
RHEA-2023:125070 OpenShift Virtualization 4.14.2 Images
-
RHEA-2024:125986 OpenShift Virtualization 4.14.3 Images