Bug
Resolution: Unresolved
Major
CNV v4.18.17, CNV v4.20.0
Incidents & Support
CNV v4.21.0.rhel9-72, CNV v4.18.25.rhel9-9, CNV v4.20.3.rhel9-31, CNV v4.19.17.rhel9-27
CNV Storage Sprint 283
Important
Customer Reported
Description of problem:
When restoring a snapshot of a VM with persistent state (TPM/EFI), the new PVC created from the VolumeSnapshot has spec.resources.requests.storage higher than the original PVC's, and also higher than the restore size of the VolumeSnapshot. Some CSI drivers reject this and refuse to provision the restore PVC from the VolumeSnapshot, breaking the restore with an error like: failed to provision volume with StorageClass "<removed>": rpc error: code = InvalidArgument desc = runid=6277 Requested size 3408704204 should be same as source snapshot size 3221225472
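The numbers in that error line up with a filesystem-overhead adjustment: 3408704204 is exactly ceil(3221225472 / (1 - 0.055)), i.e. the 3Gi snapshot size inflated by CDI's default 5.5% filesystem overhead. A minimal sketch of that arithmetic (the helper name is illustrative; that this is the actual code path is an assumption):

```go
package main

import (
	"fmt"
	"math"
)

// requiredBytes inflates a requested size by a filesystem-overhead
// fraction the way CDI sizes filesystem-mode PVCs: request / (1 - overhead).
// Illustrative helper; the real function name in CDI/KubeVirt may differ.
func requiredBytes(requested int64, overhead float64) int64 {
	return int64(math.Ceil(float64(requested) / (1 - overhead)))
}

func main() {
	snapSize := int64(3221225472) // 3Gi, the snapshot size in the error above
	fmt.Println(requiredBytes(snapSize, 0.055))
	// With the default 5.5% overhead this yields 3408704204, exactly the
	// "Requested size" the customer's CSI driver rejected.
}
```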
Version-Release number of selected component (if applicable):
4.20.0
How reproducible:
Always
Steps to Reproduce:
1. Create a VM with backend storage
2. Snapshot it
3. Restore the snapshot
4. Compare the persistent-state PVC sizes
Actual results:
The persistent-state PVC's requested storage keeps increasing on every restore
Expected results:
Stays the same
Additional info:
[1] PVC after VM creation:
# oc get pvc persistent-state-for-windows-2022-lg7m9 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: topolvm.io
    volume.kubernetes.io/selected-node: cyan.shift.home.arpa
    volume.kubernetes.io/storage-provisioner: topolvm.io
  creationTimestamp: "2025-11-11T23:56:26Z"
  finalizers:
  - kubernetes.io/pvc-protection
  generateName: persistent-state-for-windows-2022-
  labels:
    cdi.kubevirt.io/applyStorageProfile: "true"
    persistent-state-for: windows-2022
  name: persistent-state-for-windows-2022-lg7m9
  namespace: homelab
  ownerReferences:
  - apiVersion: kubevirt.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: VirtualMachine
    name: windows-2022
    uid: c8e3f12a-b123-4112-ab81-46451d3ec10a
  resourceVersion: "32220535"
  uid: 21b2c237-615c-4971-8582-cc97303edeec
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: "11114906"
  storageClassName: lvms-nvme
  volumeMode: Filesystem
  volumeName: pvc-21b2c237-615c-4971-8582-cc97303edeec
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 300Mi
  phase: Bound
[2] Create a snapshot and check the Restore Size of the VolumeSnapshot; it's 300Mi.
# oc get volumesnapshot vmsnapshot-6100c8ae-87e7-4836-b141-1b71d6767825-volume-persistent-state-for-windows-2022 -o yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  creationTimestamp: "2025-11-12T00:16:57Z"
  finalizers:
  - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
  - snapshot.storage.kubernetes.io/volumesnapshot-bound-protection
  generation: 1
  labels:
    snapshot.kubevirt.io/source-vm-name: windows-2022
    snapshot.kubevirt.io/source-vm-namespace: homelab
  name: vmsnapshot-6100c8ae-87e7-4836-b141-1b71d6767825-volume-persistent-state-for-windows-2022
  namespace: homelab
  ownerReferences:
  - apiVersion: snapshot.kubevirt.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: VirtualMachineSnapshotContent
    name: vmsnapshot-content-6100c8ae-87e7-4836-b141-1b71d6767825
    uid: a476c7c6-1aab-460a-8c31-72ab20183273
  resourceVersion: "32244709"
  uid: 383b0bd7-9411-4dbb-bc98-051618e7aaa9
spec:
  source:
    persistentVolumeClaimName: persistent-state-for-windows-2022-lg7m9
  volumeSnapshotClassName: lvms-nvme
status:
  boundVolumeSnapshotContentName: snapcontent-383b0bd7-9411-4dbb-bc98-051618e7aaa9
  creationTime: "2025-11-12T00:16:57Z"
  readyToUse: true
  restoreSize: 300Mi
Check the original PVC size recorded in the volumeBackups; it matches [1].
# oc get virtualmachinesnapshotcontents vmsnapshot-content-6100c8ae-87e7-4836-b141-1b71d6767825 -o yaml | yq '.spec.volumeBackups[1]'
persistentVolumeClaim:
  metadata:
    annotations:
      pv.kubernetes.io/bind-completed: "yes"
      pv.kubernetes.io/bound-by-controller: "yes"
      volume.beta.kubernetes.io/storage-provisioner: topolvm.io
      volume.kubernetes.io/selected-node: cyan.shift.home.arpa
      volume.kubernetes.io/storage-provisioner: topolvm.io
    creationTimestamp: "2025-11-11T23:56:26Z"
    finalizers:
    - kubernetes.io/pvc-protection
    generateName: persistent-state-for-windows-2022-
    labels:
      cdi.kubevirt.io/applyStorageProfile: "true"
      persistent-state-for: windows-2022
    name: persistent-state-for-windows-2022-lg7m9
    namespace: homelab
    ownerReferences:
    - apiVersion: kubevirt.io/v1
      blockOwnerDeletion: true
      controller: true
      kind: VirtualMachine
      name: windows-2022
      uid: c8e3f12a-b123-4112-ab81-46451d3ec10a
    resourceVersion: "32220535"
    uid: 21b2c237-615c-4971-8582-cc97303edeec
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: "11114906"
    storageClassName: lvms-nvme
    volumeMode: Filesystem
    volumeName: pvc-21b2c237-615c-4971-8582-cc97303edeec
volumeName: persistent-state-for-windows-2022
volumeSnapshotName: vmsnapshot-6100c8ae-87e7-4836-b141-1b71d6767825-volume-persistent-state-for-windows-2022
So the restored PVC should request 300Mi, as that is the larger of the RestoreSize and the original PVC size, per https://github.com/kubevirt/kubevirt/blob/a88fde37c8adec0390c422002435e1fbdcb7d0c7/pkg/storage/snapshot/restore.go#L1601
But it is not; the restore requests even more space:
# oc get pvc restore-25773304-8d59-4e94-b86e-347369d4a46b-persistent-state-for-windows-2022 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    restore.kubevirt.io/lastRestoreUID: restore-windows-2022-snapshot-20251112-101653-1762906813362-25773304-8d59-4e94-b86e-347369d4a46b
    restore.kubevirt.io/name: restore-windows-2022-snapshot-20251112-101653-1762906813362
  creationTimestamp: "2025-11-12T00:21:11Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    cdi.kubevirt.io/applyStorageProfile: "true"
    persistent-state-for: windows-2022
    restore.kubevirt.io/source-vm-name: windows-2022
    restore.kubevirt.io/source-vm-namespace: homelab
  name: restore-25773304-8d59-4e94-b86e-347369d4a46b-persistent-state-for-windows-2022
  namespace: homelab
  ownerReferences:
  - apiVersion: kubevirt.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: VirtualMachine
    name: windows-2022
    uid: c8e3f12a-b123-4112-ab81-46451d3ec10a
  resourceVersion: "32248908"
  uid: 58e551f2-3ee1-4d63-b72c-a9fd0905369c
spec:
  accessModes:
  - ReadWriteOnce
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: vmsnapshot-6100c8ae-87e7-4836-b141-1b71d6767825-volume-persistent-state-for-windows-2022
  dataSourceRef:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: vmsnapshot-6100c8ae-87e7-4836-b141-1b71d6767825-volume-persistent-state-for-windows-2022
  resources:
    requests:
      storage: "333447168"
  storageClassName: lvms-nvme
  volumeMode: Filesystem
status:
  phase: Pending
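Per the linked restore.go line, the restored request should simply be the larger of the VolumeSnapshot's restoreSize and the original PVC's request, with no further inflation. A minimal sketch of that selection (function name is illustrative, not KubeVirt's actual implementation):

```go
package main

import "fmt"

// restoreRequestBytes sketches the size selection the linked restore.go
// code is expected to perform: take the larger of the snapshot's
// restoreSize and the original PVC's requested storage, nothing more.
func restoreRequestBytes(restoreSize, originalRequest int64) int64 {
	if restoreSize > originalRequest {
		return restoreSize
	}
	return originalRequest
}

func main() {
	originalRequest := int64(11114906) // spec.resources.requests.storage from [1]
	restoreSize := int64(314572800)    // 300Mi restoreSize from the VolumeSnapshot
	fmt.Println(restoreRequestBytes(restoreSize, originalRequest)) // 314572800 (300Mi)
}
```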
For me it works because LVMS doesn't care about the size difference, but the customer's CSI driver (Dell) rejects it.
I see no reason to keep increasing the size on restore.
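If the overhead adjustment is re-applied to the already-inflated request on each snapshot/restore cycle (an assumption, but one consistent with the numbers above: 314572800 grows to 333447168, roughly a 6% step), the request compounds without bound. A sketch of that compounding effect, assuming CDI's default 5.5% overhead:

```go
package main

import (
	"fmt"
	"math"
)

// inflate applies a filesystem-overhead adjustment: size / (1 - overhead).
// Illustrative; the exact rounding in the real code path may differ.
func inflate(size int64, overhead float64) int64 {
	return int64(math.Ceil(float64(size) / (1 - overhead)))
}

func main() {
	// Starting from the 300Mi snapshot size, each restore cycle that
	// re-applies the overhead makes the next request strictly larger.
	size := int64(314572800) // 300Mi
	for cycle := 1; cycle <= 3; cycle++ {
		size = inflate(size, 0.055)
		fmt.Printf("after restore %d: %d bytes\n", cycle, size)
	}
}
```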
clones: CNV-78172 [4.19] Restored persistent-state volume requests more storage than original volume and restore size. (Verified)
links to: RHEA-2026:158276 (OpenShift Virtualization 4.18.28 Images)