Type: Bug
Priority: Blocker
Resolution: Done
Affects Versions: OADP 1.1.0, OADP 1.1.1, OADP 1.1.2, OADP 1.1.3
Fixed in Build: oadp-velero-plugin-for-csi-container-1.1.4-7
Keywords: Customer Escalated, Customer Facing
Description of problem:
OADP 1.1 Data Mover can restore from an incorrect snapshot if more than one VolumeSnapshotRestore resource exists in the cluster for the same Velero restore name AND PVC name.
To create this scenario:
1. Run Data Mover backup1.
2. Run restore1.
3. Run Data Mover backup2.
4. Run restore2; it may pick up the snapshot from restore1.
During the volumesnapshots.snapshot.storage.k8s.io restoreItemAction, when the VOLUME_SNAPSHOT_MOVER env var is true, we wait for VolumeSnapshot.spec.Source.PersistentVolumeClaimName to be populated (this appears to be done by VolSync).
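A rough, self-contained sketch of that wait step. The getVolumeSnapshotSource helper and the volumeSnapshotSource type are hypothetical stand-ins for a real client read of the VolumeSnapshot (snapshot.storage.k8s.io/v1); this only illustrates the polling pattern, not the plugin's exact code:

package main

import (
    "fmt"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"
)

// volumeSnapshotSource mirrors only the field this step polls on; the real
// type is VolumeSnapshot.Spec.Source from snapshot.storage.k8s.io/v1.
type volumeSnapshotSource struct {
    PersistentVolumeClaimName *string
}

// getVolumeSnapshotSource is a hypothetical stand-in for re-reading the
// VolumeSnapshot's .spec.source from the cluster.
func getVolumeSnapshotSource() (*volumeSnapshotSource, error) {
    pvc := "mypvc" // pretend VolSync has populated the field by now
    return &volumeSnapshotSource{PersistentVolumeClaimName: &pvc}, nil
}

func main() {
    // Poll until .spec.source.persistentVolumeClaimName is non-empty.
    err := wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
        src, err := getVolumeSnapshotSource()
        if err != nil {
            return false, err
        }
        return src.PersistentVolumeClaimName != nil && *src.PersistentVolumeClaimName != "", nil
    })
    if err != nil {
        fmt.Println("timed out waiting for PVC name:", err)
        return
    }
    fmt.Println("PVC name populated; proceed to look up VolumeSnapshotRestores")
}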
Once it is populated, we get a VolumeSnapshotRestoreList using restoreName and PVC.Name.
The list options used to fetch this list are:
VSRListOptions := client.MatchingLabels(map[string]string{
    velerov1api.RestoreNameLabel: restoreName,
    PersistentVolumeClaimLabel:   PVCName,
})
When this list is returned, we assume the first VolumeSnapshotRestore in it is the one meant for this restore:
if len(vsrList.Items) > 0 {
    snapHandle = vsrList.Items[0].Status.SnapshotHandle
} else {
    // ...
}
If more than one VSR in the cluster carries these labels (left behind by a failed cleanup, etc.), the restore silently takes whichever item happens to be first and can restore the wrong data. A defensive variant of this selection is sketched below.
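A minimal, self-contained sketch of one possible guard: refuse to proceed when the label query is ambiguous instead of taking Items[0]. The vsrItem type is a simplified stand-in for the real CRD, and this is an illustration of the failure mode and a guard, not necessarily the fix shipped in 1.1.4-7:

package main

import "fmt"

// vsrItem stands in for the VolumeSnapshotRestore fields used here.
type vsrItem struct {
    Name           string
    SnapshotHandle string
}

// pickSnapshotHandle refuses to guess when more than one VSR matches,
// instead of silently taking Items[0] as the current code does.
func pickSnapshotHandle(items []vsrItem) (string, error) {
    switch len(items) {
    case 0:
        return "", fmt.Errorf("no VolumeSnapshotRestore found for this restore/PVC")
    case 1:
        return items[0].SnapshotHandle, nil
    default:
        // Multiple matches mean stale VSRs are present (failed cleanup etc.);
        // Items[0] could belong to an earlier restore.
        return "", fmt.Errorf("%d VolumeSnapshotRestores match the same restore name and PVC; refusing to pick one arbitrarily", len(items))
    }
}

func main() {
    // A stale VSR from restore1 plus the new one: the existing logic would
    // silently restore from whichever object happens to be first.
    items := []vsrItem{
        {Name: "vsr-restore1", SnapshotHandle: "snap-from-backup1"},
        {Name: "vsr-restore2", SnapshotHandle: "snap-from-backup2"},
    }
    if _, err := pickSnapshotHandle(items); err != nil {
        fmt.Println("restore aborted:", err) // safer than restoring the wrong data
    }
}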
Restore issue workarounds:
1. Retry the restore under a different restore name.
2. Remove leftover VolumeSnapshotRestore objects from the cluster before creating a new restore (see the sketch below).
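A minimal sketch of the second workaround using a controller-runtime client with unstructured objects, so no typed OADP API is needed. The datamover.oadp.openshift.io/v1alpha1 group/version, the openshift-adp namespace, and the restore1 name are assumptions to adjust for your cluster; velero.io/restore-name is the standard Velero restore-name label:

package main

import (
    "context"

    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/client/config"
)

func main() {
    c, err := client.New(config.GetConfigOrDie(), client.Options{})
    if err != nil {
        panic(err)
    }

    // Address the CRD as unstructured so no OADP typed client is required.
    vsr := &unstructured.Unstructured{}
    vsr.SetGroupVersionKind(schema.GroupVersionKind{
        Group:   "datamover.oadp.openshift.io", // assumed data mover API group
        Version: "v1alpha1",
        Kind:    "VolumeSnapshotRestore",
    })

    // Delete every VSR labeled for the old restore so a new restore cannot
    // match a stale object. Namespace and restore name are placeholders.
    err = c.DeleteAllOf(context.TODO(), vsr,
        client.InNamespace("openshift-adp"),
        client.MatchingLabels{"velero.io/restore-name": "restore1"},
    )
    if err != nil {
        panic(err)
    }
}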
Version-Release number of selected component (if applicable):
OADP 1.1.0 through OADP 1.1.3
How reproducible:
Whenever more than one VolumeSnapshotRestore with the same restore-name and PVC labels exists in the cluster (e.g., after a failed cleanup).
Steps to Reproduce:
1. Run Data Mover backup1, then restore1.
2. Run Data Mover backup2.
3. Run restore2 while VolumeSnapshotRestore objects from restore1 still exist.
Actual results:
restore2 may restore from the snapshot taken for backup1/restore1.
Expected results:
restore2 restores from the snapshot taken by backup2.
Additional info:
https://gist.github.com/kaovilai/c9f15c725dd2b49d12501de340710cbc