Type: Bug
Resolution: Unresolved
Priority: Normal
Affects Version: OADP 1.3.0
Status: ToDo
Description of problem:
I executed a native data mover backup and restore of the same namespace after a successful backup and restore via Kopia (file system backup). I noticed that Velero restores the restore-wait init container into the application pod even though it is not required in this case.
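The leftover init container can be spotted on the restored pods with a jsonpath query along these lines (the namespace matches the reproducer below; the exact query is only an example):

$ oc get pods -n ocp-mysql -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.initContainers[*].name}{"\n"}{end}'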
Version-Release number of selected component (if applicable):
OADP 1.3.1-49
How reproducible:
Always
Steps to Reproduce:
1. Deploy an application consisting of PVCs
2. Trigger a file system backup
3. Delete the app namespace
4. Execute a restore
5. Now back up the same namespace via the native data mover or CSI (example Backup/Restore CRs are shown after these steps)
6. Delete the app namespace
7. Execute a restore
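For reference, the backup and restore objects used in the steps above could look roughly like the following. This is a sketch only: the object names and the openshift-adp namespace are assumptions; the velero.io fields shown (defaultVolumesToFsBackup, snapshotMoveData) are the standard switches for file system backup and the built-in data mover.

# Step 2: file system backup (Kopia); names are illustrative
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: mysql-fs-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
    - ocp-mysql
  defaultVolumesToFsBackup: true    # back up pod volumes through the node agent (Kopia)
---
# Step 5: native data mover backup of the same namespace
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: mysql-dm-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
    - ocp-mysql
  snapshotMoveData: true            # CSI snapshot moved to the backup repository by the data mover
---
# Steps 4/7: restore from the corresponding backup
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: mysql-restore
  namespace: openshift-adp
spec:
  backupName: mysql-dm-backup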
Actual results:
The application pod has the restore-wait init container added to its spec.
$ oc get pod -n ocp-mysql
  initContainers:
  - args:
    - 926e0ef5-5038-49d4-8f63-259acee86e42
    command:
    - /velero-restore-helper
    env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    image: registry.redhat.io/oadp/oadp-velero-restic-restore-helper-rhel9@sha256:d2c7da625f2d1c6d54f6eb62d247acdc542e3a065f7e89c62e371b946a77c7e5
    imagePullPolicy: IfNotPresent
    name: restore-wait
Expected results:
Velero should skip restoring the restore-wait init container spec in this case.
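As a possible manual workaround (untested here, and assuming the pod above belongs to a Deployment named mysql), the stale init container could be dropped from the workload spec with a JSON patch along these lines:

$ oc -n ocp-mysql patch deployment mysql --type=json \
    -p='[{"op": "remove", "path": "/spec/template/spec/initContainers"}]'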
Additional info:
I encountered an issue where my application pod was stuck in the init container phase. I don't think it's a usual scenario; I'm just adding it here for awareness.
$ oc get pod -n ocp-mysql
NAME                     READY   STATUS     RESTARTS   AGE
mysql-7f77d47fd7-694pn   0/1     Init:0/1   0          35m

$ oc get pvc -n ocp-mysql
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql     Bound    pvc-279cf584-f50a-4859-83fd-3354150278ec   2Gi        RWO            standard-csi   36m
mysql-1   Bound    pvc-df0ebad2-cb36-4e62-9eff-6e4f06be980f   2Gi        RWO            standard-csi   36m

$ oc describe pod -n ocp-mysql
Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               35m   default-scheduler        Successfully assigned ocp-mysql/mysql-7f77d47fd7-694pn to oadp-74411-q4bpw-worker-c-z6m65
  Normal  SuccessfulAttachVolume  35m   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-df0ebad2-cb36-4e62-9eff-6e4f06be980f"
  Normal  SuccessfulAttachVolume  35m   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-279cf584-f50a-4859-83fd-3354150278ec"
  Normal  AddedInterface          35m   multus                   Add eth0 [10.131.0.23/23] from ovn-kubernetes
  Normal  Pulled                  35m   kubelet                  Container image "registry.redhat.io/oadp/oadp-velero-restic-restore-helper-rhel9@sha256:d2c7da625f2d1c6d54f6eb62d247acdc542e3a065f7e89c62e371b946a77c7e5" already present on machine
  Normal  Created                 35m   kubelet                  Created container restore-wait
  Normal  Started                 35m   kubelet                  Started container restore-wait
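When the pod hangs like this, the restore-wait helper's own output can be checked with something like the following (pod name taken from the output above):

$ oc logs -n ocp-mysql mysql-7f77d47fd7-694pn -c restore-wait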
Steps I followed:
1. Deployed an application with PVCs
2. First triggered a file system backup with Kopia
3. Removed the app namespace and executed the restore
4. Backed up the same namespace again with CSI
5. Followed the same procedure: deleted the app namespace and ran the restore
6. Backed up the same namespace again with the native data mover
7. Deleted the app namespace and ran the restore (status-check commands follow below)
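For each cycle, backup and restore completion can be verified with standard queries like these (openshift-adp is the assumed OADP/Velero namespace):

$ oc get backups.velero.io -n openshift-adp
$ oc get restores.velero.io -n openshift-adp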
Conversation on forum-oadp: https://redhat-internal.slack.com/archives/C0144ECKUJ0/p1710940866491269