Bug
Resolution: Unresolved
Minor
OADP 1.3.0
3
False
False
oadp-operator-bundle-container-1.3.4-7
ToDo
0
0.000
Very Likely
0
None
Unset
Unknown
No
Description of problem:
I noticed an error while running backups with different Velero prefixes on the same cluster. Velero does not update the resticIdentifier field when the prefix is changed, which causes Kopia backups to fail immediately. After further testing I noticed that the BackupRepository CR does not get updated when the bucket or prefix field is changed; it does work when the DPA is patched live. With Restic, the backup still uses the older path (the backup does not fail in that case).
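For reference, a sketch of the kind of live patch referred to above (the resource name and JSON path are taken from the DPA shown in the steps below; the command is illustrative only):
$ oc -n openshift-adp patch dataprotectionapplication ts-dpa --type=json \
    -p '[{"op": "replace", "path": "/spec/backupLocations/0/velero/objectStorage/prefix", "value": "velero1"}]'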
Version-Release number of selected component (if applicable):
OADP 1.3.0
How reproducible:
Always
Steps to Reproduce:
1. Create a DPA CR with any prefix.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  creationTimestamp: "2024-01-08T11:15:51Z"
  generation: 1
  name: ts-dpa
  namespace: openshift-adp
  resourceVersion: "164679"
  uid: 4537472c-4fee-4923-81fb-d6d19fd62cd8
spec:
  backupLocations:
  - velero:
      credential:
        key: cloud
        name: cloud-credentials-gcp
      default: true
      objectStorage:
        bucket: oadpbucketoadp-66980-qtrj2
        prefix: velero
      provider: gcp
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    velero:
      defaultPlugins:
      - gcp
      - openshift
status:
  conditions:
  - lastTransitionTime: "2024-01-08T11:15:51Z"
    message: Reconcile complete
    reason: Complete
    status: "True"
    type: Reconciled
2. Deploy a stateful application and create a backup with Kopia. Wait for it to complete successfully.
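A hypothetical minimal Backup CR for this step (the name test-backup1 is illustrative; the application namespace matches the one in the output below, and defaultVolumesToFsBackup routes the pod volumes through the Kopia node agent):
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup1
  namespace: openshift-adp
spec:
  includedNamespaces:
  - ocp-todolist-mariadb
  defaultVolumesToFsBackup: true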
3. Delete the DPA CR and re-create it with a different prefix.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: ts-dpa
spec:
  backupLocations:
  - velero:
      default: true
      objectStorage:
        bucket: oadpbucketoadp-66980-qtrj2
        prefix: velero1
      credential:
        key: cloud
        name: cloud-credentials-gcp
      provider: gcp
  configuration:
    velero:
      defaultPlugins:
      - gcp
      - openshift
    nodeAgent:
      enable: true
      uploaderType: kopia
4. Create a backup with the new prefix in place.
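A sketch of this step, reusing the hypothetical Backup spec from step 2 with the name that appears in the failing output below, and then listing its PodVolumeBackups:
$ oc create -f - <<EOF
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup2
  namespace: openshift-adp
spec:
  includedNamespaces:
  - ocp-todolist-mariadb
  defaultVolumesToFsBackup: true
EOF
$ oc -n openshift-adp get podvolumebackups -l velero.io/backup-name=test-backup2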
Actual results:
The PodVolumeBackup fails:
$ oc get podvolumebackup test-backup2-m5p8s -o yaml
apiVersion: velero.io/v1
kind: PodVolumeBackup
metadata:
  annotations:
    velero.io/pvc-name: mysql
  creationTimestamp: "2024-01-08T11:19:23Z"
  generateName: test-backup2-
  generation: 3
  labels:
    velero.io/backup-name: test-backup2
    velero.io/backup-uid: 4e49d1f7-291f-4ccb-93f6-e53c428e7534
    velero.io/pvc-uid: d6fd5fe5-44a7-4983-8f3e-921ce157f23b
  name: test-backup2-m5p8s
  namespace: openshift-adp
  ownerReferences:
  - apiVersion: velero.io/v1
    controller: true
    kind: Backup
    name: test-backup2
    uid: 4e49d1f7-291f-4ccb-93f6-e53c428e7534
  resourceVersion: "166027"
  uid: 95f3b5ce-6ccc-4d8b-bed3-578fa635c3b9
spec:
  backupStorageLocation: ts-dpa-1
  node: oadp-66980-qtrj2-worker-b-4zjnz.c.openshift-qe.internal
  pod:
    kind: Pod
    name: mysql-84d9554b8b-pmwff
    namespace: ocp-todolist-mariadb
    uid: 2be83138-aef1-4f3a-b992-f942a1717445
  repoIdentifier: gs:oadpbucketoadp-66980-qtrj2:/velero/restic/ocp-todolist-mariadb
  tags:
    backup: test-backup2
    backup-uid: 4e49d1f7-291f-4ccb-93f6-e53c428e7534
    ns: ocp-todolist-mariadb
    pod: mysql-84d9554b8b-pmwff
    pod-uid: 2be83138-aef1-4f3a-b992-f942a1717445
    pvc-uid: d6fd5fe5-44a7-4983-8f3e-921ce157f23b
    volume: mysql-data
  uploaderType: kopia
  volume: mysql-data
status:
  completionTimestamp: "2024-01-08T11:19:25Z"
  message: 'error to initialize data path: error to boost backup repository connection ts-dpa-1-ocp-todolist-mariadb-kopia: error to connect backup repo: error to connect repo with storage: error to connect to repository: repository not initialized in the provided storage'
  phase: Failed
  progress: {}
  startTimestamp: "2024-01-08T11:19:23Z"
The BackupRepository CR still points to the old resticIdentifier:
$ oc get backuprepositories -o yaml ocp-todolist-mariadb-ts-dpa-1-kopia-t524v
apiVersion: velero.io/v1
kind: BackupRepository
metadata:
  creationTimestamp: "2024-01-08T10:53:45Z"
  generateName: ocp-todolist-mariadb-ts-dpa-1-kopia-
  generation: 3
  labels:
    velero.io/repository-type: kopia
    velero.io/storage-location: ts-dpa-1
    velero.io/volume-namespace: ocp-todolist-mariadb
  name: ocp-todolist-mariadb-ts-dpa-1-kopia-t524v
  namespace: openshift-adp
  resourceVersion: "156619"
  uid: bee3f24b-0105-4150-ae57-f5867b869f01
spec:
  backupStorageLocation: ts-dpa-1
  maintenanceFrequency: 1h0m0s
  repositoryType: kopia
  resticIdentifier: gs:oadpbucketoadp-66980-qtrj2:/velero/restic/ocp-todolist-mariadb
  volumeNamespace: ocp-todolist-mariadb
status:
  lastMaintenanceTime: "2024-01-08T10:53:49Z"
  phase: Ready
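As a possible workaround (not verified for every scenario), deleting the stale BackupRepository CR should let Velero re-create it on the next file-system backup with a resticIdentifier built from the current prefix:
$ oc -n openshift-adp delete backuprepository ocp-todolist-mariadb-ts-dpa-1-kopia-t524v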
Expected results:
The backup should complete successfully.
The Velero deployment log should show the repository validation check: "Invalidating Backup Repository"
https://redhat-internal.slack.com/archives/C0144ECKUJ0/p1722357858918149 (thank you Tiger)
https://github.com/vmware-tanzu/velero/blob/d9ca14747925630664c9e4f85a682b5fc356806d/pkg/controller/backup_repository_controller.go#L104
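If the invalidation happens as expected, it should be visible in the Velero log, e.g.:
$ oc -n openshift-adp logs deploy/velero | grep -i "invalidating backup repository"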
Additional info:
Clones: OADP-5062 "Velero doesn't update/re-create backupRepositories" (1.5.0, New)