OpenShift API for Data Protection
  OADP-4224 PodVolumeBackup has incorrect message
  OADP-4680

[IBM QE-Z] Verify Bug OADP-4224 - PodVolumeBackup has incorrect message


    • Sub-task
    • Resolution: Done
    • Undefined
    • OADP 1.4.1

      Description of problem:

      I noticed a test started failing recently due to a change in the message field: it looks like a colon has recently been added to the PodVolumeBackup status.message field (a one-line check is sketched after the snippet below).

      status:
        completionTimestamp: "2024-06-04T11:19:01Z"
        message: ': get a podvolumebackup with status "InProgress" during the server starting,
          mark it as "Failed"'
        phase: Failed
        progress: {}
        startTimestamp: "2024-06-04T11:18:56Z"
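
      One quick way to see the stray leading colon is to print only the message field. This is a sketch rather than output captured for this report; replace <pvb-name> with the PodVolumeBackup name.

      $ oc get podvolumebackup <pvb-name> -n openshift-adp -o jsonpath='{.status.message}'
      : get a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed"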

       

      Version-Release number of selected component (if applicable):
      OADP 1.4.0 (Installed via oadp-1.4 branch)
      OCP 4.16 
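
      The installed OADP version can be confirmed from the operator CSV in the openshift-adp namespace (a sketch; exact CSV names vary by install):

      $ oc get csv -n openshift-adp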

       

      How reproducible:
      Always 

       

      Steps to Reproduce:
      1. Create a DPA that sets very small resource limits on the nodeAgent pod (a patch command is sketched after the DPA output below).

      $ oc get dpa ts-dpa -o yaml
      apiVersion: oadp.openshift.io/v1alpha1
      kind: DataProtectionApplication
      metadata:
        creationTimestamp: "2024-06-04T11:55:23Z"
        generation: 1
        name: ts-dpa
        namespace: openshift-adp
        resourceVersion: "162377"
        uid: 9257dc14-2568-4abe-a2b8-410432145aa5
      spec:
        backupLocations:
        - velero:
            credential:
              key: cloud
              name: cloud-credentials-gcp
            default: true
            objectStorage:
              bucket: oadp82541zqmld
              prefix: velero-e2e-51ee78d6-2269-11ef-bd2e-845cf3eff33a
            provider: gcp
        configuration:
          nodeAgent:
            enable: true
            podConfig:
              resourceAllocations:
                limits:
                  cpu: 100m
                  memory: 50Mi
                requests:
                  cpu: 50m
                  memory: 10Mi
            uploaderType: restic
          velero:
            defaultPlugins:
            - openshift
            - gcp
            - kubevirt
        podDnsConfig: {}
        snapshotLocations: []
      status:
        conditions:
        - lastTransitionTime: "2024-06-04T11:55:23Z"
          message: Reconcile complete
          reason: Complete
          status: "True"
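
      If a DPA named ts-dpa already exists, one way to apply these low limits is a JSON merge patch like the sketch below (hedged: the field path follows the spec shown above; adjust the name and namespace for your environment).

      $ oc -n openshift-adp patch dpa ts-dpa --type merge \
          -p '{"spec":{"configuration":{"nodeAgent":{"podConfig":{"resourceAllocations":{"limits":{"cpu":"100m","memory":"50Mi"},"requests":{"cpu":"50m","memory":"10Mi"}}}}}}}'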

      2. Deploy a stateful application (one possible way is sketched after the pod listing below).

      $ oc get pod -n test-oadp-231
      NAME                              READY   STATUS      RESTARTS   AGE
      django-psql-persistent-1-build    0/1     Completed   0          2m18s
      django-psql-persistent-1-deploy   0/1     Completed   0          105s
      django-psql-persistent-1-wbwhh    1/1     Running     0          104s
      postgresql-1-deploy               0/1     Completed   0          2m17s
      postgresql-1-msnw8                1/1     Running     0          2m16s
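
      One way to deploy such an application, assuming the django-psql-persistent sample template is available in the cluster (a sketch, not the exact commands used for this run):

      $ oc new-project test-oadp-231
      $ oc new-app --template=django-psql-persistent -n test-oadp-231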

      3. Create a filesystem backup. (Note: the backup is expected to fail in this case because of the very small limits set above; an equivalent Backup manifest is sketched after the output below.)

      oc get backup backup1-5294a049-2269-11ef-bd2e-845cf3eff33a  -o yaml
      apiVersion: velero.io/v1
      kind: Backup
      metadata:
        annotations:
          velero.io/resource-timeout: 10m0s
          velero.io/source-cluster-k8s-gitversion: v1.29.5+87992f4
          velero.io/source-cluster-k8s-major-version: "1"
          velero.io/source-cluster-k8s-minor-version: "29"
        creationTimestamp: "2024-06-04T11:57:14Z"
        generation: 7
        labels:
          velero.io/storage-location: ts-dpa-1
        name: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a
        namespace: openshift-adp
        resourceVersion: "163387"
        uid: e6c54302-f6ca-4a8c-9406-f83e8c9b3337
      spec:
        csiSnapshotTimeout: 10m0s
        defaultVolumesToFsBackup: true
        hooks: {}
        includedNamespaces:
        - test-oadp-231
        itemOperationTimeout: 4h0m0s
        metadata: {}
        snapshotMoveData: false
        storageLocation: ts-dpa-1
        ttl: 720h0m0s
      status:
        completionTimestamp: "2024-06-04T11:57:26Z"
        errors: 1
        expiration: "2024-07-04T11:57:14Z"
        formatVersion: 1.1.0
        hookStatus: {}
        phase: PartiallyFailed
        progress:
          itemsBackedUp: 90
          totalItems: 90
        startTimestamp: "2024-06-04T11:57:14Z"
        version: 1
        warnings: 4
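
      A Backup equivalent to the spec shown above can be created from a manifest like this sketch (names and values are copied from the output above; apply it with oc create -f):

      apiVersion: velero.io/v1
      kind: Backup
      metadata:
        name: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a
        namespace: openshift-adp
      spec:
        includedNamespaces:
        - test-oadp-231
        defaultVolumesToFsBackup: true
        snapshotMoveData: false
        storageLocation: ts-dpa-1
        ttl: 720h0m0s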

      Actual results:

      The PodVolumeBackup CR has an incorrect message: it starts with a stray colon.

      oc get podvolumebackup -o yaml backup1-5294a049-2269-11ef-bd2e-845cf3eff33a-h6sf7
      apiVersion: velero.io/v1
      kind: PodVolumeBackup
      metadata:
        annotations:
          velero.io/pvc-name: postgresql
        creationTimestamp: "2024-06-04T11:57:15Z"
        generateName: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a-
        generation: 3
        labels:
          velero.io/backup-name: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a
          velero.io/backup-uid: e6c54302-f6ca-4a8c-9406-f83e8c9b3337
          velero.io/pvc-uid: b1ad14da-223b-42b3-aee6-f906c5d8e5c8
        name: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a-h6sf7
        namespace: openshift-adp
        ownerReferences:
        - apiVersion: velero.io/v1
          controller: true
          kind: Backup
          name: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a
          uid: e6c54302-f6ca-4a8c-9406-f83e8c9b3337
        resourceVersion: "163355"
        uid: f721d4b5-1ab4-44f0-8775-5cddbc0bf685
      spec:
        backupStorageLocation: ts-dpa-1
        node: oadp-82541-zqmld-worker-a-2zbhr
        pod:
          kind: Pod
          name: postgresql-1-msnw8
          namespace: test-oadp-231
          uid: fac6b2dc-f59a-4c2c-8040-204fdf65acfb
        repoIdentifier: gs:oadp82541zqmld:/velero-e2e-7789e47c-225f-11ef-b036-845cf3eff33a/restic/test-oadp-231
        tags:
          backup: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a
          backup-uid: e6c54302-f6ca-4a8c-9406-f83e8c9b3337
          ns: test-oadp-231
          pod: postgresql-1-msnw8
          pod-uid: fac6b2dc-f59a-4c2c-8040-204fdf65acfb
          pvc-uid: b1ad14da-223b-42b3-aee6-f906c5d8e5c8
          volume: postgresql-data
        uploaderType: restic
        volume: postgresql-data
      status:
        completionTimestamp: "2024-06-04T11:57:20Z"
        message: ': get a podvolumebackup with status "InProgress" during the server starting,
          mark it as "Failed"'
        phase: Failed
        progress: {}
        startTimestamp: "2024-06-04T11:57:15Z"
      

      Expected results:

      Replace the word "get" with "found" and remove the leading colon.

      Message: found a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed"
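
      After the fix, the message can be re-checked across all PodVolumeBackups of the backup with a query like this sketch (the label selector value is the backup name used above):

      $ oc get podvolumebackup -n openshift-adp \
          -l velero.io/backup-name=backup1-5294a049-2269-11ef-bd2e-845cf3eff33a \
          -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,MESSAGE:.status.message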

       

      Additional info:

              uprasad@redhat.com Ukthi Prasad
              akarol@redhat.com Aziza Karol