• Type: Sub-task
    • Resolution: Done
    • OADP 1.4.1

      Description of problem:

      I noticed a test started failing recently because of a change to the podVolumeBackup message field: it looks like a stray leading colon was recently added to the message.

      status:
        completionTimestamp: "2024-06-04T11:19:01Z"
        message: ': get a podvolumebackup with status "InProgress" during the server starting,
          mark it as "Failed"'
        phase: Failed
        progress: {}
        startTimestamp: "2024-06-04T11:18:56Z"
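      The stray colon is easy to check for mechanically. The cluster command below is only a sketch (it assumes the openshift-adp namespace and oc access); the local check uses the exact message string quoted above:

```shell
# On a live cluster (hypothetical, adjust namespace as needed):
#   oc get podvolumebackup -n openshift-adp \
#     -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.message}{"\n"}{end}'
# Local check against the message observed above:
msg=': get a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed"'
case "$msg" in
  ': '*) echo "stray leading colon present" ;;
  *)     echo "message looks clean" ;;
esac
```

      This prints "stray leading colon present" for the buggy message.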

       

      Version-Release number of selected component (if applicable):
      OADP 1.4.0 (Installed via oadp-1.4 branch)
      OCP 4.16 

       

      How reproducible:
      Always 

       

      Steps to Reproduce:
      1. Create a DPA, setting small resource limits on the nodeAgent pod.

      $ oc get dpa ts-dpa -o yaml
      apiVersion: oadp.openshift.io/v1alpha1
      kind: DataProtectionApplication
      metadata:
        creationTimestamp: "2024-06-04T11:55:23Z"
        generation: 1
        name: ts-dpa
        namespace: openshift-adp
        resourceVersion: "162377"
        uid: 9257dc14-2568-4abe-a2b8-410432145aa5
      spec:
        backupLocations:
        - velero:
            credential:
              key: cloud
              name: cloud-credentials-gcp
            default: true
            objectStorage:
              bucket: oadp82541zqmld
              prefix: velero-e2e-51ee78d6-2269-11ef-bd2e-845cf3eff33a
            provider: gcp
        configuration:
          nodeAgent:
            enable: true
            podConfig:
              resourceAllocations:
                limits:
                  cpu: 100m
                  memory: 50Mi
                requests:
                  cpu: 50m
                  memory: 10Mi
            uploaderType: restic
          velero:
            defaultPlugins:
            - openshift
            - gcp
            - kubevirt
        podDnsConfig: {}
        snapshotLocations: []
      status:
        conditions:
        - lastTransitionTime: "2024-06-04T11:55:23Z"
          message: Reconcile complete
          reason: Complete
          status: "True"

      2. Deploy a stateful application 

      $ oc get pod -n test-oadp-231
      NAME                              READY   STATUS      RESTARTS   AGE
      django-psql-persistent-1-build    0/1     Completed   0          2m18s
      django-psql-persistent-1-deploy   0/1     Completed   0          105s
      django-psql-persistent-1-wbwhh    1/1     Running     0          104s
      postgresql-1-deploy               0/1     Completed   0          2m17s
      postgresql-1-msnw8                1/1     Running     0          2m16s

      3. Create a filesystem backup. (Note: the backup is expected to fail here because of the very small limits set above; when the node-agent restarts, it marks the in-progress PodVolumeBackup as "Failed", which produces the message in question.)

      oc get backup backup1-5294a049-2269-11ef-bd2e-845cf3eff33a  -o yaml
      apiVersion: velero.io/v1
      kind: Backup
      metadata:
        annotations:
          velero.io/resource-timeout: 10m0s
          velero.io/source-cluster-k8s-gitversion: v1.29.5+87992f4
          velero.io/source-cluster-k8s-major-version: "1"
          velero.io/source-cluster-k8s-minor-version: "29"
        creationTimestamp: "2024-06-04T11:57:14Z"
        generation: 7
        labels:
          velero.io/storage-location: ts-dpa-1
        name: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a
        namespace: openshift-adp
        resourceVersion: "163387"
        uid: e6c54302-f6ca-4a8c-9406-f83e8c9b3337
      spec:
        csiSnapshotTimeout: 10m0s
        defaultVolumesToFsBackup: true
        hooks: {}
        includedNamespaces:
        - test-oadp-231
        itemOperationTimeout: 4h0m0s
        metadata: {}
        snapshotMoveData: false
        storageLocation: ts-dpa-1
        ttl: 720h0m0s
      status:
        completionTimestamp: "2024-06-04T11:57:26Z"
        errors: 1
        expiration: "2024-07-04T11:57:14Z"
        formatVersion: 1.1.0
        hookStatus: {}
        phase: PartiallyFailed
        progress:
          itemsBackedUp: 90
          totalItems: 90
        startTimestamp: "2024-06-04T11:57:14Z"
        version: 1
        warnings: 4

      Actual results:

      The PodVolumeBackup CR has an incorrectly formatted message (note the stray leading colon).

      oc get podvolumebackup -o yaml backup1-5294a049-2269-11ef-bd2e-845cf3eff33a-h6sf7
      apiVersion: velero.io/v1
      kind: PodVolumeBackup
      metadata:
        annotations:
          velero.io/pvc-name: postgresql
        creationTimestamp: "2024-06-04T11:57:15Z"
        generateName: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a-
        generation: 3
        labels:
          velero.io/backup-name: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a
          velero.io/backup-uid: e6c54302-f6ca-4a8c-9406-f83e8c9b3337
          velero.io/pvc-uid: b1ad14da-223b-42b3-aee6-f906c5d8e5c8
        name: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a-h6sf7
        namespace: openshift-adp
        ownerReferences:
        - apiVersion: velero.io/v1
          controller: true
          kind: Backup
          name: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a
          uid: e6c54302-f6ca-4a8c-9406-f83e8c9b3337
        resourceVersion: "163355"
        uid: f721d4b5-1ab4-44f0-8775-5cddbc0bf685
      spec:
        backupStorageLocation: ts-dpa-1
        node: oadp-82541-zqmld-worker-a-2zbhr
        pod:
          kind: Pod
          name: postgresql-1-msnw8
          namespace: test-oadp-231
          uid: fac6b2dc-f59a-4c2c-8040-204fdf65acfb
        repoIdentifier: gs:oadp82541zqmld:/velero-e2e-7789e47c-225f-11ef-b036-845cf3eff33a/restic/test-oadp-231
        tags:
          backup: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a
          backup-uid: e6c54302-f6ca-4a8c-9406-f83e8c9b3337
          ns: test-oadp-231
          pod: postgresql-1-msnw8
          pod-uid: fac6b2dc-f59a-4c2c-8040-204fdf65acfb
          pvc-uid: b1ad14da-223b-42b3-aee6-f906c5d8e5c8
          volume: postgresql-data
        uploaderType: restic
        volume: postgresql-data
      status:
        completionTimestamp: "2024-06-04T11:57:20Z"
        message: ': get a podvolumebackup with status "InProgress" during the server starting,
          mark it as "Failed"'
        phase: Failed
        progress: {}
        startTimestamp: "2024-06-04T11:57:15Z"

      Expected results:

      Replace the word "get" with "found" and remove the leading colon.

      Message: found a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed"
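      The expected change amounts to a one-line transformation of the message string. A small sketch (both strings are copied verbatim from this report; the sed expression is illustrative only, not the actual upstream patch):

```shell
# The observed (buggy) and expected message strings:
buggy=': get a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed"'
expected='found a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed"'
# Replace the leading ': get' with 'found':
derived=$(printf '%s' "$buggy" | sed 's/^: get /found /')
[ "$derived" = "$expected" ] && echo "transformation matches expected message"
```

      This prints "transformation matches expected message".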

       

      Additional info:

            [OADP-4680] [IBM QE-Z] Verify Bug OADP-4224 - PodVolumeBackup has incorrect message

            Ukthi Prasad added a comment (edited)

            Verified with 1.4.1-20

            1. Create a DPA, setting small resource limits on the nodeAgent pod.

            oc get dpa  -o yaml 
            apiVersion: v1
            items:
            - apiVersion: oadp.openshift.io/v1alpha1
              kind: DataProtectionApplication
              metadata:
                annotations:
                  kubectl.kubernetes.io/last-applied-configuration: |
                    {"apiVersion":"oadp.openshift.io/v1alpha1","kind":"DataProtectionApplication","metadata":{"annotations":{},"name":"example-velero","namespace":"openshift-adp"},"spec":{"backupLocations":[{"velero":{"config":{"profile":"default","region":"us-east-1"},"credential":{"key":"cloud","name":"cloud-credentials"},"default":true,"objectStorage":{"bucket":"oadpukthi","prefix":"velero"},"provider":"aws"}}],"configuration":{"nodeAgent":{"enable":true,"podConfig":{"resourceAllocations":{"limits":{"cpu":"100m","memory":"50Mi"},"requests":{"cpu":"50m","memory":"10Mi"}}},"uploaderType":"kopia"},"velero":{"defaultPlugins":["openshift","aws"]}}}}
                creationTimestamp: "2024-08-28T05:45:42Z"
                generation: 1
                name: example-velero
                namespace: openshift-adp
                resourceVersion: "1169153"
                uid: 5efaad9f-7d25-49f2-b91f-9d7b4b80f8e8
              spec:
                backupLocations:
                - velero:
                    config:
                      profile: default
                      region: us-east-1
                    credential:
                      key: cloud
                      name: cloud-credentials
                    default: true
                    objectStorage:
                      bucket: oadpukthi
                      prefix: velero
                    provider: aws
                configuration:
                  nodeAgent:
                    enable: true
                    podConfig:
                      resourceAllocations:
                        limits:
                          cpu: 100m
                          memory: 50Mi
                        requests:
                          cpu: 50m
                          memory: 10Mi
                    uploaderType: kopia
                  velero:
                    defaultPlugins:
                    - openshift
                    - aws
              status:
                conditions:
                - lastTransitionTime: "2024-08-28T05:45:43Z"
                  message: Reconcile complete
                  reason: Complete
                  status: "True"
                  type: Reconciled
            kind: List
            metadata:
              resourceVersion: ""
            

             

             

            2. Deploy a stateful application
             oc get all -n mysql-persistent
            Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
            NAME                         READY   STATUS    RESTARTS   AGE
            pod/mysql-6b49bd67c7-snrb9   1/1     Running   0          2m20s
             
            NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
            service/mysql   ClusterIP   172.30.238.224   <none>        3306/TCP   2m21s
             
            NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
            deployment.apps/mysql   1/1     1            1           2m20s
             
            NAME                               DESIRED   CURRENT   READY   AGE
            replicaset.apps/mysql-6b49bd67c7   1         1         1       2m20s
            

             

            3. Triggered a backup

            oc get backup -o yaml
            apiVersion: v1
            items:
            - apiVersion: velero.io/v1
              kind: Backup
              metadata:
                annotations:
                  kubectl.kubernetes.io/last-applied-configuration: |
                    {"apiVersion":"velero.io/v1","kind":"Backup","metadata":{"annotations":{},"labels":{"velero.io/storage-location":"default"},"name":"backup1","namespace":"openshift-adp"},"spec":{"defaultVolumesToFsBackup":true,"hooks":{},"includedNamespaces":["mysql-persistent"],"storageLocation":"example-velero-1","ttl":"720h0m0s"}}
                  velero.io/resource-timeout: 10m0s
                  velero.io/source-cluster-k8s-gitversion: v1.28.9+2f7b992
                  velero.io/source-cluster-k8s-major-version: "1"
                  velero.io/source-cluster-k8s-minor-version: "28"
                creationTimestamp: "2024-08-28T05:51:03Z"
                generation: 6
                labels:
                  velero.io/storage-location: example-velero-1
                name: backup1
                namespace: openshift-adp
                resourceVersion: "1172416"
                uid: 54b39f3e-28b5-4495-9f93-3546b0749417
              spec:
                csiSnapshotTimeout: 10m0s
                defaultVolumesToFsBackup: true
                hooks: {}
                includedNamespaces:
                - mysql-persistent
                itemOperationTimeout: 4h0m0s
                snapshotMoveData: false
                storageLocation: example-velero-1
                ttl: 720h0m0s
              status:
                completionTimestamp: "2024-08-28T05:51:38Z"
                errors: 1
                expiration: "2024-09-27T05:51:04Z"
                formatVersion: 1.1.0
                hookStatus: {}
                phase: PartiallyFailed
                progress:
                  itemsBackedUp: 34
                  totalItems: 34
                startTimestamp: "2024-08-28T05:51:04Z"
                version: 1
            kind: List
            metadata:
              resourceVersion: ""

            oc get podvolumebackup -o yaml 
            apiVersion: v1
            items:
            - apiVersion: velero.io/v1
              kind: PodVolumeBackup
              metadata:
                annotations:
                  velero.io/pvc-name: mysql
                creationTimestamp: "2024-08-28T05:51:24Z"
                generateName: backup1-
                generation: 3
                labels:
                  velero.io/backup-name: backup1
                  velero.io/backup-uid: 54b39f3e-28b5-4495-9f93-3546b0749417
                  velero.io/pvc-uid: ebdb3894-10a9-4abc-b3a9-b07468420c6f
                name: backup1-jcgwh
                namespace: openshift-adp
                ownerReferences:
                - apiVersion: velero.io/v1
                  controller: true
                  kind: Backup
                  name: backup1
                  uid: 54b39f3e-28b5-4495-9f93-3546b0749417
                resourceVersion: "1172386"
                uid: 05bc0f13-a90c-4694-943b-c5d09612711c
              spec:
                backupStorageLocation: example-velero-1
                node: bootstrap-0.ocp-a3e07001.lnxero1.boe
                pod:
                  kind: Pod
                  name: mysql-6b49bd67c7-snrb9
                  namespace: mysql-persistent
                  uid: 2e53015d-aad6-4135-bdac-b90d2fbceb0f
                repoIdentifier: ""
                tags:
                  backup: backup1
                  backup-uid: 54b39f3e-28b5-4495-9f93-3546b0749417
                  ns: mysql-persistent
                  pod: mysql-6b49bd67c7-snrb9
                  pod-uid: 2e53015d-aad6-4135-bdac-b90d2fbceb0f
                  pvc-uid: ebdb3894-10a9-4abc-b3a9-b07468420c6f
                  volume: mysql-data
                uploaderType: kopia
                volume: mysql-data
              status:
                completionTimestamp: "2024-08-28T05:51:35Z"
                message: found a podvolumebackup with status "InProgress" during the server starting,
                  mark it as "Failed"
                phase: Failed
                progress: {}
                startTimestamp: "2024-08-28T05:51:24Z"
            kind: List
            metadata:
              resourceVersion: ""

            Note: Verified that the message now uses the keyword "found" instead of "get" and no longer begins with the stray ":" symbol.
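            Both fixes in the verified message can be checked mechanically (string copied verbatim from the PodVolumeBackup status above):

```shell
# The message observed in the verified 1.4.1-20 build:
msg='found a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed"'
# Fix 1: wording now starts with "found".
case "$msg" in found\ *) echo "wording fixed" ;; esac
# Fix 2: the message no longer begins with a colon.
case "$msg" in :*) echo "leading colon still present" ;; *) echo "no leading colon" ;; esac
```

            This prints "wording fixed" followed by "no leading colon".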


              uprasad@redhat.com Ukthi Prasad
              akarol@redhat.com Aziza Karol