
      Description of problem:

      After creating multiple default VolumeSnapshotClass/StorageClass resources, the Velero pod crashes with a nil pointer dereference.

       

      Version-Release number of selected component (if applicable):

      OADP 1.1.1
      Volsync 0.5.1

       

      How reproducible:

      Always

       

      Steps to Reproduce:
      1. Create multiple default StorageClass or VolumeSnapshotClass resources, for example by setting the storageclass.kubernetes.io/is-default-class: "true" or snapshot.storage.kubernetes.io/is-default-class: "true" annotation on more than one class.
      2. Create a backup with Data Mover enabled.

      Actual results:

      The Velero pod crashes with a nil pointer dereference.

      Expected results:

      The Velero pod should not crash with a nil pointer dereference; the backup should fail with a clear error instead.

      Additional info:

      2022/10/17 12:39:00 error failed to wait for VolumeSnapshotBackups to be completed: volumesnapshotbackup vsb-4r2fh has failed status
      time="2022-10-17T12:39:00Z" level=error msg="volumesnapshotbackup vsb-4r2fh has failed status" backup=openshift-adp/test-datamover logSource="/remote-source/velero/app/pkg/controller/backup_controller.go:669"
      E1017 12:39:01.010379 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
      goroutine 1764 [running]:
      k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1cf7e00?, 0x32c9640})
      /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/runtime/runtime.go:74 +0x86
      k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00102c0c0?})
      /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/runtime/runtime.go:48 +0x75
      panic({0x1cf7e00, 0x32c9640})
      /usr/lib/golang/src/runtime/panic.go:884 +0x212
      github.com/vmware-tanzu/velero/pkg/datamover.DeleteTempVSClass({0xc000ea29a0?, 0x2?}, {0x2362d00, 0xc0007ca7b0}, 0xc000640960)
      /remote-source/velero/app/pkg/datamover/datamover.go:139 +0xf5
      github.com/vmware-tanzu/velero/pkg/controller.(*backupController).runBackup(0xc0001e3b80, 0xc0009b80d0)
      /remote-source/velero/app/pkg/controller/backup_controller.go:673 +0xfdb
      github.com/vmware-tanzu/velero/pkg/controller.(*backupController).processBackup(0xc0001e3b80, {0xc0011ba440, 0x1c})
      /remote-source/velero/app/pkg/controller/backup_controller.go:295 +0x75c
      github.com/vmware-tanzu/velero/pkg/controller.(*genericController).processNextWorkItem(0xc000788720)
      /remote-source/velero/app/pkg/controller/generic_controller.go:132 +0xeb
      github.com/vmware-tanzu/velero/pkg/controller.(*genericController).runWorker(0xc000834ea8?)
      /remote-source/velero/app/pkg/controller/generic_controller.go:119 +0x25
      k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000834f82?)
      /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:155 +0x3e
      k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000834fd0?, {0x235ce00, 0xc00087eb40}, 0x1, 0xc000e56300)
      /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:156 +0xb6
      k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000bc4780?, 0x3b9aca00, 0x0, 0x5?, 0x0?)
      /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:133 +0x89
      k8s.io/apimachinery/pkg/util/wait.Until(...)
      /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:90
      github.com/vmware-tanzu/velero/pkg/controller.(*genericController).Run.func2()
      /remote-source/velero/app/pkg/controller/generic_controller.go:92 +0x6e
      created by github.com/vmware-tanzu/velero/pkg/controller.(*genericController).Run
      /remote-source/velero/app/pkg/controller/generic_controller.go:91 +0x45a
      panic: runtime error: invalid memory address or nil pointer dereference [recovered]
      panic: runtime error: invalid memory address or nil pointer dereference
      [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x1a2f675]
      goroutine 1764 [running]:
      k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00102c0c0?})
      /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/runtime/runtime.go:55 +0xd7
      panic({0x1cf7e00, 0x32c9640})
      /usr/lib/golang/src/runtime/panic.go:884 +0x212
      github.com/vmware-tanzu/velero/pkg/datamover.DeleteTempVSClass({0xc000ea29a0?, 0x2?}, {0x2362d00, 0xc0007ca7b0}, 0xc000640960)
      /remote-source/velero/app/pkg/datamover/datamover.go:139 +0xf5
      github.com/vmware-tanzu/velero/pkg/controller.(*backupController).runBackup(0xc0001e3b80, 0xc0009b80d0)
      /remote-source/velero/app/pkg/controller/backup_controller.go:673 +0xfdb
      github.com/vmware-tanzu/velero/pkg/controller.(*backupController).processBackup(0xc0001e3b80, {0xc0011ba440, 0x1c})
      /remote-source/velero/app/pkg/controller/backup_controller.go:295 +0x75c
      github.com/vmware-tanzu/velero/pkg/controller.(*genericController).processNextWorkItem(0xc000788720)
      /remote-source/velero/app/pkg/controller/generic_controller.go:132 +0xeb
      github.com/vmware-tanzu/velero/pkg/controller.(*genericController).runWorker(0xc000834ea8?)
      /remote-source/velero/app/pkg/controller/generic_controller.go:119 +0x25
      k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000834f82?)
      /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:155 +0x3e
      k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000834fd0?, {0x235ce00, 0xc00087eb40}, 0x1, 0xc000e56300)
      /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:156 +0xb6
      k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000bc4780?, 0x3b9aca00, 0x0, 0x5?, 0x0?)
      /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:133 +0x89
      k8s.io/apimachinery/pkg/util/wait.Until(...)
      /remote-source/velero/deps/gomod/pkg/mod/k8s.io/apimachinery@v0.23.0/pkg/util/wait/wait.go:90
      github.com/vmware-tanzu/velero/pkg/controller.(*genericController).Run.func2()
      /remote-source/velero/app/pkg/controller/generic_controller.go:92 +0x6e
      created by github.com/vmware-tanzu/velero/pkg/controller.(*genericController).Run
      /remote-source/velero/app/pkg/controller/generic_controller.go:91 +0x45a
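
      The panic originates in datamover.DeleteTempVSClass (datamover.go:139). Together with the "cannot have more than one default storageClass" condition on the VSB below, it looks like the code dereferences the result of a default-class lookup without checking for nil when that lookup fails. A minimal sketch of that failure mode, using purely illustrative types and helper names (not the actual Velero/OADP source):

      // Illustrative sketch only; types and helpers are hypothetical, not the Velero/OADP code.
      package main

      import "fmt"

      type vsClass struct {
          name      string
          isDefault bool
      }

      // defaultVSClass returns the single default class, or nil when zero or more
      // than one default is found (the condition reported in the VSB status below).
      func defaultVSClass(classes []vsClass) *vsClass {
          var found *vsClass
          for i := range classes {
              if classes[i].isDefault {
                  if found != nil {
                      return nil // more than one default
                  }
                  found = &classes[i]
              }
          }
          return found
      }

      func deleteTempVSClass(classes []vsClass) {
          c := defaultVSClass(classes)
          // Missing nil check: with two default classes c is nil and the field access
          // below panics with "invalid memory address or nil pointer dereference".
          fmt.Printf("deleting temp VolumeSnapshotClass %s\n", c.name)
      }

      func main() {
          classes := []vsClass{
              {name: "csi-aws-example", isDefault: true},
              {name: "csi-aws-example-2", isDefault: true},
          }
          deleteTempVSClass(classes) // panics, mirroring the trace above
      }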
      

       

      $ oc get backup -o yaml
      spec:
        csiSnapshotTimeout: 10m0s
        defaultVolumesToRestic: false
        hooks: {}
        includedNamespaces:
        - oadp-812
        storageLocation: ts-1
        ttl: 720h0m0s
      status:
        completionTimestamp: "2022-10-17T12:39:16Z"
        expiration: "2022-11-16T12:38:00Z"
        failureReason: get a backup with status "InProgress" during the server starting, mark it as "Failed"
        formatVersion: 1.1.0
        phase: Failed
        progress:
          itemsBackedUp: 46
          totalItems: 46
        startTimestamp: "2022-10-17T12:38:00Z"
        version: 1
      $ oc get vsb -o yaml
      status:
          conditions:
          - lastTransitionTime: "2022-10-17T12:38:51Z"
            message: cannot have more than one default storageClass
            reason: Error
            status: "False"
            type: Reconciled
          phase: Failed
          sourcePVCData: {}

            [OADP-927] DataMover backup fails with nil pointer issue

            Errata Tool added a comment -

            Since the problem described in this issue should be resolved in a recent advisory, it has been closed.

            For information on the advisory, and where to find the updated files, follow the link below.

            If the solution does not work for you, open a new bug report.
            https://access.redhat.com/errata/RHSA-2022:8634


            Shahaf Bahar added a comment - edited

            Verified bug: Passed.
            Tested on OADP version 1.1.1, Volsync version 0.5.1.
            OADP bundle image:  oadp-operator-bundle-container-1.1.1-34.
            Volsync bundle image: rh-osbs/rhacm2-volsync-operator-bundle:v0.5.1-8.

            oc get csv
            NAME                     DISPLAY         VERSION   REPLACES                 PHASE
            oadp-operator.v1.1.1     OADP Operator   1.1.1     oadp-operator.v1.1.0     Succeeded
            volsync-product.v0.5.1   VolSync         0.5.1     volsync-product.v0.5.0   Succeeded 

            I created 2 VolumeSnapshotClasses as default:

            oc get volumesnapshotclass csi-aws-example csi-aws-example-2 -o yaml
            apiVersion: v1
            items:
            - apiVersion: snapshot.storage.k8s.io/v1
              deletionPolicy: Retain
              driver: ebs.csi.aws.com
              kind: VolumeSnapshotClass
              metadata:
                annotations:
                  snapshot.storage.kubernetes.io/is-default-class: "true"
                creationTimestamp: "2022-10-27T09:38:00Z"
                generation: 1
                labels:
                  velero.io/csi-volumesnapshot-class: "true"
                name: csi-aws-example
                resourceVersion: "88204"
                uid: 23242546-cab8-4a6a-907d-fe8f7d1080be
            
            - apiVersion: snapshot.storage.k8s.io/v1
              deletionPolicy: Retain
              driver: kubernetes.io/aws-ebs
              kind: VolumeSnapshotClass
              metadata:
                annotations:
                  snapshot.storage.kubernetes.io/is-default-class: "true"
                creationTimestamp: "2022-10-27T09:40:55Z"
                generation: 1
                labels:
                  velero.io/csi-volumesnapshot-class: "true"
                name: csi-aws-example-2
                resourceVersion: "89397"
                uid: e584dbd3-0367-405b-9e7d-6ff6010dbf43
            kind: List
            metadata:
              resourceVersion: ""
             

            I created 3 StorageClasses as default:

            oc get storageclass                                         
            NAME              PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
            gp2 (default)     kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   true                   3h42m
            gp2-2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   true                   65m
            gp2-3 (default)   ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   107s
            gp2-csi           ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   3h42m
            gp3-csi           ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   3h42m
            oc get storageclass gp2 gp2-2 gp2-3 -o yaml
            apiVersion: v1
            items:
            - allowVolumeExpansion: true
              apiVersion: storage.k8s.io/v1
              kind: StorageClass
              metadata:
                annotations:
                  storageclass.kubernetes.io/is-default-class: "true"
                creationTimestamp: "2022-10-27T07:06:33Z"
                name: gp2
                resourceVersion: "4067"
                uid: 97eab9da-6935-474a-a40c-aab298515c38
              parameters:
                encrypted: "true"
                type: gp2
              provisioner: kubernetes.io/aws-ebs
              reclaimPolicy: Delete
              volumeBindingMode: WaitForFirstConsumer
            - allowVolumeExpansion: true
            
              apiVersion: storage.k8s.io/v1
              kind: StorageClass
              metadata:
                annotations:
                  storageclass.kubernetes.io/is-default-class: "true"
                creationTimestamp: "2022-10-27T09:43:40Z"
                name: gp2-2
                resourceVersion: "90613"
                uid: de91eae2-7606-4807-9aa6-b36816f333e1
              parameters:
                encrypted: "true"
                type: gp2
              provisioner: kubernetes.io/aws-ebs
              reclaimPolicy: Delete
              volumeBindingMode: WaitForFirstConsumer
            - allowVolumeExpansion: true
            
              apiVersion: storage.k8s.io/v1
              kind: StorageClass
              metadata:
                annotations:
                  storageclass.kubernetes.io/is-default-class: "true"
                creationTimestamp: "2022-10-27T10:47:22Z"
                name: gp2-3
                resourceVersion: "119326"
                uid: b0b0c93a-7630-4e14-80d0-7d6976b16689
              parameters:
                encrypted: "true"
                type: gp2
              provisioner: ebs.csi.aws.com
              reclaimPolicy: Delete
              volumeBindingMode: WaitForFirstConsumer
            kind: List
            metadata:
              resourceVersion: "" 

            I created a Data Mover backup, and it failed as expected, without the Velero pod crashing with a nil pointer issue:

            oc get backups backup2 -o jsonpath='{.status.phase..}'  
            PartiallyFailed 
            oc get volumesnapshotbackup -n mysql-persistent  vsb-fbrdq -o jsonpath='{.status.phase..}'
            Failed 
            oc get volumesnapshotbackup -n mysql-persistent  vsb-fbrdq -o jsonpath='{.status.conditions..message}' 
            cannot have more than one default volumeSnapshotClass
            
            oc get po
            NAME                                                READY   STATUS    RESTARTS   AGE
            openshift-adp-controller-manager-7dd5746c89-hqc8v   1/1     Running   0          3h27m
            restic-cz6tk                                        1/1     Running   0          75m
            restic-gs5p7                                        1/1     Running   0          75m
            restic-smsmt                                        1/1     Running   0          75m
            velero-64d6bf8f68-bd6bp                             1/1     Running   0          75m
            volume-snapshot-mover-5bb465cfb5-tzdqc              1/1     Running   0          75m 


            Maya Peretz added a comment - edited

            Currently experiencing a similar issue with stateless apps that don't have PVCs:

            [mperetz@fedora jenkins-jcasc-n]$ oc get pods -n openshift-adp
            NAME                                              READY   STATUS    RESTARTS      AGE
            openshift-adp-controller-manager-c6b586c4-44f6c   1/1     Running   0             3h52m
            restic-5gf6r                                      1/1     Running   0             12m
            restic-hbqnm                                      1/1     Running   0             12m
            restic-x22zp                                      1/1     Running   0             12m
            velero-854cf5d4c9-fmf59                           1/1     Running   1 (11m ago)   12m
            volume-snapshot-mover-5665464554-cnbhp            1/1     Running   0             12m
            vsb-5sdgv-pod                                     1/1     Running   0             146m
            vsb-fsm64-pod                                     1/1     Running   0             3h27m
            vsb-m6mbr-pod                                     1/1     Running   0             159m
            [mperetz@fedora jenkins-jcasc-n]$  
            [mperetz@fedora jenkins-jcasc-n]$ oc logs deploy/velero -n openshift-adp --previous | grep error
            Defaulted container "velero" out of: velero, openshift-velero-plugin (init), velero-plugin-for-microsoft-azure (init), kubevirt-velero-plugin (init), velero-plugin-for-csi (init)
            time="2022-10-18T13:39:02Z" level=error msg="Current BackupStorageLocations available/unavailable/unknown: 0/0/1)" controller=backup-storage-location logSource="/remote-source/velero/app/pkg/controller/backup_storage_location_controller.go:173"
            E1018 13:39:42.787085       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
            panic: runtime error: invalid memory address or nil pointer dereference [recovered]
            	panic: runtime error: invalid memory address or nil pointer dereference
            [mperetz@fedora jenkins-jcasc-n]$  

            The expected behavior here without PVCs is that it does a normal backup of the Kubernetes objects and skips the Data Mover part, the same as with native CSI/Restic. This functionality worked before:
            https://reportportal-migration-qe.apps.ocp-c1.prod.psi.redhat.com/ui/#oadp/launches/121/2137/59873/log

            now it fails:

            https://reportportal-migration-qe.apps.ocp-c1.prod.psi.redhat.com/ui/#oadp/launches/152/2278/70657/log
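
            A minimal sketch of the skip behavior described above, assuming a hypothetical guard on the number of PVCs in the backup (names are illustrative, not the actual Velero/OADP code):

            // Hypothetical guard only; not the actual Velero/OADP data mover code.
            package main

            import "fmt"

            type pvc struct{ name string }

            // runDataMoverPhase creates VolumeSnapshotBackups only when the backup
            // contains PVCs; stateless apps fall through to a plain resource backup.
            func runDataMoverPhase(pvcs []pvc) {
                if len(pvcs) == 0 {
                    fmt.Println("no PVCs in backup, skipping data mover phase")
                    return
                }
                for _, p := range pvcs {
                    fmt.Println("creating VolumeSnapshotBackup for PVC", p.name)
                }
            }

            func main() {
                runDataMoverPhase(nil) // stateless app: no PVCs, no panic expected
            }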

             

             

