Bug · Critical · Resolution: Done · OADP 1.1.0
Build: oadp-operator-bundle-container-1.1.0-64 (Passed)
Multiarch Incompatibility
Description of problem:
The volume-snapshot-mover pod goes into a CrashLoopBackOff state on Power (ppc64le) and Z (s390x) platforms when the dataMover feature is enabled in the DataProtectionApplication.
[root@rdr-sg-oadp-e8bc-tok04-bastion-0 ~]# oc get pods -n openshift-adp | grep volume-snapshot-mover
volume-snapshot-mover-64cdcf4b97-knpgg   0/1   CrashLoopBackOff   5 (2m39s ago)   6m7s

[root@rdr-sg-oadp-e8bc-tok04-bastion-0 ~]# oc logs volume-snapshot-mover-64cdcf4b97-knpgg -n openshift-adp
exec /manager: exec format error
[root@rdr-sg-oadp-e8bc-tok04-bastion-0 ~]# oc describe pod volume-snapshot-mover-64cdcf4b97-knpgg -n openshift-adp
Name:         volume-snapshot-mover-64cdcf4b97-knpgg
Namespace:    openshift-adp
Priority:     0
Node:         tok04-worker-0.rdr-sg-oadp-e8bc.ibm.com/193.168.200.178
Start Time:   Tue, 09 Aug 2022 09:21:02 -0400
Labels:       component=data-mover-controller
              pod-template-hash=64cdcf4b97
. . .
Events:
  Type     Reason          Age                   From               Message
  ----     ------          ----                  ----               -------
  Normal   Scheduled       7m2s                  default-scheduler  Successfully assigned openshift-adp/volume-snapshot-mover-64cdcf4b97-knpgg to tok04-worker-0.rdr-sg-oadp-e8bc.ibm.com by tok04-master-0.rdr-sg-oadp-e8bc.ibm.com
  Normal   AddedInterface  7m                    multus             Add eth0 [10.129.2.150/23] from openshift-sdn
  Normal   Pulled          6m57s                 kubelet            Successfully pulled image "quay.io/konveyor/volume-snapshot-mover:latest" in 2.68351867s
  Normal   Pulled          6m53s                 kubelet            Successfully pulled image "quay.io/konveyor/volume-snapshot-mover:latest" in 2.597718778s
  Normal   Pulled          6m35s                 kubelet            Successfully pulled image "quay.io/konveyor/volume-snapshot-mover:latest" in 2.590410376s
  Normal   Pulled          6m5s                  kubelet            Successfully pulled image "quay.io/konveyor/volume-snapshot-mover:latest" in 2.591963509s
  Normal   Created         6m4s (x4 over 6m56s)  kubelet            Created container data-mover-controller-container
  Normal   Started         6m4s (x4 over 6m56s)  kubelet            Started container data-mover-controller-container
  Normal   Pulling         5m11s (x5 over 7m)    kubelet            Pulling image "quay.io/konveyor/volume-snapshot-mover:latest"
  Warning  BackOff         110s (x24 over 6m51s) kubelet            Back-off restarting failed container
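The `exec /manager: exec format error` message is the kernel refusing to run a binary built for a different CPU architecture, which suggests the image is published only for amd64 rather than as a multi-arch manifest list covering ppc64le/s390x. A minimal sketch of how one could check which architectures a manifest list covers; the manifest JSON below is an illustrative sample (obtainable in practice with e.g. `skopeo inspect --raw docker://<image>`), not the real manifest of the volume-snapshot-mover image:

```python
import json

# Illustrative sample of a Docker manifest list as returned by a registry.
# This is NOT the actual manifest of quay.io/konveyor/volume-snapshot-mover.
manifest_list = json.loads("""
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {"digest": "sha256:aaa", "platform": {"architecture": "amd64", "os": "linux"}}
  ]
}
""")

def supported_architectures(manifest):
    """Return the set of architectures a manifest list covers.

    A single-arch image manifest has no "manifests" key, so this returns
    an empty set for it; scheduling such an image onto ppc64le/s390x
    nodes produces exactly the "exec format error" seen in this bug.
    """
    return {m["platform"]["architecture"] for m in manifest.get("manifests", [])}

archs = supported_architectures(manifest_list)
print("architectures:", sorted(archs))
for wanted in ("ppc64le", "s390x"):
    if wanted not in archs:
        print(f"image is NOT built for {wanted}")
```

A fix would be to publish the image as a manifest list that includes ppc64le and s390x entries alongside amd64.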
Version-Release number of selected component (if applicable):
Tested with Downstream build 1.1-50 on OCP 4.11
Architectures: ppc64le, s390x
How reproducible:
Steps to Reproduce:
1. Create a DPA with `spec.features.dataMover.enable` set to `true` on a ppc64le/s390x OCP 4.11 cluster:
cat <<EOF | oc create -f -
apiVersion: v1
items:
- apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: dap-sample
    namespace: openshift-adp
  spec:
    backupLocations:
    - name: default
      velero:
        provider: aws
        default: true
        objectStorage:
          bucket: bucket-validation-team
          prefix: velero
        config:
          region: us-south
        credential:
          name: cloud-credentials
          key: cloud
    configuration:
      restic:
        enable: true
      velero:
        defaultPlugins:
        - openshift
        - csi
        - aws
    features:
      dataMover:
        enable: true
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
EOF
2. Run `VOLUME_SNAPSHOT_MOVER=$(oc get pods -n openshift-adp -oname | grep volume-snapshot-mover)`
3. Check pod logs - `oc logs $VOLUME_SNAPSHOT_MOVER -n openshift-adp`
Actual results:
The volume-snapshot-mover pod goes into CrashLoopBackOff; its log shows `exec /manager: exec format error`.
Expected results:
The volume-snapshot-mover pod reaches the Running state on ppc64le/s390x nodes.
Additional info: