Extract /home/jenkins/oadp-e2e-qe.tar.gz to /alabama/cspi
Extract /home/jenkins/oadp-apps-deployer.tar.gz to /alabama/oadpApps
Extract /home/jenkins/mtc-python-client.tar.gz to /alabama/pyclient
Create and populate /tmp/test-settings...
Login as Kubeadmin to the test cluster at https://api.ci-op-kgytzj8j-66a7a.cspilp.interop.ccitredhat.com:6443...
WARNING: Using insecure TLS client config. Setting this option is not supported!
Login successful.
You have access to 70 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
Create virtual environment and install required packages...
Collecting ansible_runner
  Downloading ansible_runner-2.3.2-py3-none-any.whl (80 kB)
Collecting six
  Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting pexpect>=4.5
  Downloading pexpect-4.8.0-py2.py3-none-any.whl (59 kB)
Collecting pyyaml
  Downloading PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (682 kB)
Collecting packaging
  Downloading packaging-23.1-py3-none-any.whl (48 kB)
Collecting python-daemon
  Downloading python_daemon-3.0.1-py3-none-any.whl (31 kB)
Collecting ptyprocess>=0.5
  Downloading ptyprocess-0.7.0-py2.py3-none-any.whl (13 kB)
Collecting setuptools>=62.4.0
  Downloading setuptools-67.8.0-py3-none-any.whl (1.1 MB)
Collecting lockfile>=0.10
  Downloading lockfile-0.12.2-py2.py3-none-any.whl (13 kB)
Collecting docutils
  Downloading docutils-0.20.1-py3-none-any.whl (572 kB)
Installing collected packages: setuptools, ptyprocess, lockfile, docutils, six, pyyaml, python-daemon, pexpect, packaging, ansible-runner
  Attempting uninstall: setuptools
    Found existing installation: setuptools 57.4.0
    Uninstalling setuptools-57.4.0:
      Successfully uninstalled setuptools-57.4.0
Successfully installed ansible-runner-2.3.2 docutils-0.20.1 lockfile-0.12.2 packaging-23.1 pexpect-4.8.0 ptyprocess-0.7.0 python-daemon-3.0.1 pyyaml-6.0 setuptools-67.8.0 six-1.16.0
WARNING: You are using pip version 21.2.3; however, version 23.1.2 is available.
You should consider upgrading via the '/alabama/venv/bin/python3 -m pip install --upgrade pip' command.
Processing /alabama/oadpApps
  DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default. pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing wheel metadata: started
  Preparing wheel metadata: finished with status 'done'
Building wheels for collected packages: ocpdeployer
  Building wheel for ocpdeployer (PEP 517): started
  Building wheel for ocpdeployer (PEP 517): finished with status 'done'
  Created wheel for ocpdeployer: filename=ocpdeployer-0.0.1-py2.py3-none-any.whl size=188422 sha256=751360e5752fb5cfb3139408ea91864700893f00bfaf3c03bc3d1dc2fa7bf72f
  Stored in directory: /tmp/pip-ephem-wheel-cache-fbxzou5m/wheels/ea/a6/87/9d98fc51cc395d30bd2147a7e53e3f5a2e80044d9dd9e64977
Successfully built ocpdeployer
Installing collected packages: ocpdeployer
WARNING: Value for scheme.platlib does not match. Please report this to
  distutils: /tmp/pip-target-1ab4c54f/lib64/python
  sysconfig: /tmp/pip-target-1ab4c54f/lib/python
WARNING: Additional context:
  user = False
  home = '/tmp/pip-target-1ab4c54f'
  root = None
  prefix = None
Successfully installed ocpdeployer-0.0.1
WARNING: You are using pip version 21.2.3; however, version 23.1.2 is available.
You should consider upgrading via the '/alabama/venv/bin/python3 -m pip install --upgrade pip' command.
Processing /alabama/pyclient
  DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default. pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.
Collecting suds-py3
  Downloading suds_py3-1.4.5.0-py3-none-any.whl (298 kB)
Collecting requests
  Downloading requests-2.31.0-py3-none-any.whl (62 kB)
Collecting jinja2
  Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting kubernetes==11.0.0
  Downloading kubernetes-11.0.0-py3-none-any.whl (1.5 MB)
Collecting openshift==0.11.2
  Downloading openshift-0.11.2.tar.gz (19 kB)
Requirement already satisfied: setuptools>=21.0.0 in /alabama/venv/lib/python3.10/site-packages (from kubernetes==11.0.0->mtc==0.0.1) (67.8.0)
Collecting python-dateutil>=2.5.3
  Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Requirement already satisfied: pyyaml>=3.12 in /alabama/venv/lib/python3.10/site-packages (from kubernetes==11.0.0->mtc==0.0.1) (6.0)
Requirement already satisfied: six>=1.9.0 in /alabama/venv/lib/python3.10/site-packages (from kubernetes==11.0.0->mtc==0.0.1) (1.16.0)
Collecting google-auth>=1.0.1
  Downloading google_auth-2.19.0-py2.py3-none-any.whl (181 kB)
Collecting requests-oauthlib
  Downloading requests_oauthlib-1.3.1-py2.py3-none-any.whl (23 kB)
Collecting websocket-client!=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0
  Downloading websocket_client-1.5.2-py3-none-any.whl (56 kB)
Collecting certifi>=14.05.14
  Downloading certifi-2023.5.7-py3-none-any.whl (156 kB)
Collecting urllib3>=1.24.2
  Downloading urllib3-2.0.2-py3-none-any.whl (123 kB)
Collecting python-string-utils
  Downloading python_string_utils-1.0.0-py3-none-any.whl (26 kB)
Collecting ruamel.yaml>=0.15
  Downloading ruamel.yaml-0.17.28-py3-none-any.whl (109 kB)
Collecting urllib3>=1.24.2
  Downloading urllib3-1.26.16-py2.py3-none-any.whl (143 kB)
Collecting cachetools<6.0,>=2.0.0
  Downloading cachetools-5.3.1-py3-none-any.whl (9.3 kB)
Collecting rsa<5,>=3.1.4
  Downloading rsa-4.9-py3-none-any.whl (34 kB)
Collecting pyasn1-modules>=0.2.1
  Downloading pyasn1_modules-0.3.0-py2.py3-none-any.whl (181 kB)
Collecting pyasn1<0.6.0,>=0.4.6
  Downloading pyasn1-0.5.0-py2.py3-none-any.whl (83 kB)
Collecting ruamel.yaml.clib>=0.2.7
  Downloading ruamel.yaml.clib-0.2.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (485 kB)
Collecting MarkupSafe>=2.0
  Downloading MarkupSafe-2.1.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Collecting idna<4,>=2.5
  Downloading idna-3.4-py3-none-any.whl (61 kB)
Collecting charset-normalizer<4,>=2
  Downloading charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (199 kB)
Collecting oauthlib>=3.0.0
  Downloading oauthlib-3.2.2-py3-none-any.whl (151 kB)
Using legacy 'setup.py install' for mtc, since package 'wheel' is not installed.
Using legacy 'setup.py install' for openshift, since package 'wheel' is not installed.
Installing collected packages: urllib3, pyasn1, idna, charset-normalizer, certifi, rsa, requests, pyasn1-modules, oauthlib, cachetools, websocket-client, ruamel.yaml.clib, requests-oauthlib, python-dateutil, MarkupSafe, google-auth, ruamel.yaml, python-string-utils, kubernetes, jinja2, suds-py3, openshift, mtc
    Running setup.py install for openshift: started
    Running setup.py install for openshift: finished with status 'done'
    Running setup.py install for mtc: started
    Running setup.py install for mtc: finished with status 'done'
Successfully installed MarkupSafe-2.1.2 cachetools-5.3.1 certifi-2023.5.7 charset-normalizer-3.1.0 google-auth-2.19.0 idna-3.4 jinja2-3.1.2 kubernetes-11.0.0 mtc-0.0.1 oauthlib-3.2.2 openshift-0.11.2 pyasn1-0.5.0 pyasn1-modules-0.3.0 python-dateutil-2.8.2 python-string-utils-1.0.0 requests-2.31.0 requests-oauthlib-1.3.1 rsa-4.9 ruamel.yaml-0.17.28 ruamel.yaml.clib-0.2.7 suds-py3-1.4.5.0 urllib3-1.26.16 websocket-client-1.5.2
WARNING: You are using pip version 21.2.3; however, version 23.1.2 is available.
You should consider upgrading via the '/alabama/venv/bin/python3 -m pip install --upgrade pip' command.
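The prep stage above boils down to a handful of shell steps. A minimal sketch, assuming the target directories and a kubeadmin password in KUBEADMIN_PASSWORD (the pipeline's actual script may differ):

# Unpack the three test repos (destinations taken from the log above)
mkdir -p /alabama/cspi /alabama/oadpApps /alabama/pyclient
tar -xzf /home/jenkins/oadp-e2e-qe.tar.gz -C /alabama/cspi
tar -xzf /home/jenkins/oadp-apps-deployer.tar.gz -C /alabama/oadpApps
tar -xzf /home/jenkins/mtc-python-client.tar.gz -C /alabama/pyclient

# Log in to the test cluster; insecure TLS matches the warning in the log
oc login https://api.ci-op-kgytzj8j-66a7a.cspilp.interop.ccitredhat.com:6443 \
  --username=kubeadmin --password="$KUBEADMIN_PASSWORD" --insecure-skip-tls-verify=true

# Create the virtualenv and install the helper package plus both local repos
python3 -m venv /alabama/venv
source /alabama/venv/bin/activate
pip install ansible_runner
pip install /alabama/oadpApps /alabama/pyclient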
Executing tests...
+ readonly 'RED=\e[31m'
+ RED='\e[31m'
+ readonly 'BLUE=\033[34m'
+ BLUE='\033[34m'
+ readonly 'CLEAR=\e[39m'
+ CLEAR='\e[39m'
++ oc get infrastructures cluster -o 'jsonpath={.status.platform}'
++ awk '{print tolower($0)}'
+ CLOUD_PROVIDER=aws
+ E2E_TIMEOUT_MULTIPLIER=2
+ export NAMESPACE=openshift-adp
+ NAMESPACE=openshift-adp
+ export PROVIDER=aws
+ PROVIDER=aws
++ echo aws
++ awk '{print tolower($0)}'
+ BACKUP_LOCATION=aws
+ export BACKUP_LOCATION=aws
+ BACKUP_LOCATION=aws
+ export BUCKET=ci-op-kgytzj8j-interopoadp
+ BUCKET=ci-op-kgytzj8j-interopoadp
+ OADP_CREDS_FILE=/tmp/test-settings/credentials
+++ readlink -f /alabama/cspi/test_settings/scripts/test_runner.sh
++ dirname /alabama/cspi/test_settings/scripts/test_runner.sh
+ readonly SCRIPT_DIR=/alabama/cspi/test_settings/scripts
+ SCRIPT_DIR=/alabama/cspi/test_settings/scripts
++ cd /alabama/cspi/test_settings/scripts
++ git rev-parse --show-toplevel
+ readonly TOP_DIR=/alabama/cspi
+ TOP_DIR=/alabama/cspi
+ echo /alabama/cspi
/alabama/cspi
+ TESTS_FOLDER=e2e
+ TEST_FILTER='!// || (// && !exclude_aws && (!/target/ || target_aws) ) '
+ [[ aws =~ aws ]]
++ oc config current-context
++ awk -F / '{print $2}'
+ SETTINGS_TMP=/alabama/cspi/output_files/api-ci-op-kgytzj8j-66a7a-cspilp-interop-ccitredhat-com:6443
+ mkdir -p /alabama/cspi/output_files/api-ci-op-kgytzj8j-66a7a-cspilp-interop-ccitredhat-com:6443
++ oc get authentication cluster -o 'jsonpath={.spec.serviceAccountIssuer}'
+ IS_OIDC=
+ '[' '!' -z ']'
+ [[ aws == \a\w\s ]]
+ export PROVIDER=aws
+ PROVIDER=aws
+ export CREDS_SECRET_REF=cloud-credentials
+ CREDS_SECRET_REF=cloud-credentials
++ oc get infrastructures cluster -o 'jsonpath={.status.platformStatus.aws.region}' --allow-missing-template-keys=false
+ export REGION=us-east-1
+ REGION=us-east-1
+ settings_script=aws_settings.sh
+ '[' aws == aws-sts ']'
+ BUCKET=ci-op-kgytzj8j-interopoadp
+ TMP_DIR=/alabama/cspi/output_files/api-ci-op-kgytzj8j-66a7a-cspilp-interop-ccitredhat-com:6443
+ source /alabama/cspi/test_settings/scripts/aws_settings.sh
++ cat
++ [[ aws == \a\w\s ]]
++ cat
++ echo -e '\n }\n}'
+++ cat /alabama/cspi/output_files/api-ci-op-kgytzj8j-66a7a-cspilp-interop-ccitredhat-com:6443/settings.json
++ x='{ "metadata": { "namespace": "openshift-adp" }, "spec": { "configuration":{ "velero":{ "defaultPlugins": [ "openshift", "aws" ] } }, "backupLocations": [ { "velero": { "provider": "aws", "default": true, "config": { "region": "us-east-1" }, "credential":{ "name": "cloud-credentials", "key": "cloud" }, "objectStorage":{ "bucket": "ci-op-kgytzj8j-interopoadp" } } } ] , "snapshotLocations": [ { "velero": { "provider": "aws", "config": { "profile": "default", "region": "us-east-1" } } } ] } }'
++ echo '{ "metadata": { "namespace": "openshift-adp" }, "spec": { "configuration":{ "velero":{ "defaultPlugins": [ "openshift", "aws" ] } }, "backupLocations": [ { "velero": { "provider": "aws", "default": true, "config": { "region": "us-east-1" }, "credential":{ "name": "cloud-credentials", "key": "cloud" }, "objectStorage":{ "bucket": "ci-op-kgytzj8j-interopoadp" } } } ] , "snapshotLocations": [ { "velero": { "provider": "aws", "config": { "profile": "default", "region": "us-east-1" } } } ] } }'
++ grep -o '^[^#]*'
+ FILE_SETTINGS_NAME=settings.json
+ printf '\033[34mGenerated settings file under /alabama/cspi/output_files/api-ci-op-kgytzj8j-66a7a-cspilp-interop-ccitredhat-com:6443/settings.json\e[39m\n'
Generated settings file under /alabama/cspi/output_files/api-ci-op-kgytzj8j-66a7a-cspilp-interop-ccitredhat-com:6443/settings.json
+ cat /alabama/cspi/output_files/api-ci-op-kgytzj8j-66a7a-cspilp-interop-ccitredhat-com:6443/settings.json
++ oc get volumesnapshotclass -o name
+ for i in $(oc get volumesnapshotclass -o name)
+ oc annotate volumesnapshotclass.snapshot.storage.k8s.io/csi-aws-vsc snapshot.storage.kubernetes.io/is-default-class-
volumesnapshotclass.snapshot.storage.k8s.io/csi-aws-vsc annotated
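The trailing dash in that annotate call is what does the work: 'oc annotate <resource> <key>-' removes the annotation instead of setting it, so the loop strips the default-class marker from every VolumeSnapshotClass before the suite installs its own. The same idea in isolation:

# Remove the default-class annotation from every VolumeSnapshotClass;
# the trailing '-' after the key tells 'oc annotate' to delete it
for vsc in $(oc get volumesnapshotclass -o name); do
  oc annotate "$vsc" snapshot.storage.kubernetes.io/is-default-class-
done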
++ ./e2e/must-gather/get-latest-build.sh
+ UPSTREAM_VERSION=99.0.0
++ oc get OperatorCondition -n openshift-adp -o 'jsonpath={.items[*].metadata.name}'
++ awk -F v '{print $2}'
+ OADP_VERSION=1.1.4
+ '[' -z 1.1.4 ']'
+ '[' 1.1.4 == 99.0.0 ']'
++ oc get sub redhat-oadp-operator -n openshift-adp -o 'jsonpath={.spec.source}'
+ OADP_REPO=redhat-operators
+ '[' -z redhat-operators ']'
+ '[' redhat-operators == redhat-operators ']'
+ REGISTRY_PATH=registry.redhat.io/oadp/oadp-mustgather-rhel8:
+ TAG=1.1.4
+ export MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel8:1.1.4
+ MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel8:1.1.4
+ echo registry.redhat.io/oadp/oadp-mustgather-rhel8:1.1.4
+ check_image_by_pull registry.redhat.io/oadp/oadp-mustgather-rhel8:1.1.4
++ oc import-image mystream -n openshift --dry-run --from registry.redhat.io/oadp/oadp-mustgather-rhel8:1.1.4 --confirm
+ IMAGE_PULL='imagestream.image.openshift.io/mystream imported (dry run) Image Name: mystream:1.1.4 Docker Image: registry.redhat.io/oadp/oadp-mustgather-rhel8@sha256:3bdbc83826e25c356fb6c91cc8c1cc90390daca9d23bee6aeed71b9b7423ce2a Name: sha256:3bdbc83826e25c356fb6c91cc8c1cc90390daca9d23bee6aeed71b9b7423ce2a Annotations: image.openshift.io/dockerLayersOrder=ascending Image Size: 139.1MB in 2 layers Layers: 39.35MB sha256:28ff5ee6facbc15dc879cb26daf949072ec01118d3463efd1f991d9b92e175ef 99.7MB sha256:e167b3d2f9b0bb0f232edbba9c1e6f0e051d9012bd18386b4ae5f022cdff612b Image Created: 2 weeks ago Author: Arch: amd64 Entrypoint: /bin/sh -c /usr/bin/gather Working Dir: User: Exposes Ports: Docker Labels: architecture=x86_64 build-date=2023-05-10T11:59:57 com.redhat.component=oadp-mustgather-container com.redhat.license_terms=https://www.redhat.com/agreements description=OpenShift API for Data Protection data gathering image distribution-scope=public io.buildah.version=1.27.3 io.k8s.description=OpenShift API for Data Protection data gathering image io.k8s.display-name=OpenShift API for Data Protection - mustgather io.openshift.build.commit.id=69b33048fa6ab901470e9a1a8425bc983715206c io.openshift.build.commit.url=https://github.com/openshift/oadp-operator/commit/69b33048fa6ab901470e9a1a8425bc983715206c io.openshift.build.source-location=https://github.com/openshift/oadp-operator.git io.openshift.expose-services= io.openshift.tags=data,images maintainer=OpenShift API for Data Protection Team name=oadp/oadp-mustgather-rhel8 release=14 summary=OpenShift API for Data Protection data gathering image url=https://access.redhat.com/containers/#/registry.access.redhat.com/oadp/oadp-mustgather-rhel8/images/1.1.4-14 vcs-ref=4eb3192003e55b037f514604c4e3da6e463fd55d vcs-type=git vendor=Red Hat, Inc. version=1.1.4 Environment: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin container=oci'
+ ERR='imagestream.image.openshift.io/mystream imported (dry run) Image Name: mystream:1.1.4 Docker Image: registry.redhat.io/oadp/oadp-mustgather-rhel8@sha256:3bdbc83826e25c356fb6c91cc8c1cc90390daca9d23bee6aeed71b9b7423ce2a Name: sha256:3bdbc83826e25c356fb6c91cc8c1cc90390daca9d23bee6aeed71b9b7423ce2a Annotations: image.openshift.io/dockerLayersOrder=ascending Image Size: 139.1MB in 2 layers Layers: 39.35MB sha256:28ff5ee6facbc15dc879cb26daf949072ec01118d3463efd1f991d9b92e175ef 99.7MB sha256:e167b3d2f9b0bb0f232edbba9c1e6f0e051d9012bd18386b4ae5f022cdff612b Image Created: 2 weeks ago Author: Arch: amd64 Entrypoint: /bin/sh -c /usr/bin/gather Working Dir: User: Exposes Ports: Docker Labels: architecture=x86_64 build-date=2023-05-10T11:59:57 com.redhat.component=oadp-mustgather-container com.redhat.license_terms=https://www.redhat.com/agreements description=OpenShift API for Data Protection data gathering image distribution-scope=public io.buildah.version=1.27.3 io.k8s.description=OpenShift API for Data Protection data gathering image io.k8s.display-name=OpenShift API for Data Protection - mustgather io.openshift.build.commit.id=69b33048fa6ab901470e9a1a8425bc983715206c io.openshift.build.commit.url=https://github.com/openshift/oadp-operator/commit/69b33048fa6ab901470e9a1a8425bc983715206c io.openshift.build.source-location=https://github.com/openshift/oadp-operator.git io.openshift.expose-services= io.openshift.tags=data,images maintainer=OpenShift API for Data Protection Team name=oadp/oadp-mustgather-rhel8 release=14 summary=OpenShift API for Data Protection data gathering image url=https://access.redhat.com/containers/#/registry.access.redhat.com/oadp/oadp-mustgather-rhel8/images/1.1.4-14 vcs-ref=4eb3192003e55b037f514604c4e3da6e463fd55d vcs-type=git vendor=Red Hat, Inc. version=1.1.4 Environment: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin container=oci'
+ RESULT=0
+ '[' 0 == 0 ']'
++ echo 'imagestream.image.openshift.io/mystream imported (dry run) Image Name: mystream:1.1.4 Docker Image: registry.redhat.io/oadp/oadp-mustgather-rhel8@sha256:3bdbc83826e25c356fb6c91cc8c1cc90390daca9d23bee6aeed71b9b7423ce2a Name: sha256:3bdbc83826e25c356fb6c91cc8c1cc90390daca9d23bee6aeed71b9b7423ce2a Annotations: image.openshift.io/dockerLayersOrder=ascending Image Size: 139.1MB in 2 layers Layers: 39.35MB sha256:28ff5ee6facbc15dc879cb26daf949072ec01118d3463efd1f991d9b92e175ef 99.7MB sha256:e167b3d2f9b0bb0f232edbba9c1e6f0e051d9012bd18386b4ae5f022cdff612b Image Created: 2 weeks ago Author: Arch: amd64 Entrypoint: /bin/sh -c /usr/bin/gather Working Dir: User: Exposes Ports: Docker Labels: architecture=x86_64 build-date=2023-05-10T11:59:57 com.redhat.component=oadp-mustgather-container com.redhat.license_terms=https://www.redhat.com/agreements description=OpenShift API for Data Protection data gathering image distribution-scope=public io.buildah.version=1.27.3 io.k8s.description=OpenShift API for Data Protection data gathering image io.k8s.display-name=OpenShift API for Data Protection - mustgather io.openshift.build.commit.id=69b33048fa6ab901470e9a1a8425bc983715206c io.openshift.build.commit.url=https://github.com/openshift/oadp-operator/commit/69b33048fa6ab901470e9a1a8425bc983715206c io.openshift.build.source-location=https://github.com/openshift/oadp-operator.git io.openshift.expose-services= io.openshift.tags=data,images maintainer=OpenShift API for Data Protection Team name=oadp/oadp-mustgather-rhel8 release=14 summary=OpenShift API for Data Protection data gathering image url=https://access.redhat.com/containers/#/registry.access.redhat.com/oadp/oadp-mustgather-rhel8/images/1.1.4-14 vcs-ref=4eb3192003e55b037f514604c4e3da6e463fd55d vcs-type=git vendor=Red Hat, Inc. version=1.1.4 Environment: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin container=oci'
++ grep imported
++ head -n1
++ wc -l
+ IMAGE_PULL=1
+ '[' 1 '!=' 1 ']'
+ export MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel8:1.1.4
+ MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel8:1.1.4
+ '[' -z registry.redhat.io/oadp/oadp-mustgather-rhel8:1.1.4 ']'
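The availability check above leans on 'oc import-image --dry-run': importing the tag into a throwaway ImageStream validates that the cluster can resolve and pull the manifest without running anything, and the script simply greps the output for 'imported'. A standalone sketch of the same check (the stream name 'mystream' is arbitrary):

IMAGE=registry.redhat.io/oadp/oadp-mustgather-rhel8:1.1.4
if oc import-image mystream -n openshift --dry-run --confirm --from "$IMAGE" | grep -q imported; then
  echo "image is pullable: $IMAGE"
fi

The resolved tag is exported as MUST_GATHER_BUILD and handed to the suite below via -must_gather_image, presumably so diagnostics can be collected with that image when a spec fails.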
+ ginkgo run -mod=mod e2e/ -- -credentials_file=/tmp/test-settings/credentials -oadp_namespace=openshift-adp -settings=/alabama/cspi/output_files/api-ci-op-kgytzj8j-66a7a-cspilp-interop-ccitredhat-com:6443/settings.json -must_gather_image=registry.redhat.io/oadp/oadp-mustgather-rhel8:1.1.4 -timeout_multiplier=2 --ginkgo.junit-report=/alabama/cspi/e2e/junit_report.xml '--ginkgo.label-filter=!// || (// && !exclude_aws && (!/target/ || target_aws) ) ' --ginkgo.focus=test-upstream
Ginkgo detected a version mismatch between the Ginkgo CLI and the version of Ginkgo imported by your packages:
  Ginkgo CLI Version: 2.9.5
  Mismatched package versions found: 2.7.0 used by e2e
Ginkgo will continue to attempt to run but you may see errors (including flag parsing errors) and should either update your go.mod or your version of the Ginkgo CLI to match.
To install the matching version of the CLI, run 'go install github.com/onsi/ginkgo/v2/ginkgo' from a path that contains a go.mod file.
Alternatively, you can use 'go run github.com/onsi/ginkgo/v2/ginkgo' from a path that contains a go.mod file to invoke the matching version of the Ginkgo CLI.
If you are attempting to test multiple packages that each have a different version of the Ginkgo library with a single Ginkgo CLI, that is currently unsupported.

go: downloading github.com/onsi/gomega v1.24.1
go: downloading github.com/onsi/ginkgo/v2 v2.7.0
go: downloading github.com/openshift/oadp-operator v1.0.2-0.20220818181424-03636888ff91
go: downloading github.com/vmware-tanzu/velero v1.9.0
go: downloading github.com/operator-framework/api v0.14.1-0.20220413143725-33310d6154f3
go: downloading k8s.io/api v0.24.2
go: downloading k8s.io/apimachinery v0.24.2
go: downloading k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9
go: downloading sigs.k8s.io/controller-runtime v0.11.1
go: downloading k8s.io/client-go v0.24.2
go: downloading github.com/apenella/go-ansible v1.1.5
go: downloading github.com/backube/volsync v0.4.0
go: downloading github.com/konveyor/volume-snapshot-mover v0.0.0-20220708150839-4e89b7dd413e
go: downloading github.com/kubernetes-csi/external-snapshotter/client/v4 v4.2.0
go: downloading github.com/openshift/api v0.0.0-20220218143101-271bd7e1834c
go: downloading github.com/andygrunwald/go-jira v1.16.0
go: downloading github.com/google/uuid v1.2.0
go: downloading github.com/openshift/client-go v0.0.0-20211209144617-7385dd6338e3
go: downloading k8s.io/cli-runtime v0.24.2
go: downloading k8s.io/kubectl v0.24.2
go: downloading github.com/google/go-cmp v0.5.9
go: downloading github.com/evanphx/json-patch v4.12.0+incompatible
go: downloading github.com/sirupsen/logrus v1.8.1
go: downloading github.com/go-logr/logr v1.2.3
go: downloading github.com/gogo/protobuf v1.3.2
go: downloading gopkg.in/inf.v0 v0.9.1
go: downloading github.com/google/gofuzz v1.2.0
go: downloading github.com/apenella/go-common-utils/error v0.0.0-20210528133155-34ba915e28c8
go: downloading github.com/apenella/go-common-utils/data v0.0.0-20210528133155-34ba915e28c8
go: downloading github.com/fatih/structs v1.1.0
go: downloading github.com/golang-jwt/jwt/v4 v4.4.2
go: downloading github.com/google/go-querystring v1.1.0
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/trivago/tgo v1.0.7
go: downloading github.com/spf13/cobra v1.4.0
go: downloading github.com/spf13/pflag v1.0.5
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading golang.org/x/net v0.3.0
go: downloading k8s.io/klog/v2 v2.60.1
go: downloading sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2
go: downloading k8s.io/kube-openapi v0.0.0-20220328201542-3ee0da9b0b42
go: downloading golang.org/x/time v0.0.0-20220210224613-90d013bbcef8
go: downloading sigs.k8s.io/structured-merge-diff/v4 v4.2.1
go: downloading github.com/blang/semver/v4 v4.0.0
go: downloading github.com/imdario/mergo v0.3.12
go: downloading golang.org/x/term v0.3.0
go: downloading github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da
go: downloading golang.org/x/sys v0.3.0
go: downloading sigs.k8s.io/yaml v1.3.0
go: downloading github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de
go: downloading github.com/golang/protobuf v1.5.2
go: downloading github.com/google/gnostic v0.5.7-v3refs
go: downloading golang.org/x/text v0.5.0
go: downloading gopkg.in/yaml.v2 v2.4.0
go: downloading github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7
go: downloading github.com/peterbourgon/diskv v2.0.1+incompatible
go: downloading sigs.k8s.io/kustomize/api v0.11.4
go: downloading sigs.k8s.io/kustomize/kyaml v0.13.6
go: downloading github.com/davecgh/go-spew v1.1.1
go: downloading golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8
go: downloading google.golang.org/protobuf v1.28.0
go: downloading github.com/json-iterator/go v1.1.12
go: downloading github.com/google/btree v1.0.1
go: downloading github.com/moby/spdystream v0.2.0
go: downloading github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd
go: downloading github.com/modern-go/reflect2 v1.0.2
go: downloading github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822
go: downloading github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6
go: downloading k8s.io/component-base v0.24.2
go: downloading github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5
go: downloading github.com/MakeNowJust/heredoc v1.0.0
go: downloading github.com/mitchellh/go-wordwrap v1.0.0
go: downloading github.com/russross/blackfriday v1.5.2
go: downloading github.com/emicklei/go-restful v2.9.5+incompatible
go: downloading github.com/go-openapi/swag v0.19.14
go: downloading github.com/go-openapi/jsonreference v0.19.5
go: downloading github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
go: downloading github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00
go: downloading github.com/stretchr/testify v1.7.0
go: downloading github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca
go: downloading github.com/go-errors/errors v1.0.1
go: downloading github.com/fvbommel/sortorder v1.0.1
go: downloading github.com/go-openapi/jsonpointer v0.19.5
go: downloading github.com/PuerkitoBio/purell v1.1.1
go: downloading github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d
go: downloading github.com/mailru/easyjson v0.7.6
go: downloading github.com/fatih/camelcase v1.0.0
go: downloading github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578
go: downloading go.starlark.net v0.0.0-20201006213952-227f4aabceb5
go: downloading github.com/pmezard/go-difflib v1.0.0
go: downloading github.com/josharian/intern v1.0.0
2023/05/29 06:42:04 Setting up clients
I0529 06:42:05.240295 9975 request.go:601] Waited for 1.031714451s due to client-side throttling, not priority and fairness, request: GET:https://api.ci-op-kgytzj8j-66a7a.cspilp.interop.ccitredhat.com:6443/apis/ocs.openshift.io/v1?timeout=32s
2023/05/29 06:42:07 Getting default StorageClass...
2023/05/29 06:42:07 Got default StorageClass gp3-csi
2023/05/29 06:42:07 Using velero prefix: velero-e2e-ee65390a-fdeb-11ed-8b0b-0a580a815c29
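The default-StorageClass lookup the suite performs at startup comes down to reading the standard is-default-class annotation. A rough shell equivalent:

# List StorageClasses with their default-class annotation; this cluster's default is gp3-csi
oc get storageclass -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}{"\n"}{end}' | grep true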
Running Suite: OADP E2E Suite - /alabama/cspi/e2e
=================================================
Random Seed: 1685342500

Will run 8 of 94 specs
------------------------------
[BeforeSuite]
/alabama/cspi/e2e/e2e_suite_test.go:83
[BeforeSuite] PASSED [0.016 seconds]
------------------------------
SSSSSSSSSS
------------------------------
[datamover] DPA deployment with different DataMover configurations DataMover [tc-id:OADP-211][smoke][test-upstream] DataMover enable disable
/alabama/cspi/e2e/dpa_deploy/data_mover.go:72
STEP: Setup DPA client @ 05/29/23 06:42:07.735
2023/05/29 06:42:07 Delete all downloadrequest
No download requests are found
STEP: Create DPA CR @ 05/29/23 06:42:07.739
2023/05/29 06:42:07 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "5f515082-6e54-4308-bba1-59791dd95612", "resourceVersion": "35405", "generation": 1, "creationTimestamp": "2023-05-29T06:42:07Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T06:42:07Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:restic": { ".": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} } }, "f:velero": { ".": {}, "f:defaultPlugins": {} } }, "f:features": { ".": {}, "f:dataMover": { ".": {}, "f:enable": {} } }, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": "velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ] }, "restic": { "podConfig": { "resourceAllocations": {} } } }, "features": { "dataMover": { "enable": true } } }, "status": {} }
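Stripped of the managedFields noise, the DPA the test applies is small. A sketch of the same object as YAML, with values copied from the JSON above (this is a reconstruction, not the suite's own template):

oc create -f - <<'EOF'
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: ts-dpa
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        config:
          region: us-east-1
        credential:
          name: cloud-credentials
          key: cloud
        objectStorage:
          bucket: ci-op-kgytzj8j-interopoadp
  configuration:
    velero:
      defaultPlugins: [openshift, aws, kubevirt, csi]
    restic:
      podConfig:
        resourceAllocations: {}
  features:
    dataMover:
      enable: true
EOF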
STEP: Verify DPA CR setup @ 05/29/23 06:42:07.751
2023/05/29 06:42:07 Waiting for velero pod to be running
2023/05/29 06:42:12 pod: velero-77ff6f6f65-fsmwz is not yet running with status: {Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:07 +0000 UTC ContainersNotInitialized containers with incomplete status: [velero-plugin-for-aws kubevirt-velero-plugin velero-plugin-for-csi]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:07 +0000 UTC ContainersNotReady containers with unready status: [velero]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:07 +0000 UTC ContainersNotReady containers with unready status: [velero]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:07 +0000 UTC }] 10.0.252.151 10.131.0.25 [{10.131.0.25}] 2023-05-29 06:42:07 +0000 UTC [{openshift-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-05-29 06:42:12 +0000 UTC,FinishedAt:2023-05-29 06:42:12 +0000 UTC,ContainerID:cri-o://7cefedc2642ae43d5d613f2842c24106080f75fd8c4a444cded3ba22639d0f31,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-rhel8@sha256:9f1539e642fb9d27ce1dd9b95120744b43ce47124026f425d6d392a4dd00bd6c registry.redhat.io/oadp/oadp-velero-plugin-rhel8@sha256:9f1539e642fb9d27ce1dd9b95120744b43ce47124026f425d6d392a4dd00bd6c cri-o://7cefedc2642ae43d5d613f2842c24106080f75fd8c4a444cded3ba22639d0f31 } {velero-plugin-for-aws {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel8@sha256:4daa41ceba8a4419da18efbd06db36756836d3bcb44d3455e26b67aaadfe109c } {kubevirt-velero-plugin {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel8@sha256:1d82ed7d073d62d85e3379d0c4f707cedcf0699abb7e99c10daad710702b86ae } {velero-plugin-for-csi {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-plugin-for-csi-rhel8@sha256:e6c0951253e3eb1d81c97aae138a29e32983d771c8485d827719f06f66b7e65e }] [{velero {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-rhel8@sha256:14830f5f09a590333c35ab5db6a3d2799c9e5904c5b81bd3bd78d587682b2d84 0xc000853b4f}] Burstable []}
2023/05/29 06:42:17 pod: velero-77ff6f6f65-fsmwz is not yet running with status: {Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:07 +0000 UTC ContainersNotInitialized containers with incomplete status: [kubevirt-velero-plugin velero-plugin-for-csi]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:07 +0000 UTC ContainersNotReady containers with unready status: [velero]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:07 +0000 UTC ContainersNotReady containers with unready status: [velero]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:07 +0000 UTC }] 10.0.252.151 10.131.0.25 [{10.131.0.25}] 2023-05-29 06:42:07 +0000 UTC [{openshift-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-05-29 06:42:12 +0000 UTC,FinishedAt:2023-05-29 06:42:12 +0000 UTC,ContainerID:cri-o://7cefedc2642ae43d5d613f2842c24106080f75fd8c4a444cded3ba22639d0f31,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-rhel8@sha256:9f1539e642fb9d27ce1dd9b95120744b43ce47124026f425d6d392a4dd00bd6c registry.redhat.io/oadp/oadp-velero-plugin-rhel8@sha256:9f1539e642fb9d27ce1dd9b95120744b43ce47124026f425d6d392a4dd00bd6c cri-o://7cefedc2642ae43d5d613f2842c24106080f75fd8c4a444cded3ba22639d0f31 } {velero-plugin-for-aws {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-05-29 06:42:15 +0000 UTC,FinishedAt:2023-05-29 06:42:15 +0000 UTC,ContainerID:cri-o://ac6eb850efdfe2c5301646b7a7e9e51cc57e9b71ba9ca79203d11b3836ada249,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel8@sha256:4daa41ceba8a4419da18efbd06db36756836d3bcb44d3455e26b67aaadfe109c registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel8@sha256:3610368309f43ef27867681ff7fbc6cfe8832e9ef0d66d86a98648530194eab2 cri-o://ac6eb850efdfe2c5301646b7a7e9e51cc57e9b71ba9ca79203d11b3836ada249 } {kubevirt-velero-plugin {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel8@sha256:1d82ed7d073d62d85e3379d0c4f707cedcf0699abb7e99c10daad710702b86ae } {velero-plugin-for-csi {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-plugin-for-csi-rhel8@sha256:e6c0951253e3eb1d81c97aae138a29e32983d771c8485d827719f06f66b7e65e }] [{velero {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-rhel8@sha256:14830f5f09a590333c35ab5db6a3d2799c9e5904c5b81bd3bd78d587682b2d84 0xc00047819f}] Burstable []}
2023/05/29 06:42:22 pod: velero-77ff6f6f65-fsmwz is not yet running with status: {Pending [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:07 +0000 UTC ContainersNotInitialized containers with incomplete status: [velero-plugin-for-csi]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:07 +0000 UTC ContainersNotReady containers with unready status: [velero]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:07 +0000 UTC ContainersNotReady containers with unready status: [velero]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:07 +0000 UTC }] 10.0.252.151 10.131.0.25 [{10.131.0.25}] 2023-05-29 06:42:07 +0000 UTC [{openshift-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-05-29 06:42:12 +0000 UTC,FinishedAt:2023-05-29 06:42:12 +0000 UTC,ContainerID:cri-o://7cefedc2642ae43d5d613f2842c24106080f75fd8c4a444cded3ba22639d0f31,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-rhel8@sha256:9f1539e642fb9d27ce1dd9b95120744b43ce47124026f425d6d392a4dd00bd6c registry.redhat.io/oadp/oadp-velero-plugin-rhel8@sha256:9f1539e642fb9d27ce1dd9b95120744b43ce47124026f425d6d392a4dd00bd6c cri-o://7cefedc2642ae43d5d613f2842c24106080f75fd8c4a444cded3ba22639d0f31 } {velero-plugin-for-aws {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-05-29 06:42:15 +0000 UTC,FinishedAt:2023-05-29 06:42:15 +0000 UTC,ContainerID:cri-o://ac6eb850efdfe2c5301646b7a7e9e51cc57e9b71ba9ca79203d11b3836ada249,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel8@sha256:4daa41ceba8a4419da18efbd06db36756836d3bcb44d3455e26b67aaadfe109c registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel8@sha256:3610368309f43ef27867681ff7fbc6cfe8832e9ef0d66d86a98648530194eab2 cri-o://ac6eb850efdfe2c5301646b7a7e9e51cc57e9b71ba9ca79203d11b3836ada249 } {kubevirt-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-05-29 06:42:20 +0000 UTC,FinishedAt:2023-05-29 06:42:20 +0000 UTC,ContainerID:cri-o://7d8cbbc522c34ff250c09e3d6ef3a6552bab471b03fcac0809347d26db79ce46,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel8@sha256:1d82ed7d073d62d85e3379d0c4f707cedcf0699abb7e99c10daad710702b86ae registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel8@sha256:1d82ed7d073d62d85e3379d0c4f707cedcf0699abb7e99c10daad710702b86ae cri-o://7d8cbbc522c34ff250c09e3d6ef3a6552bab471b03fcac0809347d26db79ce46 } {velero-plugin-for-csi {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-plugin-for-csi-rhel8@sha256:e6c0951253e3eb1d81c97aae138a29e32983d771c8485d827719f06f66b7e65e }] [{velero {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-rhel8@sha256:14830f5f09a590333c35ab5db6a3d2799c9e5904c5b81bd3bd78d587682b2d84 0xc00080a63f}] Burstable []}
2023/05/29 06:42:27 pod: velero-77ff6f6f65-fsmwz is not yet running with status: {Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:07 +0000 UTC ContainersNotReady containers with unready status: [velero]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:07 +0000 UTC ContainersNotReady containers with unready status: [velero]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:42:07 +0000 UTC }] 10.0.252.151 10.131.0.25 [{10.131.0.25}] 2023-05-29 06:42:07 +0000 UTC [{openshift-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-05-29 06:42:12 +0000 UTC,FinishedAt:2023-05-29 06:42:12 +0000 UTC,ContainerID:cri-o://7cefedc2642ae43d5d613f2842c24106080f75fd8c4a444cded3ba22639d0f31,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-rhel8@sha256:9f1539e642fb9d27ce1dd9b95120744b43ce47124026f425d6d392a4dd00bd6c registry.redhat.io/oadp/oadp-velero-plugin-rhel8@sha256:9f1539e642fb9d27ce1dd9b95120744b43ce47124026f425d6d392a4dd00bd6c cri-o://7cefedc2642ae43d5d613f2842c24106080f75fd8c4a444cded3ba22639d0f31 } {velero-plugin-for-aws {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-05-29 06:42:15 +0000 UTC,FinishedAt:2023-05-29 06:42:15 +0000 UTC,ContainerID:cri-o://ac6eb850efdfe2c5301646b7a7e9e51cc57e9b71ba9ca79203d11b3836ada249,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel8@sha256:4daa41ceba8a4419da18efbd06db36756836d3bcb44d3455e26b67aaadfe109c registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel8@sha256:3610368309f43ef27867681ff7fbc6cfe8832e9ef0d66d86a98648530194eab2 cri-o://ac6eb850efdfe2c5301646b7a7e9e51cc57e9b71ba9ca79203d11b3836ada249 } {kubevirt-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-05-29 06:42:20 +0000 UTC,FinishedAt:2023-05-29 06:42:20 +0000 UTC,ContainerID:cri-o://7d8cbbc522c34ff250c09e3d6ef3a6552bab471b03fcac0809347d26db79ce46,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel8@sha256:1d82ed7d073d62d85e3379d0c4f707cedcf0699abb7e99c10daad710702b86ae registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel8@sha256:1d82ed7d073d62d85e3379d0c4f707cedcf0699abb7e99c10daad710702b86ae cri-o://7d8cbbc522c34ff250c09e3d6ef3a6552bab471b03fcac0809347d26db79ce46 } {velero-plugin-for-csi {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-05-29 06:42:24 +0000 UTC,FinishedAt:2023-05-29 06:42:24 +0000 UTC,ContainerID:cri-o://22308c24d6429ee787294b8264821f303d9f539405d6e5920a5da5345d68729e,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-for-csi-rhel8@sha256:e6c0951253e3eb1d81c97aae138a29e32983d771c8485d827719f06f66b7e65e registry.redhat.io/oadp/oadp-velero-plugin-for-csi-rhel8@sha256:89202282dbda6e1890768f63b3f809f81f2739f85de0ff00bf8e410a55465fea cri-o://22308c24d6429ee787294b8264821f303d9f539405d6e5920a5da5345d68729e }] [{velero {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-rhel8@sha256:14830f5f09a590333c35ab5db6a3d2799c9e5904c5b81bd3bd78d587682b2d84 0xc00080aebf}] Burstable []}
2023/05/29 06:42:32 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete'
STEP: Verify the DataMover controller is present @ 05/29/23 06:42:32.811
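The readiness loop above polls the pod object every five seconds while the four plugin init containers complete in sequence. Outside the suite, the same wait can be expressed declaratively, assuming the velero pods carry the upstream 'deploy=velero' label (an assumption; the log only shows the pod name):

# Block until the velero pod reports Ready
oc wait pod -n openshift-adp -l deploy=velero --for=condition=Ready --timeout=300s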
"93dfca7a-cddb-463f-96c2-99def80b7881", "resourceVersion": "35711", "generation": 1, "creationTimestamp": "2023-05-29T06:42:32Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T06:42:32Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:restic": { ".": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} } }, "f:velero": { ".": {}, "f:defaultPlugins": {} } }, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": "velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ] }, "restic": { "podConfig": { "resourceAllocations": {} } } }, "features": null }, "status": {} } STEP: Verify DPA CR setup @ 05/29/23 06:42:32.857 2023/05/29 06:42:32 Waiting for velero pod to be running 2023/05/29 06:42:32 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' 2023/05/29 06:42:32 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "93dfca7a-cddb-463f-96c2-99def80b7881", "resourceVersion": "35711", "generation": 1, "creationTimestamp": "2023-05-29T06:42:32Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T06:42:32Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:restic": { ".": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} } }, "f:velero": { ".": {}, "f:defaultPlugins": {} } }, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": "velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ] }, "restic": { "podConfig": { "resourceAllocations": {} } } }, "features": null }, "status": {} } STEP: Verify the dataMover controller is not present @ 05/29/23 06:42:37.897 STEP: Prepare backup resources, depending on the volumes backup type @ 05/29/23 06:42:37.905 2023/05/29 06:42:37 Snapclass 'example-snapclass' doesn't exist, creating 2023/05/29 06:42:37 Setting new default StorageClass 'gp2-csi' STEP: Installing application for case mysql @ 05/29/23 06:42:37.958 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
STEP: Installing application for case mysql @ 05/29/23 06:42:37.958
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [Get cluster endpoint] ****************************************************
changed: [localhost]

TASK [Get current admin token] *************************************************
changed: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]
/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py:1013: InsecureRequestWarning: Unverified HTTPS request is being made to host 'api.ci-op-kgytzj8j-66a7a.cspilp.interop.ccitredhat.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(

TASK [Extract kubernetes minor version from cluster_info] *********************
ok: [localhost]

TASK [Map kubernetes minor to ocp releases] ************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Execute Task] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace mysql-persistent] ***
ok: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] ***
changed: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] ***
changed: [localhost]
FAILED - RETRYING: [localhost]: Check pod status (60 retries left).
FAILED - RETRYING: [localhost]: Check pod status (59 retries left).
FAILED - RETRYING: [localhost]: Check pod status (58 retries left).

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] ***
ok: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] ***
changed: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] ***
changed: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] ***
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=15   changed=7    unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
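These plays come from the ocpdeployer roles unpacked earlier, and the suite drives them through go-ansible (visible in the module downloads above). A rough manual equivalent from the shell, with a hypothetical entrypoint playbook name, since the log only shows the role paths:

# Hypothetical direct invocation of the mysql deploy role's playbook
ansible-playbook /alabama/cspi/sample-applications/ocpdeployer/ansible/deploy.yml \
  -e namespace=mysql-persistent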
STEP: Verify Application deployment @ 05/29/23 06:43:00.568
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [Get cluster endpoint] ****************************************************
changed: [localhost]

TASK [Get current admin token] *************************************************
changed: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

TASK [Extract kubernetes minor version from cluster_info] *********************
ok: [localhost]

TASK [Map kubernetes minor to ocp releases] ************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Execute Task] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] ***
ok: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] ***
changed: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] ***
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=11   changed=4    unreachable=0    failed=0    skipped=8    rescued=0    ignored=0

STEP: Create and verify backup @ 05/29/23 06:43:04.574
STEP: Creating backup mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29 @ 05/29/23 06:43:04.574
2023/05/29 06:43:04 Wait until backup mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29 is completed
backup phase: InProgress
backup phase: InProgress
backup phase: InProgress
backup phase: InProgress
backup phase: Completed
STEP: Verify backup mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29 has completed successfully @ 05/29/23 06:44:44.659
2023/05/29 06:44:44 Backup for case mysql succeeded
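Backup progress is tracked through the Backup CR's status.phase, which is what the "backup phase:" lines above are polling. A minimal shell version of the same loop:

# Poll the Backup CR until Velero marks it Completed
BACKUP=mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29
until [ "$(oc get backup.velero.io -n openshift-adp "$BACKUP" -o jsonpath='{.status.phase}')" = Completed ]; do
  sleep 20
done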
STEP: Create restore and verify the application is running fine @ 05/29/23 06:44:44.698
STEP: Delete the application resources mysql @ 05/29/23 06:44:44.698
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [Get cluster endpoint] ****************************************************
changed: [localhost]

TASK [Get current admin token] *************************************************
changed: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

TASK [Extract kubernetes minor version from cluster_info] *********************
ok: [localhost]

TASK [Map kubernetes minor to ocp releases] ************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Execute Task] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace mysql-persistent] ***
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=9    changed=3    unreachable=0    failed=0    skipped=10   rescued=0    ignored=0

2023/05/29 06:45:03 Creating restore mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29 for case mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29
STEP: Create restore mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29 from backup mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29 @ 05/29/23 06:45:03.192
2023/05/29 06:45:03 Wait until restore mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29 is complete
restore phase: InProgress
restore phase: Completed
STEP: Verify restore mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29 has completed successfully @ 05/29/23 06:45:23.227
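Restores follow the same pattern: a Restore CR that references the backup, then a poll on its phase. A sketch with names copied from the log (the suite builds this object through its Go client rather than a manifest):

# Create a Restore from the backup, then check its phase
oc create -f - <<'EOF'
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29
  namespace: openshift-adp
spec:
  backupName: mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29
EOF
oc get restore.velero.io -n openshift-adp mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29 -o jsonpath='{.status.phase}'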
STEP: Verify Application restore @ 05/29/23 06:45:23.23
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [Get cluster endpoint] ****************************************************
changed: [localhost]

TASK [Get current admin token] *************************************************
changed: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

TASK [Extract kubernetes minor version from cluster_info] *********************
ok: [localhost]

TASK [Map kubernetes minor to ocp releases] ************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Execute Task] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]
FAILED - RETRYING: [localhost]: Check mysql pod status (60 retries left).

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] ***
ok: [localhost]
FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left).

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] ***
changed: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] ***
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=11   changed=4    unreachable=0    failed=0    skipped=8    rescued=0    ignored=0

2023/05/29 06:45:37 Cleaning resources
2023/05/29 06:45:37 Delete secret cloud-credentials
2023/05/29 06:45:37 Delete DPA CR
2023/05/29 06:45:37 Verify Velero pods are terminated
2023/05/29 06:45:43 Cleaning app
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [Get cluster endpoint] ****************************************************
changed: [localhost]

TASK [Get current admin token] *************************************************
changed: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

TASK [Extract kubernetes minor version from cluster_info] *********************
ok: [localhost]

TASK [Map kubernetes minor to ocp releases] ************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Execute Task] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace mysql-persistent] ***
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=9    changed=3    unreachable=0    failed=0    skipped=10   rescued=0    ignored=0

2023/05/29 06:46:01 Cleaning setup resources for the backup
2023/05/29 06:46:01 Setting new default StorageClass 'gp3-csi'
2023/05/29 06:46:01 Deleting VolumeSnapshotClass 'example-snapclass'
• [233.791 seconds]
------------------------------
SSSSSS
------------------------------
Incremental backup restore tests Incremental restore pod count [test-upstream] Todolist app with Restic - policy: none
/alabama/cspi/e2e/incremental_restore/backup_restore_incremental.go:97
2023/05/29 06:46:01 Delete all downloadrequest
No download requests are found
STEP: Create DPA CR @ 05/29/23 06:46:01.53
"name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": "velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ] }, "restic": { "enable": true, "podConfig": { "resourceAllocations": {} } } }, "features": null }, "status": {} } STEP: Verify DPA CR setup @ 05/29/23 06:46:01.542 2023/05/29 06:46:01 Waiting for velero pod to be running 2023/05/29 06:46:06 pod: velero-69c67df877-gqsgz is not yet running with status: {Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:46:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:46:01 +0000 UTC ContainersNotReady containers with unready status: [velero]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:46:01 +0000 UTC ContainersNotReady containers with unready status: [velero]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-05-29 06:46:01 +0000 UTC }] 10.0.252.151 10.131.0.26 [{10.131.0.26}] 2023-05-29 06:46:01 +0000 UTC [{openshift-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-05-29 06:46:03 +0000 UTC,FinishedAt:2023-05-29 06:46:03 +0000 UTC,ContainerID:cri-o://367be2e1055d89fe739839d03ca4844ab9d8aca6f5629a0ab293d781f4febbf6,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-rhel8@sha256:9f1539e642fb9d27ce1dd9b95120744b43ce47124026f425d6d392a4dd00bd6c registry.redhat.io/oadp/oadp-velero-plugin-rhel8@sha256:9f1539e642fb9d27ce1dd9b95120744b43ce47124026f425d6d392a4dd00bd6c cri-o://367be2e1055d89fe739839d03ca4844ab9d8aca6f5629a0ab293d781f4febbf6 } {velero-plugin-for-aws {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-05-29 06:46:04 +0000 UTC,FinishedAt:2023-05-29 06:46:04 +0000 UTC,ContainerID:cri-o://0aa4079cc8e82b0ab60c959d00606be60e762e8448f079e43a92d64f3ddfe888,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel8@sha256:4daa41ceba8a4419da18efbd06db36756836d3bcb44d3455e26b67aaadfe109c registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel8@sha256:3610368309f43ef27867681ff7fbc6cfe8832e9ef0d66d86a98648530194eab2 cri-o://0aa4079cc8e82b0ab60c959d00606be60e762e8448f079e43a92d64f3ddfe888 } {kubevirt-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-05-29 06:46:05 +0000 UTC,FinishedAt:2023-05-29 06:46:05 +0000 UTC,ContainerID:cri-o://749f2b073b587c31f7842631533369d976aacc6640c3c5134e39fee0343ade7c,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel8@sha256:1d82ed7d073d62d85e3379d0c4f707cedcf0699abb7e99c10daad710702b86ae registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel8@sha256:1d82ed7d073d62d85e3379d0c4f707cedcf0699abb7e99c10daad710702b86ae cri-o://749f2b073b587c31f7842631533369d976aacc6640c3c5134e39fee0343ade7c }] [{velero {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-rhel8@sha256:14830f5f09a590333c35ab5db6a3d2799c9e5904c5b81bd3bd78d587682b2d84 0xc00059957f}] Burstable []} 2023/05/29 06:46:11 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' STEP: Installing application for case todolist-backup @ 05/29/23 06:46:11.564 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts 
list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check namespace todolist-mariadb] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Create namespace todolist-mariadb] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Ensure namespace todolist-mariadb is present] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Deploy todolist-mysql application] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check mysql pod status (50 retries left). FAILED - RETRYING: [localhost]: Check mysql pod status (49 retries left). FAILED - RETRYING: [localhost]: Check mysql pod status (48 retries left). FAILED - RETRYING: [localhost]: Check mysql pod status (47 retries left). FAILED - RETRYING: [localhost]: Check mysql pod status (46 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod status] *** ok: [localhost] FAILED - RETRYING: [localhost]: Check todolist pod status (50 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod status] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=14  changed=4  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 STEP: Verify Application deployment @ 05/29/23 06:46:41.065 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod is running] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until mysql service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod is running] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until todolist API server starts] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Obtain todolist route] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find 1st database item] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=14  changed=4  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 STEP: Prepare backup resources, depending on the volumes backup type @ 05/29/23 06:46:50.783 2023/05/29 06:46:50 Checking for correct number of running Restic pods... STEP: Creating backup todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29 @ 05/29/23 06:46:50.807 2023/05/29 06:46:50 Wait until backup todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29 is completed backup phase: InProgress backup phase: InProgress backup phase: Completed STEP: Verify backup todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29 has completed successfully @ 05/29/23 06:47:50.904 2023/05/29 06:47:50 Backup for case todolist-backup succeeded STEP: Scale application @ 05/29/23 06:47:50.943 2023/05/29 06:47:50 Scaling deployment 'todolist' to 2 replicas 2023/05/29 06:47:50 Deployment updated successfully 2023/05/29 06:47:50 number of running pods: 1 2023/05/29 06:47:55 number of running pods: 1 2023/05/29 06:48:00 number of running pods: 1 2023/05/29 06:48:05 number of running pods: 1 2023/05/29 06:48:10 number of running pods: 1 2023/05/29 06:48:16 Application reached target number of replicas: 2 STEP: Prepare backup resources, depending on the volumes backup type @ 05/29/23 06:48:16.001 2023/05/29 06:48:16 Checking for correct number of running Restic pods... 
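The repeated "backup phase: InProgress ... backup phase: Completed" lines above come from the harness polling the Velero Backup CR's status.phase until it reaches a terminal value. Below is a minimal sketch of that polling pattern, assuming an already-configured client-go dynamic client; the package and function names (e2esketch, waitForBackup) are illustrative, not the suite's actual code.

package e2esketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// backupGVR identifies the Velero Backup custom resource (velero.io/v1).
var backupGVR = schema.GroupVersionResource{Group: "velero.io", Version: "v1", Resource: "backups"}

// waitForBackup polls status.phase of the named Backup until it reaches a
// terminal phase, returning nil on Completed and an error otherwise.
func waitForBackup(ctx context.Context, client dynamic.Interface, ns, name string) error {
	for {
		backup, err := client.Resource(backupGVR).Namespace(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		// status.phase is read from the unstructured object; it is empty
		// until the Velero server first reconciles the Backup.
		phase, _, _ := unstructured.NestedString(backup.Object, "status", "phase")
		fmt.Println("backup phase:", phase)
		switch phase {
		case "Completed":
			return nil
		case "Failed", "PartiallyFailed", "FailedValidation":
			return fmt.Errorf("backup %s finished in phase %s", name, phase)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(20 * time.Second):
		}
	}
}

This matches the observable behavior in the log (one phase line per poll interval); the actual wait logic lives under /alabama/cspi/e2e and may be implemented differently.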
STEP: Creating backup todolist-backup-c9e9f53a-fdec-11ed-8b0b-0a580a815c29 @ 05/29/23 06:48:16.011 2023/05/29 06:48:16 Wait until backup todolist-backup-c9e9f53a-fdec-11ed-8b0b-0a580a815c29 is completed backup phase: InProgress backup phase: Completed STEP: Verify backup todolist-backup-c9e9f53a-fdec-11ed-8b0b-0a580a815c29 has completed successfully @ 05/29/23 06:48:56.058 2023/05/29 06:48:56 Backup for case todolist-backup succeeded STEP: Cleanup application and restore 1st backup @ 05/29/23 06:48:56.109 STEP: Delete the application resources todolist-backup @ 05/29/23 06:48:56.109 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove namespace todolist-mariadb] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove todolist-mariadb SCC] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=10  changed=4  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2023/05/29 06:49:10 Creating restore todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29 for case todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29 STEP: Create restore todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29 from backup todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29 @ 05/29/23 06:49:10.496 2023/05/29 06:49:10 Wait until restore todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29 is complete restore phase: InProgress restore phase: PartiallyFailed STEP: Verify restore todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29 has completed successfully @ 05/29/23 06:49:30.542 2023/05/29 06:49:30 { "metadata": { "name": "todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29", "namespace": "openshift-adp", "uid": "098cb2e2-b9e9-40d5-9133-c8afc837e4c7", "resourceVersion": "40781", "generation": 7, "creationTimestamp": "2023-05-29T06:49:10Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "velero.io/v1", "time": "2023-05-29T06:49:10Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupName": {}, "f:hooks": {} }, "f:status": {} } }, { "manager": "velero-server", "operation": "Update", "apiVersion": "velero.io/v1", "time": "2023-05-29T06:49:28Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { "f:excludedResources":
{} }, "f:status": { "f:completionTimestamp": {}, "f:errors": {}, "f:phase": {}, "f:progress": { ".": {}, "f:itemsRestored": {}, "f:totalItems": {} }, "f:startTimestamp": {}, "f:warnings": {} } } } ] }, "spec": { "backupName": "todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29", "excludedResources": [ "nodes", "events", "events.events.k8s.io", "backups.velero.io", "restores.velero.io", "resticrepositories.velero.io", "csinodes.storage.k8s.io", "volumeattachments.storage.k8s.io" ], "hooks": {} }, "status": { "phase": "PartiallyFailed", "warnings": 4, "errors": 1, "startTimestamp": "2023-05-29T06:49:10Z", "completionTimestamp": "2023-05-29T06:49:28Z", "progress": { "totalItems": 41, "itemsRestored": 41 } } } 2023/05/29 06:49:30 NAME READY STATUS RESTARTS AGE openshift-adp-controller-manager-75fdf665b6-z4wxd 1/1 Running 0 12m restic-7c2jk 1/1 Running 0 3m29s restic-8lnzj 1/1 Running 0 3m29s restic-x67tf 1/1 Running 0 3m29s velero-69c67df877-gqsgz 1/1 Running 0 3m29s [FAILED] in [It] - /alabama/cspi/test_common/backup_restore_case.go:113 @ 05/29/23 06:49:30.67 STEP: Get the failed spec name @ 05/29/23 06:49:30.67 2023/05/29 06:49:30 The failed spec name is: Incremental backup restore tests Incremental restore pod count [test-upstream] Todolist app with Restic - policy: none STEP: Create a folder for all must-gather files if it doesn't exists already @ 05/29/23 06:49:30.67 2023/05/29 06:49:30 The folder logs does not exists, creating new folder with the name: logs STEP: Create a folder for the failed spec if it doesn't exists already @ 05/29/23 06:49:30.67 2023/05/29 06:49:30 The folder logs/[It]_Incremental_backup_restore_tests_Incremental_restore_pod_count_[test-upstream]_Todolist_app_with_Restic_-_policy_none does not exists, creating new folder with the name: logs/[It]_Incremental_backup_restore_tests_Incremental_restore_pod_count_[test-upstream]_Todolist_app_with_Restic_-_policy_none STEP: Run must-gather because the spec failed @ 05/29/23 06:49:30.67 2023/05/29 06:49:30 [adm must-gather --dest-dir logs/[It]_Incremental_backup_restore_tests_Incremental_restore_pod_count_[test-upstream]_Todolist_app_with_Restic_-_policy_none --image registry.redhat.io/oadp/oadp-mustgather-rhel8:1.1.4] STEP: Find must-gather folder and rename it to a shorter more readable name @ 05/29/23 06:49:48.464 2023/05/29 06:49:48 Cleaning setup resources for the backup 2023/05/29 06:49:48 Cleaning setup resources for the backup 2023/05/29 06:49:48 Cleaning app [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove namespace todolist-mariadb] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove todolist-mariadb SCC] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=10  changed=4  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 • [FAILED] [246.309 seconds] Incremental backup restore tests Incremental restore pod count [It] [test-upstream] Todolist app with Restic - policy: none /alabama/cspi/e2e/incremental_restore/backup_restore_incremental.go:97 [FAILED] Unexpected error: <*errors.errorString | 0xc0009182e0>: { s: "restore phase is: PartiallyFailed; expected: Completed\nfailure reason: \nvalidation errors: []\nvelero failure logs: [velero container contains \"level=error\" in line#112: time=\"2023-05-29T06:46:22Z\" level=error msg=\"Current BackupStorageLocations available/unavailable/unknown: 0/0/1)\" controller=backup-storage-location logSource=\"/remote-source/velero/app/pkg/controller/backup_storage_location_controller.go:173\"\n velero container contains \"level=error\" in line#2521: time=\"2023-05-29T06:49:27Z\" level=error msg=\"error restoring mysql-6997b444b6-snn7n: pods \\\"mysql-6997b444b6-snn7n\\\" is forbidden: violates PodSecurity \\\"restricted:v1.24\\\": privileged (container \\\"mysql\\\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\"restic-wait\\\", \\\"mysql\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\"restic-wait\\\", \\\"mysql\\\" must set securityContext.capabilities.drop=[\\\"ALL\\\"]), runAsNonRoot != true (pod or containers \\\"restic-wait\\\", \\\"mysql\\\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers \\\"restic-wait\\\", \\\"mysql\\\" must set securityContext.seccompProfile.type to \\\"RuntimeDefault\\\" or \\\"Localhost\\\")\" logSource=\"/remote-source/velero/app/pkg/restore/restore.go:1388\" restore=openshift-adp/todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29\n velero container contains \"level=error\" in line#2695: time=\"2023-05-29T06:49:28Z\" level=error msg=\"Namespace todolist-mariadb, resource restore error: error restoring pods/todolist-mariadb/mysql-6997b444b6-snn7n: pods \\\"mysql-6997b444b6-snn7n\\\" is forbidden: violates PodSecurity 
\\\"restricted:v1.24\\\": privileged (container \\\"mysql\\\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\"restic-wait\\\", \\\"mysql\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\"restic-wait\\\", \\\"mysql\\\" must set securityContext.capabilities.drop=[\\\"ALL\\\"]), runAsNonRoot != true (pod or containers \\\"restic-wait\\\", \\\"mysql\\\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers \\\"restic-wait\\\", \\\"mysql\\\" must set securityContext.seccompProfile.type to \\\"RuntimeDefault\\\" or \\\"Localhost\\\")\" logSource=\"/remote-source/velero/app/pkg/controller/restore_controller.go:510\" restore=openshift-adp/todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29\n]", } restore phase is: PartiallyFailed; expected: Completed failure reason: validation errors: [] velero failure logs: [velero container contains "level=error" in line#112: time="2023-05-29T06:46:22Z" level=error msg="Current BackupStorageLocations available/unavailable/unknown: 0/0/1)" controller=backup-storage-location logSource="/remote-source/velero/app/pkg/controller/backup_storage_location_controller.go:173" velero container contains "level=error" in line#2521: time="2023-05-29T06:49:27Z" level=error msg="error restoring mysql-6997b444b6-snn7n: pods \"mysql-6997b444b6-snn7n\" is forbidden: violates PodSecurity \"restricted:v1.24\": privileged (container \"mysql\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \"restic-wait\", \"mysql\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \"restic-wait\", \"mysql\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or containers \"restic-wait\", \"mysql\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers \"restic-wait\", \"mysql\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")" logSource="/remote-source/velero/app/pkg/restore/restore.go:1388" restore=openshift-adp/todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29 velero container contains "level=error" in line#2695: time="2023-05-29T06:49:28Z" level=error msg="Namespace todolist-mariadb, resource restore error: error restoring pods/todolist-mariadb/mysql-6997b444b6-snn7n: pods \"mysql-6997b444b6-snn7n\" is forbidden: violates PodSecurity \"restricted:v1.24\": privileged (container \"mysql\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \"restic-wait\", \"mysql\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \"restic-wait\", \"mysql\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or containers \"restic-wait\", \"mysql\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers \"restic-wait\", \"mysql\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")" logSource="/remote-source/velero/app/pkg/controller/restore_controller.go:510" restore=openshift-adp/todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29 ] occurred In [It] at: /alabama/cspi/test_common/backup_restore_case.go:113 @ 05/29/23 06:49:30.67 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ Backup restore tests Application backup [tc-id:OADP-198][test-upstream][smoke] Different labels selector: Backup and Restore with 
multiple matched labels [orLabelSelectors] [labels] /alabama/cspi/e2e/app_backup/backup_restore_labels.go:42 2023/05/29 06:50:07 Delete all downloadrequest mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29-0a421dc8-fff9-4a44-9cd2-9103a3187974mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29-26b79cf5-5b66-48b0-aabc-72793beddf71mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29-5a308e9d-ead7-4f69-bcb8-5d675cb93f08mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29-720fc054-1a43-47d8-a9d5-c5498a8cd862todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29-2b926ec2-6393-4735-b30a-5a7f2bb498adtodolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29-851f5c01-3e30-41c4-8597-18fb01c8470ctodolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29-a82660c1-5e4d-490d-aab1-dd008ce9ef95todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29-f516ae81-4e2e-4733-9542-aa56f3bfa64ctodolist-backup-c9e9f53a-fdec-11ed-8b0b-0a580a815c29-54c99030-c7aa-4325-8ba9-bc4a0dea3ceatodolist-backup-c9e9f53a-fdec-11ed-8b0b-0a580a815c29-f2a568ff-324d-4922-aca5-c244501b230d STEP: Create DPA CR @ 05/29/23 06:50:07.99 2023/05/29 06:50:08 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "f181ef4b-9b6c-4705-ae93-ccb672c47bb5", "resourceVersion": "41405", "generation": 1, "creationTimestamp": "2023-05-29T06:50:07Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T06:50:07Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:restic": { ".": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} } }, "f:velero": { ".": {}, "f:defaultPlugins": {} } }, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": "velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ] }, "restic": { "podConfig": { "resourceAllocations": {} } } }, "features": null }, "status": {} } STEP: Verify DPA CR setup @ 05/29/23 06:50:08.009 2023/05/29 06:50:08 Waiting for velero pod to be running 2023/05/29 06:50:08 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' 2023/05/29 06:50:08 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "f181ef4b-9b6c-4705-ae93-ccb672c47bb5", "resourceVersion": "41405", "generation": 1, "creationTimestamp": "2023-05-29T06:50:07Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T06:50:07Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:restic": { ".": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} } }, "f:velero": { ".": {}, "f:defaultPlugins": {} } }, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": "velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": {
"defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ] }, "restic": { "podConfig": { "resourceAllocations": {} } } }, "features": null }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 05/29/23 06:50:13.03 2023/05/29 06:50:13 Snapclass 'example-snapclass' doesn't exist, creating 2023/05/29 06:50:13 Setting new default StorageClass 'gp2-csi' STEP: Installing application for case mysql198 @ 05/29/23 06:50:13.072 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace mysql-persistent] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check pod status (60 retries left). FAILED - RETRYING: [localhost]: Check pod status (59 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=15  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 STEP: Verify Application deployment @ 05/29/23 06:50:31.137 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=11  changed=4  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 STEP: Creating backup mysql198-0c92d322-fded-11ed-8b0b-0a580a815c29 @ 05/29/23 06:50:35.207 2023/05/29 06:50:35 Wait until backup mysql198-0c92d322-fded-11ed-8b0b-0a580a815c29 is completed backup phase: InProgress backup phase: InProgress backup phase: InProgress backup phase: InProgress backup phase: InProgress backup phase: InProgress backup phase: Completed STEP: Verify backup mysql198-0c92d322-fded-11ed-8b0b-0a580a815c29 has completed successfully @ 05/29/23 06:52:35.337 2023/05/29 06:52:35 Backup for case mysql198 succeeded STEP: Delete the application resources mysql198 @ 05/29/23 06:52:35.383 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace mysql-persistent] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=9  changed=3  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2023/05/29 06:52:53 Creating restore mysql198-0c92d322-fded-11ed-8b0b-0a580a815c29 for case mysql198-0c92d322-fded-11ed-8b0b-0a580a815c29 STEP: Create restore mysql198-0c92d322-fded-11ed-8b0b-0a580a815c29 from backup mysql198-0c92d322-fded-11ed-8b0b-0a580a815c29 @ 05/29/23 06:52:53.828 2023/05/29 06:52:53 Wait until restore mysql198-0c92d322-fded-11ed-8b0b-0a580a815c29 is complete restore phase: Completed STEP: Verify restore mysql198-0c92d322-fded-11ed-8b0b-0a580a815c29 has completed successfully @ 05/29/23 06:53:03.853 STEP: Verify Application restore @ 05/29/23 06:53:03.856 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] FAILED - RETRYING: [localhost]: Check mysql pod status (60 retries left).
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=11  changed=4  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2023/05/29 06:53:13 Cleaning app [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace mysql-persistent] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=9  changed=3  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2023/05/29 06:53:32 Cleaning setup resources for the backup 2023/05/29 06:53:32 Setting new default StorageClass 'gp3-csi' 2023/05/29 06:53:32 Deleting VolumeSnapshotClass 'example-snapclass' • [204.203 seconds] ------------------------------ S ------------------------------ Backup restore tests Application backup [tc-id:OADP-200][test-upstream] Different labels selector: Backup and Restore with multiple matched multiple labels under (matchLabels) [labels] /alabama/cspi/e2e/app_backup/backup_restore_labels.go:75 2023/05/29 06:53:32 Delete all downloadrequest No download requests are found STEP: Create DPA CR @ 05/29/23 06:53:32.048 2023/05/29 06:53:32 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "d8aad1ed-38c9-4817-8947-6ffe7741f302", "resourceVersion": "43920", "generation": 1, "creationTimestamp": "2023-05-29T06:53:32Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T06:53:32Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:restic": { ".": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} } }, "f:velero": { ".": {}, "f:defaultPlugins": {} } }, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": 
"cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": "velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ] }, "restic": { "podConfig": { "resourceAllocations": {} } } }, "features": null }, "status": {} } STEP: Verify DPA CR setup @ 05/29/23 06:53:32.074 2023/05/29 06:53:32 Waiting for velero pod to be running 2023/05/29 06:53:32 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' 2023/05/29 06:53:32 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "d8aad1ed-38c9-4817-8947-6ffe7741f302", "resourceVersion": "43920", "generation": 1, "creationTimestamp": "2023-05-29T06:53:32Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T06:53:32Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:restic": { ".": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} } }, "f:velero": { ".": {}, "f:defaultPlugins": {} } }, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": "velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ] }, "restic": { "podConfig": { "resourceAllocations": {} } } }, "features": null }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 05/29/23 06:53:37.116 2023/05/29 06:53:37 Snapclass 'example-snapclass' doesn't exist, creating 2023/05/29 06:53:37 Setting new default StorageClass 'gp2-csi' STEP: Installing application for case mysql @ 05/29/23 06:53:37.151 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace mysql-persistent] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check pod status (60 retries left). FAILED - RETRYING: [localhost]: Check pod status (59 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=15  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 STEP: Verify Application deployment @ 05/29/23 06:53:59.482 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=11  changed=4  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 STEP: Creating backup mysql-864a0a60-fded-11ed-8b0b-0a580a815c29 @ 05/29/23 06:54:03.604 2023/05/29 06:54:03 Wait until backup mysql-864a0a60-fded-11ed-8b0b-0a580a815c29 is completed backup phase: InProgress backup phase: InProgress backup phase: InProgress backup phase: InProgress backup phase: Completed STEP: Verify backup mysql-864a0a60-fded-11ed-8b0b-0a580a815c29 has completed successfully @ 05/29/23 06:55:43.704 2023/05/29 06:55:43 Backup for case mysql succeeded STEP: Delete the application resources mysql @ 05/29/23 06:55:43.746 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace mysql-persistent] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=9  changed=3  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2023/05/29 06:56:02 Creating restore mysql-864a0a60-fded-11ed-8b0b-0a580a815c29 for case mysql-864a0a60-fded-11ed-8b0b-0a580a815c29 STEP: Create restore mysql-864a0a60-fded-11ed-8b0b-0a580a815c29 from backup mysql-864a0a60-fded-11ed-8b0b-0a580a815c29 @ 05/29/23 06:56:02.688 2023/05/29 06:56:02 Wait until restore mysql-864a0a60-fded-11ed-8b0b-0a580a815c29 is complete restore phase: Completed STEP: Verify restore mysql-864a0a60-fded-11ed-8b0b-0a580a815c29 has completed successfully @ 05/29/23 06:56:12.711 STEP: Verify Application restore @ 05/29/23 06:56:12.715 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=11  changed=4  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2023/05/29 06:56:17 Cleaning app [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace mysql-persistent] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=9  changed=3  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2023/05/29 06:56:36 Cleaning setup resources for the backup 2023/05/29 06:56:36 Setting new default StorageClass 'gp3-csi' 2023/05/29 06:56:36 Deleting VolumeSnapshotClass 'example-snapclass' • [184.508 seconds] ------------------------------ S ------------------------------ Backup restore tests Application backup [tc-id:OADP-210][test-upstream] Different labels selector: verify that labelSelector and orLabelSelectors cannot co-exist [labels] /alabama/cspi/e2e/app_backup/backup_restore_labels.go:206 2023/05/29 06:56:36 Delete all downloadrequest No download requests are found STEP: Create DPA CR @ 05/29/23 06:56:36.558 2023/05/29 06:56:36 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "f60d6f3b-1af8-4283-afc3-8967d4734e97", "resourceVersion": "46296", "generation": 1, "creationTimestamp": "2023-05-29T06:56:36Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T06:56:36Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:restic": { ".": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} } }, "f:velero": { ".": {}, "f:defaultPlugins": {} } }, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": "velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ] }, "restic": { "podConfig": { "resourceAllocations": {} } } }, "features": null }, "status": {} } STEP: Verify DPA CR setup @ 05/29/23 06:56:36.6 2023/05/29 06:56:36 Waiting for velero pod to be running 2023/05/29 06:56:36 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' 2023/05/29 06:56:36 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid":
"f60d6f3b-1af8-4283-afc3-8967d4734e97", "resourceVersion": "46296", "generation": 1, "creationTimestamp": "2023-05-29T06:56:36Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T06:56:36Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:restic": { ".": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} } }, "f:velero": { ".": {}, "f:defaultPlugins": {} } }, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": "velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ] }, "restic": { "podConfig": { "resourceAllocations": {} } } }, "features": null }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 05/29/23 06:56:41.629 2023/05/29 06:56:41 Snapclass 'example-snapclass' doesn't exist, creating 2023/05/29 06:56:41 Setting new default StorageClass 'gp2-csi' STEP: Creating backup mysql-f443c335-fded-11ed-8b0b-0a580a815c29 @ 05/29/23 06:56:41.659 2023/05/29 06:56:41 Wait until backup mysql-f443c335-fded-11ed-8b0b-0a580a815c29 is completed backup phase: FailedValidation STEP: Verify backup mysql-f443c335-fded-11ed-8b0b-0a580a815c29 has completed with validation error @ 05/29/23 06:57:21.701 2023/05/29 06:57:21 Backup for case mysql completed with validation error as expected STEP: Verify backup failed with the expected validation error message @ 05/29/23 06:57:21.715 2023/05/29 06:57:21 Cleaning setup resources for the backup 2023/05/29 06:57:21 Setting new default StorageClass 'gp3-csi' 2023/05/29 06:57:21 Deleting VolumeSnapshotClass 'example-snapclass' • [45.205 seconds] ------------------------------ SSS ------------------------------ Backup restore tests Application backup [test-upstream] MySQL application with Restic [mr-check] /alabama/cspi/e2e/app_backup/backup_restore.go:48 2023/05/29 06:57:21 Delete all downloadrequest No download requests are found STEP: Create DPA CR @ 05/29/23 06:57:21.763 2023/05/29 06:57:21 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "2307d782-57d1-4a4f-8e5b-5050e76a6651", "resourceVersion": "46851", "generation": 1, "creationTimestamp": "2023-05-29T06:57:21Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T06:57:21Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:restic": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} } }, "f:velero": { ".": {}, "f:defaultPlugins": {} } }, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": "velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ] }, 
"restic": { "enable": true, "podConfig": { "resourceAllocations": {} } } }, "features": null }, "status": {} } STEP: Verify DPA CR setup @ 05/29/23 06:57:21.798 2023/05/29 06:57:21 Waiting for velero pod to be running 2023/05/29 06:57:21 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' 2023/05/29 06:57:21 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "2307d782-57d1-4a4f-8e5b-5050e76a6651", "resourceVersion": "46859", "generation": 1, "creationTimestamp": "2023-05-29T06:57:21Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T06:57:21Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:restic": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} } }, "f:velero": { ".": {}, "f:defaultPlugins": {} } }, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } }, { "manager": "manager", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T06:57:21Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:status": { ".": {}, "f:conditions": {} } }, "subresource": "status" } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": "velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ] }, "restic": { "enable": true, "podConfig": { "resourceAllocations": {} } } }, "features": null }, "status": { "conditions": [ { "type": "Reconciled", "status": "False", "lastTransitionTime": "2023-05-29T06:57:21Z", "reason": "Error", "message": "configmaps \"restic-restore-action-config\" not found" } ] } } STEP: Prepare backup resources, depending on the volumes backup type @ 05/29/23 06:57:26.832 2023/05/29 06:57:26 Checking for correct number of running Restic pods... STEP: Installing application for case mysql @ 05/29/23 06:57:26.843 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace mysql-persistent] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check pod status (60 retries left). FAILED - RETRYING: [localhost]: Check pod status (59 retries left). [...the same retry message repeats while counting down from 58 to 2 retries left...] FAILED - RETRYING: [localhost]: Check pod status (1 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] *** fatal: [localhost]: FAILED!
=> {"api_found": true, "attempts": 60, "changed": false, "resources": []} PLAY RECAP ********************************************************************* localhost : ok=11  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 [FAILED] in [It] - /alabama/cspi/test_common/backup_restore_app_case.go:28 @ 05/29/23 07:03:06.366 STEP: Get the failed spec name @ 05/29/23 07:03:06.366 2023/05/29 07:03:06 The failed spec name is: Backup restore tests Application backup [test-upstream] MySQL application with Restic STEP: Create a folder for all must-gather files if it doesn't exists already @ 05/29/23 07:03:06.367 STEP: Create a folder for the failed spec if it doesn't exists already @ 05/29/23 07:03:06.367 2023/05/29 07:03:06 The folder logs/[It]_Backup_restore_tests_Application_backup_[test-upstream]_MySQL_application_with_Restic_[mr-check] does not exists, creating new folder with the name: logs/[It]_Backup_restore_tests_Application_backup_[test-upstream]_MySQL_application_with_Restic_[mr-check] STEP: Run must-gather because the spec failed @ 05/29/23 07:03:06.367 2023/05/29 07:03:06 [adm must-gather --dest-dir logs/[It]_Backup_restore_tests_Application_backup_[test-upstream]_MySQL_application_with_Restic_[mr-check] --image registry.redhat.io/oadp/oadp-mustgather-rhel8:1.1.4] STEP: Find must-gather folder and rename it to a shorter more readable name @ 05/29/23 07:03:16.809 2023/05/29 07:03:16 Cleaning app [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace mysql-persistent] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=9  changed=3  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2023/05/29 07:03:35 Cleaning setup resources for the backup • [FAILED] [373.594 seconds] Backup restore tests Application backup [It] [test-upstream] MySQL application with Restic [mr-check] /alabama/cspi/e2e/app_backup/backup_restore.go:48 [FAILED] Unexpected error: <*errors.Error | 0xc00018c040>: { context: "(DefaultExecute::Execute)", message: "Error during command execution: ansible-playbook error: one or more host failed\n\nCommand executed: /usr/local/bin/ansible-playbook --extra-vars 
{\"namespace\":\"mysql-persistent\",\"use_role\":\"/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql\",\"with_deploy\":true} --connection local /alabama/cspi/sample-applications/ansible/main.yml\n\nexit status 2", wrappedErrors: nil, } Error during command execution: ansible-playbook error: one or more host failed Command executed: /usr/local/bin/ansible-playbook --extra-vars {"namespace":"mysql-persistent","use_role":"/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql","with_deploy":true} --connection local /alabama/cspi/sample-applications/ansible/main.yml exit status 2 occurred In [It] at: /alabama/cspi/test_common/backup_restore_app_case.go:28 @ 05/29/23 07:03:06.366 ------------------------------ S ------------------------------ Backup restore tests Application backup [test-upstream] Django application with BSL&CSI [exclude_aro-4] /alabama/cspi/e2e/app_backup/backup_restore.go:78 2023/05/29 07:03:35 Delete all downloadrequest mysql-864a0a60-fded-11ed-8b0b-0a580a815c29-6805c7f8-2631-4049-b35d-c32975d00ea9mysql-864a0a60-fded-11ed-8b0b-0a580a815c29-a535f0e1-a8ec-40e1-bf30-5fad3ad1e5c8mysql-864a0a60-fded-11ed-8b0b-0a580a815c29-ab23c4de-8626-49b0-befa-2fa4a55ca425mysql-864a0a60-fded-11ed-8b0b-0a580a815c29-e83ff1a9-cdde-45f2-9e1e-af18e66188e7mysql-f443c335-fded-11ed-8b0b-0a580a815c29-19355c30-99ce-4b6f-9a77-3ef328ba5afbmysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29-5c4bde66-c11e-427f-839a-14c3b51fc894mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29-714f8d24-b9cf-4a52-b44e-9e65b0d37ac3mysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29-76312394-80d3-46eb-844c-ea83d43ee90cmysql-fd5d94ac-fdeb-11ed-8b0b-0a580a815c29-d8bd4c06-7964-4519-8d81-5709e4ffcb4amysql198-0c92d322-fded-11ed-8b0b-0a580a815c29-0e92ba4a-8fea-442f-9ee0-a040ea936771mysql198-0c92d322-fded-11ed-8b0b-0a580a815c29-431e6630-2146-4245-8657-c09cab8922f8mysql198-0c92d322-fded-11ed-8b0b-0a580a815c29-7b6b4996-432d-4504-a230-9a3de56cb35cmysql198-0c92d322-fded-11ed-8b0b-0a580a815c29-d80efe57-b39e-42c3-b78b-b9f1bfdaff3ctodolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29-9863c9cc-acdc-4219-a4ca-9a2604b8b098todolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29-a214b001-600b-498f-b234-b215814d071btodolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29-aec43d1d-3d80-4f3c-9af7-a1d8e4583faetodolist-backup-971ea8dd-fdec-11ed-8b0b-0a580a815c29-ce302808-5b97-4772-8166-9241397c8197todolist-backup-c9e9f53a-fdec-11ed-8b0b-0a580a815c29-094e0861-ac92-4080-bf33-1ce0e862aa68todolist-backup-c9e9f53a-fdec-11ed-8b0b-0a580a815c29-e2d2f2ac-cc4e-41d3-b8e0-51b4ec71d44a STEP: Create DPA CR @ 05/29/23 07:03:35.548 2023/05/29 07:03:35 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "d81f7b28-b2d9-4873-b448-db109441dae2", "resourceVersion": "50798", "generation": 1, "creationTimestamp": "2023-05-29T07:03:35Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T07:03:35Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:restic": { ".": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} } }, "f:velero": { ".": {}, "f:defaultPlugins": {} } }, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": 
"velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ] }, "restic": { "podConfig": { "resourceAllocations": {} } } }, "features": null }, "status": {} } STEP: Verify DPA CR setup @ 05/29/23 07:03:35.558 2023/05/29 07:03:35 Waiting for velero pod to be running 2023/05/29 07:03:35 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' 2023/05/29 07:03:35 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "d81f7b28-b2d9-4873-b448-db109441dae2", "resourceVersion": "50798", "generation": 1, "creationTimestamp": "2023-05-29T07:03:35Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T07:03:35Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:restic": { ".": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} } }, "f:velero": { ".": {}, "f:defaultPlugins": {} } }, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": "velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ] }, "restic": { "podConfig": { "resourceAllocations": {} } } }, "features": null }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 05/29/23 07:03:40.575 2023/05/29 07:03:40 Snapclass 'example-snapclass' doesn't exist, creating 2023/05/29 07:03:40 Setting new default StorageClass 'gp2-csi' STEP: Installing application for case django-persistent @ 05/29/23 07:03:40.604 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Check namespace django-persistent] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Create namespace django-persistent] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Create the mtc test django psql persistent template] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Create openshift django psql persistent application from openshift templates] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=12  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 STEP: Verify Application deployment @ 05/29/23 07:03:45.484 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] FAILED - RETRYING: [localhost]: Check postgresql pod status (60 retries left). FAILED - RETRYING: [localhost]: Check postgresql pod status (59 retries left). FAILED - RETRYING: [localhost]: Check postgresql pod status (58 retries left). FAILED - RETRYING: [localhost]: Check postgresql pod status (57 retries left). FAILED - RETRYING: [localhost]: Check postgresql pod status (56 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Check postgresql pod status] *** ok: [localhost] FAILED - RETRYING: [localhost]: Check application pod status (60 retries left). FAILED - RETRYING: [localhost]: Check application pod status (59 retries left). FAILED - RETRYING: [localhost]: Check application pod status (58 retries left). FAILED - RETRYING: [localhost]: Check application pod status (57 retries left). FAILED - RETRYING: [localhost]: Check application pod status (56 retries left). FAILED - RETRYING: [localhost]: Check application pod status (55 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Check application pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Get route] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Access the html file] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : set_fact] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Get num visits up to now] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Print num of visits] *** ok: [localhost] => {  "msg": "PASS: # of visits should be 1; actual 1" } PLAY RECAP ********************************************************************* localhost : ok=15  changed=2  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
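The role verifies the deployment by retrying a pod-status query until the postgresql and application pods report Running; the earlier mysql failure shows what that poll returns while no pod exists ("resources": []). A rough shell sketch of such a wait loop, purely illustrative rather than the role's actual implementation:

  # Wait until at least one pod in the app namespace reaches phase Running
  until oc get pods -n django-persistent \
      -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}' \
      | grep -q .; do
    sleep 10
  done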
STEP: Creating backup django-persistent-ede2fbb9-fdee-11ed-8b0b-0a580a815c29 @ 05/29/23 07:04:52.327 2023/05/29 07:04:52 Wait until backup django-persistent-ede2fbb9-fdee-11ed-8b0b-0a580a815c29 is completed backup phase: InProgress backup phase: InProgress backup phase: InProgress backup phase: Completed STEP: Verify backup django-persistent-ede2fbb9-fdee-11ed-8b0b-0a580a815c29 has completed successfully @ 05/29/23 07:06:12.408 2023/05/29 07:06:12 Backup for case django-persistent succeeded
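Backups and restores are tracked the same way: the suite reads status.phase off the Velero CR until it leaves InProgress. A minimal manual equivalent, assuming the velero.io CRDs installed by OADP and the usual terminal phases (Completed, PartiallyFailed, Failed):

  BACKUP=django-persistent-ede2fbb9-fdee-11ed-8b0b-0a580a815c29
  until oc get backup -n openshift-adp "$BACKUP" -o jsonpath='{.status.phase}' \
      | grep -Eq 'Completed|PartiallyFailed|Failed'; do
    sleep 10
  done
  # Restores expose the same field:
  # oc get restore -n openshift-adp <name> -o jsonpath='{.status.phase}'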
STEP: Delete the application resources django-persistent @ 05/29/23 07:06:12.447 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Remove namespace django-persistent] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=9  changed=3  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2023/05/29 07:06:31 Creating restore django-persistent-ede2fbb9-fdee-11ed-8b0b-0a580a815c29 for case django-persistent-ede2fbb9-fdee-11ed-8b0b-0a580a815c29 STEP: Create restore django-persistent-ede2fbb9-fdee-11ed-8b0b-0a580a815c29 from backup django-persistent-ede2fbb9-fdee-11ed-8b0b-0a580a815c29 @ 05/29/23 07:06:31.04 2023/05/29 07:06:31 Wait until restore django-persistent-ede2fbb9-fdee-11ed-8b0b-0a580a815c29 is complete restore phase: InProgress restore phase: Completed STEP: Verify restore django-persistent-ede2fbb9-fdee-11ed-8b0b-0a580a815c29 has completed successfully @ 05/29/23 07:06:51.067 STEP: Verify Application restore @ 05/29/23 07:06:51.07 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] FAILED - RETRYING: [localhost]: Check postgresql pod status (60 retries left). FAILED - RETRYING: [localhost]: Check postgresql pod status (59 retries left). FAILED - RETRYING: [localhost]: Check postgresql pod status (58 retries left).
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Check postgresql pod status] *** ok: [localhost] FAILED - RETRYING: [localhost]: Check application pod status (60 retries left). FAILED - RETRYING: [localhost]: Check application pod status (59 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Check application pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Get route] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Access the html file] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : set_fact] *** ok: [localhost] FAILED - RETRYING: [localhost]: Get num visits up to now (20 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Get num visits up to now] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Print num of visits] *** ok: [localhost] => {  "msg": "PASS: # of visits should be 2; actual 2" } PLAY RECAP ********************************************************************* localhost : ok=15  changed=2  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2023/05/29 07:07:35 Cleaning app [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Remove namespace django-persistent] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=9  changed=3  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2023/05/29 07:07:53 Cleaning setup resources for the backup 2023/05/29 07:07:53 Setting new default StorageClass 'gp3-csi' 2023/05/29 07:07:53 Deleting VolumeSnapshotClass 'example-snapclass' • [258.299 seconds] ------------------------------ SSSSSSSS ------------------------------ Backup hooks tests Pre exec hook [tc_id:OADP-92][test-upstream] Cassandra app with Restic /alabama/cspi/e2e/hooks/backup_hooks.go:113 2023/05/29 07:07:53 Delete all downloadrequest No download requests are found STEP: Create DPA CR @ 05/29/23 07:07:53.651 2023/05/29 07:07:53 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "47312bf8-9e84-4315-b193-2e121409941b", "resourceVersion": "54302", "generation": 1, 
"creationTimestamp": "2023-05-29T07:07:53Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T07:07:53Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:restic": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} } }, "f:velero": { ".": {}, "f:defaultPlugins": {} } }, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": "velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ] }, "restic": { "enable": true, "podConfig": { "resourceAllocations": {} } } }, "features": null }, "status": {} } STEP: Verify DPA CR setup @ 05/29/23 07:07:53.672 2023/05/29 07:07:53 Waiting for velero pod to be running 2023/05/29 07:07:53 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' 2023/05/29 07:07:53 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "47312bf8-9e84-4315-b193-2e121409941b", "resourceVersion": "54310", "generation": 1, "creationTimestamp": "2023-05-29T07:07:53Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T07:07:53Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:restic": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} } }, "f:velero": { ".": {}, "f:defaultPlugins": {} } }, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } }, { "manager": "manager", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2023-05-29T07:07:53Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:status": { ".": {}, "f:conditions": {} } }, "subresource": "status" } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-kgytzj8j-interopoadp", "prefix": "velero-e2e-ee6668b2-fdeb-11ed-8b0b-0a580a815c29" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ] }, "restic": { "enable": true, "podConfig": { "resourceAllocations": {} } } }, "features": null }, "status": { "conditions": [ { "type": "Reconciled", "status": "False", "lastTransitionTime": "2023-05-29T07:07:53Z", "reason": "Error", "message": "configmaps \"restic-restore-action-config\" not found" } ] } } STEP: Prepare backup resources, depending on the volumes backup type @ 05/29/23 07:07:58.704 2023/05/29 07:07:58 Checking for correct number of running Restic pods... STEP: Installing application for case cassandra-hooks-e2e @ 05/29/23 07:07:58.712 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check pods status (30 retries left). FAILED - RETRYING: [localhost]: Check pods status (29 retries left). FAILED - RETRYING: [localhost]: Check pods status (28 retries left). FAILED - RETRYING: [localhost]: Check pods status (27 retries left). FAILED - RETRYING: [localhost]: Check pods status (26 retries left). FAILED - RETRYING: [localhost]: Check pods status (25 retries left). FAILED - RETRYING: [localhost]: Check pods status (24 retries left). FAILED - RETRYING: [localhost]: Check pods status (23 retries left). FAILED - RETRYING: [localhost]: Check pods status (22 retries left). FAILED - RETRYING: [localhost]: Check pods status (21 retries left). FAILED - RETRYING: [localhost]: Check pods status (20 retries left). FAILED - RETRYING: [localhost]: Check pods status (19 retries left). FAILED - RETRYING: [localhost]: Check pods status (18 retries left). FAILED - RETRYING: [localhost]: Check pods status (17 retries left). FAILED - RETRYING: [localhost]: Check pods status (16 retries left). FAILED - RETRYING: [localhost]: Check pods status (15 retries left). FAILED - RETRYING: [localhost]: Check pods status (14 retries left). FAILED - RETRYING: [localhost]: Check pods status (13 retries left). FAILED - RETRYING: [localhost]: Check pods status (12 retries left). FAILED - RETRYING: [localhost]: Check pods status (11 retries left). FAILED - RETRYING: [localhost]: Check pods status (10 retries left). FAILED - RETRYING: [localhost]: Check pods status (9 retries left). FAILED - RETRYING: [localhost]: Check pods status (8 retries left). FAILED - RETRYING: [localhost]: Check pods status (7 retries left). FAILED - RETRYING: [localhost]: Check pods status (6 retries left). 
FAILED - RETRYING: [localhost]: Check pods status (5 retries left). FAILED - RETRYING: [localhost]: Check pods status (4 retries left). FAILED - RETRYING: [localhost]: Check pods status (3 retries left). FAILED - RETRYING: [localhost]: Check pods status (2 retries left). FAILED - RETRYING: [localhost]: Check pods status (1 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] *** fatal: [localhost]: FAILED! => {"api_found": true, "attempts": 30, "changed": false, "resources": []} PLAY RECAP ********************************************************************* localhost : ok=13  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 [FAILED] in [It] - /alabama/cspi/test_common/backup_restore_app_case.go:28 @ 05/29/23 07:10:50.434 STEP: Get the failed spec name @ 05/29/23 07:10:50.434 2023/05/29 07:10:50 The failed spec name is: Backup hooks tests Pre exec hook [tc_id:OADP-92][test-upstream] Cassandra app with Restic STEP: Create a folder for all must-gather files if it doesn't exist already @ 05/29/23 07:10:50.434 STEP: Create a folder for the failed spec if it doesn't exist already @ 05/29/23 07:10:50.434 2023/05/29 07:10:50 The folder logs/[It]_Backup_hooks_tests_Pre_exec_hook_[tc_id_OADP-92][test-upstream]_Cassandra_app_with_Restic does not exist, creating a new folder with the name: logs/[It]_Backup_hooks_tests_Pre_exec_hook_[tc_id_OADP-92][test-upstream]_Cassandra_app_with_Restic STEP: Run must-gather because the spec failed @ 05/29/23 07:10:50.434 2023/05/29 07:10:50 [adm must-gather --dest-dir logs/[It]_Backup_hooks_tests_Pre_exec_hook_[tc_id_OADP-92][test-upstream]_Cassandra_app_with_Restic --image registry.redhat.io/oadp/oadp-mustgather-rhel8:1.1.4] STEP: Find must-gather folder and rename it to a shorter more readable name @ 05/29/23 07:11:01.584 2023/05/29 07:11:01 Cleaning app [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [Get cluster endpoint] **************************************************** changed: [localhost] TASK [Get current admin token] ************************************************* changed: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] TASK [Extract kubernetes minor version from cluster_info] ********************** ok: [localhost] TASK [Map kubernetes minor to ocp releases] ************************************ ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace cassandra-ns] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=9  changed=3  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2023/05/29 07:11:20 Cleaning setup resources for the backup • [FAILED] [207.221 seconds] Backup hooks tests Pre exec hook [It] [tc_id:OADP-92][test-upstream] Cassandra app with Restic /alabama/cspi/e2e/hooks/backup_hooks.go:113 [FAILED] Unexpected error: <*errors.Error | 0xc000824040>: { context: "(DefaultExecute::Execute)", message: "Error during command execution: ansible-playbook error: one or more host failed\n\nCommand executed: /usr/local/bin/ansible-playbook --extra-vars {\"namespace\":\"cassandra-ns\",\"use_role\":\"/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra\",\"with_deploy\":true} --connection local /alabama/cspi/sample-applications/ansible/main.yml\n\nexit status 2", wrappedErrors: nil, } Error during command execution: ansible-playbook error: one or more host failed Command executed: /usr/local/bin/ansible-playbook --extra-vars {"namespace":"cassandra-ns","use_role":"/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra","with_deploy":true} --connection local /alabama/cspi/sample-applications/ansible/main.yml exit status 2 occurred In [It] at: /alabama/cspi/test_common/backup_restore_app_case.go:28 @ 05/29/23 07:10:50.434 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS P [PENDING] Subscription Config Suite Test Subscription Config Suite Test Proxy test table HTTP_PROXY set proxy.example.com /alabama/cspi/e2e/subscription/proxy_config.go:118 ------------------------------ SSSSS ------------------------------ [AfterSuite]  /alabama/cspi/e2e/e2e_suite_test.go:120 2023/05/29 07:11:20 Deleting Velero CR [AfterSuite] PASSED [0.005 seconds] ------------------------------ [ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report autogenerated by Ginkgo [ReportAfterSuite] PASSED [0.007 seconds] ------------------------------ Summarizing 3 Failures: [FAIL] Incremental backup restore tests Incremental restore pod count [It] [test-upstream] Todolist app with Restic - policy: none /alabama/cspi/test_common/backup_restore_case.go:113 [FAIL] Backup restore tests Application backup [It] [test-upstream] MySQL application with Restic [mr-check] 
/alabama/cspi/test_common/backup_restore_app_case.go:28 [FAIL] Backup hooks tests Pre exec hook [It] [tc_id:OADP-92][test-upstream] Cassandra app with Restic /alabama/cspi/test_common/backup_restore_app_case.go:28 Ran 8 of 94 Specs in 1753.154 seconds FAIL! -- 5 Passed | 3 Failed | 1 Pending | 85 Skipped --- FAIL: TestOADPE2E (1753.16s) FAIL Ginkgo ran 1 suite in 29m40.481768218s Test Suite Failed Copying /alabama/cspi/e2e/junit_report.xml to /logs/artifacts/junit_oadp_interop_results.xml...
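With three of the eight executed specs failing, a focused re-run is usually faster for debugging than repeating the whole 30-minute suite; Ginkgo's --focus flag filters specs by a regexp against their full names. The suite path below is inferred from the layout above and the CI wrapper may invoke it differently:

  ginkgo --focus 'MySQL application with Restic|Cassandra app with Restic|Todolist app with Restic' \
    /alabama/cspi/e2e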