Extract /home/jenkins/oadp-e2e-qe.tar.gz to /alabama/cspi
Extract /home/jenkins/oadp-apps-deployer.tar.gz to /alabama/oadpApps
Extract /home/jenkins/mtc-python-client.tar.gz to /alabama/pyclient
Create and populate /tmp/test-settings...
Login as Kubeadmin to the test cluster at https://api.ci-op-6fip6j15-6e951.cspilp.interop.ccitredhat.com:6443...
WARNING: Using insecure TLS client config. Setting this option is not supported!
Login successful.

You have access to 78 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".
Create virtual environment and install required packages...
Collecting ansible_runner
  Downloading ansible_runner-2.4.1-py3-none-any.whl.metadata (3.2 kB)
Collecting pexpect>=4.5 (from ansible_runner)
  Downloading pexpect-4.9.0-py2.py3-none-any.whl.metadata (2.5 kB)
Collecting packaging (from ansible_runner)
  Downloading packaging-25.0-py3-none-any.whl.metadata (3.3 kB)
Collecting python-daemon (from ansible_runner)
  Downloading python_daemon-3.1.2-py3-none-any.whl.metadata (4.8 kB)
Collecting pyyaml (from ansible_runner)
  Downloading PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB)
Collecting ptyprocess>=0.5 (from pexpect>=4.5->ansible_runner)
  Downloading ptyprocess-0.7.0-py2.py3-none-any.whl.metadata (1.3 kB)
Collecting lockfile>=0.10 (from python-daemon->ansible_runner)
  Downloading lockfile-0.12.2-py2.py3-none-any.whl.metadata (2.4 kB)
Downloading ansible_runner-2.4.1-py3-none-any.whl (79 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 79.6/79.6 kB 2.6 MB/s eta 0:00:00
Downloading pexpect-4.9.0-py2.py3-none-any.whl (63 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.8/63.8 kB 4.9 MB/s eta 0:00:00
Downloading packaging-25.0-py3-none-any.whl (66 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 66.5/66.5 kB 1.6 MB/s eta 0:00:00
Downloading python_daemon-3.1.2-py3-none-any.whl (30 kB)
Downloading PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (767 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 767.5/767.5 kB 15.5 MB/s eta 0:00:00
Downloading lockfile-0.12.2-py2.py3-none-any.whl (13 kB)
Downloading ptyprocess-0.7.0-py2.py3-none-any.whl (13 kB)
Installing collected packages: ptyprocess, lockfile, pyyaml, python-daemon, pexpect, packaging, ansible_runner
Successfully installed ansible_runner-2.4.1 lockfile-0.12.2 packaging-25.0 pexpect-4.9.0 ptyprocess-0.7.0 python-daemon-3.1.2 pyyaml-6.0.2

[notice] A new release of pip is available: 23.3.2 -> 25.2
[notice] To update, run: pip install --upgrade pip
Processing /alabama/oadpApps
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Building wheels for collected packages: ocpdeployer
  Building wheel for ocpdeployer (pyproject.toml): started
  Building wheel for ocpdeployer (pyproject.toml): finished with status 'done'
  Created wheel for ocpdeployer: filename=ocpdeployer-0.0.1-py2.py3-none-any.whl size=235616 sha256=de2c3e612e0eae4682b4adae038b1acb288d914cebc1dd93abca8aa92d178f6f
  Stored in directory: /tmp/pip-ephem-wheel-cache-lnnevy3k/wheels/55/c3/15/eb89266a7928fafe53678a24892891bbfb18405fbd475eb4c6
Successfully built ocpdeployer
Installing collected packages: ocpdeployer
Successfully installed ocpdeployer-0.0.1

[notice] A new release of pip is available: 23.3.2 -> 25.2
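For orientation, the setup phase above corresponds roughly to the following shell steps (a reconstruction from the log output, not the harness's actual script; the kubeadmin password variable is a placeholder):

# Unpack the test sources delivered to the Jenkins agent
tar -xzf /home/jenkins/oadp-e2e-qe.tar.gz -C /alabama/cspi
tar -xzf /home/jenkins/oadp-apps-deployer.tar.gz -C /alabama/oadpApps
tar -xzf /home/jenkins/mtc-python-client.tar.gz -C /alabama/pyclient

# Log in to the ephemeral test cluster; skipping TLS verification is what
# triggers the 'insecure TLS client config' warning above
oc login https://api.ci-op-6fip6j15-6e951.cspilp.interop.ccitredhat.com:6443 \
  -u kubeadmin -p "$KUBEADMIN_PASSWORD" --insecure-skip-tls-verify=true

# Create the virtualenv the packages land in (/alabama/venv, per the
# 'Requirement already satisfied' paths below) and install the helpers
python3 -m venv /alabama/venv
. /alabama/venv/bin/activate
pip install ansible_runner /alabama/oadpApps /alabama/pyclient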
[notice] To update, run: pip install --upgrade pip
Processing /alabama/pyclient
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Collecting suds-py3 (from mtc==0.0.1)
  Downloading suds_py3-1.4.5.0-py3-none-any.whl.metadata (778 bytes)
Collecting requests (from mtc==0.0.1)
  Downloading requests-2.32.4-py3-none-any.whl.metadata (4.9 kB)
Collecting jinja2 (from mtc==0.0.1)
  Downloading jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)
Collecting kubernetes==11.0.0 (from mtc==0.0.1)
  Downloading kubernetes-11.0.0-py3-none-any.whl.metadata (1.5 kB)
Collecting openshift==0.11.2 (from mtc==0.0.1)
  Downloading openshift-0.11.2.tar.gz (19 kB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Collecting certifi>=14.05.14 (from kubernetes==11.0.0->mtc==0.0.1)
  Downloading certifi-2025.8.3-py3-none-any.whl.metadata (2.4 kB)
Collecting six>=1.9.0 (from kubernetes==11.0.0->mtc==0.0.1)
  Downloading six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting python-dateutil>=2.5.3 (from kubernetes==11.0.0->mtc==0.0.1)
  Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting setuptools>=21.0.0 (from kubernetes==11.0.0->mtc==0.0.1)
  Using cached setuptools-80.9.0-py3-none-any.whl.metadata (6.6 kB)
Requirement already satisfied: pyyaml>=3.12 in /alabama/venv/lib64/python3.12/site-packages (from kubernetes==11.0.0->mtc==0.0.1) (6.0.2)
Collecting google-auth>=1.0.1 (from kubernetes==11.0.0->mtc==0.0.1)
  Downloading google_auth-2.40.3-py2.py3-none-any.whl.metadata (6.2 kB)
Collecting websocket-client!=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0 (from kubernetes==11.0.0->mtc==0.0.1)
  Downloading websocket_client-1.8.0-py3-none-any.whl.metadata (8.0 kB)
Collecting requests-oauthlib (from kubernetes==11.0.0->mtc==0.0.1)
  Downloading requests_oauthlib-2.0.0-py2.py3-none-any.whl.metadata (11 kB)
Collecting urllib3>=1.24.2 (from kubernetes==11.0.0->mtc==0.0.1)
  Downloading urllib3-2.5.0-py3-none-any.whl.metadata (6.5 kB)
Collecting python-string-utils (from openshift==0.11.2->mtc==0.0.1)
  Downloading python_string_utils-1.0.0-py3-none-any.whl.metadata (12 kB)
Collecting ruamel.yaml>=0.15 (from openshift==0.11.2->mtc==0.0.1)
  Downloading ruamel.yaml-0.18.14-py3-none-any.whl.metadata (24 kB)
Collecting MarkupSafe>=2.0 (from jinja2->mtc==0.0.1)
  Downloading MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.0 kB)
Collecting charset_normalizer<4,>=2 (from requests->mtc==0.0.1)
  Downloading charset_normalizer-3.4.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl.metadata (36 kB)
Collecting idna<4,>=2.5 (from requests->mtc==0.0.1)
  Downloading idna-3.10-py3-none-any.whl.metadata (10 kB)
Collecting cachetools<6.0,>=2.0.0 (from google-auth>=1.0.1->kubernetes==11.0.0->mtc==0.0.1)
  Downloading cachetools-5.5.2-py3-none-any.whl.metadata (5.4 kB)
Collecting pyasn1-modules>=0.2.1 (from google-auth>=1.0.1->kubernetes==11.0.0->mtc==0.0.1)
  Downloading pyasn1_modules-0.4.2-py3-none-any.whl.metadata (3.5 kB)
Collecting rsa<5,>=3.1.4 (from google-auth>=1.0.1->kubernetes==11.0.0->mtc==0.0.1)
  Downloading rsa-4.9.1-py3-none-any.whl.metadata (5.6 kB)
Collecting ruamel.yaml.clib>=0.2.7 (from ruamel.yaml>=0.15->openshift==0.11.2->mtc==0.0.1)
  Downloading ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.7 kB)
Collecting oauthlib>=3.0.0 (from requests-oauthlib->kubernetes==11.0.0->mtc==0.0.1)
  Downloading oauthlib-3.3.1-py3-none-any.whl.metadata (7.9 kB)
Collecting pyasn1<0.7.0,>=0.6.1 (from pyasn1-modules>=0.2.1->google-auth>=1.0.1->kubernetes==11.0.0->mtc==0.0.1)
  Downloading pyasn1-0.6.1-py3-none-any.whl.metadata (8.4 kB)
Downloading kubernetes-11.0.0-py3-none-any.whl (1.5 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.5/1.5 MB 11.6 MB/s eta 0:00:00
Downloading jinja2-3.1.6-py3-none-any.whl (134 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 134.9/134.9 kB 1.2 MB/s eta 0:00:00
Downloading requests-2.32.4-py3-none-any.whl (64 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 64.8/64.8 kB 553.4 kB/s eta 0:00:00
Downloading suds_py3-1.4.5.0-py3-none-any.whl (298 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 298.8/298.8 kB 4.1 MB/s eta 0:00:00
Downloading certifi-2025.8.3-py3-none-any.whl (161 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 161.2/161.2 kB 4.6 MB/s eta 0:00:00
Downloading charset_normalizer-3.4.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (151 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 151.8/151.8 kB 3.2 MB/s eta 0:00:00
Downloading google_auth-2.40.3-py2.py3-none-any.whl (216 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 216.1/216.1 kB 3.0 MB/s eta 0:00:00
Downloading idna-3.10-py3-none-any.whl (70 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 70.4/70.4 kB 482.2 kB/s eta 0:00:00
Downloading MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23 kB)
Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 229.9/229.9 kB 2.2 MB/s eta 0:00:00
Downloading ruamel.yaml-0.18.14-py3-none-any.whl (118 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 118.6/118.6 kB 1.1 MB/s eta 0:00:00
Using cached setuptools-80.9.0-py3-none-any.whl (1.2 MB)
Downloading six-1.17.0-py2.py3-none-any.whl (11 kB)
Downloading urllib3-2.5.0-py3-none-any.whl (129 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 129.8/129.8 kB 2.4 MB/s eta 0:00:00
Downloading websocket_client-1.8.0-py3-none-any.whl (58 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.8/58.8 kB 591.2 kB/s eta 0:00:00
Downloading python_string_utils-1.0.0-py3-none-any.whl (26 kB)
Downloading requests_oauthlib-2.0.0-py2.py3-none-any.whl (24 kB)
Downloading cachetools-5.5.2-py3-none-any.whl (10 kB)
Downloading oauthlib-3.3.1-py3-none-any.whl (160 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 160.1/160.1 kB 1.9 MB/s eta 0:00:00
Downloading pyasn1_modules-0.4.2-py3-none-any.whl (181 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 181.3/181.3 kB 1.7 MB/s eta 0:00:00
Downloading rsa-4.9.1-py3-none-any.whl (34 kB)
Downloading ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (754 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 754.1/754.1 kB 6.1 MB/s eta 0:00:00
Downloading pyasn1-0.6.1-py3-none-any.whl (83 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 83.1/83.1 kB 808.8 kB/s eta 0:00:00
Building wheels for collected packages: mtc, openshift
  Building wheel for mtc (pyproject.toml): started
  Building wheel for mtc (pyproject.toml): finished with status 'done'
  Created wheel for mtc: filename=mtc-0.0.1-py3-none-any.whl size=31146 sha256=b254d2d42b80bab158ecae7c59b5523c77bd2e27d83ecfba10138a64e809be4d
  Stored in directory: /tmp/pip-ephem-wheel-cache-zlrz7oeg/wheels/f1/2c/83/c09cb54cb0e821a8186cf5320758c27e7227ec862045210509
  Building wheel for openshift (pyproject.toml): started
  Building wheel for openshift (pyproject.toml): finished with status 'done'
  Created wheel for openshift: filename=openshift-0.11.2-py3-none-any.whl size=19881 sha256=0767809ad5b5015bc61213901b8f174b1220c9a5fceb051be7a556397f5f0d26
  Stored in directory: /alabama/.cache/pip/wheels/34/b7/02/4eb142942314b119c5fb3d4e595ac59486c1f3d79ff665397d
Successfully built mtc openshift
Installing collected packages: suds-py3, websocket-client, urllib3, six, setuptools, ruamel.yaml.clib, python-string-utils, pyasn1, oauthlib, MarkupSafe, idna, charset_normalizer, certifi, cachetools, ruamel.yaml, rsa, requests, python-dateutil, pyasn1-modules, jinja2, requests-oauthlib, google-auth, kubernetes, openshift, mtc
Successfully installed MarkupSafe-3.0.2 cachetools-5.5.2 certifi-2025.8.3 charset_normalizer-3.4.3 google-auth-2.40.3 idna-3.10 jinja2-3.1.6 kubernetes-11.0.0 mtc-0.0.1 oauthlib-3.3.1 openshift-0.11.2 pyasn1-0.6.1 pyasn1-modules-0.4.2 python-dateutil-2.9.0.post0 python-string-utils-1.0.0 requests-2.32.4 requests-oauthlib-2.0.0 rsa-4.9.1 ruamel.yaml-0.18.14 ruamel.yaml.clib-0.2.12 setuptools-80.9.0 six-1.17.0 suds-py3-1.4.5.0 urllib3-2.5.0 websocket-client-1.8.0

[notice] A new release of pip is available: 23.3.2 -> 25.2
[notice] To update, run: pip install --upgrade pip
go: downloading go1.24.1 (linux/amd64)
go: downloading github.com/onsi/gomega v1.36.3
go: downloading github.com/vmware-tanzu/velero v1.16.0
go: downloading k8s.io/api v0.31.3
go: downloading github.com/onsi/ginkgo/v2 v2.23.4
go: downloading github.com/migtools/oadp-non-admin v0.0.0-20250409143544-08533a6c302d
go: downloading k8s.io/apimachinery v0.31.3
go: downloading k8s.io/utils v0.0.0-20240711033017-18e509b52bc8
go: downloading github.com/openshift/oadp-operator v1.0.2-0.20250530205020-5a814a098127
go: downloading sigs.k8s.io/controller-runtime v0.19.3
go: downloading github.com/operator-framework/api v0.14.1-0.20220413143725-33310d6154f3
go: downloading github.com/andygrunwald/go-jira v1.16.0
go: downloading k8s.io/client-go v0.31.3
go: downloading github.com/apenella/go-ansible v1.1.5
go: downloading github.com/aws/aws-sdk-go v1.44.253
go: downloading github.com/google/uuid v1.6.0
go: downloading github.com/kubernetes-csi/external-snapshotter/client/v4 v4.2.0
go: downloading github.com/openshift/api v0.0.0-20230414143018-3367bc7e6ac7
go: downloading gopkg.in/yaml.v2 v2.4.0
go: downloading k8s.io/kubectl v0.30.5
go: downloading github.com/openshift/client-go v0.0.0-20211209144617-7385dd6338e3
go: downloading sigs.k8s.io/yaml v1.4.0
go: downloading github.com/google/go-cmp v0.7.0
go: downloading github.com/fatih/structs v1.1.0
go: downloading github.com/golang-jwt/jwt/v4 v4.5.0
go: downloading github.com/google/go-querystring v1.1.0
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/trivago/tgo v1.0.7
go: downloading github.com/evanphx/json-patch/v5 v5.9.0
go: downloading k8s.io/klog/v2 v2.130.1
go: downloading github.com/go-logr/logr v1.4.2
go: downloading github.com/gogo/protobuf v1.3.2
go: downloading github.com/evanphx/json-patch v5.6.0+incompatible
go: downloading github.com/google/gofuzz v1.2.0
go: downloading gopkg.in/inf.v0 v0.9.1
go: downloading github.com/spf13/pflag v1.0.6-0.20210604193023-d5e0c0615ace
go: downloading github.com/sirupsen/logrus v1.9.3
go: downloading github.com/apenella/go-common-utils/data v0.0.0-20210528133155-34ba915e28c8
go: downloading github.com/apenella/go-common-utils/error v0.0.0-20210528133155-34ba915e28c8
go: downloading sigs.k8s.io/structured-merge-diff/v4 v4.4.1
go: downloading github.com/imdario/mergo v0.3.13
go: downloading golang.org/x/term v0.30.0
go: downloading github.com/gorilla/websocket v1.5.0
go: downloading golang.org/x/net v0.37.0
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd
go: downloading k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340
go: downloading k8s.io/apiextensions-apiserver v0.31.3
go: downloading golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56
go: downloading gopkg.in/evanphx/json-patch.v4 v4.12.0
go: downloading github.com/stretchr/testify v1.10.0
go: downloading gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c
go: downloading github.com/go-logr/zapr v1.3.0
go: downloading go.uber.org/zap v1.27.0
go: downloading go.uber.org/automaxprocs v1.6.0
go: downloading golang.org/x/sys v0.32.0
go: downloading github.com/blang/semver/v4 v4.0.0
go: downloading github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da
go: downloading go.uber.org/goleak v1.3.0
go: downloading github.com/spf13/cobra v1.8.1
go: downloading k8s.io/cli-runtime v0.31.3
go: downloading k8s.io/component-base v0.31.3
go: downloading github.com/google/gnostic-models v0.6.8
go: downloading google.golang.org/protobuf v1.36.5
go: downloading github.com/golang/protobuf v1.5.4
go: downloading golang.org/x/time v0.9.0
go: downloading github.com/json-iterator/go v1.1.12
go: downloading github.com/moby/spdystream v0.4.0
go: downloading github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5
go: downloading golang.org/x/oauth2 v0.27.0
go: downloading github.com/aws/aws-sdk-go-v2 v1.30.3
go: downloading github.com/aws/aws-sdk-go-v2/config v1.26.3
go: downloading github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.15.11
go: downloading github.com/aws/aws-sdk-go-v2/service/s3 v1.48.0
go: downloading golang.org/x/text v0.23.0
go: downloading github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
go: downloading github.com/go-openapi/jsonreference v0.20.2
go: downloading github.com/go-openapi/swag v0.22.4
go: downloading github.com/kr/pretty v0.3.1
go: downloading github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2
go: downloading github.com/go-task/slim-sprig/v3 v3.0.0
go: downloading golang.org/x/tools v0.31.0
go: downloading go.uber.org/multierr v1.11.0
go: downloading github.com/fxamacker/cbor/v2 v2.7.0
go: downloading github.com/inconshreveable/mousetrap v1.1.0
go: downloading github.com/kubernetes-csi/external-snapshotter/client/v7 v7.0.0
go: downloading github.com/jonboulle/clockwork v0.2.2
go: downloading k8s.io/component-helpers v0.30.5
go: downloading github.com/daviddengcn/go-colortext v1.0.0
go: downloading github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de
go: downloading github.com/distribution/reference v0.5.0
go: downloading github.com/moby/term v0.5.0
go: downloading sigs.k8s.io/kustomize/kustomize/v5 v5.0.4-0.20230601165947-6ce0bf390ce3
go: downloading sigs.k8s.io/kustomize/kyaml v0.17.1
go: downloading github.com/fvbommel/sortorder v1.1.0
go: downloading github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d
go: downloading github.com/lithammer/dedent v1.1.0
go: downloading k8s.io/metrics v0.31.3
go: downloading github.com/chai2010/gettext-go v1.0.2
go: downloading github.com/MakeNowJust/heredoc v1.0.0
go: downloading github.com/mitchellh/go-wordwrap v1.0.1
go: downloading github.com/russross/blackfriday/v2 v2.1.0
go: downloading github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822
go: downloading github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f
go: downloading github.com/modern-go/reflect2 v1.0.2
go: downloading github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd
go: downloading github.com/aws/smithy-go v1.20.3
go: downloading github.com/aws/aws-sdk-go-v2/credentials v1.17.26
go: downloading github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.11
go: downloading github.com/aws/aws-sdk-go-v2/internal/ini v1.8.0
go: downloading github.com/aws/aws-sdk-go-v2/service/sso v1.22.3
go: downloading github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.4
go: downloading github.com/aws/aws-sdk-go-v2/service/sts v1.30.3
go: downloading github.com/go-openapi/jsonpointer v0.19.6
go: downloading github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.4
go: downloading github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.15
go: downloading github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.10
go: downloading github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.11.3
go: downloading github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.10
go: downloading github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.17
go: downloading github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.10
go: downloading github.com/mailru/easyjson v0.7.7
go: downloading github.com/google/pprof v0.0.0-20250403155104-27863c87afa6
go: downloading github.com/kr/text v0.2.0
go: downloading github.com/rogpeppe/go-internal v1.12.0
go: downloading github.com/x448/float16 v0.8.4
go: downloading golang.org/x/sync v0.12.0
go: downloading sigs.k8s.io/kustomize/api v0.17.2
go: downloading github.com/fatih/camelcase v1.0.0
go: downloading github.com/opencontainers/go-digest v1.0.0
go: downloading github.com/creack/pty v1.1.18
go: downloading github.com/golangplus/testing v1.0.0
go: downloading github.com/spf13/afero v1.10.0
go: downloading github.com/emicklei/go-restful/v3 v3.11.0
go: downloading github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7
go: downloading github.com/peterbourgon/diskv v2.0.1+incompatible
go: downloading github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.15
go: downloading github.com/josharian/intern v1.0.0
go: downloading github.com/prashantv/gostub v1.1.0
go: downloading github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1
go: downloading github.com/prometheus/client_golang v1.20.5
go: downloading github.com/stretchr/objx v0.5.2
go: downloading github.com/go-errors/errors v1.4.2
go: downloading gomodules.xyz/jsonpatch/v2 v2.4.0
go: downloading github.com/prometheus/client_model v0.6.1
go: downloading github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00
go: downloading github.com/xlab/treeprint v1.2.0
go: downloading github.com/sergi/go-diff v1.2.0
go: downloading github.com/google/btree v1.0.1
go: downloading github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
go: downloading go.starlark.net v0.0.0-20230525235612-a134d8f9ddca
go: downloading github.com/prometheus/common v0.62.0
go: downloading github.com/beorn7/perks v1.0.1
go: downloading github.com/prometheus/procfs v0.15.1
go: downloading github.com/klauspost/compress v1.17.11
go: downloading github.com/cespare/xxhash/v2 v2.3.0
go: downloading github.com/kylelemons/godebug v1.1.0
go: downloading github.com/jmespath/go-jmespath v0.4.0
go: downloading github.com/jmespath/go-jmespath/internal/testify v1.5.1
storageclass.storage.k8s.io/gp2-csi annotated
storageclass.storage.k8s.io/gp3-csi annotated
storageclass.storage.k8s.io/odf-operator-ceph-rbd annotated
storageclass.storage.k8s.io/odf-operator-ceph-rbd-virtualization annotated
storageclass.storage.k8s.io/odf-operator-cephfs annotated
storageclass.storage.k8s.io/openshift-storage.noobaa.io annotated
storageclass.storage.k8s.io/odf-operator-ceph-rbd annotated
+ readonly 'RED=\e[31m'
+ RED='\e[31m'
+ readonly 'BLUE=\033[34m'
+ BLUE='\033[34m'
+ readonly 'CLEAR=\e[39m'
+ CLEAR='\e[39m'
++ oc get infrastructures cluster -o 'jsonpath={.status.platform}'
++ awk '{print tolower($0)}'
+ CLOUD_PROVIDER=aws
+ [[ '' == \t\r\u\e ]]
+ echo /home/jenkins/.kube/config
/home/jenkins/.kube/config
+ [[ aws == *-arm* ]]
+ [[ aws == *-fips* ]]
+ E2E_TIMEOUT_MULTIPLIER=2
+ export NAMESPACE=openshift-adp
+ NAMESPACE=openshift-adp
+ export PROVIDER=aws
+ PROVIDER=aws
++ echo aws
++ awk '{print tolower($0)}'
+ BACKUP_LOCATION=aws
+ export BACKUP_LOCATION=aws
+ BACKUP_LOCATION=aws
+ export BUCKET=ci-op-6fip6j15-interopoadp
+ BUCKET=ci-op-6fip6j15-interopoadp
+ OADP_CREDS_FILE=/tmp/test-settings/credentials
+ OADP_VSL_CREDS_FILE=/tmp/test-settings/aws_vsl_creds
+++ readlink -f /alabama/cspi/test_settings/scripts/test_runner.sh
++ dirname /alabama/cspi/test_settings/scripts/test_runner.sh
+ readonly SCRIPT_DIR=/alabama/cspi/test_settings/scripts
+ SCRIPT_DIR=/alabama/cspi/test_settings/scripts
++ cd /alabama/cspi/test_settings/scripts
++ git rev-parse --show-toplevel
+ readonly TOP_DIR=/alabama/cspi
+ TOP_DIR=/alabama/cspi
+ echo /alabama/cspi
/alabama/cspi
+ TESTS_FOLDER=/alabama/cspi/e2e/kubevirt-plugin
++ oc get nodes -o 'jsonpath={.items[*].metadata.labels.kubernetes\.io/arch}'
++ tr ' ' '\n'
++ sort -u
++ xargs
+ export NODES_ARCHITECTURE=amd64
+ NODES_ARCHITECTURE=amd64
+ export OADP_REPOSITORY=redhat
+ OADP_REPOSITORY=redhat
+ SKIP_DPA_CREATION=false
++ oc get ns openshift-storage
++ echo true
+ OPENSHIFT_STORAGE=true
+ '[' redhat == upstream-velero ']'
+ '[' true == true ']'
++ oc get sc
++ awk '$1 ~ /^.+ceph-rbd$/ {print $1}'
++ tail -1
+ CEPH_RBD_STORAGE_CLASS=odf-operator-ceph-rbd
+ '[' -n odf-operator-ceph-rbd ']'
+ export CEPH_RBD_STORAGE_CLASS
+ echo 'ceph-rbd StorageClass found: odf-operator-ceph-rbd'
ceph-rbd StorageClass found: odf-operator-ceph-rbd
++ oc get storageclass -o 'jsonpath={range .items[*]}{@.metadata.name}{" "}{@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}{"\n"}{end}'
++ awk '$2=="true"{print $1}'
++ wc -l
+ NUM_DEFAULT_STORAGE_CLASS=1
+ '[' 1 -ne 1 ']'
++ oc get storageclass -o 'jsonpath={.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=='\''true'\'')].metadata.name}'
+ DEFAULT_SC=odf-operator-ceph-rbd
+ export STORAGE_CLASS=odf-operator-ceph-rbd
+ STORAGE_CLASS=odf-operator-ceph-rbd
+ '[' -n odf-operator-ceph-rbd ']'
+ '[' odf-operator-ceph-rbd '!=' odf-operator-ceph-rbd ']'
+ export STORAGE_CLASS_OPENSHIFT_STORAGE=odf-operator-ceph-rbd
+ STORAGE_CLASS_OPENSHIFT_STORAGE=odf-operator-ceph-rbd
+ echo 'Using the StorageClass for openshift-storage: odf-operator-ceph-rbd'
Using the StorageClass for openshift-storage: odf-operator-ceph-rbd
+ [[ amd64 != \a\m\d\6\4 ]]
+ TEST_FILTER='!// || (// && !exclude_aws && (!/target/ || target_aws) ) '
+ [[ aws =~ ^osp ]]
+ [[ aws =~ ^vsphere ]]
+ [[ aws =~ ^gcp-wif ]]
+ [[ aws =~ ^ibmcloud ]]
++ oc config current-context
++ awk -F / '{print $2}'
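Consolidated, the default-StorageClass detection the trace above performs is the following (a sketch reconstructed from the trace; the real script's error handling may differ):

# Fail unless exactly one StorageClass carries the default-class annotation
NUM_DEFAULT_STORAGE_CLASS=$(oc get storageclass \
  -o 'jsonpath={range .items[*]}{@.metadata.name}{" "}{@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}{"\n"}{end}' \
  | awk '$2=="true"{print $1}' | wc -l)
[ "$NUM_DEFAULT_STORAGE_CLASS" -ne 1 ] && exit 1

# Resolve the default StorageClass name and export it for the suite
DEFAULT_SC=$(oc get storageclass \
  -o 'jsonpath={.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}')
export STORAGE_CLASS="$DEFAULT_SC"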
+ SETTINGS_TMP=/alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443
+ mkdir -p /alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443
++ oc get authentication cluster -o 'jsonpath={.spec.serviceAccountIssuer}'
+ IS_OIDC=
+ '[' '!' -z ']'
+ [[ aws == \a\w\s ]]
+ export PROVIDER=aws
+ PROVIDER=aws
+ export CREDS_SECRET_REF=cloud-credentials
+ CREDS_SECRET_REF=cloud-credentials
++ oc get infrastructures cluster -o 'jsonpath={.status.platformStatus.aws.region}' --allow-missing-template-keys=false
+ export REGION=us-east-1
+ REGION=us-east-1
+ settings_script=aws_settings.sh
+ '[' aws == aws-sts ']'
+ BUCKET=ci-op-6fip6j15-interopoadp
+ TMP_DIR=/alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443
+ source /alabama/cspi/test_settings/scripts/aws_settings.sh
++ cat
++ [[ aws == *aws* ]]
++ cat
++ echo -e '\n }\n}'
+++ cat /alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443/settings.json
++ x='{
  "metadata": {
    "namespace": "openshift-adp"
  },
  "spec": {
    "configuration": {
      "velero": {
        "defaultPlugins": [ "openshift", "aws" ]
      }
    },
    "backupLocations": [
      {
        "velero": {
          "provider": "aws",
          "default": true,
          "config": { "region": "us-east-1" },
          "credential": { "name": "cloud-credentials", "key": "cloud" },
          "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp" }
        }
      }
    ],
    "snapshotLocations": [
      {
        "velero": {
          "provider": "aws",
          "config": { "profile": "default", "region": "us-east-1" }
        }
      }
    ]
  }
}'
++ echo '{ "metadata": { "namespace": "openshift-adp" }, "spec": { "configuration":{ "velero":{ "defaultPlugins": [ "openshift", "aws" ] } }, "backupLocations": [ { "velero": { "provider": "aws", "default": true, "config": { "region": "us-east-1" }, "credential":{ "name": "cloud-credentials", "key": "cloud" }, "objectStorage":{ "bucket": "ci-op-6fip6j15-interopoadp" } } } ] , "snapshotLocations": [ { "velero": { "provider": "aws", "config": { "profile": "default", "region": "us-east-1" } } } ] } }'
++ grep -o '^[^#]*'
+ FILE_SETTINGS_NAME=settings.json
+ printf '\033[34mGenerated settings file under /alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443/settings.json\e[39m\n'
Generated settings file under /alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443/settings.json
+ cat /alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443/settings.json
++ oc get volumesnapshotclass -o name
+ for i in $(oc get volumesnapshotclass -o name)
+ oc annotate volumesnapshotclass.snapshot.storage.k8s.io/csi-aws-vsc snapshot.storage.kubernetes.io/is-default-class-
volumesnapshotclass.snapshot.storage.k8s.io/csi-aws-vsc annotated
+ for i in $(oc get volumesnapshotclass -o name)
+ oc annotate volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-cephfsplugin-snapclass snapshot.storage.kubernetes.io/is-default-class-
volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-cephfsplugin-snapclass annotated
+ for i in $(oc get volumesnapshotclass -o name)
+ oc annotate volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-rbdplugin-snapclass snapshot.storage.kubernetes.io/is-default-class-
volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-rbdplugin-snapclass annotated
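The annotate loop above clears the default-class marker from every VolumeSnapshotClass so the suite can set its own default later; the trailing '-' on an oc annotate argument removes the annotation. In condensed form:

for i in $(oc get volumesnapshotclass -o name); do
  # trailing '-' deletes the annotation, leaving no default VolumeSnapshotClass
  oc annotate "$i" snapshot.storage.kubernetes.io/is-default-class-
done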
++ ./e2e/must-gather/get-latest-build.sh
+ oc get configmaps -n default must-gather-image
+ UPSTREAM_VERSION=99.0.0
++ oc get OperatorCondition -n openshift-adp -o 'jsonpath={.items[*].metadata.name}'
++ awk -F v '{print $2}'
+ OADP_VERSION=1.5.0
+ '[' -z 1.5.0 ']'
+ '[' 1.5.0 == 99.0.0 ']'
++ oc get sub redhat-oadp-operator -n openshift-adp -o 'jsonpath={.spec.source}'
+ OADP_REPO=redhat-operators
+ '[' -z redhat-operators ']'
+ '[' redhat-operators == redhat-operators ']'
+ REGISTRY_PATH=registry.redhat.io/oadp/oadp-mustgather-rhel9:
+ TAG=1.5.0
+ export MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
+ MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
+ echo registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
+ export MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
+ MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
+ '[' -z registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 ']'
+ export NUM_OF_OADP_INSTANCES=1
+ NUM_OF_OADP_INSTANCES=1
++ echo --skip=tc-id:OADP-555
++ tr ' ' '\n'
++ grep '^--'
++ tr '\n' ' '
+ GINKO_PARAM='--skip=tc-id:OADP-555 '
++ echo --skip=tc-id:OADP-555
++ tr ' ' '\n'
++ grep '^-'
++ grep -v '^--'
++ tr '\n' ' '
+ TEST_PARAM=
+ ginkgo run --nodes=1 -mod=mod --show-node-events --flake-attempts 3 --junit-report=/logs/artifacts/junit_oadp_cnv_results.xml '--label-filter=!// || (// && !exclude_aws && (!/target/ || target_aws) ) ' --skip=tc-id:OADP-555 -p /alabama/cspi/e2e/kubevirt-plugin/ -- -credentials_file=/tmp/test-settings/credentials -vsl_credentials_file=/tmp/test-settings/aws_vsl_creds -oadp_namespace=openshift-adp -settings=/alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443/settings.json -must_gather_image=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 -timeout_multiplier=2 -skip_dpa_creation=false
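A breakdown of the Ginkgo invocation above (the flag meanings are standard Ginkgo v2 behavior):

# --nodes=1               run specs in a single worker process
# --flake-attempts 3      re-run a failing spec up to 3 times before reporting it failed
# --junit-report=...      write JUnit XML where the CI job collects artifacts
# --label-filter=...      select specs by label: keep unlabeled specs; for labeled
#                         ones drop exclude_aws, and keep /target/ specs only when
#                         they also carry target_aws
# --skip=tc-id:OADP-555   skip any spec whose description matches this regex
# Everything after the bare '--' is passed to the compiled test suite itself:
# credentials files, the generated settings.json, the OADP namespace, the
# must-gather image, the timeout multiplier, and skip_dpa_creation.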
2025/08/11 07:35:19 maxprocs: Leaving GOMAXPROCS=16: CPU quota undefined
2025/08/11 07:36:00 Setting up clients
2025/08/11 07:36:00 Getting default StorageClass...
2025/08/11 07:36:00 Checking default storage class count
Run the command: oc get sc
2025/08/11 07:36:00 Got default StorageClass odf-operator-ceph-rbd
2025/08/11 07:36:00 oc get sc
NAME                                   PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2-csi                                ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   50m
gp3-csi                                ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   50m
odf-operator-ceph-rbd (default)        openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   6m
odf-operator-ceph-rbd-virtualization   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   6m
odf-operator-cephfs                    openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   6m
openshift-storage.noobaa.io            openshift-storage.noobaa.io/obc         Delete          Immediate              false                  2m8s
2025/08/11 07:36:00 Using velero prefix: velero-e2e-kubevirt-d3bdda01-7685-11f0-8ee3-0a580a83369f
Running Suite: OADP E2E Virtualization Workloads Suite - /alabama/cspi/e2e/kubevirt-plugin
==========================================================================================
Random Seed: 1754897719

Will run 4 of 5 specs
------------------------------
[BeforeSuite]
/alabama/cspi/e2e/kubevirt-plugin/kubevirt_suite_test.go:62
> Enter [BeforeSuite] TOP-LEVEL @ 08/11/25 07:36:00.366
< Exit [BeforeSuite] TOP-LEVEL @ 08/11/25 07:36:00.383 (17ms)
[BeforeSuite] PASSED [0.017 seconds]
------------------------------
CSI: Backup/Restore Openshift Virtualization Workloads
  [tc-id:OADP-185] [kubevirt] Backing up started VM should succeed
  /alabama/cspi/e2e/kubevirt-plugin/backup_restore_csi.go:35
> Enter [BeforeEach] CSI: Backup/Restore Openshift Virtualization Workloads @ 08/11/25 07:36:00.384
< Exit [BeforeEach] CSI: Backup/Restore Openshift Virtualization Workloads @ 08/11/25 07:36:00.395 (11ms)
> Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 07:36:00.395
< Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 07:36:00.395 (0s)
> Enter [It] [tc-id:OADP-185] [kubevirt] Backing up started VM should succeed @ 08/11/25 07:36:00.395
2025/08/11 07:36:00 Delete all downloadrequest
No download requests are found
STEP: Create DPA CR @ 08/11/25 07:36:00.4
2025/08/11 07:36:00 csi
2025/08/11 07:36:00 {
  "metadata": {
    "name": "ts-dpa",
    "namespace": "openshift-adp",
    "uid": "1681e654-a95e-4ab5-b6c9-7fbb90f63bf8",
    "resourceVersion": "65086",
    "generation": 1,
    "creationTimestamp": "2025-08-11T07:36:00Z",
    "managedFields": [
      {
        "manager": "kubevirt-plugin.test",
        "operation": "Update",
        "apiVersion": "oadp.openshift.io/v1alpha1",
        "time": "2025-08-11T07:36:00Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:spec": {
            ".": {},
            "f:backupLocations": {},
            "f:configuration": {
              ".": {},
              "f:velero": {
                ".": {},
                "f:defaultPlugins": {},
                "f:disableFsBackup": {}
              }
            },
            "f:logFormat": {},
            "f:podDnsConfig": {},
            "f:snapshotLocations": {}
          }
        }
      }
    ]
  },
  "spec": {
    "backupLocations": [
      {
        "velero": {
          "provider": "aws",
          "config": {
            "region": "us-east-1"
          },
          "credential": {
            "name": "cloud-credentials",
            "key": "cloud"
          },
          "objectStorage": {
            "bucket": "ci-op-6fip6j15-interopoadp",
            "prefix": "kubevirt"
          },
          "default": true
        }
      }
    ],
    "snapshotLocations": [],
    "podDnsConfig": {},
    "configuration": {
      "velero": {
        "defaultPlugins": [
          "openshift",
          "aws",
          "kubevirt",
          "csi"
        ],
        "disableFsBackup": false
      }
    },
    "features": null,
    "logFormat": "text"
  },
  "status": {}
}
Delete all the backups that remained in the phase InProgress
Deleting backup CRs in progress
Deletion of backup CRs in progress completed
Delete all the restores that remained in the phase InProgress
Deleting restore CRs in progress
Deletion of restore CRs in progress completed
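The CR dumped above is the suite's DataProtectionApplication; applied by hand it would look roughly like the manifest below (a sketch reconstructed from the JSON, with server-populated fields dropped):

cat <<'EOF' | oc apply -f -
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: ts-dpa
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins: [openshift, aws, kubevirt, csi]
      disableFsBackup: false
  backupLocations:
    - velero:
        provider: aws
        default: true
        config:
          region: us-east-1
        credential:
          name: cloud-credentials
          key: cloud
        objectStorage:
          bucket: ci-op-6fip6j15-interopoadp
          prefix: kubevirt
EOF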
STEP: Verify DPA CR setup @ 08/11/25 07:36:00.426
2025/08/11 07:36:00 Waiting for velero pod to be running
2025/08/11 07:36:05 pod: velero-86964b4444-cqrlr is not yet running with status: {Pending [{PodReadyToStartContainers False 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:36:00 +0000 UTC } {Initialized False 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:36:00 +0000 UTC ContainersNotInitialized containers with incomplete status: [openshift-velero-plugin velero-plugin-for-aws kubevirt-velero-plugin]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:36:00 +0000 UTC ContainersNotReady containers with unready status: [velero]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:36:00 +0000 UTC ContainersNotReady containers with unready status: [velero]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:36:00 +0000 UTC }] 10.0.114.0 [{10.0.114.0}] [] 2025-08-11 07:36:00 +0000 UTC [{openshift-velero-plugin {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd 0xc000a74e8a map[] nil [] nil []} {velero-plugin-for-aws {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 0xc000a74e8b map[] nil [] nil []} {kubevirt-velero-plugin {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel9@sha256:2b63055e8e681f8d20194d9aa00f667ac4e38cb1247442287b8cc273f05b587d 0xc000a74e8c map[] nil [] nil []}] [{velero {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-rhel9@sha256:e22092c4769ece2dd36b99cb84fcbe6da99d6c0e175fca38f00f436de0ba7a62 0xc000a74ed6 map[] nil [] nil []}] Burstable [] []}
2025/08/11 07:36:10 pod: velero-86964b4444-cqrlr is not yet running with status: {Pending [{PodReadyToStartContainers True 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:36:05 +0000 UTC } {Initialized False 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:36:00 +0000 UTC ContainersNotInitialized containers with incomplete status: [kubevirt-velero-plugin]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:36:00 +0000 UTC ContainersNotReady containers with unready status: [velero]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:36:00 +0000 UTC ContainersNotReady containers with unready status: [velero]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:36:00 +0000 UTC }] 10.0.114.0 [{10.0.114.0}] 10.131.0.85 [{10.131.0.85}] 2025-08-11 07:36:00 +0000 UTC [{openshift-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-08-11 07:36:04 +0000 UTC,FinishedAt:2025-08-11 07:36:04 +0000 UTC,ContainerID:cri-o://fb11117358ccfe00f6194587256cce9e6e02ad0aa5883336ebd20b8dd096df24,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd cri-o://fb11117358ccfe00f6194587256cce9e6e02ad0aa5883336ebd20b8dd096df24 0xc000a756e9 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-6k8m2 /var/run/secrets/kubernetes.io/serviceaccount true 0xc000dc8190}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []} {velero-plugin-for-aws {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-08-11 07:36:08 +0000 UTC,FinishedAt:2025-08-11 07:36:08 +0000 UTC,ContainerID:cri-o://d649172c19e35053667b23a49d5abfd2ab45f2d4da50522a3025005afed25c36,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 cri-o://d649172c19e35053667b23a49d5abfd2ab45f2d4da50522a3025005afed25c36 0xc000a75748 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-6k8m2 /var/run/secrets/kubernetes.io/serviceaccount true 0xc000dc8200}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []} {kubevirt-velero-plugin {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel9@sha256:2b63055e8e681f8d20194d9aa00f667ac4e38cb1247442287b8cc273f05b587d 0xc000a757da map[] nil [{plugins /target false } {kube-api-access-6k8m2 /var/run/secrets/kubernetes.io/serviceaccount true 0xc000dc8210}] nil []}] [{velero {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-rhel9@sha256:e22092c4769ece2dd36b99cb84fcbe6da99d6c0e175fca38f00f436de0ba7a62 0xc000a7581e map[] nil [{plugins /plugins false } {scratch /scratch false } {certs /etc/ssl/certs false } {bound-sa-token /var/run/secrets/openshift/serviceaccount true 0xc000dc8220} {kube-api-access-6k8m2 /var/run/secrets/kubernetes.io/serviceaccount true 0xc000dc8230}] nil []}] Burstable [] []}
2025/08/11 07:36:15 pod: velero-86964b4444-cqrlr is not yet running with status: {Pending [{PodReadyToStartContainers True 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:36:05 +0000 UTC } {Initialized True 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:36:00 +0000 UTC ContainersNotReady containers with unready status: [velero]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:36:00 +0000 UTC ContainersNotReady containers with unready status: [velero]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:36:00 +0000 UTC }] 10.0.114.0 [{10.0.114.0}] 10.131.0.85 [{10.131.0.85}] 2025-08-11 07:36:00 +0000 UTC [{openshift-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-08-11 07:36:04 +0000 UTC,FinishedAt:2025-08-11 07:36:04 +0000 UTC,ContainerID:cri-o://fb11117358ccfe00f6194587256cce9e6e02ad0aa5883336ebd20b8dd096df24,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd cri-o://fb11117358ccfe00f6194587256cce9e6e02ad0aa5883336ebd20b8dd096df24 0xc000f14059 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-6k8m2 /var/run/secrets/kubernetes.io/serviceaccount true 0xc000f16080}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []} {velero-plugin-for-aws {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-08-11 07:36:08 +0000 UTC,FinishedAt:2025-08-11 07:36:08 +0000 UTC,ContainerID:cri-o://d649172c19e35053667b23a49d5abfd2ab45f2d4da50522a3025005afed25c36,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 cri-o://d649172c19e35053667b23a49d5abfd2ab45f2d4da50522a3025005afed25c36 0xc000f140b8 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-6k8m2 /var/run/secrets/kubernetes.io/serviceaccount true 0xc000f160f0}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []} {kubevirt-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-08-11 07:36:11 +0000 UTC,FinishedAt:2025-08-11 07:36:11 +0000 UTC,ContainerID:cri-o://8f0b89bf87f93207fc004d886e13e09e20d3b33bacab906ec5f5a3e7388e8645,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel9@sha256:2b63055e8e681f8d20194d9aa00f667ac4e38cb1247442287b8cc273f05b587d registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel9@sha256:2b63055e8e681f8d20194d9aa00f667ac4e38cb1247442287b8cc273f05b587d cri-o://8f0b89bf87f93207fc004d886e13e09e20d3b33bacab906ec5f5a3e7388e8645 0xc000f14169 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-6k8m2 /var/run/secrets/kubernetes.io/serviceaccount true 0xc000f16160}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []}] [{velero {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-rhel9@sha256:e22092c4769ece2dd36b99cb84fcbe6da99d6c0e175fca38f00f436de0ba7a62 0xc000f141ce map[] nil [{plugins /plugins false } {scratch /scratch false } {certs /etc/ssl/certs false } {bound-sa-token /var/run/secrets/openshift/serviceaccount true 0xc000f16170} {kube-api-access-6k8m2 /var/run/secrets/kubernetes.io/serviceaccount true 0xc000f16180}] nil []}] Burstable [] []}
2025/08/11 07:36:20 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete'
STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 07:36:20.469
Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false
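The readiness gate being polled above can also be checked by hand; assuming the standard OADP 'Reconciled' condition type and the velero deployment name seen in the pod name, something like:

# wait for the operator to reconcile the DPA and for velero to come up
oc wait dpa/ts-dpa -n openshift-adp --for=condition=Reconciled --timeout=5m
oc wait deployment/velero -n openshift-adp --for=condition=Available --timeout=5m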
2025/08/11 07:36:20 The 'openshift-storage' namespace exists
2025/08/11 07:36:20 Checking default storage class count
2025/08/11 07:36:20 Using the CSI driver: openshift-storage.rbd.csi.ceph.com
2025/08/11 07:36:20 Snapclass 'example-snapclass' doesn't exist, creating
2025/08/11 07:36:20 Setting new default StorageClass 'odf-operator-ceph-rbd'
2025/08/11 07:36:20 Checking default storage class count
Skipping creation of StorageClass
The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd
STEP: Installing application for case ocp-kubevirt @ 08/11/25 07:36:20.784
2025/08/11 07:36:20 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [include_vars] ************************************************************
ok: [localhost]

TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Remove all the contents from the file] ***********************************
changed: [localhost]

TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]

TASK [Get admin token] *********************************************************
changed: [localhost]

TASK [Get user token] **********************************************************
changed: [localhost]

TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]

TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]

TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}
/usr/local/lib/python3.12/site-packages/urllib3/connectionpool.py:1013: InsecureRequestWarning: Unverified HTTPS request is being made to host 'api.ci-op-6fip6j15-6e951.cspilp.interop.ccitredhat.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(

TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]

TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Execute Task] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Create namespace] ***
changed: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Deploy vm test-vm] ***
changed: [localhost]
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (59 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (58 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (57 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (56 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (55 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (54 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (53 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (52 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (51 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (50 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (49 retries left).

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] ***
ok: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=18   changed=6    unreachable=0    failed=0    skipped=7    rescued=0    ignored=0

2025/08/11 07:37:33 2025-08-11 07:36:22,272 p=19975 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] ***********************************
2025-08-11 07:36:22,272 p=19975 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 07:36:22,537 p=19975 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
2025-08-11 07:36:22,538 p=19975 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 07:36:22,780 p=19975 u=1002120000 n=ansible INFO| TASK [Get admin token] *********************************************************
2025-08-11 07:36:22,780 p=19975 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 07:36:23,024 p=19975 u=1002120000 n=ansible INFO| TASK [Get user token] **********************************************************
2025-08-11 07:36:23,024 p=19975 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 07:36:23,039 p=19975 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] *************************************
2025-08-11 07:36:23,040 p=19975 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 07:36:23,057 p=19975 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************
2025-08-11 07:36:23,058 p=19975 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 07:36:23,069 p=19975 u=1002120000 n=ansible INFO| TASK [Print token] *************************************************************
2025-08-11 07:36:23,069 p=19975 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" }
2025-08-11 07:36:23,415 p=19975 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] ***************************
2025-08-11 07:36:23,415 p=19975 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 07:36:23,441 p=19975 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] *************************************
2025-08-11 07:36:23,442 p=19975 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 07:36:23,458 p=19975 u=1002120000 n=ansible INFO| TASK [set_fact] ****************************************************************
2025-08-11 07:36:23,458 p=19975 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 07:36:23,460 p=19975 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************
2025-08-11 07:36:24,005 p=19975 u=1002120000 n=ansible INFO| TASK [Gathering Facts] *********************************************************
2025-08-11 07:36:24,005 p=19975 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 07:36:24,811 p=19975 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Create namespace] ***
2025-08-11 07:36:24,811 p=19975 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
2025-08-11 07:36:24,811 p=19975 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 07:36:25,473 p=19975 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Deploy vm test-vm] ***
2025-08-11 07:36:25,473 p=19975 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 07:36:26,215 p=19975 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left).
2025-08-11 07:36:31,833 p=19975 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (59 retries left).
2025-08-11 07:36:37,442 p=19975 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (58 retries left).
2025-08-11 07:36:43,042 p=19975 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (57 retries left).
2025-08-11 07:36:48,629 p=19975 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (56 retries left).
2025-08-11 07:36:54,229 p=19975 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (55 retries left).
2025-08-11 07:36:59,801 p=19975 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (54 retries left).
2025-08-11 07:37:05,437 p=19975 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (53 retries left).
2025-08-11 07:37:11,026 p=19975 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (52 retries left).
2025-08-11 07:37:16,619 p=19975 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (51 retries left).
2025-08-11 07:37:22,232 p=19975 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (50 retries left).
2025-08-11 07:37:27,858 p=19975 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (49 retries left).
2025-08-11 07:37:33,464 p=19975 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] ***
2025-08-11 07:37:33,464 p=19975 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 07:37:33,555 p=19975 u=1002120000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-08-11 07:37:33,556 p=19975 u=1002120000 n=ansible INFO| localhost : ok=18 changed=6 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
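The two retry loops in the playbook poll the VM until it is Running/Ready and, in the validate phase below, until its guest agent reports in. With stock KubeVirt conditions the same waits can be expressed directly (a sketch; the suite itself uses the Ansible retries above, not oc wait, and the VM name and namespace are taken from the log):

# VirtualMachine exposes a Ready condition once the VMI is up
oc wait vm/test-vm -n test-oadp-185 --for=condition=Ready --timeout=10m
# the guest-agent check polls the VMI's AgentConnected condition
oc wait vmi/test-vm -n test-oadp-185 --for=condition=AgentConnected --timeout=10m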
STEP: Verify Application deployment @ 08/11/25 07:37:33.6
2025/08/11 07:37:33 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [include_vars] ************************************************************
ok: [localhost]

TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Remove all the contents from the file] ***********************************
changed: [localhost]

TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]

TASK [Get admin token] *********************************************************
changed: [localhost]

TASK [Get user token] **********************************************************
changed: [localhost]

TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]

TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]

TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}

TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]

TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Execute Task] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] ***
ok: [localhost]
FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left).
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=4  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025/08/11 07:37:49 2025-08-11 07:37:35,121 p=20373 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:37:35,121 p=20373 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:37:35,368 p=20373 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:37:35,368 p=20373 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:37:35,616 p=20373 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:37:35,616 p=20373 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:37:35,896 p=20373 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:37:35,896 p=20373 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:37:35,910 p=20373 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:37:35,910 p=20373 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:37:35,928 p=20373 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:37:35,928 p=20373 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:37:35,941 p=20373 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:37:35,941 p=20373 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:37:36,235 p=20373 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:37:36,235 p=20373 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:37:36,263 p=20373 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:37:36,263 p=20373 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:37:36,280 p=20373 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:37:36,280 p=20373 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:37:36,282 p=20373 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:37:36,839 p=20373 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:37:36,839 p=20373 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:37:37,758 p=20373 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** 2025-08-11 07:37:37,758 p=20373 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 07:37:37,758 p=20373 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:37:38,424 p=20373 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left). 2025-08-11 07:37:44,027 p=20373 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left). 2025-08-11 07:37:49,654 p=20373 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] *** 2025-08-11 07:37:49,654 p=20373 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:37:49,659 p=20373 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:37:49,659 p=20373 u=1002120000 n=ansible INFO| localhost : ok=17 changed=4 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2025/08/11 07:37:49 {{ } { } [{{ } {test-vm-dv test-oadp-185 aa559eab-0af1-4fcf-8098-b96ec389a0ad 67270 0 2025-08-11 07:36:25 +0000 UTC map[app:containerized-data-importer app.kubernetes.io/component:storage app.kubernetes.io/managed-by:cdi-controller app.kubernetes.io/part-of:hyperconverged-cluster app.kubernetes.io/version:4.19.1 kubevirt.io/created-by:3b5ec601-2fde-4e3c-8388-f8d52da81b9f] map[cdi.kubevirt.io/allowClaimAdoption:true cdi.kubevirt.io/createdForDataVolume:e2487ad6-aa5a-46b4-9e37-bcb3cac1b8ea cdi.kubevirt.io/storage.condition.running:false cdi.kubevirt.io/storage.condition.running.message:Import Complete cdi.kubevirt.io/storage.condition.running.reason:Completed cdi.kubevirt.io/storage.contentType:kubevirt cdi.kubevirt.io/storage.deleteAfterCompletion:false cdi.kubevirt.io/storage.pod.phase:Succeeded cdi.kubevirt.io/storage.pod.restarts:0 cdi.kubevirt.io/storage.populator.progress:100.0% cdi.kubevirt.io/storage.preallocation.requested:false cdi.kubevirt.io/storage.usePopulator:true pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:test-vm-dv-1754897867 reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [{cdi.kubevirt.io/v1beta1 DataVolume test-vm-dv e2487ad6-aa5a-46b4-9e37-bcb3cac1b8ea 0xc000f147aa 0xc000f147ab}] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2025-08-11 07:37:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-08-11 07:37:15 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status} {virt-cdi-controller Update v1 2025-08-11 07:37:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:cdi.kubevirt.io/allowClaimAdoption":{},"f:cdi.kubevirt.io/createdForDataVolume":{},"f:cdi.kubevirt.io/storage.condition.running":{},"f:cdi.kubevirt.io/storage.condition.running.message":{},"f:cdi.kubevirt.io/storage.condition.running.reason":{},"f:cdi.kubevirt.io/storage.contentType":{},"f:cdi.kubevirt.io/storage.deleteAfterCompletion":{},"f:cdi.kubevirt.io/storage.pod.phase":{},"f:cdi.kubevirt.io/storage.pod.restarts":{},"f:cdi.kubevirt.io/storage.populator.progress":{},"f:cdi.kubevirt.io/storage.preallocation.requested":{},"f:cdi.kubevirt.io/storage.usePopulator":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:kubevirt.io/created-by":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e2487ad6-aa5a-46b4-9e37-bcb3cac1b8ea\"}":{}}},"f:spec":{"f:accessModes":{},"f:dataSourceRef":{".":{},"f:apiGroup":{},"f:kind":{},"f:name":{}},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-08-11 07:37:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} }]} {[ReadWriteOnce] nil {map[] map[storage:{{5368709120 0} {} 5Gi BinarySI}]} pvc-166cd8d4-af5b-4524-bbc4-a3a7ced212e4 0xc000f16990 0xc000f169a0 &TypedLocalObjectReference{APIGroup:*cdi.kubevirt.io,Kind:VolumeImportSource,Name:volume-import-source-e2487ad6-aa5a-46b4-9e37-bcb3cac1b8ea,} &TypedObjectReference{APIGroup:*cdi.kubevirt.io,Kind:VolumeImportSource,Name:volume-import-source-e2487ad6-aa5a-46b4-9e37-bcb3cac1b8ea,Namespace:nil,} } {Bound [ReadWriteOnce] map[storage:{{5368709120 0} {} 5Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f @ 08/11/25 07:37:49.708 2025/08/11 07:37:49 Wait until backup ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f is completed backup phase: Completed 2025/08/11 07:38:09 Verify the Backup has CSIVolumeSnapshotsAttempted and CSIVolumeSnapshotsCompleted field on status 2025/08/11 07:38:09 Run velero describe on the backup 2025/08/11 07:38:09 [./velero describe backup ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f -n openshift-adp --details --insecure-skip-tls-verify] 2025/08/11 07:38:12 Exec stderr: "" 2025/08/11 07:38:12 Name: ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f Namespace: openshift-adp Labels: velero.io/storage-location=ts-dpa-1 Annotations: velero.io/resource-timeout=10m0s velero.io/source-cluster-k8s-gitversion=v1.33.2 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=33 Phase: Completed Namespaces: Included: test-oadp-185 Excluded: Resources: Included: * Excluded: Cluster-scoped: auto Label selector: Or label selector: Storage Location: ts-dpa-1 Velero-Native Snapshot PVs: auto Snapshot Move Data: false Data Mover: velero TTL: 720h0m0s CSISnapshotTimeout: 10m0s ItemOperationTimeout: 4h0m0s Hooks: Backup Format Version: 1.1.0 Started: 2025-08-11 07:37:49 +0000 UTC Completed: 2025-08-11 07:37:58 +0000 UTC Expiration: 2025-09-10 07:37:49 +0000 UTC Total items to be backed up: 86 Items backed up: 86 Backup Item Operations: Operation for volumesnapshots.snapshot.storage.k8s.io test-oadp-185/velero-test-vm-dv-swcjw: Backup Item Action Plugin: velero.io/csi-volumesnapshot-backupper Operation ID: test-oadp-185/velero-test-vm-dv-swcjw/2025-08-11T07:37:56Z Items to Update: 
volumesnapshots.snapshot.storage.k8s.io test-oadp-185/velero-test-vm-dv-swcjw volumesnapshotcontents.snapshot.storage.k8s.io /snapcontent-0251d9bc-7f3b-4d40-a77f-58f4d73d94d2 Phase: Completed Created: 2025-08-11 07:37:56 +0000 UTC Started: 2025-08-11 07:37:56 +0000 UTC Updated: 2025-08-11 07:37:57 +0000 UTC Resource List: apiextensions.k8s.io/v1/CustomResourceDefinition: - datavolumes.cdi.kubevirt.io - reclaimspacecronjobs.csiaddons.openshift.io - virtualmachineinstances.kubevirt.io - virtualmachines.kubevirt.io apps/v1/ControllerRevision: - test-oadp-185/revision-start-vm-3b5ec601-2fde-4e3c-8388-f8d52da81b9f-1 authorization.openshift.io/v1/RoleBinding: - test-oadp-185/system:deployers - test-oadp-185/system:image-builders - test-oadp-185/system:image-pullers cdi.kubevirt.io/v1beta1/DataVolume: - test-oadp-185/test-vm-dv csiaddons.openshift.io/v1alpha1/ReclaimSpaceCronJob: - test-oadp-185/test-vm-dv-1754897867 kubevirt.io/v1/VirtualMachine: - test-oadp-185/test-vm kubevirt.io/v1/VirtualMachineInstance: - test-oadp-185/test-vm policy/v1/PodDisruptionBudget: - test-oadp-185/kubevirt-disruption-budget-892s2 rbac.authorization.k8s.io/v1/RoleBinding: - test-oadp-185/system:deployers - test-oadp-185/system:image-builders - test-oadp-185/system:image-pullers snapshot.storage.k8s.io/v1/VolumeSnapshot: - test-oadp-185/velero-test-vm-dv-swcjw snapshot.storage.k8s.io/v1/VolumeSnapshotClass: - example-snapclass snapshot.storage.k8s.io/v1/VolumeSnapshotContent: - snapcontent-0251d9bc-7f3b-4d40-a77f-58f4d73d94d2 v1/ConfigMap: - test-oadp-185/kube-root-ca.crt - test-oadp-185/openshift-service-ca.crt v1/Event: - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa63cf5c2f4f8 - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa63f27ceb870 - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa63f37065c3d - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa63f58a4a8ae - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa64120ae7763 - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa64120aecae6 - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa64140c58e51 - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa64142637651 - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa641466c9324 - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa64146d3ca5e - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa641750aa538 - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa64179895ec8 - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa64179ea6000 - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa64179f5d21a - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa6444cecefbf - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa64452134d12 - test-oadp-185/importer-prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa644527b70f3 - test-oadp-185/kubevirt-disruption-budget-892s2.185aa6488ef9281f - test-oadp-185/prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa63cf49f525c - test-oadp-185/prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa63f27ced68c - test-oadp-185/prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa63f27cf6d5b - test-oadp-185/prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa63f355d9ca0 - test-oadp-185/prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa6488aa2afb1 - 
test-oadp-185/prime-aa559eab-0af1-4fcf-8098-b96ec389a0ad.185aa649a205d6cd - test-oadp-185/test-vm-dv.185aa63cf3eaf746 - test-oadp-185/test-vm-dv.185aa63cf428b08e - test-oadp-185/test-vm-dv.185aa63cf498f182 - test-oadp-185/test-vm-dv.185aa63f27a0e3c7 - test-oadp-185/test-vm-dv.185aa63f27a74b58 - test-oadp-185/test-vm-dv.185aa63f27a7680e - test-oadp-185/test-vm-dv.185aa63f3b0cf69e - test-oadp-185/test-vm-dv.185aa6448518f9bc - test-oadp-185/test-vm-dv.185aa64807e8b58a - test-oadp-185/test-vm-dv.185aa6488bc2cdfe - test-oadp-185/test-vm-dv.185aa6488c049112 - test-oadp-185/test-vm-dv.185aa6488e1538a2 - test-oadp-185/test-vm.185aa63cf1db3b4b - test-oadp-185/test-vm.185aa6488e54fc61 - test-oadp-185/test-vm.185aa6488f31371f - test-oadp-185/test-vm.185aa648917bd66a - test-oadp-185/test-vm.185aa64bc5f7453c - test-oadp-185/test-vm.185aa64bc64ac791 - test-oadp-185/test-vm.185aa64bc8ba8374 - test-oadp-185/virt-launcher-test-vm-nld2j.185aa64891f27e36 - test-oadp-185/virt-launcher-test-vm-nld2j.185aa64895298cb6 - test-oadp-185/virt-launcher-test-vm-nld2j.185aa64adf892624 - test-oadp-185/virt-launcher-test-vm-nld2j.185aa64afb03bfad - test-oadp-185/virt-launcher-test-vm-nld2j.185aa64afb045165 - test-oadp-185/virt-launcher-test-vm-nld2j.185aa64b18e241fd - test-oadp-185/virt-launcher-test-vm-nld2j.185aa64b1a5517dd - test-oadp-185/virt-launcher-test-vm-nld2j.185aa64b23cbcbc3 - test-oadp-185/virt-launcher-test-vm-nld2j.185aa64b24412400 - test-oadp-185/virt-launcher-test-vm-nld2j.185aa64b244f5eb7 - test-oadp-185/virt-launcher-test-vm-nld2j.185aa64b4f84c4c3 - test-oadp-185/virt-launcher-test-vm-nld2j.185aa64b5005b3e4 v1/Namespace: - test-oadp-185 v1/PersistentVolume: - pvc-166cd8d4-af5b-4524-bbc4-a3a7ced212e4 v1/PersistentVolumeClaim: - test-oadp-185/test-vm-dv v1/Pod: - test-oadp-185/virt-launcher-test-vm-nld2j v1/Secret: - test-oadp-185/builder-dockercfg-rxp79 - test-oadp-185/default-dockercfg-k28ml - test-oadp-185/deployer-dockercfg-ld45j v1/ServiceAccount: - test-oadp-185/builder - test-oadp-185/default - test-oadp-185/deployer Backup Volumes: Velero-Native Snapshots: CSI Snapshots: test-oadp-185/test-vm-dv: Snapshot: Operation ID: test-oadp-185/velero-test-vm-dv-swcjw/2025-08-11T07:37:56Z Snapshot Content Name: snapcontent-0251d9bc-7f3b-4d40-a77f-58f4d73d94d2 Storage Snapshot ID: 0001-0011-openshift-storage-0000000000000003-cca08411-7952-41bc-b775-89f07758d963 Snapshot Size (bytes): 5368709120 CSI Driver: openshift-storage.rbd.csi.ceph.com Result: succeeded Pod Volume Backups: HooksAttempted: 2 HooksFailed: 0 STEP: Verify backup ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f has completed successfully @ 08/11/25 07:38:12.318 2025/08/11 07:38:12 Backup for case ocp-kubevirt succeeded STEP: Delete the appplication resources ocp-kubevirt @ 08/11/25 07:38:12.364 STEP: Cleanup Application for case ocp-kubevirt @ 08/11/25 07:38:12.364 2025/08/11 07:38:12 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
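For reference, the backup the suite just verified is an ordinary velero.io/v1 Backup CR; reconstructed in YAML from the describe output above, with every value taken from the log ("Velero-Native Snapshot PVs: auto" corresponds to leaving spec.snapshotVolumes unset):

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f
  namespace: openshift-adp
spec:
  includedNamespaces:
    - test-oadp-185
  storageLocation: ts-dpa-1
  snapshotMoveData: false          # "Snapshot Move Data: false" in the describe output
  ttl: 720h0m0s
  csiSnapshotTimeout: 10m0s
  itemOperationTimeout: 4h0m0s

The CSIVolumeSnapshotsAttempted and CSIVolumeSnapshotsCompleted counters the test asserts on are written into the Backup's status by Velero's CSI plugin, which is also why the describe output lists the VolumeSnapshot and VolumeSnapshotContent under Backup Item Operations.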
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
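The only substantive task in this cleanup play is the namespace deletion that follows; it blocks until Kubernetes finishes terminating the namespace, which the timestamped copy below shows costs about 26 seconds (07:38:15 to 07:38:41). A minimal sketch of such a task, assuming kubernetes.core; the timeout value is an assumption:

- name: Remove namespace test-oadp-185
  kubernetes.core.k8s:
    api_version: v1
    kind: Namespace
    name: test-oadp-185
    state: absent
    wait: true            # block until the namespace is fully terminated
    wait_timeout: 300     # hypothetical; the role's setting is not logged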
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-185] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/08/11 07:38:41 2025-08-11 07:38:13,847 p=20630 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:38:13,847 p=20630 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:38:14,118 p=20630 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:38:14,118 p=20630 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:38:14,365 p=20630 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:38:14,366 p=20630 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:38:14,610 p=20630 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:38:14,610 p=20630 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:38:14,626 p=20630 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:38:14,626 p=20630 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:38:14,643 p=20630 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:38:14,644 p=20630 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:38:14,655 p=20630 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:38:14,655 p=20630 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:38:14,981 p=20630 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:38:14,981 p=20630 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:38:15,017 p=20630 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:38:15,017 p=20630 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:38:15,039 p=20630 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:38:15,039 p=20630 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:38:15,041 p=20630 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:38:15,603 p=20630 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:38:15,603 p=20630 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:38:41,417 p=20630 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-185] *** 2025-08-11 07:38:41,417 p=20630 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 07:38:41,417 p=20630 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:38:41,585 p=20630 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:38:41,585 p=20630 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 2025/08/11 07:38:41 Creating restore ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f for case ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f STEP: Create restore ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f from backup ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f @ 08/11/25 07:38:41.635 2025/08/11 07:38:41 Wait until restore ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f is complete restore phase: Finalizing restore phase: Completed STEP: Verify restore ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369fhas completed successfully @ 08/11/25 07:39:01.668 STEP: Verify Application restore @ 08/11/25 07:39:01.677 STEP: Verify Application deployment for case ocp-kubevirt @ 08/11/25 07:39:01.677 2025/08/11 07:39:01 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
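The restore itself is a velero.io/v1 Restore CR that simply points at the backup; this suite reuses the backup's name for it. Reconstructed from the log lines above (the harness may set more fields than this minimal form):

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f
  namespace: openshift-adp
spec:
  backupName: ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f

As the poll above records, the restore moves through Finalizing before reaching Completed; the validate play that follows then re-runs the same Running & Ready and AgentConnected waits against the restored VM.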
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** ok: [localhost] FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left). FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=4  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025/08/11 07:39:17 2025-08-11 07:39:03,139 p=20848 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:39:03,139 p=20848 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:39:03,387 p=20848 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:39:03,387 p=20848 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:39:03,632 p=20848 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:39:03,633 p=20848 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:39:03,911 p=20848 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:39:03,911 p=20848 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:39:03,926 p=20848 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:39:03,926 p=20848 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:03,943 p=20848 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:39:03,944 p=20848 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:03,956 p=20848 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:39:03,957 p=20848 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:39:04,259 p=20848 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:39:04,259 p=20848 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:04,286 p=20848 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:39:04,286 p=20848 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:04,302 p=20848 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:39:04,302 p=20848 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:04,304 p=20848 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:39:04,843 p=20848 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:39:04,844 p=20848 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:05,774 p=20848 u=1002120000 n=ansible INFO| TASK 
[/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** 2025-08-11 07:39:05,774 p=20848 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 07:39:05,775 p=20848 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:06,444 p=20848 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left). 2025-08-11 07:39:12,054 p=20848 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left). 2025-08-11 07:39:17,652 p=20848 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] *** 2025-08-11 07:39:17,652 p=20848 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:17,656 p=20848 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:39:17,656 p=20848 u=1002120000 n=ansible INFO| localhost : ok=17 changed=4 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 < Exit [It] [tc-id:OADP-185] [kubevirt] Backing up started VM should succeed @ 08/11/25 07:39:17.698 (3m17.303s) > Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 07:39:17.698 2025/08/11 07:39:17 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 07:39:17.698 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:39:17.698 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:39:17.702 (4ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:39:17.702 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:39:17.702 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:39:17.702 2025/08/11 07:39:17 Cleaning app 2025/08/11 07:39:17 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
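Every play in this run repeats the preamble just listed: read the cluster endpoint and tokens out of the kubeconfig, then pick the admin or user token according to a non_admin flag. (The recurring "[WARNING]: Found variable using reserved name: namespace" is Ansible objecting to the role's use of a variable literally named namespace; it is cosmetic.) The token steps might look like the sketch below — the command and the variable names are assumptions, not the repo's actual tasks — and note that echoing the resulting sha256~ bearer token into a shared CI log, as this play does, is a candidate for redaction:

- name: Get admin token
  ansible.builtin.command: oc whoami -t
  environment:
    KUBECONFIG: "{{ admin_kubeconfig }}"   # hypothetical var; the log shows /home/jenkins/.kube/config
  register: admin_token_cmd
  changed_when: true                       # the log marks this task "changed"

- name: Choose token based on non_admin flag
  ansible.builtin.set_fact:
    api_token: "{{ user_token if (non_admin | default(false)) else admin_token_cmd.stdout }}"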
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-185] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/08/11 07:39:47 2025-08-11 07:39:19,170 p=21102 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:39:19,170 p=21102 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:39:19,420 p=21102 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:39:19,421 p=21102 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:39:19,665 p=21102 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:39:19,665 p=21102 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:39:19,918 p=21102 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:39:19,918 p=21102 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:39:19,933 p=21102 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:39:19,933 p=21102 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:19,951 p=21102 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:39:19,951 p=21102 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:19,964 p=21102 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:39:19,964 p=21102 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:39:20,305 p=21102 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:39:20,306 p=21102 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:20,335 p=21102 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:39:20,336 p=21102 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:20,354 p=21102 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:39:20,354 p=21102 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:20,356 p=21102 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:39:20,952 p=21102 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:39:20,953 p=21102 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:46,815 p=21102 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-185] *** 2025-08-11 07:39:46,816 p=21102 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 07:39:46,816 p=21102 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:39:46,972 p=21102 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:39:46,972 p=21102 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:39:47.013 (29.311s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:39:47.013 2025/08/11 07:39:47 Cleaning setup resources for the backup 2025/08/11 07:39:47 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/08/11 07:39:47 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/08/11 07:39:47 Deleting VolumeSnapshotClass 'example-snapclass' < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:39:47.052 (39ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:39:47.052 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:39:47.062 (10ms) • [226.678 seconds] ------------------------------ CSI: Backup/Restore Openshift Virtualization Workloads  [tc-id:OADP-186] [kubevirt] Stopped VM should be restored /alabama/cspi/e2e/kubevirt-plugin/backup_restore_csi.go:52 > Enter [BeforeEach] CSI: Backup/Restore Openshift Virtualization Workloads @ 08/11/25 07:39:47.062 < Exit [BeforeEach] CSI: Backup/Restore Openshift Virtualization Workloads @ 08/11/25 07:39:47.079 (17ms) > Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 07:39:47.079 < Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 07:39:47.079 (0s) > Enter [It] [tc-id:OADP-186] [kubevirt] Stopped VM should be restored @ 08/11/25 07:39:47.079 2025/08/11 07:39:47 Delete all downloadrequest ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f-813b13d4-f2d3-4885-85ba-6c12d3e6b7fb ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f-ee3841a2-6b93-483a-a1f0-88a5b950fb8f ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f-f12b786d-aaf0-4119-8d9d-d27ed6edaff6 STEP: Create DPA CR @ 08/11/25 07:39:47.166 2025/08/11 07:39:47 csi 2025/08/11 07:39:47 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "69b802a9-2c35-4a63-80e2-46ae029c35c5", "resourceVersion": "69422", "generation": 1, "creationTimestamp": "2025-08-11T07:39:47Z", "managedFields": [ { "manager": "kubevirt-plugin.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T07:39:47Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "kubevirt" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: 
Verify DPA CR setup @ 08/11/25 07:39:47.191 2025/08/11 07:39:47 Waiting for velero pod to be running 2025/08/11 07:39:47 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' 2025/08/11 07:39:47 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "69b802a9-2c35-4a63-80e2-46ae029c35c5", "resourceVersion": "69422", "generation": 1, "creationTimestamp": "2025-08-11T07:39:47Z", "managedFields": [ { "manager": "kubevirt-plugin.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T07:39:47Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "kubevirt" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false } }, "features": null, "logFormat": "text" }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 07:39:52.208 Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false 2025/08/11 07:39:52 The 'openshift-storage' namespace exists 2025/08/11 07:39:52 Checking default storage class count 2025/08/11 07:39:52 Using the CSI driver: openshift-storage.rbd.csi.ceph.com 2025/08/11 07:39:52 Snapclass 'example-snapclass' doesn't exist, creating 2025/08/11 07:39:52 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/08/11 07:39:52 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd STEP: Installing application for case ocp-kubevirt @ 08/11/25 07:39:52.527 2025/08/11 07:39:52 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
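The JSON dumped twice above is the DataProtectionApplication the suite created; the same object in YAML, with fields copied verbatim from the dump (the empty podDnsConfig block is omitted):

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: ts-dpa
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        config:
          region: us-east-1
        credential:
          name: cloud-credentials
          key: cloud
        objectStorage:
          bucket: ci-op-6fip6j15-interopoadp
          prefix: kubevirt
  snapshotLocations: []
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
        - kubevirt
        - csi
      disableFsBackup: false
  logFormat: text

The dump's empty status block explains the wait: the OADP operator has not yet reconciled the CR, and the poll exits once status.conditions reports reason Completed with message "Reconcile complete".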
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Deploy vm test-vm] *** changed: [localhost] FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (59 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (58 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (57 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (56 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (55 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (54 retries left). 
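This stopped-VM case (OADP-186) extends the deploy play: after the usual readiness wait succeeds (next lines), the role shuts the VM down and polls until printableStatus reads 'Stopped'. One way to express those two steps, assuming kubernetes.core; whether the role patches spec.running as below or drives virtctl instead is not visible in the log:

- name: Shutdown the VM if required
  kubernetes.core.k8s:
    api_version: kubevirt.io/v1
    kind: VirtualMachine
    name: test-vm
    namespace: test-oadp-186
    definition:
      spec:
        running: false    # existing objects are patched, so this requests a stop

- name: Wait for VM status to become 'Stopped'
  kubernetes.core.k8s_info:
    api_version: kubevirt.io/v1
    kind: VirtualMachine
    name: test-vm
    namespace: test-oadp-186
  register: vm_info
  until: vm_info.resources[0].status.printableStatus | default('') == 'Stopped'
  retries: 60
  delay: 5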
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Shutdown the VM if required] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM status to become 'Stopped'] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=20  changed=7  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025/08/11 07:40:38 2025-08-11 07:39:53,999 p=21339 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:39:53,999 p=21339 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:39:54,244 p=21339 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:39:54,244 p=21339 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:39:54,492 p=21339 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:39:54,492 p=21339 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:39:54,733 p=21339 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:39:54,733 p=21339 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:39:54,748 p=21339 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:39:54,748 p=21339 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:54,766 p=21339 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:39:54,766 p=21339 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:54,777 p=21339 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:39:54,778 p=21339 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:39:55,083 p=21339 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:39:55,084 p=21339 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:55,110 p=21339 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:39:55,110 p=21339 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:55,128 p=21339 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:39:55,129 p=21339 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:55,130 p=21339 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:39:55,683 p=21339 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:39:55,683 p=21339 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:39:56,541 p=21339 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Create namespace] *** 2025-08-11 07:39:56,542 p=21339 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 07:39:56,542 p=21339 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:39:57,209 p=21339 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Deploy vm test-vm] *** 2025-08-11 07:39:57,209 p=21339 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:39:57,955 p=21339 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left). 2025-08-11 07:40:03,524 p=21339 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (59 retries left). 2025-08-11 07:40:09,102 p=21339 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (58 retries left). 2025-08-11 07:40:14,694 p=21339 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (57 retries left). 2025-08-11 07:40:20,294 p=21339 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (56 retries left). 2025-08-11 07:40:25,912 p=21339 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (55 retries left). 2025-08-11 07:40:31,489 p=21339 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (54 retries left). 2025-08-11 07:40:37,097 p=21339 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** 2025-08-11 07:40:37,097 p=21339 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:40:37,756 p=21339 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Shutdown the VM if required] *** 2025-08-11 07:40:37,756 p=21339 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:40:38,372 p=21339 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM status to become 'Stopped'] *** 2025-08-11 07:40:38,372 p=21339 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:40:38,432 p=21339 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:40:38,432 p=21339 u=1002120000 n=ansible INFO| localhost : ok=20 changed=7 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 STEP: Verify Application deployment @ 08/11/25 07:40:38.475 2025/08/11 07:40:38 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
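For a stopped VM the validate play inverts the earlier check: instead of waiting for Running, it asserts the VM is not running (the task that follows). A minimal sketch, assuming kubernetes.core and that the role keys off printableStatus, which the log does not confirm:

- name: Verify VM is not in running state
  kubernetes.core.k8s_info:
    api_version: kubevirt.io/v1
    kind: VirtualMachine
    name: test-vm
    namespace: test-oadp-186
  register: vm_info
  failed_when: vm_info.resources[0].status.printableStatus == 'Running'

A stopped VM also has no VirtualMachineInstance, which is why, unlike the OADP-185 backup earlier, the OADP-186 resource list further below contains no kubevirt.io/v1/VirtualMachineInstance entries.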
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Verify VM is not in running state] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=4  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/08/11 07:40:42 2025-08-11 07:40:39,983 p=21699 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:40:39,983 p=21699 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:40:40,233 p=21699 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:40:40,233 p=21699 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:40:40,471 p=21699 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:40:40,471 p=21699 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:40:40,718 p=21699 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:40:40,718 p=21699 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:40:40,733 p=21699 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:40:40,733 p=21699 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:40:40,755 p=21699 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:40:40,755 p=21699 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:40:40,768 p=21699 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:40:40,769 p=21699 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:40:41,089 p=21699 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:40:41,089 p=21699 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:40:41,118 p=21699 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:40:41,119 p=21699 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:40:41,138 p=21699 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:40:41,138 p=21699 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:40:41,139 p=21699 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:40:41,703 p=21699 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:40:41,704 p=21699 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:40:42,544 p=21699 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Verify VM is not in running state] *** 2025-08-11 07:40:42,544 p=21699 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 07:40:42,544 p=21699 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:40:42,586 p=21699 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:40:42,586 p=21699 u=1002120000 n=ansible INFO| localhost : ok=16 changed=4 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 2025/08/11 07:40:42 {{ } { } [{{ } {test-vm-dv test-oadp-186 d09bfc91-34a6-4eb3-a64a-7c6306bbe960 70554 0 2025-08-11 07:39:57 +0000 UTC map[app:containerized-data-importer app.kubernetes.io/component:storage app.kubernetes.io/managed-by:cdi-controller app.kubernetes.io/part-of:hyperconverged-cluster app.kubernetes.io/version:4.19.1 kubevirt.io/created-by:6b7649cf-04bb-4809-9429-cfb6cc756353] map[cdi.kubevirt.io/allowClaimAdoption:true cdi.kubevirt.io/createdForDataVolume:593644f9-8dea-4f60-b656-7a3f1b59942f cdi.kubevirt.io/storage.condition.running:false cdi.kubevirt.io/storage.condition.running.message:Import Complete cdi.kubevirt.io/storage.condition.running.reason:Completed cdi.kubevirt.io/storage.contentType:kubevirt cdi.kubevirt.io/storage.deleteAfterCompletion:false cdi.kubevirt.io/storage.pod.phase:Succeeded cdi.kubevirt.io/storage.pod.restarts:0 cdi.kubevirt.io/storage.populator.progress:100.0% cdi.kubevirt.io/storage.preallocation.requested:false cdi.kubevirt.io/storage.usePopulator:true pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:test-vm-dv-1754898038 reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [{cdi.kubevirt.io/v1beta1 DataVolume test-vm-dv 593644f9-8dea-4f60-b656-7a3f1b59942f 0xc000df834a 0xc000df834b}] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2025-08-11 07:40:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-08-11 07:40:27 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status} {virt-cdi-controller Update v1 2025-08-11 07:40:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cdi.kubevirt.io/allowClaimAdoption":{},"f:cdi.kubevirt.io/createdForDataVolume":{},"f:cdi.kubevirt.io/storage.condition.running":{},"f:cdi.kubevirt.io/storage.condition.running.message":{},"f:cdi.kubevirt.io/storage.condition.running.reason":{},"f:cdi.kubevirt.io/storage.contentType":{},"f:cdi.kubevirt.io/storage.deleteAfterCompletion":{},"f:cdi.kubevirt.io/storage.pod.phase":{},"f:cdi.kubevirt.io/storage.pod.restarts":{},"f:cdi.kubevirt.io/storage.populator.progress":{},"f:cdi.kubevirt.io/storage.preallocation.requested":{},"f:cdi.kubevirt.io/storage.usePopulator":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:kubevirt.io/created-by":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"593644f9-8dea-4f60-b656-7a3f1b59942f\"}":{}}},"f:spec":{"f:accessModes":{},"f:dataSourceRef":{".":{},"f:apiGroup":{},"f:kind":{},"f:name":{}},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-08-11 
07:40:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} }]} {[ReadWriteOnce] nil {map[] map[storage:{{5368709120 0} {} 5Gi BinarySI}]} pvc-8ca33f6f-bd80-4742-9291-d627db6ee410 0xc000280b30 0xc000280b40 &TypedLocalObjectReference{APIGroup:*cdi.kubevirt.io,Kind:VolumeImportSource,Name:volume-import-source-593644f9-8dea-4f60-b656-7a3f1b59942f,} &TypedObjectReference{APIGroup:*cdi.kubevirt.io,Kind:VolumeImportSource,Name:volume-import-source-593644f9-8dea-4f60-b656-7a3f1b59942f,Namespace:nil,} } {Bound [ReadWriteOnce] map[storage:{{5368709120 0} {} 5Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f @ 08/11/25 07:40:42.638 2025/08/11 07:40:42 Wait until backup ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f is completed backup phase: Completed 2025/08/11 07:41:02 Verify the Backup has CSIVolumeSnapshotsAttempted and CSIVolumeSnapshotsCompleted field on status 2025/08/11 07:41:02 Run velero describe on the backup 2025/08/11 07:41:02 [./velero describe backup ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f -n openshift-adp --details --insecure-skip-tls-verify] 2025/08/11 07:41:03 Exec stderr: "" 2025/08/11 07:41:03 Name: ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f Namespace: openshift-adp Labels: velero.io/storage-location=ts-dpa-1 Annotations: velero.io/resource-timeout=10m0s velero.io/source-cluster-k8s-gitversion=v1.33.2 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=33 Phase: Completed Namespaces: Included: test-oadp-186 Excluded: Resources: Included: * Excluded: Cluster-scoped: auto Label selector: Or label selector: Storage Location: ts-dpa-1 Velero-Native Snapshot PVs: auto Snapshot Move Data: false Data Mover: velero TTL: 720h0m0s CSISnapshotTimeout: 10m0s ItemOperationTimeout: 4h0m0s Hooks: Backup Format Version: 1.1.0 Started: 2025-08-11 07:40:42 +0000 UTC Completed: 2025-08-11 07:40:51 +0000 UTC Expiration: 2025-09-10 07:40:42 +0000 UTC Total items to be backed up: 86 Items backed up: 86 Backup Item Operations: Operation for volumesnapshots.snapshot.storage.k8s.io test-oadp-186/velero-test-vm-dv-vk74b: Backup Item Action Plugin: velero.io/csi-volumesnapshot-backupper Operation ID: test-oadp-186/velero-test-vm-dv-vk74b/2025-08-11T07:40:50Z Items to Update: volumesnapshots.snapshot.storage.k8s.io test-oadp-186/velero-test-vm-dv-vk74b volumesnapshotcontents.snapshot.storage.k8s.io /snapcontent-a9168088-ee96-43ea-a76a-752c9e63874c Phase: Completed Created: 2025-08-11 07:40:50 +0000 UTC Started: 2025-08-11 07:40:50 +0000 UTC Updated: 2025-08-11 07:40:50 +0000 UTC Resource List: apiextensions.k8s.io/v1/CustomResourceDefinition: - datavolumes.cdi.kubevirt.io - reclaimspacecronjobs.csiaddons.openshift.io - virtualmachines.kubevirt.io authorization.openshift.io/v1/RoleBinding: - test-oadp-186/system:deployers - test-oadp-186/system:image-builders - test-oadp-186/system:image-pullers cdi.kubevirt.io/v1beta1/DataVolume: - test-oadp-186/test-vm-dv csiaddons.openshift.io/v1alpha1/ReclaimSpaceCronJob: - test-oadp-186/test-vm-dv-1754898038 kubevirt.io/v1/VirtualMachine: - test-oadp-186/test-vm rbac.authorization.k8s.io/v1/RoleBinding: - test-oadp-186/system:deployers - test-oadp-186/system:image-builders - test-oadp-186/system:image-pullers snapshot.storage.k8s.io/v1/VolumeSnapshot: - test-oadp-186/velero-test-vm-dv-vk74b snapshot.storage.k8s.io/v1/VolumeSnapshotClass: - 
example-snapclass snapshot.storage.k8s.io/v1/VolumeSnapshotContent: - snapcontent-a9168088-ee96-43ea-a76a-752c9e63874c v1/ConfigMap: - test-oadp-186/kube-root-ca.crt - test-oadp-186/openshift-service-ca.crt v1/Event: - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa66e40f5913f - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa6700d1a8fb6 - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa6701ccf5f4e - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa6703dc9fcb9 - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa670aa0919e1 - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa670aa09a9db - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa670c0b8fbe2 - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa670c2338a63 - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa670c675499c - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa670c6f21937 - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa67104761934 - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa671090d3904 - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa671099a34b2 - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa67109a5dde2 - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa6710e037288 - test-oadp-186/importer-prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa6710e6e5fe4 - test-oadp-186/kubevirt-disruption-budget-sksm4.185aa6754b62521d - test-oadp-186/prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa66e4022cac7 - test-oadp-186/prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa6700d19d4c8 - test-oadp-186/prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa6700d1b69da - test-oadp-186/prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa6701b2c3ac0 - test-oadp-186/prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa675488c50cf - test-oadp-186/prime-d09bfc91-34a6-4eb3-a64a-7c6306bbe960.185aa677096ccb1d - test-oadp-186/test-vm-dv.185aa66e3f412544 - test-oadp-186/test-vm-dv.185aa66e3f63dc15 - test-oadp-186/test-vm-dv.185aa66e3f63f621 - test-oadp-186/test-vm-dv.185aa66e3f682e5f - test-oadp-186/test-vm-dv.185aa66e3f6fa112 - test-oadp-186/test-vm-dv.185aa67020638ce7 - test-oadp-186/test-vm-dv.185aa67143bc3085 - test-oadp-186/test-vm-dv.185aa674c69e7e41 - test-oadp-186/test-vm-dv.185aa6754975774a - test-oadp-186/test-vm-dv.185aa67549d0363d - test-oadp-186/test-vm-dv.185aa6754bc6963c - test-oadp-186/test-vm.185aa66e3cffcf4e - test-oadp-186/test-vm.185aa6754b0e11ea - test-oadp-186/test-vm.185aa6754b6005ae - test-oadp-186/test-vm.185aa6754e1f898f - test-oadp-186/test-vm.185aa6775b9fdfff - test-oadp-186/test-vm.185aa6775bff10ad - test-oadp-186/test-vm.185aa6775e31836c - test-oadp-186/test-vm.185aa677abad22e0 - test-oadp-186/test-vm.185aa677abbfe9e1 - test-oadp-186/test-vm.185aa677abe6eacb - test-oadp-186/test-vm.185aa677b81524e4 - test-oadp-186/test-vm.185aa677b86bc45c - test-oadp-186/test-vm.185aa677b9b5f9ac - test-oadp-186/virt-launcher-test-vm-qjz6k.185aa6754ed4828c - test-oadp-186/virt-launcher-test-vm-qjz6k.185aa67553d510be - test-oadp-186/virt-launcher-test-vm-qjz6k.185aa675b14d259d - test-oadp-186/virt-launcher-test-vm-qjz6k.185aa6767463c233 - test-oadp-186/virt-launcher-test-vm-qjz6k.185aa676746406f6 - test-oadp-186/virt-launcher-test-vm-qjz6k.185aa6769348ad43 - test-oadp-186/virt-launcher-test-vm-qjz6k.185aa67694c31466 - 
test-oadp-186/virt-launcher-test-vm-qjz6k.185aa6769e279310 - test-oadp-186/virt-launcher-test-vm-qjz6k.185aa6769ea9b705 - test-oadp-186/virt-launcher-test-vm-qjz6k.185aa6769eb69971 - test-oadp-186/virt-launcher-test-vm-qjz6k.185aa676ceaa1adc - test-oadp-186/virt-launcher-test-vm-qjz6k.185aa676cf270e8e - test-oadp-186/virt-launcher-test-vm-qjz6k.185aa677b4454a07 v1/Namespace: - test-oadp-186 v1/PersistentVolume: - pvc-8ca33f6f-bd80-4742-9291-d627db6ee410 v1/PersistentVolumeClaim: - test-oadp-186/test-vm-dv v1/Secret: - test-oadp-186/builder-dockercfg-mvdl8 - test-oadp-186/default-dockercfg-vz8bl - test-oadp-186/deployer-dockercfg-b7js5 v1/ServiceAccount: - test-oadp-186/builder - test-oadp-186/default - test-oadp-186/deployer Backup Volumes: Velero-Native Snapshots: CSI Snapshots: test-oadp-186/test-vm-dv: Snapshot: Operation ID: test-oadp-186/velero-test-vm-dv-vk74b/2025-08-11T07:40:50Z Snapshot Content Name: snapcontent-a9168088-ee96-43ea-a76a-752c9e63874c Storage Snapshot ID: 0001-0011-openshift-storage-0000000000000003-241c909a-a79f-4fe4-9059-433f79c68ed4 Snapshot Size (bytes): 5368709120 CSI Driver: openshift-storage.rbd.csi.ceph.com Result: succeeded Pod Volume Backups: HooksAttempted: 0 HooksFailed: 0 STEP: Verify backup ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f has completed successfully @ 08/11/25 07:41:03.138 2025/08/11 07:41:03 Backup for case ocp-kubevirt succeeded STEP: Delete the application resources ocp-kubevirt @ 08/11/25 07:41:03.187 STEP: Cleanup Application for case ocp-kubevirt @ 08/11/25 07:41:03.187 2025/08/11 07:41:03 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
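Stepping back to the velero describe output above: it maps onto a small Backup CR. A sketch reconstructed from the reported settings (names and values exactly as in the describe output, every other spec field left to its default):

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f
  namespace: openshift-adp
spec:
  includedNamespaces:
    - test-oadp-186
  storageLocation: ts-dpa-1
  snapshotMoveData: false      # CSI snapshots stay in-cluster; no data mover upload
  ttl: 720h0m0s
  csiSnapshotTimeout: 10m0s
  itemOperationTimeout: 4h0m0s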
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-186] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/08/11 07:41:17 2025-08-11 07:41:04,663 p=21921 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:41:04,663 p=21921 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:41:04,916 p=21921 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:41:04,916 p=21921 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:41:05,185 p=21921 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:41:05,185 p=21921 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:41:05,433 p=21921 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:41:05,433 p=21921 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:41:05,447 p=21921 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:41:05,447 p=21921 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:05,464 p=21921 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:41:05,464 p=21921 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:05,475 p=21921 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:41:05,475 p=21921 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:41:05,760 p=21921 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:41:05,760 p=21921 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:05,786 p=21921 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:41:05,786 p=21921 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:05,802 p=21921 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:41:05,803 p=21921 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:05,804 p=21921 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:41:06,343 p=21921 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:41:06,343 p=21921 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:17,153 p=21921 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-186] *** 2025-08-11 07:41:17,153 p=21921 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
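Removing the namespace is the slow step of each cleanup play (about 11 seconds here, over 20 seconds in the later OADP-187 run) because the module waits for the namespace and its finalizers to terminate. A sketch, again assuming kubernetes.core:

- name: Remove namespace test-oadp-186
  kubernetes.core.k8s:
    state: absent
    api_version: v1
    kind: Namespace
    name: test-oadp-186
    wait: true           # block until the namespace is fully terminated
    wait_timeout: 300    # assumed; the role's actual timeout is not shown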
2025-08-11 07:41:17,154 p=21921 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:41:17,307 p=21921 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:41:17,307 p=21921 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 2025/08/11 07:41:17 Creating restore ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f for case ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f STEP: Create restore ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f from backup ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f @ 08/11/25 07:41:17.348 2025/08/11 07:41:17 Wait until restore ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f is complete restore phase: Finalizing restore phase: Completed STEP: Verify restore ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f has completed successfully @ 08/11/25 07:41:37.383 STEP: Verify Application restore @ 08/11/25 07:41:37.386 STEP: Verify Application deployment for case ocp-kubevirt @ 08/11/25 07:41:37.386 2025/08/11 07:41:37 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
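The restore created above needs little more than a reference to the backup. A minimal sketch of the equivalent Restore CR (everything not visible in the log left to defaults):

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f
  namespace: openshift-adp
spec:
  backupName: ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f
  # restorePVs defaults to true, so the CSI VolumeSnapshot taken during
  # the backup is used to re-provision the test-vm-dv PVC.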
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Verify VM is not in running state] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=4  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/08/11 07:41:41 2025-08-11 07:41:38,835 p=22138 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:41:38,836 p=22138 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:41:39,076 p=22138 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:41:39,076 p=22138 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:41:39,317 p=22138 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:41:39,317 p=22138 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:41:39,572 p=22138 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:41:39,573 p=22138 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:41:39,587 p=22138 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:41:39,587 p=22138 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:39,604 p=22138 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:41:39,605 p=22138 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:39,616 p=22138 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:41:39,616 p=22138 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:41:39,904 p=22138 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:41:39,905 p=22138 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:39,930 p=22138 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:41:39,930 p=22138 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:39,947 p=22138 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:41:39,947 p=22138 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:39,949 p=22138 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:41:40,492 p=22138 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:41:40,492 p=22138 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:41,357 p=22138 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Verify VM is not in running state] *** 2025-08-11 07:41:41,357 p=22138 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
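The "Get admin token"/"Get user token" tasks that recur in every play produce an OAuth access token (the sha256~ form printed above). The role's implementation is not shown in the log; a plausible sketch that would match the logged behaviour, assuming it shells out to oc:

- name: Get admin token
  ansible.builtin.command: oc whoami -t
  environment:
    KUBECONFIG: /home/jenkins/.kube/config
  register: admin_token_result
  changed_when: true   # the log reports this task as 'changed' on every run

- name: Set core facts (admin + user token)
  ansible.builtin.set_fact:
    admin_token: "{{ admin_token_result.stdout }}"

Note that the "Print token" task writes a live bearer token into the CI log; masking it with no_log would be the safer design.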
2025-08-11 07:41:41,357 p=22138 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:41,404 p=22138 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:41:41,404 p=22138 u=1002120000 n=ansible INFO| localhost : ok=16 changed=4 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 < Exit [It] [tc-id:OADP-186] [kubevirt] Stopped VM should be restored @ 08/11/25 07:41:41.447 (1m54.368s) > Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 07:41:41.447 2025/08/11 07:41:41 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 07:41:41.447 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:41:41.447 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:41:41.452 (6ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:41:41.452 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:41:41.452 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:41:41.452 2025/08/11 07:41:41 Cleaning app 2025/08/11 07:41:41 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
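The example-snapclass VolumeSnapshotClass, deleted during the resource cleanup just below and re-created for the next case, is what lets Velero's CSI integration snapshot the ceph-rbd volumes. A sketch of the class, assuming the velero.io/csi-volumesnapshot-class selection label and a Retain policy (the deletionPolicy actually used is not shown):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass
  labels:
    # Velero selects the VolumeSnapshotClass for a CSI driver by this label
    velero.io/csi-volumesnapshot-class: "true"
driver: openshift-storage.rbd.csi.ceph.com   # CSI driver reported in the log
deletionPolicy: Retain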
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-186] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/08/11 07:42:05 2025-08-11 07:41:42,918 p=22356 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:41:42,918 p=22356 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:41:43,167 p=22356 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:41:43,167 p=22356 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:41:43,416 p=22356 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:41:43,416 p=22356 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:41:43,682 p=22356 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:41:43,682 p=22356 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:41:43,700 p=22356 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:41:43,700 p=22356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:43,719 p=22356 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:41:43,719 p=22356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:43,731 p=22356 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:41:43,731 p=22356 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:41:44,026 p=22356 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:41:44,026 p=22356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:44,052 p=22356 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:41:44,052 p=22356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:44,070 p=22356 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:41:44,070 p=22356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:41:44,072 p=22356 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:41:44,619 p=22356 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:41:44,619 p=22356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:42:05,423 p=22356 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-186] *** 2025-08-11 07:42:05,424 p=22356 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 07:42:05,424 p=22356 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:42:05,588 p=22356 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:42:05,588 p=22356 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:42:05.633 (24.18s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:42:05.633 2025/08/11 07:42:05 Cleaning setup resources for the backup 2025/08/11 07:42:05 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/08/11 07:42:05 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/08/11 07:42:05 Deleting VolumeSnapshotClass 'example-snapclass' < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:42:05.672 (39ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:42:05.672 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:42:05.68 (8ms) • [138.618 seconds] ------------------------------ CSI: Backup/Restore Openshift Virtualization Workloads  [tc-id:OADP-187] [kubevirt] Backup-restore data volume /alabama/cspi/e2e/kubevirt-plugin/backup_restore_csi.go:69 > Enter [BeforeEach] CSI: Backup/Restore Openshift Virtualization Workloads @ 08/11/25 07:42:05.68 < Exit [BeforeEach] CSI: Backup/Restore Openshift Virtualization Workloads @ 08/11/25 07:42:05.688 (8ms) > Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 07:42:05.688 < Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 07:42:05.688 (0s) > Enter [It] [tc-id:OADP-187] [kubevirt] Backup-restore data volume @ 08/11/25 07:42:05.688 2025/08/11 07:42:05 Delete all downloadrequest ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f-03e48081-5244-4e7d-84a2-ee047ed4c06a ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f-72c21590-8712-4781-8944-1df54311f081 ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f-d8715ca5-e08f-488b-a9d4-d0c7bba0246a STEP: Create DPA CR @ 08/11/25 07:42:05.763 2025/08/11 07:42:05 csi 2025/08/11 07:42:05 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "1b238e06-480f-40e2-8328-f0706afb17d8", "resourceVersion": "72071", "generation": 1, "creationTimestamp": "2025-08-11T07:42:05Z", "managedFields": [ { "manager": "kubevirt-plugin.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T07:42:05Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "kubevirt" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR 
setup @ 08/11/25 07:42:05.801 2025/08/11 07:42:05 Waiting for velero pod to be running 2025/08/11 07:42:05 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' 2025/08/11 07:42:05 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "1b238e06-480f-40e2-8328-f0706afb17d8", "resourceVersion": "72071", "generation": 1, "creationTimestamp": "2025-08-11T07:42:05Z", "managedFields": [ { "manager": "kubevirt-plugin.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T07:42:05Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "kubevirt" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false } }, "features": null, "logFormat": "text" }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 07:42:10.819 Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false 2025/08/11 07:42:10 The 'openshift-storage' namespace exists 2025/08/11 07:42:10 Checking default storage class count 2025/08/11 07:42:10 Using the CSI driver: openshift-storage.rbd.csi.ceph.com 2025/08/11 07:42:10 Snapclass 'example-snapclass' doesn't exist, creating 2025/08/11 07:42:11 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/08/11 07:42:11 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd STEP: Installing application for case ocp-datavolume @ 08/11/25 07:42:11.034 2025/08/11 07:42:11 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
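The DPA JSON dumped twice above reads more easily as the YAML the test effectively applies; the values below are taken verbatim from that dump:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: ts-dpa
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        config:
          region: us-east-1
        credential:
          name: cloud-credentials
          key: cloud
        objectStorage:
          bucket: ci-op-6fip6j15-interopoadp
          prefix: kubevirt
  snapshotLocations: []
  configuration:
    velero:
      defaultPlugins: [openshift, aws, kubevirt, csi]
      disableFsBackup: false
  logFormat: text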
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Deploy DataVolume test-dv] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=6  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/08/11 07:42:16 2025-08-11 07:42:12,589 p=22594 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:42:12,589 p=22594 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:42:12,849 p=22594 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:42:12,849 p=22594 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:42:13,104 p=22594 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:42:13,104 p=22594 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:42:13,359 p=22594 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:42:13,359 p=22594 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:42:13,377 p=22594 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:42:13,377 p=22594 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:42:13,398 p=22594 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:42:13,398 p=22594 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:42:13,414 p=22594 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:42:13,415 p=22594 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:42:13,748 p=22594 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:42:13,748 p=22594 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:42:13,779 p=22594 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:42:13,779 p=22594 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:42:13,799 p=22594 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:42:13,800 p=22594 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:42:13,802 p=22594 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:42:14,405 p=22594 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:42:14,405 p=22594 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:42:15,311 p=22594 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Create namespace] *** 2025-08-11 07:42:15,312 p=22594 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
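The "Deploy DataVolume test-dv" task creates a CDI DataVolume that an importer pod then populates; the PVC dump further down shows the result as a 100Mi ceph-rbd volume. A minimal sketch of such a DataVolume (the actual import source is not in the log, so the http URL below is hypothetical):

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: test-dv
  namespace: test-oadp-187
spec:
  source:
    http:
      url: https://example.com/disk.img   # hypothetical source image
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 100Mi   # matches the capacity in the PVC dump below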
2025-08-11 07:42:15,312 p=22594 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:42:16,023 p=22594 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Deploy DataVolume test-dv] *** 2025-08-11 07:42:16,024 p=22594 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:42:16,064 p=22594 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:42:16,064 p=22594 u=1002120000 n=ansible INFO| localhost : ok=17 changed=6 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 STEP: Verify Application deployment @ 08/11/25 07:42:16.109 2025/08/11 07:42:16 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. FAILED - RETRYING: [localhost]: Wait for DataVolume to be in Succeeded phase (30 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait for DataVolume to be in Succeeded phase] *** ok: [localhost] FAILED - RETRYING: [localhost]: Wait until there is only one pvc (60 retries left). 
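The FAILED - RETRYING lines are Ansible's standard retries/until polling; here a single retry sufficed for each wait. A sketch of the DataVolume wait under the same kubernetes.core assumption:

- name: Wait for DataVolume to be in Succeeded phase
  kubernetes.core.k8s_info:
    api_version: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    name: test-dv
    namespace: test-oadp-187
  register: dv
  retries: 30    # matches "(30 retries left)" in the log
  delay: 10      # assumed poll interval
  until:
    - dv.resources | length > 0
    - dv.resources[0].status.phase | default('') == 'Succeeded'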
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait until there is only one pvc] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/08/11 07:42:37 2025-08-11 07:42:17,679 p=22819 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:42:17,679 p=22819 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:42:17,951 p=22819 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:42:17,951 p=22819 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:42:18,206 p=22819 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:42:18,206 p=22819 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:42:18,449 p=22819 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:42:18,449 p=22819 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:42:18,463 p=22819 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:42:18,463 p=22819 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:42:18,480 p=22819 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:42:18,481 p=22819 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:42:18,492 p=22819 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:42:18,492 p=22819 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:42:18,793 p=22819 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:42:18,793 p=22819 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:42:18,820 p=22819 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:42:18,820 p=22819 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:42:18,837 p=22819 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:42:18,837 p=22819 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:42:18,838 p=22819 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:42:19,416 p=22819 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:42:19,416 p=22819 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:42:20,382 p=22819 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for DataVolume to be in Succeeded phase (30 retries left). 2025-08-11 07:42:31,041 p=22819 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait for DataVolume to be in Succeeded phase] *** 2025-08-11 07:42:31,041 p=22819 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
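"Wait until there is only one pvc" accounts for CDI's populator flow: during the import a temporary prime-<uid> PVC exists alongside test-dv (its importer-prime-* events appear in the resource lists above), and the namespace only settles to a single PVC once the import finishes and the prime PVC is removed. A sketch of the check under the same assumptions:

- name: Wait until there is only one pvc
  kubernetes.core.k8s_info:
    api_version: v1
    kind: PersistentVolumeClaim
    namespace: test-oadp-187
  register: pvcs
  retries: 60    # matches "(60 retries left)" in the log
  delay: 5       # assumed poll interval
  until: pvcs.resources | length == 1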
2025-08-11 07:42:31,041 p=22819 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:42:31,729 p=22819 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until there is only one pvc (60 retries left). 2025-08-11 07:42:37,392 p=22819 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait until there is only one pvc] *** 2025-08-11 07:42:37,392 p=22819 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:42:37,396 p=22819 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:42:37,396 p=22819 u=1002120000 n=ansible INFO| localhost : ok=17 changed=4 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2025/08/11 07:42:37 {{ } { } [{{ } {test-dv test-oadp-187 42c6e770-dd69-40a4-b5df-1661469fdde0 72577 0 2025-08-11 07:42:15 +0000 UTC map[alerts.k8s.io/KubePersistentVolumeFillingUp:disabled app:containerized-data-importer app.kubernetes.io/component:storage app.kubernetes.io/managed-by:cdi-controller app.kubernetes.io/part-of:hyperconverged-cluster app.kubernetes.io/version:4.19.1] map[cdi.kubevirt.io/createdForDataVolume:2a4a503c-9420-4b45-a9c3-0c72370dd9bd cdi.kubevirt.io/storage.bind.immediate.requested:true cdi.kubevirt.io/storage.condition.running:false cdi.kubevirt.io/storage.condition.running.message:Import Complete cdi.kubevirt.io/storage.condition.running.reason:Completed cdi.kubevirt.io/storage.contentType:kubevirt cdi.kubevirt.io/storage.deleteAfterCompletion:false cdi.kubevirt.io/storage.pod.phase:Succeeded cdi.kubevirt.io/storage.pod.restarts:0 cdi.kubevirt.io/storage.populator.progress:100.0% cdi.kubevirt.io/storage.preallocation.requested:false cdi.kubevirt.io/storage.usePopulator:true pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:test-dv-1754898146 reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [{cdi.kubevirt.io/v1beta1 DataVolume test-dv 2a4a503c-9420-4b45-a9c3-0c72370dd9bd 0xc0005d7627 0xc0005d7628}] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2025-08-11 07:42:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-08-11 07:42:24 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status} {virt-cdi-controller Update v1 2025-08-11 07:42:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:cdi.kubevirt.io/createdForDataVolume":{},"f:cdi.kubevirt.io/storage.bind.immediate.requested":{},"f:cdi.kubevirt.io/storage.condition.running":{},"f:cdi.kubevirt.io/storage.condition.running.message":{},"f:cdi.kubevirt.io/storage.condition.running.reason":{},"f:cdi.kubevirt.io/storage.contentType":{},"f:cdi.kubevirt.io/storage.deleteAfterCompletion":{},"f:cdi.kubevirt.io/storage.pod.phase":{},"f:cdi.kubevirt.io/storage.pod.restarts":{},"f:cdi.kubevirt.io/storage.populator.progress":{},"f:cdi.kubevirt.io/storage.preallocation.requested":{},"f:cdi.kubevirt.io/storage.usePopulator":{}},"f:labels":{".":{},"f:alerts.k8s.io/KubePersistentVolumeFillingUp":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4a503c-9420-4b45-a9c3-0c72370dd9bd\"}":{}}},"f:spec":{"f:accessModes":{},"f:dataSourceRef":{".":{},"f:apiGroup":{},"f:kind":{},"f:name":{}},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-08-11 07:42:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} }]} {[ReadWriteOnce] nil {map[] map[storage:{{104857600 0} {} 100Mi BinarySI}]} pvc-0a174252-6511-43f7-887c-32dceab7e170 0xc0005c55e0 0xc0005c55f0 &TypedLocalObjectReference{APIGroup:*cdi.kubevirt.io,Kind:VolumeImportSource,Name:volume-import-source-2a4a503c-9420-4b45-a9c3-0c72370dd9bd,} &TypedObjectReference{APIGroup:*cdi.kubevirt.io,Kind:VolumeImportSource,Name:volume-import-source-2a4a503c-9420-4b45-a9c3-0c72370dd9bd,Namespace:nil,} } {Bound [ReadWriteOnce] map[storage:{{104857600 0} {} 100Mi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f @ 08/11/25 07:42:37.447 2025/08/11 07:42:37 Wait until backup ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f is completed backup phase: Completed 2025/08/11 07:42:57 Verify the Backup has CSIVolumeSnapshotsAttempted and CSIVolumeSnapshotsCompleted field on status 2025/08/11 07:42:57 Run velero describe on the backup 2025/08/11 07:42:57 [./velero describe backup ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f -n openshift-adp --details --insecure-skip-tls-verify] 2025/08/11 07:42:58 Exec stderr: "" 2025/08/11 07:42:58 Name: ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f Namespace: openshift-adp Labels: velero.io/storage-location=ts-dpa-1 Annotations: velero.io/resource-timeout=10m0s velero.io/source-cluster-k8s-gitversion=v1.33.2 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=33 Phase: Completed Namespaces: Included: test-oadp-187 Excluded: Resources: Included: * Excluded: Cluster-scoped: auto Label selector: Or label selector: Storage Location: ts-dpa-1 Velero-Native Snapshot PVs: auto Snapshot Move Data: false Data Mover: velero TTL: 720h0m0s CSISnapshotTimeout: 10m0s ItemOperationTimeout: 4h0m0s Hooks: Backup Format Version: 1.1.0 Started: 2025-08-11 07:42:37 +0000 UTC Completed: 2025-08-11 07:42:45 +0000 UTC Expiration: 2025-09-10 07:42:37 +0000 UTC Total items to be backed up: 48 Items backed up: 48 Backup Item Operations: Operation for volumesnapshots.snapshot.storage.k8s.io test-oadp-187/velero-test-dv-qhcqk: Backup Item Action Plugin: velero.io/csi-volumesnapshot-backupper Operation ID: 
test-oadp-187/velero-test-dv-qhcqk/2025-08-11T07:42:44Z Items to Update: volumesnapshots.snapshot.storage.k8s.io test-oadp-187/velero-test-dv-qhcqk volumesnapshotcontents.snapshot.storage.k8s.io /snapcontent-dda2e77c-416b-4b3c-ac85-c875cd68195a Phase: Completed Created: 2025-08-11 07:42:44 +0000 UTC Started: 2025-08-11 07:42:44 +0000 UTC Updated: 2025-08-11 07:42:44 +0000 UTC Resource List: apiextensions.k8s.io/v1/CustomResourceDefinition: - datavolumes.cdi.kubevirt.io - reclaimspacecronjobs.csiaddons.openshift.io authorization.openshift.io/v1/RoleBinding: - test-oadp-187/system:deployers - test-oadp-187/system:image-builders - test-oadp-187/system:image-pullers cdi.kubevirt.io/v1beta1/DataVolume: - test-oadp-187/test-dv csiaddons.openshift.io/v1alpha1/ReclaimSpaceCronJob: - test-oadp-187/test-dv-1754898146 rbac.authorization.k8s.io/v1/RoleBinding: - test-oadp-187/system:deployers - test-oadp-187/system:image-builders - test-oadp-187/system:image-pullers snapshot.storage.k8s.io/v1/VolumeSnapshot: - test-oadp-187/velero-test-dv-qhcqk snapshot.storage.k8s.io/v1/VolumeSnapshotClass: - example-snapclass snapshot.storage.k8s.io/v1/VolumeSnapshotContent: - snapcontent-dda2e77c-416b-4b3c-ac85-c875cd68195a v1/ConfigMap: - test-oadp-187/kube-root-ca.crt - test-oadp-187/openshift-service-ca.crt v1/Event: - test-oadp-187/importer-prime-42c6e770-dd69-40a4-b5df-1661469fdde0.185aa68e8d7dc871 - test-oadp-187/importer-prime-42c6e770-dd69-40a4-b5df-1661469fdde0.185aa68e8e4ec191 - test-oadp-187/importer-prime-42c6e770-dd69-40a4-b5df-1661469fdde0.185aa68f8b26348f - test-oadp-187/importer-prime-42c6e770-dd69-40a4-b5df-1661469fdde0.185aa68fb0064f4d - test-oadp-187/importer-prime-42c6e770-dd69-40a4-b5df-1661469fdde0.185aa6900aa238eb - test-oadp-187/importer-prime-42c6e770-dd69-40a4-b5df-1661469fdde0.185aa6900becfca1 - test-oadp-187/importer-prime-42c6e770-dd69-40a4-b5df-1661469fdde0.185aa69010388e68 - test-oadp-187/importer-prime-42c6e770-dd69-40a4-b5df-1661469fdde0.185aa69010be01b8 - test-oadp-187/prime-42c6e770-dd69-40a4-b5df-1661469fdde0.185aa68e8ca7ae1e - test-oadp-187/prime-42c6e770-dd69-40a4-b5df-1661469fdde0.185aa68f7bf8ccae - test-oadp-187/prime-42c6e770-dd69-40a4-b5df-1661469fdde0.185aa68f7c00e73a - test-oadp-187/prime-42c6e770-dd69-40a4-b5df-1661469fdde0.185aa68f8984534c - test-oadp-187/prime-42c6e770-dd69-40a4-b5df-1661469fdde0.185aa6908476e899 - test-oadp-187/prime-42c6e770-dd69-40a4-b5df-1661469fdde0.185aa692fa198c04 - test-oadp-187/test-dv.185aa68e8c2f6846 - test-oadp-187/test-dv.185aa68e8c43d407 - test-oadp-187/test-dv.185aa68e8ca3ac60 - test-oadp-187/test-dv.185aa68e8ca3cc1b - test-oadp-187/test-dv.185aa68e8ca701c5 - test-oadp-187/test-dv.185aa68f8f06b269 - test-oadp-187/test-dv.185aa6903f7600f5 - test-oadp-187/test-dv.185aa690857a230f - test-oadp-187/test-dv.185aa69085b4a864 - test-oadp-187/test-dv.185aa69087309eaf v1/Namespace: - test-oadp-187 v1/PersistentVolume: - pvc-0a174252-6511-43f7-887c-32dceab7e170 v1/PersistentVolumeClaim: - test-oadp-187/test-dv v1/Secret: - test-oadp-187/builder-dockercfg-tqhzl - test-oadp-187/default-dockercfg-q78zw - test-oadp-187/deployer-dockercfg-8kfsf v1/ServiceAccount: - test-oadp-187/builder - test-oadp-187/default - test-oadp-187/deployer Backup Volumes: Velero-Native Snapshots: CSI Snapshots: test-oadp-187/test-dv: Snapshot: Operation ID: test-oadp-187/velero-test-dv-qhcqk/2025-08-11T07:42:44Z Snapshot Content Name: snapcontent-dda2e77c-416b-4b3c-ac85-c875cd68195a Storage Snapshot ID: 
0001-0011-openshift-storage-0000000000000003-807b522a-e335-406f-b387-625a4518b398 Snapshot Size (bytes): 104857600 CSI Driver: openshift-storage.rbd.csi.ceph.com Result: succeeded Pod Volume Backups: HooksAttempted: 0 HooksFailed: 0 STEP: Verify backup ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f has completed successfully @ 08/11/25 07:42:58.051 2025/08/11 07:42:58 Backup for case ocp-datavolume succeeded STEP: Delete the application resources ocp-datavolume @ 08/11/25 07:42:58.097 STEP: Cleanup Application for case ocp-datavolume @ 08/11/25 07:42:58.097 2025/08/11 07:42:58 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
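The step "Verify the Backup has CSIVolumeSnapshotsAttempted and CSIVolumeSnapshotsCompleted field on status" refers to two counters Velero records on the Backup status; with a single PVC in scope both should equal 1. A sketch of that verification, same assumptions as the earlier tasks:

- name: Read the backup status
  kubernetes.core.k8s_info:
    api_version: velero.io/v1
    kind: Backup
    name: ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f
    namespace: openshift-adp
  register: bkp

- name: Assert every attempted CSI snapshot completed
  ansible.builtin.assert:
    that:
      - bkp.resources[0].status.csiVolumeSnapshotsAttempted | int > 0
      - bkp.resources[0].status.csiVolumeSnapshotsAttempted == bkp.resources[0].status.csiVolumeSnapshotsCompleted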
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Remove namespace test-oadp-187] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025/08/11 07:43:17 2025-08-11 07:42:59,628 p=23072 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:42:59,628 p=23072 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:42:59,867 p=23072 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:42:59,867 p=23072 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:43:00,118 p=23072 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:43:00,118 p=23072 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:43:00,366 p=23072 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:43:00,366 p=23072 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:43:00,381 p=23072 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:43:00,382 p=23072 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:00,400 p=23072 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:43:00,400 p=23072 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:00,413 p=23072 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:43:00,413 p=23072 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:43:00,710 p=23072 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:43:00,710 p=23072 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:00,736 p=23072 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:43:00,736 p=23072 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:00,753 p=23072 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:43:00,753 p=23072 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:00,754 p=23072 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:43:01,299 p=23072 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:43:01,299 p=23072 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:17,203 p=23072 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Remove namespace test-oadp-187] *** 2025-08-11 07:43:17,204 p=23072 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 07:43:17,204 p=23072 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:43:17,348 p=23072 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:43:17,348 p=23072 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025/08/11 07:43:17 Creating restore ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f for case ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f STEP: Create restore ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f from backup ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f @ 08/11/25 07:43:17.455 2025/08/11 07:43:17 Wait until restore ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f is complete restore phase: Finalizing restore phase: Completed STEP: Verify restore ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f has completed successfully @ 08/11/25 07:43:37.495 STEP: Verify Application restore @ 08/11/25 07:43:37.498 STEP: Verify Application deployment for case ocp-datavolume @ 08/11/25 07:43:37.498 2025/08/11 07:43:37 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
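The restore created above is a Velero Restore CR that points back at the backup by name; a minimal sketch (illustrative, not the suite's exact object):

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f
  namespace: openshift-adp
spec:
  backupName: ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f
  restorePVs: true               # re-provision volumes from the CSI snapshots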
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait for DataVolume to be in Succeeded phase] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait until there is only one pvc] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/08/11 07:43:42 2025-08-11 07:43:38,972 p=23285 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:43:38,972 p=23285 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:43:39,234 p=23285 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:43:39,234 p=23285 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:43:39,497 p=23285 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:43:39,497 p=23285 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:43:39,765 p=23285 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:43:39,765 p=23285 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:43:39,779 p=23285 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:43:39,779 p=23285 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:39,797 p=23285 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:43:39,797 p=23285 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:39,808 p=23285 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:43:39,808 p=23285 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:43:40,100 p=23285 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:43:40,100 p=23285 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:40,126 p=23285 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:43:40,126 p=23285 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:40,142 p=23285 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:43:40,143 p=23285 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:40,144 p=23285 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:43:40,689 p=23285 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:43:40,689 p=23285 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:41,584 p=23285 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait for DataVolume to be in Succeeded phase] *** 2025-08-11 07:43:41,584 p=23285 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
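The 'Wait for DataVolume to be in Succeeded phase' task above is a standard poll-and-retry pattern; a sketch of how such a task can be written with kubernetes.core.k8s_info (assumed implementation and values — the role's actual source is not part of this log):

- name: Wait for DataVolume to be in Succeeded phase
  kubernetes.core.k8s_info:
    api_version: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    namespace: test-oadp-187
    name: test-dv
  register: dv
  # Retry until CDI reports the import finished and the DV is usable
  until: dv.resources | length > 0 and dv.resources[0].status.phase | default('') == 'Succeeded'
  retries: 60
  delay: 5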
2025-08-11 07:43:41,584 p=23285 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:42,198 p=23285 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait until there is only one pvc] *** 2025-08-11 07:43:42,199 p=23285 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:42,204 p=23285 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:43:42,204 p=23285 u=1002120000 n=ansible INFO| localhost : ok=17 changed=4 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 < Exit [It] [tc-id:OADP-187] [kubevirt] Backup-restore data volume @ 08/11/25 07:43:42.249 (1m36.561s) > Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 07:43:42.249 2025/08/11 07:43:42 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 07:43:42.249 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:43:42.249 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:43:42.254 (4ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:43:42.254 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:43:42.254 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:43:42.254 2025/08/11 07:43:42 Cleaning app 2025/08/11 07:43:42 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or 
tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Remove namespace test-oadp-187] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025/08/11 07:44:06 2025-08-11 07:43:43,748 p=23511 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:43:43,749 p=23511 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:43:44,012 p=23511 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:43:44,012 p=23511 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:43:44,262 p=23511 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:43:44,262 p=23511 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:43:44,514 p=23511 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:43:44,515 p=23511 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:43:44,528 p=23511 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:43:44,529 p=23511 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:44,546 p=23511 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:43:44,546 p=23511 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:44,557 p=23511 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:43:44,557 p=23511 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:43:44,858 p=23511 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:43:44,858 p=23511 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:44,886 p=23511 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:43:44,886 p=23511 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:44,905 p=23511 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:43:44,906 p=23511 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:43:44,908 p=23511 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:43:45,473 p=23511 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:43:45,473 p=23511 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:44:06,325 p=23511 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Remove namespace test-oadp-187] *** 2025-08-11 07:44:06,325 p=23511 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
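The 'Remove namespace test-oadp-187' task replayed above maps naturally onto kubernetes.core.k8s with state: absent; a sketch (assumed implementation, wait values illustrative):

- name: Remove namespace test-oadp-187
  kubernetes.core.k8s:
    api_version: v1
    kind: Namespace
    name: test-oadp-187
    state: absent
    wait: true          # block until the namespace finalizes, consistent with the gap in the timestamps above
    wait_timeout: 300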
2025-08-11 07:44:06,325 p=23511 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:44:06,412 p=23511 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:44:06,412 p=23511 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:44:06.458 (24.204s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:44:06.458 2025/08/11 07:44:06 Cleaning setup resources for the backup 2025/08/11 07:44:06 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/08/11 07:44:06 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/08/11 07:44:06 Deleting VolumeSnapshotClass 'example-snapclass' < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:44:06.495 (38ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:44:06.495 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:44:06.505 (10ms) • [120.825 seconds] ------------------------------ S ------------------------------ Native CSI Data Mover: Backup/Restore Openshift Virtualization Workloads  [tc-id:OADP-401] [kubevirt] Started VM should over ceph filesytem mode /alabama/cspi/e2e/kubevirt-plugin/backup_restore_datamover.go:129 > Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 07:44:06.505 < Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 07:44:06.505 (0s) > Enter [It] [tc-id:OADP-401] [kubevirt] Started VM should over ceph filesytem mode @ 08/11/25 07:44:06.505 2025/08/11 07:44:06 Delete all downloadrequest ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f-18059886-b4b1-4855-b916-8dc6b0463df4 ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f-84f75ae7-ccb9-47d1-8a23-18fe9eedd6f6 ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f-abb91318-4552-4e82-b00b-9b3259f202d8 STEP: Create DPA CR @ 08/11/25 07:44:06.579 2025/08/11 07:44:06 native-datamover 2025/08/11 07:44:06 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "9e00d5b6-a597-49e1-8876-b5697de6579e", "resourceVersion": "74263", "generation": 1, "creationTimestamp": "2025-08-11T07:44:06Z", "managedFields": [ { "manager": "kubevirt-plugin.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T07:44:06Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "kubevirt" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that 
remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 08/11/25 07:44:06.618 2025/08/11 07:44:06 Waiting for velero pod to be running 2025/08/11 07:44:06 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' 2025/08/11 07:44:06 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "9e00d5b6-a597-49e1-8876-b5697de6579e", "resourceVersion": "74263", "generation": 1, "creationTimestamp": "2025-08-11T07:44:06Z", "managedFields": [ { "manager": "kubevirt-plugin.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T07:44:06Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "kubevirt" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 07:44:11.643 Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false 2025/08/11 07:44:11 The 'openshift-storage' namespace exists 2025/08/11 07:44:11 Checking default storage class count 2025/08/11 07:44:11 Using the CSI driver: openshift-storage.rbd.csi.ceph.com 2025/08/11 07:44:11 Snapclass 'example-snapclass' doesn't exist, creating 2025/08/11 07:44:11 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/08/11 07:44:11 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/08/11 07:44:11 Checking for correct number of running NodeAgent pods...
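Two of the setup steps logged above have simple YAML equivalents. "Snapclass 'example-snapclass' doesn't exist, creating" corresponds to a VolumeSnapshotClass bound to the in-use RBD driver and labeled so Velero's CSI plugin selects it (deletionPolicy is an assumption):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass
  labels:
    velero.io/csi-volumesnapshot-class: "true"   # Velero picks the class carrying this label
driver: openshift-storage.rbd.csi.ceph.com
deletionPolicy: Retain                           # assumed; keeps snapshot contents alive during backup

"Setting new default StorageClass 'odf-operator-ceph-rbd'" amounts to annotating the class as cluster default (parameters omitted):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: odf-operator-ceph-rbd
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: openshift-storage.rbd.csi.ceph.com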
2025/08/11 07:44:11 pod: node-agent-qnjqs is not yet running with status: {Pending [{PodReadyToStartContainers False 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:44:06 +0000 UTC } {Initialized True 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:44:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:44:06 +0000 UTC ContainersNotReady containers with unready status: [node-agent]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:44:06 +0000 UTC ContainersNotReady containers with unready status: [node-agent]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:44:06 +0000 UTC }] 10.0.4.228 [{10.0.4.228}] [] 2025-08-11 07:44:06 +0000 UTC [] [{node-agent {&ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-rhel9@sha256:e22092c4769ece2dd36b99cb84fcbe6da99d6c0e175fca38f00f436de0ba7a62 0xc0005d77fa map[] nil [] nil []}] Burstable [] []} STEP: Installing application for case ocp-kubevirt @ 08/11/25 07:44:16.908 2025/08/11 07:44:16 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
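For readability, the DPA CR dumped as JSON above (at creation and again at verification) renders to roughly this YAML, with the fields taken directly from the logged object:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: ts-dpa
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        config:
          region: us-east-1
        credential:
          name: cloud-credentials
          key: cloud
        objectStorage:
          bucket: ci-op-6fip6j15-interopoadp
          prefix: kubevirt
  configuration:
    velero:
      defaultPlugins: [openshift, aws, kubevirt, csi]
      disableFsBackup: false
    nodeAgent:
      enable: true
      uploaderType: kopia        # the node agent that later runs the DataUpload
  logFormat: text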
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Deploy vm test-vm] *** changed: [localhost] FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (59 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (58 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (57 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (56 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (55 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (54 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (53 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (52 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (51 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (50 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (49 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (48 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (47 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (46 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=18  changed=6  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025/08/11 07:45:46 2025-08-11 07:44:18,376 p=23743 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:44:18,376 p=23743 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:44:18,647 p=23743 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:44:18,647 p=23743 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:44:18,891 p=23743 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:44:18,891 p=23743 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:44:19,137 p=23743 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:44:19,137 p=23743 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:44:19,152 p=23743 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:44:19,152 p=23743 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:44:19,169 p=23743 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:44:19,169 p=23743 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:44:19,182 p=23743 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:44:19,182 p=23743 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:44:19,474 p=23743 u=1002120000 n=ansible INFO| TASK [Extract 
Kubernetes minor version from cluster] *************************** 2025-08-11 07:44:19,474 p=23743 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:44:19,500 p=23743 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:44:19,500 p=23743 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:44:19,517 p=23743 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:44:19,517 p=23743 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:44:19,518 p=23743 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:44:20,062 p=23743 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:44:20,062 p=23743 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:44:20,870 p=23743 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Create namespace] *** 2025-08-11 07:44:20,871 p=23743 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 07:44:20,871 p=23743 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:44:21,545 p=23743 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Deploy vm test-vm] *** 2025-08-11 07:44:21,545 p=23743 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:44:22,333 p=23743 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left). 2025-08-11 07:44:27,927 p=23743 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (59 retries left). 2025-08-11 07:44:33,547 p=23743 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (58 retries left). 2025-08-11 07:44:39,193 p=23743 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (57 retries left). 2025-08-11 07:44:44,810 p=23743 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (56 retries left). 2025-08-11 07:44:50,429 p=23743 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (55 retries left). 2025-08-11 07:44:56,019 p=23743 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (54 retries left). 2025-08-11 07:45:01,690 p=23743 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (53 retries left). 2025-08-11 07:45:07,289 p=23743 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (52 retries left). 2025-08-11 07:45:12,878 p=23743 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (51 retries left). 2025-08-11 07:45:18,507 p=23743 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (50 retries left). 2025-08-11 07:45:24,190 p=23743 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (49 retries left). 2025-08-11 07:45:29,828 p=23743 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (48 retries left). 
2025-08-11 07:45:35,466 p=23743 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (47 retries left). 2025-08-11 07:45:41,094 p=23743 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (46 retries left). 2025-08-11 07:45:46,728 p=23743 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** 2025-08-11 07:45:46,729 p=23743 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:45:46,834 p=23743 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:45:46,834 p=23743 u=1002120000 n=ansible INFO| localhost : ok=18 changed=6 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0 STEP: Verify Application deployment @ 08/11/25 07:45:46.887 2025/08/11 07:45:46 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** ok: [localhost] FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left). 
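The long FAILED - RETRYING run above is the usual until/retries loop against the VirtualMachine status; a sketch of such a task (assumed implementation and condition — the role's source is not shown here):

- name: Wait for VM to be Running & Ready
  kubernetes.core.k8s_info:
    api_version: kubevirt.io/v1
    kind: VirtualMachine
    namespace: test-oadp-401
    name: test-vm
  register: vm
  # The VM must be scheduled, booted, and reporting Ready before backup can proceed
  until: >-
    vm.resources | length > 0 and
    vm.resources[0].status.printableStatus | default('') == 'Running' and
    vm.resources[0].status.ready | default(false)
  retries: 60
  delay: 5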
FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left). FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (58 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=4  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025/08/11 07:46:08 2025-08-11 07:45:48,479 p=24181 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:45:48,479 p=24181 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:45:48,741 p=24181 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:45:48,742 p=24181 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:45:49,017 p=24181 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:45:49,017 p=24181 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:45:49,295 p=24181 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:45:49,295 p=24181 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:45:49,309 p=24181 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:45:49,309 p=24181 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:45:49,326 p=24181 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:45:49,326 p=24181 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:45:49,338 p=24181 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:45:49,338 p=24181 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:45:49,648 p=24181 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:45:49,648 p=24181 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:45:49,678 p=24181 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:45:49,679 p=24181 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:45:49,698 p=24181 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:45:49,698 p=24181 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:45:49,700 p=24181 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:45:50,264 p=24181 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:45:50,264 p=24181 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:45:51,193 p=24181 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** 2025-08-11 07:45:51,193 p=24181 u=1002120000 n=ansible WARNING| [WARNING]: 
kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 07:45:51,193 p=24181 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:45:51,825 p=24181 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left). 2025-08-11 07:45:57,476 p=24181 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left). 2025-08-11 07:46:03,126 p=24181 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (58 retries left). 2025-08-11 07:46:08,721 p=24181 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] *** 2025-08-11 07:46:08,721 p=24181 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:46:08,725 p=24181 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:46:08,725 p=24181 u=1002120000 n=ansible INFO| localhost : ok=17 changed=4 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 STEP: Creating backup ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f @ 08/11/25 07:46:08.781 2025/08/11 07:46:08 Wait until backup ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f is completed backup phase: WaitingForPluginOperations DataUpload ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh phase: Accepted DataUpload Name: ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh and status: Accepted 2025/08/11 07:46:28 { "kind": "DataUpload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh", "generateName": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-", "namespace": "openshift-adp", "uid": "2d6376f8-04cc-4c0b-989a-1713dab26023", "resourceVersion": "76509", "generation": 2, "creationTimestamp": "2025-08-11T07:46:15Z", "labels": { "velero.io/async-operation-id": "du-d68e568d-297c-43a3-8b28-d0d1796f0d67.ef110df6-b6df-473de5fad", "velero.io/backup-name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "velero.io/backup-uid": "d68e568d-297c-43a3-8b28-d0d1796f0d67", "velero.io/pvc-uid": "ef110df6-b6df-4731-944f-0361518db163" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Backup", "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "uid": "d68e568d-297c-43a3-8b28-d0d1796f0d67", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:46:15Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:phase": {} } } }, { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:46:15Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/backup-name": {}, "f:velero.io/backup-uid": {}, "f:velero.io/pvc-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"d68e568d-297c-43a3-8b28-d0d1796f0d67\"}": {} } }, "f:spec": { 
".": {}, "f:backupStorageLocation": {}, "f:csiSnapshot": { ".": {}, "f:snapshotClass": {}, "f:storageClass": {}, "f:volumeSnapshot": {} }, "f:operationTimeout": {}, "f:snapshotType": {}, "f:sourceNamespace": {}, "f:sourcePVC": {} }, "f:status": { ".": {}, "f:progress": {} } } } ] }, "spec": { "snapshotType": "CSI", "csiSnapshot": { "volumeSnapshot": "velero-test-vm-dv-rf529", "storageClass": "odf-operator-cephfs", "snapshotClass": "odf-operator-cephfsplugin-snapclass" }, "sourcePVC": "test-vm-dv", "backupStorageLocation": "ts-dpa-1", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s" }, "status": { "phase": "Accepted", "progress": {}, "acceptedByNode": "ip-10-0-114-0.ec2.internal", "acceptedTimestamp": "2025-08-11T07:46:15Z" } } backup phase: WaitingForPluginOperations DataUpload ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh phase: Accepted DataUpload Name: ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh and status: Accepted 2025/08/11 07:46:48 { "kind": "DataUpload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh", "generateName": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-", "namespace": "openshift-adp", "uid": "2d6376f8-04cc-4c0b-989a-1713dab26023", "resourceVersion": "76509", "generation": 2, "creationTimestamp": "2025-08-11T07:46:15Z", "labels": { "velero.io/async-operation-id": "du-d68e568d-297c-43a3-8b28-d0d1796f0d67.ef110df6-b6df-473de5fad", "velero.io/backup-name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "velero.io/backup-uid": "d68e568d-297c-43a3-8b28-d0d1796f0d67", "velero.io/pvc-uid": "ef110df6-b6df-4731-944f-0361518db163" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Backup", "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "uid": "d68e568d-297c-43a3-8b28-d0d1796f0d67", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:46:15Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:phase": {} } } }, { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:46:15Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/backup-name": {}, "f:velero.io/backup-uid": {}, "f:velero.io/pvc-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"d68e568d-297c-43a3-8b28-d0d1796f0d67\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:csiSnapshot": { ".": {}, "f:snapshotClass": {}, "f:storageClass": {}, "f:volumeSnapshot": {} }, "f:operationTimeout": {}, "f:snapshotType": {}, "f:sourceNamespace": {}, "f:sourcePVC": {} }, "f:status": { ".": {}, "f:progress": {} } } } ] }, "spec": { "snapshotType": "CSI", "csiSnapshot": { "volumeSnapshot": "velero-test-vm-dv-rf529", "storageClass": "odf-operator-cephfs", "snapshotClass": "odf-operator-cephfsplugin-snapclass" }, "sourcePVC": "test-vm-dv", "backupStorageLocation": "ts-dpa-1", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s" }, "status": { "phase": "Accepted", "progress": {}, "acceptedByNode": "ip-10-0-114-0.ec2.internal", "acceptedTimestamp": "2025-08-11T07:46:15Z" } } backup phase: 
WaitingForPluginOperations DataUpload ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh phase: Accepted DataUpload Name: ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh and status: Accepted 2025/08/11 07:47:08 { "kind": "DataUpload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh", "generateName": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-", "namespace": "openshift-adp", "uid": "2d6376f8-04cc-4c0b-989a-1713dab26023", "resourceVersion": "76509", "generation": 2, "creationTimestamp": "2025-08-11T07:46:15Z", "labels": { "velero.io/async-operation-id": "du-d68e568d-297c-43a3-8b28-d0d1796f0d67.ef110df6-b6df-473de5fad", "velero.io/backup-name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "velero.io/backup-uid": "d68e568d-297c-43a3-8b28-d0d1796f0d67", "velero.io/pvc-uid": "ef110df6-b6df-4731-944f-0361518db163" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Backup", "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "uid": "d68e568d-297c-43a3-8b28-d0d1796f0d67", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:46:15Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:phase": {} } } }, { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:46:15Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/backup-name": {}, "f:velero.io/backup-uid": {}, "f:velero.io/pvc-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"d68e568d-297c-43a3-8b28-d0d1796f0d67\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:csiSnapshot": { ".": {}, "f:snapshotClass": {}, "f:storageClass": {}, "f:volumeSnapshot": {} }, "f:operationTimeout": {}, "f:snapshotType": {}, "f:sourceNamespace": {}, "f:sourcePVC": {} }, "f:status": { ".": {}, "f:progress": {} } } } ] }, "spec": { "snapshotType": "CSI", "csiSnapshot": { "volumeSnapshot": "velero-test-vm-dv-rf529", "storageClass": "odf-operator-cephfs", "snapshotClass": "odf-operator-cephfsplugin-snapclass" }, "sourcePVC": "test-vm-dv", "backupStorageLocation": "ts-dpa-1", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s" }, "status": { "phase": "Accepted", "progress": {}, "acceptedByNode": "ip-10-0-114-0.ec2.internal", "acceptedTimestamp": "2025-08-11T07:46:15Z" } } backup phase: WaitingForPluginOperations DataUpload ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh phase: Accepted DataUpload Name: ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh and status: Accepted 2025/08/11 07:47:28 { "kind": "DataUpload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh", "generateName": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-", "namespace": "openshift-adp", "uid": "2d6376f8-04cc-4c0b-989a-1713dab26023", "resourceVersion": "76509", "generation": 2, "creationTimestamp": "2025-08-11T07:46:15Z", "labels": { "velero.io/async-operation-id": "du-d68e568d-297c-43a3-8b28-d0d1796f0d67.ef110df6-b6df-473de5fad", "velero.io/backup-name": 
"ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "velero.io/backup-uid": "d68e568d-297c-43a3-8b28-d0d1796f0d67", "velero.io/pvc-uid": "ef110df6-b6df-4731-944f-0361518db163" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Backup", "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "uid": "d68e568d-297c-43a3-8b28-d0d1796f0d67", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:46:15Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:phase": {} } } }, { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:46:15Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/backup-name": {}, "f:velero.io/backup-uid": {}, "f:velero.io/pvc-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"d68e568d-297c-43a3-8b28-d0d1796f0d67\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:csiSnapshot": { ".": {}, "f:snapshotClass": {}, "f:storageClass": {}, "f:volumeSnapshot": {} }, "f:operationTimeout": {}, "f:snapshotType": {}, "f:sourceNamespace": {}, "f:sourcePVC": {} }, "f:status": { ".": {}, "f:progress": {} } } } ] }, "spec": { "snapshotType": "CSI", "csiSnapshot": { "volumeSnapshot": "velero-test-vm-dv-rf529", "storageClass": "odf-operator-cephfs", "snapshotClass": "odf-operator-cephfsplugin-snapclass" }, "sourcePVC": "test-vm-dv", "backupStorageLocation": "ts-dpa-1", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s" }, "status": { "phase": "Accepted", "progress": {}, "acceptedByNode": "ip-10-0-114-0.ec2.internal", "acceptedTimestamp": "2025-08-11T07:46:15Z" } } backup phase: WaitingForPluginOperations DataUpload ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh phase: InProgress DataUpload Name: ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh and status: InProgress 2025/08/11 07:47:48 { "kind": "DataUpload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh", "generateName": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-", "namespace": "openshift-adp", "uid": "2d6376f8-04cc-4c0b-989a-1713dab26023", "resourceVersion": "77807", "generation": 5, "creationTimestamp": "2025-08-11T07:46:15Z", "labels": { "velero.io/async-operation-id": "du-d68e568d-297c-43a3-8b28-d0d1796f0d67.ef110df6-b6df-473de5fad", "velero.io/backup-name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "velero.io/backup-uid": "d68e568d-297c-43a3-8b28-d0d1796f0d67", "velero.io/pvc-uid": "ef110df6-b6df-4731-944f-0361518db163" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Backup", "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "uid": "d68e568d-297c-43a3-8b28-d0d1796f0d67", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:46:15Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/backup-name": {}, "f:velero.io/backup-uid": {}, 
"f:velero.io/pvc-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"d68e568d-297c-43a3-8b28-d0d1796f0d67\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:csiSnapshot": { ".": {}, "f:snapshotClass": {}, "f:storageClass": {}, "f:volumeSnapshot": {} }, "f:operationTimeout": {}, "f:snapshotType": {}, "f:sourceNamespace": {}, "f:sourcePVC": {} }, "f:status": { ".": {}, "f:progress": {} } } }, { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:47:40Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:node": {}, "f:nodeOS": {}, "f:phase": {}, "f:progress": { "f:totalBytes": {} }, "f:startTimestamp": {} } } } ] }, "spec": { "snapshotType": "CSI", "csiSnapshot": { "volumeSnapshot": "velero-test-vm-dv-rf529", "storageClass": "odf-operator-cephfs", "snapshotClass": "odf-operator-cephfsplugin-snapclass" }, "sourcePVC": "test-vm-dv", "backupStorageLocation": "ts-dpa-1", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s" }, "status": { "phase": "InProgress", "startTimestamp": "2025-08-11T07:47:38Z", "progress": { "totalBytes": 5073010688 }, "node": "ip-10-0-114-0.ec2.internal", "nodeOS": "linux", "acceptedByNode": "ip-10-0-114-0.ec2.internal", "acceptedTimestamp": "2025-08-11T07:46:15Z" } } backup phase: WaitingForPluginOperations DataUpload ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh phase: InProgress DataUpload Name: ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh and status: InProgress 2025/08/11 07:48:08 { "kind": "DataUpload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-rfzzh", "generateName": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-", "namespace": "openshift-adp", "uid": "2d6376f8-04cc-4c0b-989a-1713dab26023", "resourceVersion": "78097", "generation": 7, "creationTimestamp": "2025-08-11T07:46:15Z", "labels": { "velero.io/async-operation-id": "du-d68e568d-297c-43a3-8b28-d0d1796f0d67.ef110df6-b6df-473de5fad", "velero.io/backup-name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "velero.io/backup-uid": "d68e568d-297c-43a3-8b28-d0d1796f0d67", "velero.io/pvc-uid": "ef110df6-b6df-4731-944f-0361518db163" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Backup", "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "uid": "d68e568d-297c-43a3-8b28-d0d1796f0d67", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:46:15Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/backup-name": {}, "f:velero.io/backup-uid": {}, "f:velero.io/pvc-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"d68e568d-297c-43a3-8b28-d0d1796f0d67\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:csiSnapshot": { ".": {}, "f:snapshotClass": {}, "f:storageClass": {}, "f:volumeSnapshot": {} }, "f:operationTimeout": {}, "f:snapshotType": {}, "f:sourceNamespace": {}, "f:sourcePVC": {} }, "f:status": { ".": {}, "f:progress": {} } } }, { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:48:00Z", "fieldsType": 
"FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:node": {}, "f:nodeOS": {}, "f:phase": {}, "f:progress": { "f:bytesDone": {}, "f:totalBytes": {} }, "f:startTimestamp": {} } } } ] }, "spec": { "snapshotType": "CSI", "csiSnapshot": { "volumeSnapshot": "velero-test-vm-dv-rf529", "storageClass": "odf-operator-cephfs", "snapshotClass": "odf-operator-cephfsplugin-snapclass" }, "sourcePVC": "test-vm-dv", "backupStorageLocation": "ts-dpa-1", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s" }, "status": { "phase": "InProgress", "startTimestamp": "2025-08-11T07:47:38Z", "progress": { "totalBytes": 5073010688, "bytesDone": 3122921472 }, "node": "ip-10-0-114-0.ec2.internal", "nodeOS": "linux", "acceptedByNode": "ip-10-0-114-0.ec2.internal", "acceptedTimestamp": "2025-08-11T07:46:15Z" } } backup phase: Completed STEP: Verify backup ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f has completed successfully @ 08/11/25 07:48:28.953 2025/08/11 07:48:28 Backup for case ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f succeeded STEP: Delete the appplication resources ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f @ 08/11/25 07:48:28.956 STEP: Cleanup Application for case ocp-kubevirt @ 08/11/25 07:48:28.956 2025/08/11 07:48:28 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] 
********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-401] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/08/11 07:48:58 2025-08-11 07:48:30,429 p=24449 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:48:30,429 p=24449 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:48:30,681 p=24449 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:48:30,681 p=24449 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:48:30,925 p=24449 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:48:30,925 p=24449 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:48:31,174 p=24449 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:48:31,174 p=24449 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:48:31,189 p=24449 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:48:31,189 p=24449 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:48:31,206 p=24449 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:48:31,206 p=24449 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:48:31,217 p=24449 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:48:31,218 p=24449 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:48:31,534 p=24449 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:48:31,535 p=24449 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:48:31,564 p=24449 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:48:31,565 p=24449 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:48:31,586 p=24449 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:48:31,586 p=24449 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:48:31,589 p=24449 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:48:32,151 p=24449 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:48:32,152 p=24449 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:48:58,005 p=24449 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-401] *** 2025-08-11 07:48:58,005 p=24449 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
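The DataUpload dumps above show the usual CSI datamover lifecycle: the CR is created as Accepted, a node-agent claims it, the phase moves to InProgress, and status.progress.bytesDone climbs toward totalBytes until the backup leaves WaitingForPluginOperations. The poll the harness runs can be approximated by hand; a minimal sketch, assuming access to the openshift-adp namespace and reusing the velero.io/backup-name label visible in the dumps:

  # Watch the DataUploads belonging to one backup until none remain InProgress.
  BACKUP_NAME=ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f
  oc get datauploads.velero.io -n openshift-adp \
    -l velero.io/backup-name="$BACKUP_NAME" \
    -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,DONE:.status.progress.bytesDone,TOTAL:.status.progress.totalBytes \
    --watch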
2025-08-11 07:48:58,005 p=24449 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:48:58,173 p=24449 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:48:58,173 p=24449 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 STEP: Create restore ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f from backup ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f @ 08/11/25 07:48:58.222 2025/08/11 07:48:58 Wait until restore ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f completes restore phase: WaitingForPluginOperations DataDownload ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-kj7lg phase: InProgress DataDownload Name: ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-kj7lg and status: InProgress 2025/08/11 07:49:18 { "kind": "DataDownload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-kj7lg", "generateName": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-", "namespace": "openshift-adp", "uid": "ea73d9b3-8229-445e-a51d-ca205801c41e", "resourceVersion": "79326", "generation": 4, "creationTimestamp": "2025-08-11T07:49:00Z", "labels": { "velero.io/async-operation-id": "dd-7b4769e2-00d6-4b83-9506-36dd044d9eeb.ef110df6-b6df-47345abbf", "velero.io/restore-name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "velero.io/restore-uid": "7b4769e2-00d6-4b83-9506-36dd044d9eeb" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Restore", "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "uid": "7b4769e2-00d6-4b83-9506-36dd044d9eeb", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:49:00Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/restore-name": {}, "f:velero.io/restore-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"7b4769e2-00d6-4b83-9506-36dd044d9eeb\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:nodeOS": {}, "f:operationTimeout": {}, "f:snapshotID": {}, "f:sourceNamespace": {}, "f:targetVolume": { ".": {}, "f:namespace": {}, "f:pv": {}, "f:pvc": {} } }, "f:status": { ".": {}, "f:progress": {} } } }, { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:49:09Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:node": {}, "f:phase": {}, "f:startTimestamp": {} } } } ] }, "spec": { "targetVolume": { "pvc": "test-vm-dv", "pv": "", "namespace": "test-oadp-401" }, "backupStorageLocation": "ts-dpa-1", "snapshotID": "3bb1c6d986f442650714dbc4b8c81765", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s", "nodeOS": "linux" }, "status": { "phase": "InProgress", "startTimestamp": "2025-08-11T07:49:09Z", "progress": {}, "node": "ip-10-0-114-0.ec2.internal", "acceptedByNode": "ip-10-0-4-228.ec2.internal", "acceptedTimestamp": "2025-08-11T07:49:00Z" } } restore phase: WaitingForPluginOperations DataDownload ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-kj7lg phase: InProgress DataDownload Name: 
ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-kj7lg and status: InProgress 2025/08/11 07:49:38 { "kind": "DataDownload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-kj7lg", "generateName": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-", "namespace": "openshift-adp", "uid": "ea73d9b3-8229-445e-a51d-ca205801c41e", "resourceVersion": "79625", "generation": 6, "creationTimestamp": "2025-08-11T07:49:00Z", "labels": { "velero.io/async-operation-id": "dd-7b4769e2-00d6-4b83-9506-36dd044d9eeb.ef110df6-b6df-47345abbf", "velero.io/restore-name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "velero.io/restore-uid": "7b4769e2-00d6-4b83-9506-36dd044d9eeb" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Restore", "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "uid": "7b4769e2-00d6-4b83-9506-36dd044d9eeb", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:49:00Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/restore-name": {}, "f:velero.io/restore-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"7b4769e2-00d6-4b83-9506-36dd044d9eeb\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:nodeOS": {}, "f:operationTimeout": {}, "f:snapshotID": {}, "f:sourceNamespace": {}, "f:targetVolume": { ".": {}, "f:namespace": {}, "f:pv": {}, "f:pvc": {} } }, "f:status": { ".": {}, "f:progress": {} } } }, { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:49:30Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:node": {}, "f:phase": {}, "f:progress": { "f:bytesDone": {}, "f:totalBytes": {} }, "f:startTimestamp": {} } } } ] }, "spec": { "targetVolume": { "pvc": "test-vm-dv", "pv": "", "namespace": "test-oadp-401" }, "backupStorageLocation": "ts-dpa-1", "snapshotID": "3bb1c6d986f442650714dbc4b8c81765", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s", "nodeOS": "linux" }, "status": { "phase": "InProgress", "startTimestamp": "2025-08-11T07:49:09Z", "progress": { "totalBytes": 5073010688, "bytesDone": 1434648576 }, "node": "ip-10-0-114-0.ec2.internal", "acceptedByNode": "ip-10-0-4-228.ec2.internal", "acceptedTimestamp": "2025-08-11T07:49:00Z" } } restore phase: WaitingForPluginOperations DataDownload ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-kj7lg phase: InProgress DataDownload Name: ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-kj7lg and status: InProgress 2025/08/11 07:49:58 { "kind": "DataDownload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-kj7lg", "generateName": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-", "namespace": "openshift-adp", "uid": "ea73d9b3-8229-445e-a51d-ca205801c41e", "resourceVersion": "79887", "generation": 8, "creationTimestamp": "2025-08-11T07:49:00Z", "labels": { "velero.io/async-operation-id": "dd-7b4769e2-00d6-4b83-9506-36dd044d9eeb.ef110df6-b6df-47345abbf", "velero.io/restore-name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "velero.io/restore-uid": 
"7b4769e2-00d6-4b83-9506-36dd044d9eeb" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Restore", "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "uid": "7b4769e2-00d6-4b83-9506-36dd044d9eeb", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:49:00Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/restore-name": {}, "f:velero.io/restore-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"7b4769e2-00d6-4b83-9506-36dd044d9eeb\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:nodeOS": {}, "f:operationTimeout": {}, "f:snapshotID": {}, "f:sourceNamespace": {}, "f:targetVolume": { ".": {}, "f:namespace": {}, "f:pv": {}, "f:pvc": {} } }, "f:status": { ".": {}, "f:progress": {} } } }, { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:49:50Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:node": {}, "f:phase": {}, "f:progress": { "f:bytesDone": {}, "f:totalBytes": {} }, "f:startTimestamp": {} } } } ] }, "spec": { "targetVolume": { "pvc": "test-vm-dv", "pv": "", "namespace": "test-oadp-401" }, "backupStorageLocation": "ts-dpa-1", "snapshotID": "3bb1c6d986f442650714dbc4b8c81765", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s", "nodeOS": "linux" }, "status": { "phase": "InProgress", "startTimestamp": "2025-08-11T07:49:09Z", "progress": { "totalBytes": 5073010688, "bytesDone": 2773417984 }, "node": "ip-10-0-114-0.ec2.internal", "acceptedByNode": "ip-10-0-4-228.ec2.internal", "acceptedTimestamp": "2025-08-11T07:49:00Z" } } restore phase: WaitingForPluginOperations DataDownload ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-kj7lg phase: InProgress DataDownload Name: ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-kj7lg and status: InProgress 2025/08/11 07:50:18 { "kind": "DataDownload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-kj7lg", "generateName": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-", "namespace": "openshift-adp", "uid": "ea73d9b3-8229-445e-a51d-ca205801c41e", "resourceVersion": "80307", "generation": 11, "creationTimestamp": "2025-08-11T07:49:00Z", "labels": { "velero.io/async-operation-id": "dd-7b4769e2-00d6-4b83-9506-36dd044d9eeb.ef110df6-b6df-47345abbf", "velero.io/restore-name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "velero.io/restore-uid": "7b4769e2-00d6-4b83-9506-36dd044d9eeb" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Restore", "name": "ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f", "uid": "7b4769e2-00d6-4b83-9506-36dd044d9eeb", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:49:00Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/restore-name": {}, "f:velero.io/restore-uid": {} }, "f:ownerReferences": { ".": {}, 
"k:{\"uid\":\"7b4769e2-00d6-4b83-9506-36dd044d9eeb\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:nodeOS": {}, "f:operationTimeout": {}, "f:snapshotID": {}, "f:sourceNamespace": {}, "f:targetVolume": { ".": {}, "f:namespace": {}, "f:pv": {}, "f:pvc": {} } }, "f:status": { ".": {}, "f:progress": {} } } }, { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:50:11Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:node": {}, "f:phase": {}, "f:progress": { "f:bytesDone": {}, "f:totalBytes": {} }, "f:startTimestamp": {} } } } ] }, "spec": { "targetVolume": { "pvc": "test-vm-dv", "pv": "", "namespace": "test-oadp-401" }, "backupStorageLocation": "ts-dpa-1", "snapshotID": "3bb1c6d986f442650714dbc4b8c81765", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s", "nodeOS": "linux" }, "status": { "phase": "InProgress", "startTimestamp": "2025-08-11T07:49:09Z", "progress": { "totalBytes": 5073010688, "bytesDone": 5073010688 }, "node": "ip-10-0-114-0.ec2.internal", "acceptedByNode": "ip-10-0-4-228.ec2.internal", "acceptedTimestamp": "2025-08-11T07:49:00Z" } } restore phase: Completed STEP: Validate the application after restore @ 08/11/25 07:50:38.353 STEP: Verify Application deployment for case ocp-kubevirt @ 08/11/25 07:50:38.353 2025/08/11 07:50:38 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] 
************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** ok: [localhost] FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left). FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left). FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (58 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=4  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025/08/11 07:51:05 2025-08-11 07:50:39,814 p=24667 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:50:39,814 p=24667 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:50:40,067 p=24667 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:50:40,068 p=24667 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:50:40,312 p=24667 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:50:40,312 p=24667 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:50:40,555 p=24667 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:50:40,555 p=24667 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:50:40,568 p=24667 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:50:40,568 p=24667 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:50:40,585 p=24667 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:50:40,585 p=24667 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:50:40,596 p=24667 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:50:40,596 p=24667 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:50:40,883 p=24667 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:50:40,883 p=24667 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:50:40,909 p=24667 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:50:40,909 p=24667 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:50:40,926 p=24667 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:50:40,926 p=24667 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:50:40,928 
p=24667 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:50:41,470 p=24667 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:50:41,470 p=24667 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:50:42,440 p=24667 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left). 2025-08-11 07:50:48,055 p=24667 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** 2025-08-11 07:50:48,055 p=24667 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 07:50:48,055 p=24667 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:50:48,724 p=24667 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left). 2025-08-11 07:50:54,344 p=24667 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left). 2025-08-11 07:50:59,964 p=24667 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (58 retries left). 2025-08-11 07:51:05,628 p=24667 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] *** 2025-08-11 07:51:05,628 p=24667 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:51:05,633 p=24667 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:51:05,633 p=24667 u=1002120000 n=ansible INFO| localhost : ok=17 changed=4 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 < Exit [It] [tc-id:OADP-401] [kubevirt] Started VM should over ceph filesystem mode @ 08/11/25 07:51:05.676 (6m59.171s) > Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 07:51:05.676 2025/08/11 07:51:05 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 07:51:05.676 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:51:05.676 2025/08/11 07:51:05 Cleaning app 2025/08/11 07:51:05 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
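The post-restore validation above gated on two VirtualMachine conditions: Ready (the VM is running) and AgentConnected (the guest agent inside the VM is reporting in), which is why the retries only drain once the guest has fully booted. A rough hand-run equivalent, assuming the VM object is named test-vm (the log only names its DataVolume/PVC, test-vm-dv):

  # Both waits poll status.conditions on the VirtualMachine object.
  oc wait vm/test-vm -n test-oadp-401 --for=condition=Ready --timeout=10m
  oc wait vm/test-vm -n test-oadp-401 --for=condition=AgentConnected --timeout=10m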
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-401] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/08/11 07:51:29 2025-08-11 07:51:07,169 p=24950 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:51:07,169 p=24950 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:51:07,425 p=24950 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:51:07,425 p=24950 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:51:07,682 p=24950 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:51:07,682 p=24950 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:51:07,946 p=24950 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:51:07,946 p=24950 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:51:07,961 p=24950 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:51:07,961 p=24950 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:51:07,978 p=24950 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:51:07,978 p=24950 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:51:07,989 p=24950 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:51:07,990 p=24950 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:51:08,292 p=24950 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:51:08,292 p=24950 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:51:08,318 p=24950 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:51:08,319 p=24950 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:51:08,335 p=24950 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:51:08,336 p=24950 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:51:08,337 p=24950 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:51:08,921 p=24950 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:51:08,922 p=24950 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:51:29,748 p=24950 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-401] *** 2025-08-11 07:51:29,748 p=24950 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
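The only changed task in this cleanup play is the namespace removal, and the roughly 21-second jump in the timestamps above (07:51:08 to 07:51:29) is the namespace finalizers draining. A sketch of the equivalent manual step, assuming the role does nothing beyond a blocking delete:

  oc delete namespace test-oadp-401 --ignore-not-found --wait=true --timeout=10m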
2025-08-11 07:51:29,748 p=24950 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:51:29,926 p=24950 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:51:29,926 p=24950 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:51:29.973 (24.297s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:51:29.973 2025/08/11 07:51:29 Cleaning setup resources for the backup 2025/08/11 07:51:29 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/08/11 07:51:29 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/08/11 07:51:30 Deleting VolumeSnapshotClass 'example-snapclass' < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:51:30.089 (116ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:51:30.089 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:51:30.099 (10ms) • [443.594 seconds] ------------------------------ [AfterSuite]  /alabama/cspi/e2e/kubevirt-plugin/kubevirt_suite_test.go:105 > Enter [AfterSuite] TOP-LEVEL @ 08/11/25 07:51:30.099 < Exit [AfterSuite] TOP-LEVEL @ 08/11/25 07:51:30.109 (10ms) [AfterSuite] PASSED [0.010 seconds] ------------------------------ [ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report autogenerated by Ginkgo > Enter [ReportAfterSuite] TOP-LEVEL @ 08/11/25 07:51:30.109 < Exit [ReportAfterSuite] TOP-LEVEL @ 08/11/25 07:51:30.112 (3ms) [ReportAfterSuite] PASSED [0.003 seconds] ------------------------------ Ran 4 of 5 Specs in 929.743 seconds SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 1 Skipped PASS Ginkgo ran 1 suite in 16m10.409135049s Test Suite Passed + readonly 'RED=\e[31m' + RED='\e[31m' + readonly 'BLUE=\033[34m' + BLUE='\033[34m' + readonly 'CLEAR=\e[39m' + CLEAR='\e[39m' ++ oc get infrastructures cluster -o 'jsonpath={.status.platform}' ++ awk '{print tolower($0)}' + CLOUD_PROVIDER=aws + [[ '' == \t\r\u\e ]] + echo /home/jenkins/.kube/config /home/jenkins/.kube/config + [[ aws == *-arm* ]] + [[ aws == *-fips* ]] + E2E_TIMEOUT_MULTIPLIER=2 + export NAMESPACE=openshift-adp + NAMESPACE=openshift-adp + export PROVIDER=aws + PROVIDER=aws ++ echo aws ++ awk '{print tolower($0)}' + BACKUP_LOCATION=aws + export BACKUP_LOCATION=aws + BACKUP_LOCATION=aws + export BUCKET=ci-op-6fip6j15-interopoadp + BUCKET=ci-op-6fip6j15-interopoadp + OADP_CREDS_FILE=/tmp/test-settings/credentials + OADP_VSL_CREDS_FILE=/tmp/test-settings/aws_vsl_creds +++ readlink -f /alabama/cspi/test_settings/scripts/test_runner.sh ++ dirname /alabama/cspi/test_settings/scripts/test_runner.sh + readonly SCRIPT_DIR=/alabama/cspi/test_settings/scripts + SCRIPT_DIR=/alabama/cspi/test_settings/scripts ++ cd /alabama/cspi/test_settings/scripts ++ git rev-parse --show-toplevel + readonly TOP_DIR=/alabama/cspi + TOP_DIR=/alabama/cspi + echo /alabama/cspi /alabama/cspi + TESTS_FOLDER=/alabama/cspi/e2e ++ oc get nodes -o 'jsonpath={.items[*].metadata.labels.kubernetes\.io/arch}' ++ tr ' ' '\n' ++ sort -u ++ xargs + export NODES_ARCHITECTURE=amd64 + NODES_ARCHITECTURE=amd64 + export OADP_REPOSITORY=redhat + OADP_REPOSITORY=redhat + SKIP_DPA_CREATION=false ++ oc get ns openshift-storage ++ echo true + OPENSHIFT_STORAGE=true + '[' redhat == upstream-velero ']' + '[' true == true ']' ++ oc get sc ++ awk '$1 ~ /^.+ceph-rbd$/ {print $1}' ++ tail -1 + 
CEPH_RBD_STORAGE_CLASS=odf-operator-ceph-rbd + '[' -n odf-operator-ceph-rbd ']' + export CEPH_RBD_STORAGE_CLASS + echo 'ceph-rbd StorageClass found: odf-operator-ceph-rbd' ceph-rbd StorageClass found: odf-operator-ceph-rbd ++ oc get storageclass -o 'jsonpath={range .items[*]}{@.metadata.name}{" "}{@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}{"\n"}{end}' ++ awk '$2=="true"{print $1}' ++ wc -l + NUM_DEFAULT_STORAGE_CLASS=1 + '[' 1 -ne 1 ']' ++ oc get storageclass -o 'jsonpath={.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=='\''true'\'')].metadata.name}' + DEFAULT_SC=odf-operator-ceph-rbd + export STORAGE_CLASS=odf-operator-ceph-rbd + STORAGE_CLASS=odf-operator-ceph-rbd + '[' -n odf-operator-ceph-rbd ']' + '[' odf-operator-ceph-rbd '!=' odf-operator-ceph-rbd ']' + export STORAGE_CLASS_OPENSHIFT_STORAGE=odf-operator-ceph-rbd + STORAGE_CLASS_OPENSHIFT_STORAGE=odf-operator-ceph-rbd + echo 'Using the StorageClass for openshift-storage: odf-operator-ceph-rbd' Using the StorageClass for openshift-storage: odf-operator-ceph-rbd + [[ amd64 != \a\m\d\6\4 ]] + TEST_FILTER='!// || (// && !exclude_aws && (!/target/ || target_aws) ) ' + [[ aws =~ ^osp ]] + [[ aws =~ ^vsphere ]] + [[ aws =~ ^gcp-wif ]] + [[ aws =~ ^ibmcloud ]] ++ oc config current-context ++ awk -F / '{print $2}' + SETTINGS_TMP=/alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443 + mkdir -p /alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443 ++ oc get authentication cluster -o 'jsonpath={.spec.serviceAccountIssuer}' + IS_OIDC= + '[' '!' -z ']' + [[ aws == \a\w\s ]] + export PROVIDER=aws + PROVIDER=aws + export CREDS_SECRET_REF=cloud-credentials + CREDS_SECRET_REF=cloud-credentials ++ oc get infrastructures cluster -o 'jsonpath={.status.platformStatus.aws.region}' --allow-missing-template-keys=false + export REGION=us-east-1 + REGION=us-east-1 + settings_script=aws_settings.sh + '[' aws == aws-sts ']' + BUCKET=ci-op-6fip6j15-interopoadp + TMP_DIR=/alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443 + source /alabama/cspi/test_settings/scripts/aws_settings.sh ++ cat ++ [[ aws == *aws* ]] ++ cat ++ echo -e '\n }\n}' +++ cat /alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443/settings.json ++ x='{ "metadata": { "namespace": "openshift-adp" }, "spec": { "configuration":{ "velero":{ "defaultPlugins": [ "openshift", "aws" ] } }, "backupLocations": [ { "velero": { "provider": "aws", "default": true, "config": { "region": "us-east-1" }, "credential":{ "name": "cloud-credentials", "key": "cloud" }, "objectStorage":{ "bucket": "ci-op-6fip6j15-interopoadp" } } } ] , "snapshotLocations": [ { "velero": { "provider": "aws", "config": { "profile": "default", "region": "us-east-1" } } } ] } }' ++ echo '{ "metadata": { "namespace": "openshift-adp" }, "spec": { "configuration":{ "velero":{ "defaultPlugins": [ "openshift", "aws" ] } }, "backupLocations": [ { "velero": { "provider": "aws", "default": true, "config": { "region": "us-east-1" }, "credential":{ "name": "cloud-credentials", "key": "cloud" }, "objectStorage":{ "bucket": "ci-op-6fip6j15-interopoadp" } } } ] , "snapshotLocations": [ { "velero": { "provider": "aws", "config": { "profile": "default", "region": "us-east-1" } } } ] } }' ++ grep -o '^[^#]*' + FILE_SETTINGS_NAME=settings.json + printf '\033[34mGenerated settings file under 
/alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443/settings.json\e[39m\n' Generated settings file under /alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443/settings.json + cat /alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443/settings.json ++ oc get volumesnapshotclass -o name + for i in $(oc get volumesnapshotclass -o name) + oc annotate volumesnapshotclass.snapshot.storage.k8s.io/csi-aws-vsc snapshot.storage.kubernetes.io/is-default-class- volumesnapshotclass.snapshot.storage.k8s.io/csi-aws-vsc annotated + for i in $(oc get volumesnapshotclass -o name) + oc annotate volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-cephfsplugin-snapclass snapshot.storage.kubernetes.io/is-default-class- volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-cephfsplugin-snapclass annotated + for i in $(oc get volumesnapshotclass -o name) + oc annotate volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-rbdplugin-snapclass snapshot.storage.kubernetes.io/is-default-class- volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-rbdplugin-snapclass annotated ++ ./e2e/must-gather/get-latest-build.sh + oc get configmaps -n default must-gather-image + UPSTREAM_VERSION=99.0.0 ++ oc get OperatorCondition -n openshift-adp -o 'jsonpath={.items[*].metadata.name}' ++ awk -F v '{print $2}' + OADP_VERSION=1.5.0 + '[' -z 1.5.0 ']' + '[' 1.5.0 == 99.0.0 ']' ++ oc get sub redhat-oadp-operator -n openshift-adp -o 'jsonpath={.spec.source}' + OADP_REPO=redhat-operators + '[' -z redhat-operators ']' + '[' redhat-operators == redhat-operators ']' + REGISTRY_PATH=registry.redhat.io/oadp/oadp-mustgather-rhel9: + TAG=1.5.0 + export MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 + MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 + echo registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 + export MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 + MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 + '[' -z registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 ']' + export NUM_OF_OADP_INSTANCES=1 + NUM_OF_OADP_INSTANCES=1 ++ echo --focus=interop ++ tr ' ' '\n' ++ grep '^--' ++ tr '\n' ' ' + GINKO_PARAM='--focus=interop ' ++ echo --focus=interop ++ tr ' ' '\n' ++ grep '^-' ++ grep -v '^--' ++ tr '\n' ' ' + TEST_PARAM= + ginkgo run --nodes=1 -mod=mod --show-node-events --flake-attempts 3 --junit-report=/logs/artifacts/junit_oadp_interop_results.xml '--label-filter=!// || (// && !exclude_aws && (!/target/ || target_aws) ) ' --focus=interop -p /alabama/cspi/e2e/ -- -credentials_file=/tmp/test-settings/credentials -vsl_credentials_file=/tmp/test-settings/aws_vsl_creds -oadp_namespace=openshift-adp -settings=/alabama/cspi/output_files/api-ci-op-6fip6j15-6e951-cspilp-interop-ccitredhat-com:6443/settings.json -must_gather_image=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 -timeout_multiplier=2 -skip_dpa_creation=false 2025/08/11 07:51:31 maxprocs: Leaving GOMAXPROCS=16: CPU quota undefined 2025/08/11 07:51:41 Setting up clients 2025/08/11 07:51:41 Getting default StorageClass... 
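The must-gather tag resolution in the trace above works by reading the OperatorCondition name in the openshift-adp namespace (something like redhat-oadp-operator.v1.5.0, judging by the derived version) and taking everything after the first "v". Restated without the shell-trace noise, assuming that name format:

  OADP_VERSION=$(oc get operatorcondition -n openshift-adp \
    -o 'jsonpath={.items[*].metadata.name}' | awk -F v '{print $2}')
  echo "registry.redhat.io/oadp/oadp-mustgather-rhel9:${OADP_VERSION}"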
2025/08/11 07:51:41 Checking default storage class count Run the command: oc get sc 2025/08/11 07:51:41 Got default StorageClass odf-operator-ceph-rbd 2025/08/11 07:51:41 oc get sc
NAME                                   PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2-csi                                ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   65m
gp3-csi                                ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   65m
odf-operator-ceph-rbd (default)        openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   21m
odf-operator-ceph-rbd-virtualization   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   21m
odf-operator-cephfs                    openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   21m
openshift-storage.noobaa.io            openshift-storage.noobaa.io/obc         Delete          Immediate              false                  17m
2025/08/11 07:51:41 Using velero prefix: velero-e2e-04ea7bf9-7688-11f0-aa2b-0a580a83369f 2025/08/11 07:51:41 Checking default storage class count Running Suite: OADP E2E Suite - /alabama/cspi/e2e ================================================= Random Seed: 1754898691 Will run 10 of 227 specs ------------------------------ [SynchronizedBeforeSuite]  /alabama/cspi/e2e/e2e_suite_test.go:84 > Enter [SynchronizedBeforeSuite] TOP-LEVEL @ 08/11/25 07:51:41.971 < Exit [SynchronizedBeforeSuite] TOP-LEVEL @ 08/11/25 07:51:41.971 (0s) > Enter [SynchronizedBeforeSuite] TOP-LEVEL @ 08/11/25 07:51:41.971 2025/08/11 07:51:41 The VSL credentials file: /tmp/test-settings/aws_vsl_creds doesn't exist 2025/08/11 07:51:41 The error message is: stat /tmp/test-settings/aws_vsl_creds: no such file or directory < Exit [SynchronizedBeforeSuite] TOP-LEVEL @ 08/11/25 07:51:41.988 (17ms) [SynchronizedBeforeSuite] PASSED [0.017 seconds] ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Incremental backup restore tests Incremental restore pod count [tc-id:OADP-165][interop] Todolist app with CSI - policy: update /alabama/cspi/e2e/incremental_restore/backup_restore_incremental.go:94 > Enter [BeforeEach] Incremental backup restore tests @ 08/11/25 07:51:41.99 < Exit [BeforeEach] Incremental backup restore tests @ 08/11/25 07:51:41.997 (6ms) > Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 07:51:41.997 < Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 07:51:41.997 (0s) > Enter [It] [tc-id:OADP-165][interop] Todolist app with CSI - policy: update @ 08/11/25 07:51:41.997 2025/08/11 07:51:41 Delete all downloadrequest No download requests are found STEP: Create DPA CR @ 08/11/25 07:51:42.002 2025/08/11 07:51:42 csi 2025/08/11 07:51:42 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "e2c4e6d4-2737-458c-9d35-0c04a3418d54", "resourceVersion": "81743", "generation": 1, "creationTimestamp": "2025-08-11T07:51:42Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T07:51:42Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { 
"defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 08/11/25 07:51:42.031 2025/08/11 07:51:42 Waiting for velero pod to be running 2025/08/11 07:51:47 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' STEP: Installing application for case todolist-backup @ 08/11/25 07:51:47.051 2025/08/11 07:51:47 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check namespace todolist-mariadb-csi-policy-update] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Create namespace todolist-mariadb-csi-policy-update] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Ensure namespace todolist-mariadb-csi-policy-update is present] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Deploy todolist-mysql application] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check mysql pod status (30 retries left). FAILED - RETRYING: [localhost]: Check mysql pod status (29 retries left). FAILED - RETRYING: [localhost]: Check mysql pod status (28 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod status] *** ok: [localhost] FAILED - RETRYING: [localhost]: Check todolist pod status (30 retries left). FAILED - RETRYING: [localhost]: Check todolist pod status (29 retries left). FAILED - RETRYING: [localhost]: Check todolist pod status (28 retries left). FAILED - RETRYING: [localhost]: Check todolist pod status (27 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until service is ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until todolist API server starts] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Add additional items todo list] *** changed: [localhost] Pausing for 30 seconds TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait for 30 seconds] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=25  changed=9  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/08/11 07:52:50 2025-08-11 07:51:48,543 p=26243 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:51:48,544 p=26243 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:51:48,785 p=26243 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:51:48,785 p=26243 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:51:49,029 p=26243 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:51:49,029 p=26243 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:51:49,273 p=26243 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:51:49,273 p=26243 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:51:49,287 p=26243 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:51:49,287 p=26243 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:51:49,305 p=26243 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:51:49,306 p=26243 u=1002120000 
n=ansible INFO| ok: [localhost] 2025-08-11 07:51:49,320 p=26243 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:51:49,321 p=26243 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:51:49,611 p=26243 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:51:49,611 p=26243 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:51:49,636 p=26243 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:51:49,637 p=26243 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:51:49,654 p=26243 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:51:49,654 p=26243 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:51:49,656 p=26243 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:51:50,192 p=26243 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:51:50,192 p=26243 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:51:50,955 p=26243 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check namespace todolist-mariadb-csi-policy-update] *** 2025-08-11 07:51:50,955 p=26243 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 07:51:50,956 p=26243 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:51:51,313 p=26243 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Create namespace todolist-mariadb-csi-policy-update] *** 2025-08-11 07:51:51,313 p=26243 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:51:51,932 p=26243 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Ensure namespace todolist-mariadb-csi-policy-update is present] *** 2025-08-11 07:51:51,933 p=26243 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:51:52,912 p=26243 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Deploy todolist-mysql application] *** 2025-08-11 07:51:52,913 p=26243 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:51:53,612 p=26243 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check mysql pod status (30 retries left). 2025-08-11 07:51:57,209 p=26243 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check mysql pod status (29 retries left). 2025-08-11 07:52:00,810 p=26243 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check mysql pod status (28 retries left). 2025-08-11 07:52:04,448 p=26243 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod status] *** 2025-08-11 07:52:04,448 p=26243 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:52:05,177 p=26243 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check todolist pod status (30 retries left). 
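The mysql and todolist status checks in this play retry up to 30 times with a few seconds between attempts, which is what produces the FAILED - RETRYING noise even on a healthy run. A rough single-command equivalent, with the caveat that the label selectors here are assumptions (the play output never shows the role's actual filters):

  NS=todolist-mariadb-csi-policy-update
  oc wait pod -n "$NS" -l app=mysql --for=condition=Ready --timeout=5m
  oc wait pod -n "$NS" -l app=todolist --for=condition=Ready --timeout=5m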
2025-08-11 07:52:08,783 p=26243 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check todolist pod status (29 retries left). 2025-08-11 07:52:12,402 p=26243 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check todolist pod status (28 retries left). 2025-08-11 07:52:16,027 p=26243 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check todolist pod status (27 retries left). 2025-08-11 07:52:19,693 p=26243 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod status] *** 2025-08-11 07:52:19,693 p=26243 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:52:20,006 p=26243 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until service is ready for connections] *** 2025-08-11 07:52:20,006 p=26243 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:52:20,364 p=26243 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until todolist API server starts] *** 2025-08-11 07:52:20,364 p=26243 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:52:20,732 p=26243 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Add additional items todo list] *** 2025-08-11 07:52:20,732 p=26243 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:52:20,748 p=26243 u=1002120000 n=ansible INFO| Pausing for 30 seconds 2025-08-11 07:52:50,750 p=26243 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait for 30 seconds] *** 2025-08-11 07:52:50,751 p=26243 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:52:50,771 p=26243 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:52:50,771 p=26243 u=1002120000 n=ansible INFO| localhost : ok=25 changed=9 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 STEP: Verify Application deployment @ 08/11/25 07:52:50.817 2025/08/11 07:52:50 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Validating todolist] *** included: /alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb/tasks/validation_task.yml for localhost [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
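The validation tasks that follow check that the mysql and todolist pods are running and that the todolist API answers on its route. A minimal shell sketch of the same checks, assuming the app=mysql/app=todolist labels and the todolist-route name that appear elsewhere in this log (the exact item-checking endpoint is role-specific and not shown here), would be:

  NS=todolist-mariadb-csi-policy-update
  # Wait for the pods to become Ready; the role does this with retries/until loops instead.
  oc -n "$NS" wait pod -l app=mysql --for=condition=Ready --timeout=120s
  oc -n "$NS" wait pod -l app=todolist --for=condition=Ready --timeout=120s
  # Probe the todolist API through its route, as the 'Obtain todolist route' task does.
  HOST=$(oc -n "$NS" get route todolist-route -o jsonpath='{.spec.host}')
  curl -fsS "http://$HOST/" >/dev/null && echo "todolist API is up"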
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod is running] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until mysql service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod is running] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until todolist API server starts] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Obtain todolist route] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find 1st database item] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find the string in incomplete items] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=23  changed=6  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025/08/11 07:52:57 2025-08-11 07:52:52,249 p=26737 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:52:52,249 p=26737 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:52:52,486 p=26737 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:52:52,487 p=26737 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:52:52,727 p=26737 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:52:52,728 p=26737 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:52:52,976 p=26737 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:52:52,977 p=26737 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:52:52,991 p=26737 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:52:52,991 p=26737 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:52:53,010 p=26737 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:52:53,010 p=26737 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:52:53,022 p=26737 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:52:53,022 p=26737 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:52:53,323 p=26737 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:52:53,323 p=26737 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:52:53,353 p=26737 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:52:53,353 p=26737 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:52:53,370 p=26737 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:52:53,371 p=26737 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:52:53,372 p=26737 u=1002120000 n=ansible INFO| PLAY [Execute Task] 
************************************************************ 2025-08-11 07:52:53,929 p=26737 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:52:53,929 p=26737 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:52:54,138 p=26737 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Validating todolist] *** 2025-08-11 07:52:54,147 p=26737 u=1002120000 n=ansible INFO| included: /alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb/tasks/validation_task.yml for localhost 2025-08-11 07:52:54,977 p=26737 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod is running] *** 2025-08-11 07:52:54,978 p=26737 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 07:52:54,978 p=26737 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:52:55,299 p=26737 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until mysql service ready for connections] *** 2025-08-11 07:52:55,299 p=26737 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:52:55,948 p=26737 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod is running] *** 2025-08-11 07:52:55,948 p=26737 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:52:56,282 p=26737 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until todolist API server starts] *** 2025-08-11 07:52:56,283 p=26737 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:52:57,165 p=26737 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Obtain todolist route] *** 2025-08-11 07:52:57,165 p=26737 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:52:57,548 p=26737 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find 1st database item] *** 2025-08-11 07:52:57,548 p=26737 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:52:57,852 p=26737 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find the string in incomplete items] *** 2025-08-11 07:52:57,852 p=26737 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:52:57,857 p=26737 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:52:57,857 p=26737 u=1002120000 n=ansible INFO| localhost : ok=23 changed=6 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 07:52:57.905 Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false 2025/08/11 07:52:58 The 'openshift-storage' namespace exists 2025/08/11 07:52:58 Checking default storage class count 2025/08/11 07:52:58 Using the CSI driver: openshift-storage.rbd.csi.ceph.com 2025/08/11 07:52:58 Snapclass 'example-snapclass' doesn't exist, creating 2025/08/11 07:52:58 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/08/11 07:52:58 Checking default storage class count Skipping creation of StorageClass 
The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/08/11 07:52:58 {{ } { } [{{ } {mysql todolist-mariadb-csi-policy-update c311d6db-8cfb-469b-994c-decca59936f8 82058 0 2025-08-11 07:51:52 +0000 UTC map[app:mysql] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-1754898713 reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-08-11 07:51:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2025-08-11 07:51:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-08-11 07:51:52 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status} {csi-addons-manager Update v1 2025-08-11 07:51:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} }]} {[ReadWriteOnce] nil {map[] map[storage:{{1073741824 0} {} 1Gi BinarySI}]} pvc-c311d6db-8cfb-469b-994c-decca59936f8 0xc000f8c0e0 0xc000f8c0f0 nil nil } {Bound [ReadWriteOnce] map[storage:{{1073741824 0} {} 1Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f @ 08/11/25 07:52:58.124 2025/08/11 07:52:58 Wait until backup todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f is completed backup phase: Completed 2025/08/11 07:53:18 Verify the Backup has CSIVolumeSnapshotsAttempted and CSIVolumeSnapshotsCompleted field on status 2025/08/11 07:53:18 Run velero describe on the backup 2025/08/11 07:53:18 [./velero describe backup todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f -n openshift-adp --details --insecure-skip-tls-verify] 2025/08/11 07:53:18 Exec stderr: "" 2025/08/11 07:53:18 Name: todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f Namespace: openshift-adp Labels: velero.io/storage-location=ts-dpa-1 Annotations: velero.io/resource-timeout=10m0s velero.io/source-cluster-k8s-gitversion=v1.33.2 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=33 Phase: Completed Namespaces: Included: todolist-mariadb-csi-policy-update Excluded: Resources: Included: * Excluded: Cluster-scoped: auto Label selector: Or label selector: Storage Location: ts-dpa-1 Velero-Native Snapshot PVs: auto Snapshot Move Data: false Data Mover: velero TTL: 720h0m0s CSISnapshotTimeout: 10m0s ItemOperationTimeout: 4h0m0s Hooks: Backup Format Version: 1.1.0 Started: 2025-08-11 07:52:58 +0000 UTC Completed: 2025-08-11 07:53:06 +0000 UTC Expiration: 2025-09-10 07:52:58 +0000 UTC Total items to be backed up: 66 Items backed up: 66 Backup Item Operations: Operation for volumesnapshots.snapshot.storage.k8s.io todolist-mariadb-csi-policy-update/velero-mysql-hj7gl: Backup Item Action Plugin: velero.io/csi-volumesnapshot-backupper Operation ID: 
todolist-mariadb-csi-policy-update/velero-mysql-hj7gl/2025-08-11T07:53:04Z Items to Update: volumesnapshots.snapshot.storage.k8s.io todolist-mariadb-csi-policy-update/velero-mysql-hj7gl volumesnapshotcontents.snapshot.storage.k8s.io /snapcontent-1b861719-fb56-431a-9f39-6613aef2f265 Phase: Completed Created: 2025-08-11 07:53:04 +0000 UTC Started: 2025-08-11 07:53:04 +0000 UTC Updated: 2025-08-11 07:53:05 +0000 UTC Resource List: apiextensions.k8s.io/v1/CustomResourceDefinition: - reclaimspacecronjobs.csiaddons.openshift.io - securitycontextconstraints.security.openshift.io apps/v1/Deployment: - todolist-mariadb-csi-policy-update/mysql - todolist-mariadb-csi-policy-update/todolist apps/v1/ReplicaSet: - todolist-mariadb-csi-policy-update/mysql-86bc866cfb - todolist-mariadb-csi-policy-update/todolist-6d856b79d authorization.openshift.io/v1/RoleBinding: - todolist-mariadb-csi-policy-update/admin - todolist-mariadb-csi-policy-update/system:deployers - todolist-mariadb-csi-policy-update/system:image-builders - todolist-mariadb-csi-policy-update/system:image-pullers csiaddons.openshift.io/v1alpha1/ReclaimSpaceCronJob: - todolist-mariadb-csi-policy-update/mysql-1754898712 discovery.k8s.io/v1/EndpointSlice: - todolist-mariadb-csi-policy-update/mysql-5rl9z - todolist-mariadb-csi-policy-update/todolist-s2n94 rbac.authorization.k8s.io/v1/RoleBinding: - todolist-mariadb-csi-policy-update/admin - todolist-mariadb-csi-policy-update/system:deployers - todolist-mariadb-csi-policy-update/system:image-builders - todolist-mariadb-csi-policy-update/system:image-pullers route.openshift.io/v1/Route: - todolist-mariadb-csi-policy-update/todolist-route security.openshift.io/v1/SecurityContextConstraints: - todolist-mariadb-csi-policy-update-scc snapshot.storage.k8s.io/v1/VolumeSnapshot: - todolist-mariadb-csi-policy-update/velero-mysql-hj7gl snapshot.storage.k8s.io/v1/VolumeSnapshotClass: - example-snapclass snapshot.storage.k8s.io/v1/VolumeSnapshotContent: - snapcontent-1b861719-fb56-431a-9f39-6613aef2f265 v1/ConfigMap: - todolist-mariadb-csi-policy-update/kube-root-ca.crt - todolist-mariadb-csi-policy-update/openshift-service-ca.crt v1/Endpoints: - todolist-mariadb-csi-policy-update/mysql - todolist-mariadb-csi-policy-update/todolist v1/Event: - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa714d6409605 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa714df1d2e29 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa714e0cb2694 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa715002a2232 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa71730f7a06e - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa71732760a14 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa71736572443 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa71736c56488 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb.185aa714d62cf646 - todolist-mariadb-csi-policy-update/mysql.185aa714d22def75 - todolist-mariadb-csi-policy-update/mysql.185aa714d22fe8b1 - todolist-mariadb-csi-policy-update/mysql.185aa714d547ad92 - todolist-mariadb-csi-policy-update/mysql.185aa714df2004dd - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa714da604581 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa715007519d4 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa71501bc30e9 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa7163bbcf503 - 
todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa71640087fea - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa716407e0f5a - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa71a8d54176a - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa71ada9e5a9c - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa71adf1bef98 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa71adf912b6d - todolist-mariadb-csi-policy-update/todolist-6d856b79d.185aa714d9c7fec5 - todolist-mariadb-csi-policy-update/todolist.185aa714d91e9981 v1/Namespace: - todolist-mariadb-csi-policy-update v1/PersistentVolume: - pvc-c311d6db-8cfb-469b-994c-decca59936f8 v1/PersistentVolumeClaim: - todolist-mariadb-csi-policy-update/mysql v1/Pod: - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd v1/Secret: - todolist-mariadb-csi-policy-update/builder-dockercfg-dxrqv - todolist-mariadb-csi-policy-update/default-dockercfg-hrwlb - todolist-mariadb-csi-policy-update/deployer-dockercfg-j7m74 - todolist-mariadb-csi-policy-update/todolist-mariadb-csi-policy-update-sa-dockercfg-gssgx v1/Service: - todolist-mariadb-csi-policy-update/mysql - todolist-mariadb-csi-policy-update/todolist v1/ServiceAccount: - todolist-mariadb-csi-policy-update/builder - todolist-mariadb-csi-policy-update/default - todolist-mariadb-csi-policy-update/deployer - todolist-mariadb-csi-policy-update/todolist-mariadb-csi-policy-update-sa Backup Volumes: Velero-Native Snapshots: CSI Snapshots: todolist-mariadb-csi-policy-update/mysql: Snapshot: Operation ID: todolist-mariadb-csi-policy-update/velero-mysql-hj7gl/2025-08-11T07:53:04Z Snapshot Content Name: snapcontent-1b861719-fb56-431a-9f39-6613aef2f265 Storage Snapshot ID: 0001-0011-openshift-storage-0000000000000003-285dbff9-a643-449a-afb6-ac300e6bc2dd Snapshot Size (bytes): 1073741824 CSI Driver: openshift-storage.rbd.csi.ceph.com Result: succeeded Pod Volume Backups: HooksAttempted: 0 HooksFailed: 0 STEP: Verify backup todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f has completed successfully @ 08/11/25 07:53:18.721 2025/08/11 07:53:18 Backup for case todolist-backup succeeded STEP: Scale application @ 08/11/25 07:53:18.768 2025/08/11 07:53:18 Scaling deployment 'todolist' to 2 replicas 2025/08/11 07:53:18 Deployment updated successfully 2025/08/11 07:53:18 number of running pods: 1 2025/08/11 07:53:23 number of running pods: 1 2025/08/11 07:53:28 Application reached target number of replicas: 2 STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 07:53:28.829 Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false 2025/08/11 07:53:28 The 'openshift-storage' namespace exists 2025/08/11 07:53:28 Checking default storage class count 2025/08/11 07:53:28 Using the CSI driver: openshift-storage.rbd.csi.ceph.com 2025/08/11 07:53:28 Snapclass 'example-snapclass' already exists, skip creating 2025/08/11 07:53:29 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/08/11 07:53:29 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/08/11 07:53:29 {{ } { } [{{ } {mysql todolist-mariadb-csi-policy-update c311d6db-8cfb-469b-994c-decca59936f8 83137 0 2025-08-11 07:51:52 +0000 UTC map[app:mysql] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes 
reclaimspace.csiaddons.openshift.io/cronjob:mysql-1754898713 reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-08-11 07:51:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2025-08-11 07:51:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-08-11 07:51:52 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status} {csi-addons-manager Update v1 2025-08-11 07:51:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} }]} {[ReadWriteOnce] nil {map[] map[storage:{{1073741824 0} {} 1Gi BinarySI}]} pvc-c311d6db-8cfb-469b-994c-decca59936f8 0xc000dcdc60 0xc000dcdc70 nil nil } {Bound [ReadWriteOnce] map[storage:{{1073741824 0} {} 1Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f @ 08/11/25 07:53:29.148 2025/08/11 07:53:29 Wait until backup todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f is completed backup phase: Completed 2025/08/11 07:53:49 Verify the Backup has CSIVolumeSnapshotsAttempted and CSIVolumeSnapshotsCompleted field on status 2025/08/11 07:53:49 Run velero describe on the backup 2025/08/11 07:53:49 [./velero describe backup todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f -n openshift-adp --details --insecure-skip-tls-verify] 2025/08/11 07:53:49 Exec stderr: "" 2025/08/11 07:53:49 Name: todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f Namespace: openshift-adp Labels: velero.io/storage-location=ts-dpa-1 Annotations: velero.io/resource-timeout=10m0s velero.io/source-cluster-k8s-gitversion=v1.33.2 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=33 Phase: Completed Namespaces: Included: todolist-mariadb-csi-policy-update Excluded: Resources: Included: * Excluded: Cluster-scoped: auto Label selector: Or label selector: Storage Location: ts-dpa-1 Velero-Native Snapshot PVs: auto Snapshot Move Data: false Data Mover: velero TTL: 720h0m0s CSISnapshotTimeout: 10m0s ItemOperationTimeout: 4h0m0s Hooks: Backup Format Version: 1.1.0 Started: 2025-08-11 07:53:29 +0000 UTC Completed: 2025-08-11 07:53:36 +0000 UTC Expiration: 2025-09-10 07:53:29 +0000 UTC Total items to be backed up: 82 Items backed up: 82 Backup Item Operations: Operation for volumesnapshots.snapshot.storage.k8s.io todolist-mariadb-csi-policy-update/velero-mysql-pbpcp: Backup Item Action Plugin: velero.io/csi-volumesnapshot-backupper Operation ID: todolist-mariadb-csi-policy-update/velero-mysql-pbpcp/2025-08-11T07:53:35Z Items to Update: volumesnapshots.snapshot.storage.k8s.io todolist-mariadb-csi-policy-update/velero-mysql-pbpcp volumesnapshotcontents.snapshot.storage.k8s.io /snapcontent-1ba4193d-c20e-4a14-b045-962b3a0b640f Phase: Completed Created: 2025-08-11 07:53:35 +0000 UTC Started: 2025-08-11 07:53:35 +0000 UTC 
Updated: 2025-08-11 07:53:36 +0000 UTC Resource List: apiextensions.k8s.io/v1/CustomResourceDefinition: - reclaimspacecronjobs.csiaddons.openshift.io - securitycontextconstraints.security.openshift.io apps/v1/Deployment: - todolist-mariadb-csi-policy-update/mysql - todolist-mariadb-csi-policy-update/todolist apps/v1/ReplicaSet: - todolist-mariadb-csi-policy-update/mysql-86bc866cfb - todolist-mariadb-csi-policy-update/todolist-6d856b79d authorization.openshift.io/v1/RoleBinding: - todolist-mariadb-csi-policy-update/admin - todolist-mariadb-csi-policy-update/system:deployers - todolist-mariadb-csi-policy-update/system:image-builders - todolist-mariadb-csi-policy-update/system:image-pullers csiaddons.openshift.io/v1alpha1/ReclaimSpaceCronJob: - todolist-mariadb-csi-policy-update/mysql-1754898712 discovery.k8s.io/v1/EndpointSlice: - todolist-mariadb-csi-policy-update/mysql-5rl9z - todolist-mariadb-csi-policy-update/todolist-s2n94 rbac.authorization.k8s.io/v1/RoleBinding: - todolist-mariadb-csi-policy-update/admin - todolist-mariadb-csi-policy-update/system:deployers - todolist-mariadb-csi-policy-update/system:image-builders - todolist-mariadb-csi-policy-update/system:image-pullers route.openshift.io/v1/Route: - todolist-mariadb-csi-policy-update/todolist-route security.openshift.io/v1/SecurityContextConstraints: - todolist-mariadb-csi-policy-update-scc snapshot.storage.k8s.io/v1/VolumeSnapshot: - todolist-mariadb-csi-policy-update/velero-mysql-pbpcp snapshot.storage.k8s.io/v1/VolumeSnapshotClass: - example-snapclass snapshot.storage.k8s.io/v1/VolumeSnapshotContent: - snapcontent-1ba4193d-c20e-4a14-b045-962b3a0b640f v1/ConfigMap: - todolist-mariadb-csi-policy-update/kube-root-ca.crt - todolist-mariadb-csi-policy-update/openshift-service-ca.crt v1/Endpoints: - todolist-mariadb-csi-policy-update/mysql - todolist-mariadb-csi-policy-update/todolist v1/Event: - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa714d6409605 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa714df1d2e29 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa714e0cb2694 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa715002a2232 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa71730f7a06e - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa71732760a14 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa71736572443 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz.185aa71736c56488 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb.185aa714d62cf646 - todolist-mariadb-csi-policy-update/mysql.185aa714d22def75 - todolist-mariadb-csi-policy-update/mysql.185aa714d22fe8b1 - todolist-mariadb-csi-policy-update/mysql.185aa714d547ad92 - todolist-mariadb-csi-policy-update/mysql.185aa714df2004dd - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa714da604581 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa715007519d4 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa71501bc30e9 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa7163bbcf503 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa71640087fea - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa716407e0f5a - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa71a8d54176a - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa71ada9e5a9c - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa71adf1bef98 - 
todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd.185aa71adf912b6d - todolist-mariadb-csi-policy-update/todolist-6d856b79d-vkc78.185aa728e0b13eff - todolist-mariadb-csi-policy-update/todolist-6d856b79d-vkc78.185aa729072334a0 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-vkc78.185aa72908626035 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-vkc78.185aa72a571ba8e9 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-vkc78.185aa72a5bfb2ef6 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-vkc78.185aa72a5c72d8ee - todolist-mariadb-csi-policy-update/todolist-6d856b79d-vkc78.185aa72a91312ebc - todolist-mariadb-csi-policy-update/todolist-6d856b79d-vkc78.185aa72ad7781871 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-vkc78.185aa72adc4ffd34 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-vkc78.185aa72adcc0ad36 - todolist-mariadb-csi-policy-update/todolist-6d856b79d.185aa714d9c7fec5 - todolist-mariadb-csi-policy-update/todolist-6d856b79d.185aa728dfeece85 - todolist-mariadb-csi-policy-update/todolist.185aa714d91e9981 - todolist-mariadb-csi-policy-update/todolist.185aa728de9b00c6 - todolist-mariadb-csi-policy-update/velero-mysql-hj7gl.185aa7247ab82692 - todolist-mariadb-csi-policy-update/velero-mysql-hj7gl.185aa724e16b730f - todolist-mariadb-csi-policy-update/velero-mysql-hj7gl.185aa724e16be173 v1/Namespace: - todolist-mariadb-csi-policy-update v1/PersistentVolume: - pvc-c311d6db-8cfb-469b-994c-decca59936f8 v1/PersistentVolumeClaim: - todolist-mariadb-csi-policy-update/mysql v1/Pod: - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-rmfvz - todolist-mariadb-csi-policy-update/todolist-6d856b79d-lxdmd - todolist-mariadb-csi-policy-update/todolist-6d856b79d-vkc78 v1/Secret: - todolist-mariadb-csi-policy-update/builder-dockercfg-dxrqv - todolist-mariadb-csi-policy-update/default-dockercfg-hrwlb - todolist-mariadb-csi-policy-update/deployer-dockercfg-j7m74 - todolist-mariadb-csi-policy-update/todolist-mariadb-csi-policy-update-sa-dockercfg-gssgx v1/Service: - todolist-mariadb-csi-policy-update/mysql - todolist-mariadb-csi-policy-update/todolist v1/ServiceAccount: - todolist-mariadb-csi-policy-update/builder - todolist-mariadb-csi-policy-update/default - todolist-mariadb-csi-policy-update/deployer - todolist-mariadb-csi-policy-update/todolist-mariadb-csi-policy-update-sa Backup Volumes: Velero-Native Snapshots: CSI Snapshots: todolist-mariadb-csi-policy-update/mysql: Snapshot: Operation ID: todolist-mariadb-csi-policy-update/velero-mysql-pbpcp/2025-08-11T07:53:35Z Snapshot Content Name: snapcontent-1ba4193d-c20e-4a14-b045-962b3a0b640f Storage Snapshot ID: 0001-0011-openshift-storage-0000000000000003-3986dbcb-50a7-42b0-b211-e9f66084442e Snapshot Size (bytes): 1073741824 CSI Driver: openshift-storage.rbd.csi.ceph.com Result: succeeded Pod Volume Backups: HooksAttempted: 0 HooksFailed: 0 STEP: Verify backup todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f has completed successfully @ 08/11/25 07:53:49.66 2025/08/11 07:53:49 Backup for case todolist-backup succeeded STEP: Cleanup application and restore 1st backup @ 08/11/25 07:53:49.718 STEP: Delete the application resources todolist-backup @ 08/11/25 07:53:49.718 STEP: Cleanup Application for case todolist-backup @ 08/11/25 07:53:49.718 2025/08/11 07:53:49 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
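The two tasks that follow tear the application down before the first restore: the role deletes the application namespace and the SecurityContextConstraints it created. A rough shell equivalent, using the resource names recorded in this log, is:

  NS=todolist-mariadb-csi-policy-update
  # Delete the app namespace and block until it is gone, then drop the matching SCC.
  oc delete namespace "$NS" --ignore-not-found --wait=true
  oc delete scc "${NS}-scc" --ignore-not-found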
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove namespace todolist-mariadb-csi-policy-update] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove todolist-mariadb-csi-policy-update SCC] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=6  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025/08/11 07:54:19 2025-08-11 07:53:51,169 p=27122 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:53:51,169 p=27122 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:53:51,407 p=27122 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:53:51,408 p=27122 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:53:51,648 p=27122 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:53:51,648 p=27122 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:53:51,902 p=27122 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:53:51,902 p=27122 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:53:51,916 p=27122 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:53:51,916 p=27122 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:53:51,933 p=27122 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:53:51,933 p=27122 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:53:51,944 p=27122 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:53:51,945 p=27122 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:53:52,242 p=27122 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:53:52,242 p=27122 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:53:52,269 p=27122 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:53:52,269 p=27122 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:53:52,285 p=27122 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:53:52,286 p=27122 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:53:52,287 p=27122 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:53:52,829 p=27122 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:53:52,829 p=27122 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:54:18,617 p=27122 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove namespace todolist-mariadb-csi-policy-update] *** 2025-08-11 07:54:18,617 p=27122 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 07:54:18,617 p=27122 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:54:19,496 p=27122 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove todolist-mariadb-csi-policy-update SCC] *** 2025-08-11 07:54:19,496 p=27122 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:54:19,665 p=27122 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:54:19,665 p=27122 u=1002120000 n=ansible INFO| localhost : ok=17 changed=6 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025/08/11 07:54:19 Creating restore todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f for case todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f STEP: Create restore todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f from backup todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f @ 08/11/25 07:54:19.709 2025/08/11 07:54:19 Wait until restore todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f is complete restore phase: Finalizing restore phase: Finalizing restore phase: Completed STEP: Verify restore todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f has completed successfully @ 08/11/25 07:54:49.753 STEP: Verify Application restore @ 08/11/25 07:54:49.757 STEP: Verify Application deployment for case todolist-backup @ 08/11/25 07:54:49.757 2025/08/11 07:54:49 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts]
********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Validating todolist] *** included: /alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb/tasks/validation_task.yml for localhost [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod is running] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until mysql service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod is running] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until todolist API server starts] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Obtain todolist route] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find 1st database item] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find the string in incomplete items] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=23  changed=6  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025/08/11 07:54:57 2025-08-11 07:54:51,293 p=27356 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:54:51,293 p=27356 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:54:51,561 p=27356 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:54:51,561 p=27356 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:54:51,805 p=27356 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:54:51,805 p=27356 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:54:52,050 p=27356 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:54:52,050 p=27356 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:54:52,065 p=27356 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:54:52,065 p=27356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:54:52,082 p=27356 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:54:52,082 p=27356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:54:52,094 p=27356 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:54:52,094 p=27356 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:54:52,439 p=27356 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:54:52,440 p=27356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:54:52,469 p=27356 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] 
************************************* 2025-08-11 07:54:52,469 p=27356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:54:52,489 p=27356 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:54:52,489 p=27356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:54:52,491 p=27356 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:54:53,051 p=27356 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:54:53,051 p=27356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:54:53,258 p=27356 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Validating todolist] *** 2025-08-11 07:54:53,266 p=27356 u=1002120000 n=ansible INFO| included: /alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb/tasks/validation_task.yml for localhost 2025-08-11 07:54:54,073 p=27356 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod is running] *** 2025-08-11 07:54:54,073 p=27356 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 07:54:54,073 p=27356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:54:54,372 p=27356 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until mysql service ready for connections] *** 2025-08-11 07:54:54,372 p=27356 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:54:55,098 p=27356 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod is running] *** 2025-08-11 07:54:55,098 p=27356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:54:55,401 p=27356 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until todolist API server starts] *** 2025-08-11 07:54:55,401 p=27356 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:54:56,248 p=27356 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Obtain todolist route] *** 2025-08-11 07:54:56,248 p=27356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:54:56,651 p=27356 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find 1st database item] *** 2025-08-11 07:54:56,651 p=27356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:54:56,966 p=27356 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find the string in incomplete items] *** 2025-08-11 07:54:56,966 p=27356 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:54:56,972 p=27356 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:54:56,972 p=27356 u=1002120000 n=ansible INFO| localhost : ok=23 changed=6 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025/08/11 07:54:57 Application reached target number of replicas: 1 STEP: Restore 2nd backup with existingResourcePolicy: update @ 08/11/25 07:54:57.027 2025/08/11 07:54:57 Creating restore
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f for case todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f STEP: Create restore todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f from backup todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f @ 08/11/25 07:54:57.027 2025/08/11 07:54:57 Wait until restore todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f is complete restore phase: Completed STEP: Verify restore todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f has completed successfully @ 08/11/25 07:55:07.074 STEP: Verify Application restore @ 08/11/25 07:55:07.078 STEP: Verify Application deployment for case todolist-backup @ 08/11/25 07:55:07.078 2025/08/11 07:55:07 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Validating todolist] *** included: /alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb/tasks/validation_task.yml for localhost [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
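This second restore exercises Velero's existingResourcePolicy: update, which patches resources that already exist in the cluster instead of skipping them; that is how the todolist Deployment can return to the scaled-up replica count captured in the second backup. A minimal sketch of such a Restore, assuming only the names used in this run, looks like:

  cat <<'EOF' | oc apply -f -
  apiVersion: velero.io/v1
  kind: Restore
  metadata:
    name: todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f
    namespace: openshift-adp
  spec:
    backupName: todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f
    # 'update' patches existing resources to match the backup; by default they are skipped.
    existingResourcePolicy: update
  EOF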
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod is running] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until mysql service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod is running] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until todolist API server starts] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Obtain todolist route] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find 1st database item] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find the string in incomplete items] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=23  changed=6  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025/08/11 07:55:14 2025-08-11 07:55:08,506 p=27700 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:55:08,506 p=27700 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:55:08,762 p=27700 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:55:08,762 p=27700 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:55:09,007 p=27700 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:55:09,007 p=27700 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:55:09,270 p=27700 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:55:09,270 p=27700 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:55:09,285 p=27700 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:55:09,285 p=27700 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:55:09,302 p=27700 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 07:55:09,302 p=27700 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:55:09,313 p=27700 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:55:09,314 p=27700 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:55:09,614 p=27700 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:55:09,614 p=27700 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:55:09,642 p=27700 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:55:09,642 p=27700 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:55:09,659 p=27700 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:55:09,659 p=27700 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:55:09,660 p=27700 u=1002120000 n=ansible INFO| PLAY [Execute Task] 
************************************************************ 2025-08-11 07:55:10,206 p=27700 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:55:10,206 p=27700 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:55:10,431 p=27700 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Validating todolist] *** 2025-08-11 07:55:10,440 p=27700 u=1002120000 n=ansible INFO| included: /alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb/tasks/validation_task.yml for localhost 2025-08-11 07:55:11,248 p=27700 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod is running] *** 2025-08-11 07:55:11,248 p=27700 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 07:55:11,248 p=27700 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:55:11,560 p=27700 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until mysql service ready for connections] *** 2025-08-11 07:55:11,560 p=27700 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:55:12,251 p=27700 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod is running] *** 2025-08-11 07:55:12,252 p=27700 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:55:12,563 p=27700 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until todolist API server starts] *** 2025-08-11 07:55:12,563 p=27700 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:55:13,433 p=27700 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Obtain todolist route] *** 2025-08-11 07:55:13,434 p=27700 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:55:13,844 p=27700 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find 1st database item] *** 2025-08-11 07:55:13,845 p=27700 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:55:14,127 p=27700 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find the string in incomplete items] *** 2025-08-11 07:55:14,127 p=27700 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:55:14,132 p=27700 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 07:55:14,132 p=27700 u=1002120000 n=ansible INFO| localhost : ok=23 changed=6 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025/08/11 07:55:14 Application reached target number of replicas: 2 < Exit [It] [tc-id:OADP-165][interop] Todolist app with CSI - policy: update @ 08/11/25 07:55:14.189 (3m32.192s) > Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 07:55:14.189 2025/08/11 07:55:14 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 07:55:14.189 (0s) > Enter [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:14.189 < Exit [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:14.193 (4ms) > Enter [DeferCleanup (Each)] Incremental 
> Enter [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:14.193
< Exit [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:14.196 (4ms)
> Enter [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:14.197
< Exit [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:14.197 (0s)
> Enter [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:14.197
2025/08/11 07:55:14 Cleaning setup resources for the backup
2025/08/11 07:55:14 Setting new default StorageClass 'odf-operator-ceph-rbd'
2025/08/11 07:55:14 Checking default storage class count
Skipping creation of StorageClass
The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd
2025/08/11 07:55:14 Deleting VolumeSnapshotClass 'example-snapclass'
< Exit [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:14.215 (18ms)
> Enter [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:14.215
< Exit [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:14.215 (0s)
> Enter [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:14.215
2025/08/11 07:55:14 Cleaning setup resources for the backup
2025/08/11 07:55:14 Setting new default StorageClass 'odf-operator-ceph-rbd'
2025/08/11 07:55:14 Checking default storage class count
Skipping creation of StorageClass
The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd
< Exit [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:14.308 (93ms)
> Enter [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:14.308
2025/08/11 07:55:14 Cleaning app
2025/08/11 07:55:14 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
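The "kubernetes<24.2.0 is not supported or tested" warning that recurs throughout this run comes from the kubernetes.core Ansible modules checking the Python kubernetes client installed in the suite's virtualenv. If silencing it were desired, a hedged remediation sketch (the version floor simply mirrors the warning text):

---
- name: Upgrade the Python kubernetes client used by kubernetes.core modules
  ansible.builtin.pip:
    name: kubernetes>=24.2.0
    state: present
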
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove namespace todolist-mariadb-csi-policy-update] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove todolist-mariadb-csi-policy-update SCC] ***
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=17  changed=6  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
< Exit [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:44.43 (30.123s)
> Enter [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:44.431
< Exit [DeferCleanup (Each)] Incremental restore pod count @ 08/11/25 07:55:44.438 (7ms)
• [242.447 seconds]
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[datamover] DataMover: Backup/Restore stateful application with CSI  [tc-id:OADP-439][interop] MySQL application
/alabama/cspi/e2e/app_backup/backup_restore_datamover.go:114
> Enter [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 08/11/25 07:55:44.438
< Exit [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 08/11/25 07:55:44.446 (8ms)
> Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 07:55:44.446
< Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 07:55:44.446 (0s)
> Enter [It] [tc-id:OADP-439][interop] MySQL application @ 08/11/25 07:55:44.446
2025/08/11 07:55:44 Delete all downloadrequest todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-2742e0b9-6c45-49d1-b797-e7c27dfe6cea todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-699fef7d-2936-45d5-8839-9507e4cad65d todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-8eed795a-0eaa-4a97-91b0-a7662963220c todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-103af7b5-de64-4a2f-a924-c21a8a0ab528 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-609b8d89-fb23-4984-9988-0cba60fa336e todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-921f3496-9b73-40f7-b302-520c1f2d8861
STEP: Create DPA CR @ 08/11/25 07:55:44.533
2025/08/11 07:55:44 native-datamover
2025/08/11 07:55:44 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "66e2c792-00cb-4e8f-98fb-954d54fc98d7", "resourceVersion": "86067", "generation": 1, "creationTimestamp": "2025-08-11T07:55:44Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T07:55:44Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} }
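For readability, the DPA manifest dumped above corresponds to the following YAML (metadata and managedFields trimmed; every field value is taken directly from the logged JSON):

---
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: ts-dpa
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        config:
          region: us-east-1
        credential:
          name: cloud-credentials
          key: cloud
        objectStorage:
          bucket: ci-op-6fip6j15-interopoadp
          prefix: velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f
  snapshotLocations: []
  configuration:
    velero:
      defaultPlugins: [openshift, aws, kubevirt, csi]
      disableFsBackup: false
    nodeAgent:
      enable: true
      uploaderType: kopia
  logFormat: text
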
true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 08/11/25 07:55:44.559 2025/08/11 07:55:44 Waiting for velero pod to be running 2025/08/11 07:55:44 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' 2025/08/11 07:55:44 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "66e2c792-00cb-4e8f-98fb-954d54fc98d7", "resourceVersion": "86067", "generation": 1, "creationTimestamp": "2025-08-11T07:55:44Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T07:55:44Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 07:55:49.578 Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false 2025/08/11 07:55:49 The 'openshift-storage' namespace exists 2025/08/11 07:55:49 Checking default storage class count 2025/08/11 07:55:49 Using the CSI driver: openshift-storage.rbd.csi.ceph.com 2025/08/11 07:55:49 Snapclass 'example-snapclass' doesn't exist, creating 2025/08/11 07:55:49 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/08/11 07:55:49 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/08/11 07:55:49 Checking for correct number of running NodeAgent pods... STEP: Installing application for case mysql @ 08/11/25 07:55:49.811 2025/08/11 07:55:49 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
STEP: Installing application for case mysql @ 08/11/25 07:55:49.811
2025/08/11 07:55:49 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace test-oadp-439] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] ***
changed: [localhost]
FAILED - RETRYING: [localhost]: Check pod status (30 retries left).
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] ***
changed: [localhost]
FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left).
FAILED - RETRYING: [localhost]: Wait until service ready for connections (29 retries left).
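The FAILED - RETRYING lines above are not errors: they are Ansible's until/retries loop polling until MySQL accepts connections, and the task resolves successfully in the next entry. A hypothetical sketch of the pattern — the module, command, and the app_namespace variable are illustrative, not the role's actual task:

---
- name: Wait until service ready for connections
  ansible.builtin.shell: >-
    oc exec -n {{ app_namespace }} deployment/mysql --
    mysqladmin ping -h 127.0.0.1
  register: mysql_ping
  until: mysql_ping.rc == 0
  retries: 30   # matches the "(30 retries left)" countdown in the log
  delay: 5
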
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Add dummy data into mysql-data1 pvc] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create md5 hashes for the files] ***
changed: [localhost]
Pausing for 30 seconds
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Pause After Create md5 hashes for the files] ***
ok: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
STEP: Verify Application deployment @ 08/11/25 07:56:45.795
2025/08/11 07:56:45 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] ***
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
STEP: Creating backup mysql-9582604b-7688-11f0-aa2b-0a580a83369f @ 08/11/25 07:56:51.604
2025/08/11 07:56:51 Wait until backup mysql-9582604b-7688-11f0-aa2b-0a580a83369f is completed
backup phase: WaitingForPluginOperations
DataUpload mysql-9582604b-7688-11f0-aa2b-0a580a83369f-x2c9g phase: InProgress
DataUpload Name: mysql-9582604b-7688-11f0-aa2b-0a580a83369f-x2c9g and status: InProgress
2025/08/11 07:57:11 { "kind": "DataUpload", "apiVersion":
"velero.io/v2alpha1", "metadata": { "name": "mysql-9582604b-7688-11f0-aa2b-0a580a83369f-x2c9g", "generateName": "mysql-9582604b-7688-11f0-aa2b-0a580a83369f-", "namespace": "openshift-adp", "uid": "c9c3f7b1-c722-4984-8f81-1119b3eff66c", "resourceVersion": "87682", "generation": 4, "creationTimestamp": "2025-08-11T07:56:58Z", "labels": { "velero.io/async-operation-id": "du-ea7f862c-7f55-480a-bb66-b6aa20cee472.1bb8d124-4478-4f6e0919a", "velero.io/backup-name": "mysql-9582604b-7688-11f0-aa2b-0a580a83369f", "velero.io/backup-uid": "ea7f862c-7f55-480a-bb66-b6aa20cee472", "velero.io/pvc-uid": "1bb8d124-4478-4f6c-9cd8-515e72fbcfa1" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Backup", "name": "mysql-9582604b-7688-11f0-aa2b-0a580a83369f", "uid": "ea7f862c-7f55-480a-bb66-b6aa20cee472", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:56:58Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/backup-name": {}, "f:velero.io/backup-uid": {}, "f:velero.io/pvc-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"ea7f862c-7f55-480a-bb66-b6aa20cee472\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:csiSnapshot": { ".": {}, "f:snapshotClass": {}, "f:storageClass": {}, "f:volumeSnapshot": {} }, "f:operationTimeout": {}, "f:snapshotType": {}, "f:sourceNamespace": {}, "f:sourcePVC": {} }, "f:status": { ".": {}, "f:progress": {} } } }, { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:57:09Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:node": {}, "f:nodeOS": {}, "f:phase": {}, "f:startTimestamp": {} } } } ] }, "spec": { "snapshotType": "CSI", "csiSnapshot": { "volumeSnapshot": "velero-mysql-data-nx82j", "storageClass": "odf-operator-ceph-rbd", "snapshotClass": "example-snapclass" }, "sourcePVC": "mysql-data", "backupStorageLocation": "ts-dpa-1", "sourceNamespace": "test-oadp-439", "operationTimeout": "10m0s" }, "status": { "phase": "InProgress", "startTimestamp": "2025-08-11T07:57:09Z", "progress": {}, "node": "ip-10-0-114-0.ec2.internal", "nodeOS": "linux", "acceptedByNode": "ip-10-0-4-228.ec2.internal", "acceptedTimestamp": "2025-08-11T07:56:58Z" } } DataUpload mysql-9582604b-7688-11f0-aa2b-0a580a83369f-568xn phase: Accepted DataUpload Name: mysql-9582604b-7688-11f0-aa2b-0a580a83369f-568xn and status: Accepted 2025/08/11 07:57:11 { "kind": "DataUpload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "mysql-9582604b-7688-11f0-aa2b-0a580a83369f-568xn", "generateName": "mysql-9582604b-7688-11f0-aa2b-0a580a83369f-", "namespace": "openshift-adp", "uid": "27db65ee-ab04-4670-81f9-6d08366c0d6a", "resourceVersion": "87514", "generation": 2, "creationTimestamp": "2025-08-11T07:57:03Z", "labels": { "velero.io/async-operation-id": "du-ea7f862c-7f55-480a-bb66-b6aa20cee472.aeb4fc48-825e-429cda5bc", "velero.io/backup-name": "mysql-9582604b-7688-11f0-aa2b-0a580a83369f", "velero.io/backup-uid": "ea7f862c-7f55-480a-bb66-b6aa20cee472", "velero.io/pvc-uid": "aeb4fc48-825e-4290-9251-462c68cbe72e" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Backup", "name": 
"mysql-9582604b-7688-11f0-aa2b-0a580a83369f", "uid": "ea7f862c-7f55-480a-bb66-b6aa20cee472", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:57:03Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:phase": {} } } }, { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-08-11T07:57:03Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/backup-name": {}, "f:velero.io/backup-uid": {}, "f:velero.io/pvc-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"ea7f862c-7f55-480a-bb66-b6aa20cee472\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:csiSnapshot": { ".": {}, "f:snapshotClass": {}, "f:storageClass": {}, "f:volumeSnapshot": {} }, "f:operationTimeout": {}, "f:snapshotType": {}, "f:sourceNamespace": {}, "f:sourcePVC": {} }, "f:status": { ".": {}, "f:progress": {} } } } ] }, "spec": { "snapshotType": "CSI", "csiSnapshot": { "volumeSnapshot": "velero-mysql-data1-p8rd7", "storageClass": "odf-operator-ceph-rbd", "snapshotClass": "example-snapclass" }, "sourcePVC": "mysql-data1", "backupStorageLocation": "ts-dpa-1", "sourceNamespace": "test-oadp-439", "operationTimeout": "10m0s" }, "status": { "phase": "Accepted", "progress": {}, "acceptedByNode": "ip-10-0-4-228.ec2.internal", "acceptedTimestamp": "2025-08-11T07:57:03Z" } } backup phase: Completed STEP: Verify backup mysql-9582604b-7688-11f0-aa2b-0a580a83369f has completed successfully @ 08/11/25 07:57:31.677 2025/08/11 07:57:31 Backup for case mysql-9582604b-7688-11f0-aa2b-0a580a83369f succeeded STEP: Delete the appplication resources mysql-9582604b-7688-11f0-aa2b-0a580a83369f @ 08/11/25 07:57:31.682 STEP: Cleanup Application for case mysql @ 08/11/25 07:57:31.682 2025/08/11 07:57:31 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
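A note on the DataUpload objects dumped earlier: this case exercises the native data mover ("native-datamover" in the Create DPA CR step), where a CSI snapshot is taken and its data is then moved to the object store by the node agent using kopia. In Velero terms that behavior is driven by snapshotMoveData on the Backup. A hedged sketch — the suite builds its Backup programmatically, so everything beyond the names visible in the log is an assumption:

---
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: mysql-9582604b-7688-11f0-aa2b-0a580a83369f
  namespace: openshift-adp
spec:
  includedNamespaces:
    - test-oadp-439
  snapshotMoveData: true     # produces the DataUpload CRs whose phases are logged
  storageLocation: ts-dpa-1  # BSL name seen in the DataUpload spec above
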
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-439] ***
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
STEP: Create restore mysql-9582604b-7688-11f0-aa2b-0a580a83369f from backup mysql-9582604b-7688-11f0-aa2b-0a580a83369f @ 08/11/25 07:58:01.562
2025/08/11 07:58:01 Wait until restore mysql-9582604b-7688-11f0-aa2b-0a580a83369f completes
restore phase: WaitingForPluginOperations
DataDownload mysql-9582604b-7688-11f0-aa2b-0a580a83369f-85wsk phase: Completed
DataDownload mysql-9582604b-7688-11f0-aa2b-0a580a83369f-52xf9 phase: Completed
restore phase: Completed
STEP: Validate the application after restore @ 08/11/25 07:58:41.608
STEP: Verify Application deployment for case mysql @ 08/11/25 07:58:41.608
2025/08/11 07:58:41 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] ***
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
< Exit [It] [tc-id:OADP-439][interop] MySQL application @ 08/11/25 07:58:47.467 (3m3.021s)
> Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 07:58:47.467
2025/08/11 07:58:47 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
< Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 07:58:47.467 (0s)
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:58:47.467
2025/08/11 07:58:47 Cleaning app
2025/08/11 07:58:47 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
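The restore step logged above ("Create restore ... from backup ...") maps to a Velero Restore whose DataDownload objects mirror the backup's DataUploads. A minimal sketch, under the same caveat that the suite creates this CR through its own helpers rather than from a static manifest:

---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: mysql-9582604b-7688-11f0-aa2b-0a580a83369f
  namespace: openshift-adp
spec:
  backupName: mysql-9582604b-7688-11f0-aa2b-0a580a83369f
  restorePVs: true  # drives the DataDownload objects whose phases are logged above
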
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-439] ***
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
< Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:59:16.997 (29.53s)
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:59:16.997
2025/08/11 07:59:16 Cleaning setup resources for the backup
2025/08/11 07:59:16 Setting new default StorageClass 'odf-operator-ceph-rbd'
2025/08/11 07:59:16 Checking default storage class count
Skipping creation of StorageClass
The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd
2025/08/11 07:59:17 Deleting VolumeSnapshotClass 'example-snapclass'
< Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:59:17.038 (41ms)
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:59:17.038
< Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 07:59:17.048 (10ms)
• [212.610 seconds]
------------------------------
[datamover] DataMover: Backup/Restore stateful application with CSI  [tc-id:OADP-440][interop] Cassandra application
/alabama/cspi/e2e/app_backup/backup_restore_datamover.go:130
> Enter [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 08/11/25 07:59:17.048
< Exit [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 08/11/25 07:59:17.066 (18ms)
> Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 07:59:17.066
< Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 07:59:17.066 (0s)
> Enter [It] [tc-id:OADP-440][interop] Cassandra application @ 08/11/25 07:59:17.066
2025/08/11 07:59:17 Delete all downloadrequest
No download requests are found
STEP: Create DPA CR @ 08/11/25 07:59:17.077
2025/08/11 07:59:17 native-datamover
2025/08/11 07:59:17 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "b75dd6b5-578b-448f-b47b-449ecd330143", "resourceVersion": "89958", "generation": 1, "creationTimestamp": "2025-08-11T07:59:17Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T07:59:17Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} }
Delete all the backups that remained in the phase InProgress
Deleting backup CRs in progress
Deletion of backup CRs in progress completed
Delete all the restores that remained in the phase InProgress
Deleting restore CRs in progress
Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 08/11/25 07:59:17.265 2025/08/11 07:59:17 Waiting for velero pod to be running 2025/08/11 07:59:17 pod: velero-d48b7f4b-rwwbz is not yet running with status: {Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2025-08-11 07:59:17 +0000 UTC }] [] [] [] [] Burstable [] []} 2025/08/11 07:59:22 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 07:59:22.31 Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false 2025/08/11 07:59:22 The 'openshift-storage' namespace exists 2025/08/11 07:59:22 Checking default storage class count 2025/08/11 07:59:22 Using the CSI driver: openshift-storage.rbd.csi.ceph.com 2025/08/11 07:59:22 Snapclass 'example-snapclass' doesn't exist, creating 2025/08/11 07:59:22 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/08/11 07:59:22 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/08/11 07:59:22 Checking for correct number of running NodeAgent pods... STEP: Installing application for case cassandra-e2e @ 08/11/25 07:59:22.581 2025/08/11 07:59:22 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]:
kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] *** changed: [localhost] [WARNING]: unknown field "spec.volumeClaimTemplates[0].labels" TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check pods status (30 retries left). FAILED - RETRYING: [localhost]: Check pods status (29 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] *** ok: [localhost] FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left). 
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] *** fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-6fip6j15-6e951.cspilp.interop.ccitredhat.com:6443 --token sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o -n test-oadp-440 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.161261", "end": "2025-08-11 08:02:32.398386", "msg": "non-zero return code", "rc": 1, "start": "2025-08-11 08:02:32.237125", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []} PLAY RECAP ********************************************************************* localhost : ok=21  changed=8  unreachable=0 failed=1  skipped=2  rescued=0 ignored=0 2025/08/11 08:02:32 2025-08-11 07:59:24,452 p=29931 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 07:59:24,452 p=29931 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:59:24,750 p=29931 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 07:59:24,750 p=29931 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:59:25,105 p=29931 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 07:59:25,105 p=29931 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:59:25,396 p=29931 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 07:59:25,396 p=29931 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:59:25,413 p=29931 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 07:59:25,413 p=29931 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:59:25,434 p=29931 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] 
************************************ 2025-08-11 07:59:25,435 p=29931 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:59:25,449 p=29931 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 07:59:25,450 p=29931 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 07:59:25,823 p=29931 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 07:59:25,823 p=29931 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:59:25,852 p=29931 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 07:59:25,853 p=29931 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:59:25,875 p=29931 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 07:59:25,875 p=29931 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:59:25,877 p=29931 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 07:59:26,512 p=29931 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 07:59:26,512 p=29931 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:59:27,415 p=29931 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] *** 2025-08-11 07:59:27,415 p=29931 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 07:59:27,415 p=29931 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:59:27,865 p=29931 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] *** 2025-08-11 07:59:27,865 p=29931 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:59:28,188 p=29931 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] *** 2025-08-11 07:59:28,188 p=29931 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:59:29,023 p=29931 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] *** 2025-08-11 07:59:29,023 p=29931 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:59:29,720 p=29931 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] *** 2025-08-11 07:59:29,721 p=29931 u=1002120000 n=ansible WARNING| [WARNING]: unknown field "spec.volumeClaimTemplates[0].labels" 2025-08-11 07:59:29,721 p=29931 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 07:59:30,484 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pods status (30 retries left). 2025-08-11 07:59:36,137 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pods status (29 retries left). 
2025-08-11 07:59:41,840 p=29931 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] *** 2025-08-11 07:59:41,841 p=29931 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 07:59:42,347 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left). 2025-08-11 07:59:49,244 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left). 2025-08-11 07:59:54,594 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left). 2025-08-11 07:59:59,909 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left). 2025-08-11 08:00:09,044 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left). 2025-08-11 08:00:14,384 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left). 2025-08-11 08:00:19,701 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left). 2025-08-11 08:00:25,039 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left). 2025-08-11 08:00:30,385 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left). 2025-08-11 08:00:35,701 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left). 2025-08-11 08:00:43,851 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left). 2025-08-11 08:00:49,202 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left). 2025-08-11 08:00:54,528 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left). 2025-08-11 08:00:59,854 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left). 2025-08-11 08:01:05,196 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left). 2025-08-11 08:01:10,523 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left). 2025-08-11 08:01:15,844 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left). 2025-08-11 08:01:21,168 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left). 
2025-08-11 08:01:26,530 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left). 2025-08-11 08:01:33,549 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left). 2025-08-11 08:01:38,866 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left). 2025-08-11 08:01:44,207 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left). 2025-08-11 08:01:49,540 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left). 2025-08-11 08:01:54,889 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left). 2025-08-11 08:02:00,253 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left). 2025-08-11 08:02:05,628 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left). 2025-08-11 08:02:10,930 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left). 2025-08-11 08:02:16,302 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left). 2025-08-11 08:02:21,690 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left). 2025-08-11 08:02:27,074 p=29931 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left). 2025-08-11 08:02:32,427 p=29931 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] *** 2025-08-11 08:02:32,427 p=29931 u=1002120000 n=ansible INFO| fatal: [localhost]: FAILED! 
=> {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-6fip6j15-6e951.cspilp.interop.ccitredhat.com:6443 --token sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o -n test-oadp-440 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.161261", "end": "2025-08-11 08:02:32.398386", "msg": "non-zero return code", "rc": 1, "start": "2025-08-11 08:02:32.237125", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []} 2025-08-11 08:02:32,428 p=29931 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:02:32,428 p=29931 u=1002120000 n=ansible INFO| localhost : ok=21 changed=8 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0 Run the command: oc get event -n test-oadp-440 2025/08/11 08:02:32 LAST SEEN TYPE REASON OBJECT MESSAGE 3m2s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 3m2s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 3m2s Normal Scheduled pod/cassandra-0 Successfully assigned test-oadp-440/cassandra-0 to ip-10-0-114-0.ec2.internal 3m2s Normal SuccessfulAttachVolume pod/cassandra-0 AttachVolume.Attach succeeded for volume "pvc-5b4ba0f7-71e5-480d-aaeb-cc03f98fb6ba" 3m Normal AddedInterface pod/cassandra-0 Add eth0 [10.131.0.121/23] from ovn-kubernetes 65s Normal Pulling pod/cassandra-0 Pulling image "quay.io/migqe/cassandra:multiarch" 2m57s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 3.466s (3.466s including waiting). Image size: 307783610 bytes. 65s Normal Created pod/cassandra-0 Created container: cassandra 65s Normal Started pod/cassandra-0 Started container cassandra 2m49s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 855ms (855ms including waiting). Image size: 307783610 bytes. 7s Warning BackOff pod/cassandra-0 Back-off restarting failed container cassandra in pod cassandra-0_test-oadp-440(c86148da-7472-4930-a91c-bb08fc6827b7) 2m30s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 680ms (680ms including waiting). Image size: 307783610 bytes. 114s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 892ms (892ms including waiting). Image size: 307783610 bytes. 65s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 537ms (537ms including waiting). Image size: 307783610 bytes. 2m56s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m56s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
2m55s Normal Scheduled pod/cassandra-1 Successfully assigned test-oadp-440/cassandra-1 to ip-10-0-60-252.ec2.internal 2m55s Normal SuccessfulAttachVolume pod/cassandra-1 AttachVolume.Attach succeeded for volume "pvc-b94f0fd8-77ba-4f3d-af09-e069b1e7ac19" 2m51s Normal AddedInterface pod/cassandra-1 Add eth0 [10.128.2.74/23] from ovn-kubernetes 52s Normal Pulling pod/cassandra-1 Pulling image "quay.io/migqe/cassandra:multiarch" 2m47s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 3.301s (3.301s including waiting). Image size: 307783610 bytes. 51s Normal Created pod/cassandra-1 Created container: cassandra 51s Normal Started pod/cassandra-1 Started container cassandra 2m39s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 603ms (603ms including waiting). Image size: 307783610 bytes. 6s Warning BackOff pod/cassandra-1 Back-off restarting failed container cassandra in pod cassandra-1_test-oadp-440(eedc1585-300b-42ec-97a9-c920de3d3e2b) 2m19s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 829ms (829ms including waiting). Image size: 307783610 bytes. 108s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 822ms (822ms including waiting). Image size: 307783610 bytes. 51s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 999ms (999ms including waiting). Image size: 307783610 bytes. 2m46s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m46s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m46s Normal Scheduled pod/cassandra-2 Successfully assigned test-oadp-440/cassandra-2 to ip-10-0-4-228.ec2.internal 2m46s Normal SuccessfulAttachVolume pod/cassandra-2 AttachVolume.Attach succeeded for volume "pvc-c53478e6-1661-4cf4-b5f5-8c515acc291a" 2m36s Normal AddedInterface pod/cassandra-2 Add eth0 [10.129.2.52/23] from ovn-kubernetes 32s Normal Pulling pod/cassandra-2 Pulling image "quay.io/migqe/cassandra:multiarch" 2m33s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 3.16s (3.16s including waiting). Image size: 307783610 bytes. 31s Normal Created pod/cassandra-2 Created container: cassandra 31s Normal Started pod/cassandra-2 Started container cassandra 2m24s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 681ms (681ms including waiting). Image size: 307783610 bytes. 1s Warning BackOff pod/cassandra-2 Back-off restarting failed container cassandra in pod cassandra-2_test-oadp-440(bc25ad36-28d4-4846-aef6-1033377ed478) 2m4s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 857ms (857ms including waiting). Image size: 307783610 bytes. 87s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 1.43s (1.43s including waiting). Image size: 307783610 bytes. 31s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 879ms (879ms including waiting). Image size: 307783610 bytes. 
3m3s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-0 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-0" 3m3s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-0 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. 3m3s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-0 Successfully provisioned volume pvc-5b4ba0f7-71e5-480d-aaeb-cc03f98fb6ba 2m56s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-1 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-1" 2m56s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-1 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. 2m56s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-1 Successfully provisioned volume pvc-b94f0fd8-77ba-4f3d-af09-e069b1e7ac19 2m47s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-2 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. 2m47s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-2 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-2" 2m47s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-2 Successfully provisioned volume pvc-c53478e6-1661-4cf4-b5f5-8c515acc291a 3m3s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success 3m3s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-0 in StatefulSet cassandra successful 2m56s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-1 Pod cassandra-1 in StatefulSet cassandra success 2m56s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-1 in StatefulSet cassandra successful 2m47s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-2 Pod cassandra-2 in StatefulSet cassandra success 2m47s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-2 in StatefulSet cassandra successful [FAILED] in [It] - /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 08/11/25 08:02:32.625 < Exit [It] [tc-id:OADP-440][interop] Cassandra application @ 08/11/25 08:02:32.625 (3m15.559s) > Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 08:02:32.625 2025/08/11 08:02:32 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 STEP: Get the failed spec name @ 08/11/25 08:02:32.625 2025/08/11 08:02:32 The failed spec name is: [datamover] DataMover: Backup/Restore stateful application with CSI [tc-id:OADP-440][interop] Cassandra application STEP: Create a folder for all must-gather files if it doesn't exist already @ 08/11/25 08:02:32.625 2025/08/11 08:02:32 The folder logs does not exist, creating new folder with the name: logs STEP: Create a folder for the failed spec if it
doesn't exist already @ 08/11/25 08:02:32.625 2025/08/11 08:02:32 The folder logs/It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application does not exist, creating new folder with the name: logs/It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application STEP: Run must-gather because the spec failed @ 08/11/25 08:02:32.625 2025/08/11 08:02:32 Log the present working directory path: /alabama/cspi/e2e 2025/08/11 08:02:32 [adm must-gather --dest-dir /alabama/cspi/e2e/logs/It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application --image registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0] 2025/08/11 08:03:31 Log all the files present in /alabama/cspi/e2e/logs directory 2025/08/11 08:03:31 It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application STEP: Find must-gather folder and rename it to a shorter, more readable name @ 08/11/25 08:03:31.886 < Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 08:03:31.886 (59.261s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:03:31.886 2025/08/11 08:03:31 Cleaning app 2025/08/11 08:03:31 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-440] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025/08/11 08:04:01 2025-08-11 08:03:33,626 p=31332 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:03:33,627 p=31332 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:03:33,931 p=31332 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:03:33,932 p=31332 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:03:34,237 p=31332 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:03:34,238 p=31332 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:03:34,551 p=31332 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:03:34,551 p=31332 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:03:34,565 p=31332 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:03:34,565 p=31332 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:03:34,586 p=31332 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:03:34,586 p=31332 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:03:34,599 p=31332 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:03:34,599 p=31332 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:03:34,929 p=31332 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:03:34,929 p=31332 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:03:34,958 p=31332 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:03:34,958 p=31332 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:03:34,978 p=31332 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:03:34,978 p=31332 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:03:34,980 p=31332 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:03:35,620 p=31332 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:03:35,620 p=31332 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:04:01,520 p=31332 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-440] *** 2025-08-11 08:04:01,520 p=31332 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 08:04:01,520 p=31332 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:04:01,899 p=31332 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:04:01,900 p=31332 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:04:01.946 (30.06s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:04:01.946 2025/08/11 08:04:01 Cleaning setup resources for the backup 2025/08/11 08:04:01 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/08/11 08:04:01 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/08/11 08:04:01 Deleting VolumeSnapshotClass 'example-snapclass' < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:04:01.976 (30ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:04:01.976 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:04:01.982 (7ms) Attempt #1 Failed. Retrying ↺ @ 08/11/25 08:04:01.982 > Enter [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 08/11/25 08:04:01.982 < Exit [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 08/11/25 08:04:01.99 (7ms) > Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:04:01.99 < Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:04:01.99 (0s) > Enter [It] [tc-id:OADP-440][interop] Cassandra application @ 08/11/25 08:04:01.99 2025/08/11 08:04:01 Delete all downloadrequest mysql-9582604b-7688-11f0-aa2b-0a580a83369f-5a2704f4-caf1-4fbe-af82-63dc59232d8f mysql-9582604b-7688-11f0-aa2b-0a580a83369f-62078929-b50f-450f-8bb2-e3e59e4f049a mysql-9582604b-7688-11f0-aa2b-0a580a83369f-8186800c-b3ea-401b-84b1-c4ec94462f50 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-929df634-5917-4f7e-b1c6-d39558768bdc mysql-9582604b-7688-11f0-aa2b-0a580a83369f-c6bb6bd8-aab4-482c-beaf-8231a5ef0877 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-e5e50785-0072-49fc-896a-6d06f0abc73c mysql-9582604b-7688-11f0-aa2b-0a580a83369f-e927c237-b400-4e33-b4a5-848b2dd895f1 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-e9784ae1-4d8a-4a96-bd25-6c258eb9dfb2 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-ffbe7e81-e321-4d85-973e-9205ccab09c8 ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f-111a5c85-b3d2-434c-b692-b6601025b51c ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f-17f1e537-aac7-439c-9c9b-40a68a062324 ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f-2bd5185f-b4dc-4e6d-8e4c-30ce3a1f9076 ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f-7d794f0a-6a6a-4192-a086-cee01bc85826 ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f-4e19d212-3acd-422f-afed-5e3186af77b0 ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f-690db584-ce1b-4f1e-a593-d4691f92296a ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-3853ea42-1e13-400b-acc1-bbae91fb887a ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-9721daf7-6087-47a0-9343-19185e47516d todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-0152bbae-e40d-4217-9c49-2a1f19ece535 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-0c960402-3cf0-40f7-acee-f860c6dab576 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-403faf01-befa-4a12-8ca9-cc3cfe627952 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-47697448-f5d9-4ff3-81e9-0848f534faa4 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-a5d4ec5e-f1dc-4da8-914a-55f1b4acfcfa 
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-a7a85080-7bb5-4c9c-8660-c2604c84143c todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-da7a30e9-0589-4b00-9440-46800cf9e2d3 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-ecd13161-7319-427d-942e-78fd1aba41f9 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-0882224b-58fe-4f25-b06e-29293c5317a6 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-26e12271-1e05-4430-a9c6-e224bb20e1c2 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-2ea62c01-2418-4238-8d69-a5a8bd8314ce todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-5d4c4e8e-46cc-4a4f-b6fd-628db60a5fc8 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-5ed2d6fa-3573-4013-ad42-b18e075f0db7 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-91ca8da6-ee5a-41c7-b371-41d5098901cf todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-b461b1c9-b895-47a9-8167-e63af2b07b16 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-c882d571-6d03-41db-91fb-f45001226c1d STEP: Create DPA CR @ 08/11/25 08:04:06.605 2025/08/11 08:04:06 native-datamover 2025/08/11 08:04:06 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "4873b3e8-de5c-4031-93f5-e92a3daa7c98", "resourceVersion": "95169", "generation": 1, "creationTimestamp": "2025-08-11T08:04:06Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T08:04:06Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 08/11/25 08:04:06.63 2025/08/11 08:04:06 Waiting for velero pod to be running 2025/08/11 08:04:11 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 08:04:11.648 2025/08/11 08:04:11 Snapclass 'example-snapclass' doesn't exist, creating 2025/08/11 08:04:11 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/08/11 08:04:11 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/08/11 08:04:11 Checking for correct number of running NodeAgent pods...
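[editor's note] The 'Verify DPA CR setup' and 'Prepare backup resources' steps above can be spot-checked by hand. A minimal sketch using standard oc commands; the resource names (ts-dpa, openshift-adp, example-snapclass) come from this log, while the dpa short name and the name=node-agent pod label are assumptions about the OADP install rather than something the log confirms:

# Velero deployment should become Available in the OADP namespace.
oc -n openshift-adp wait deployment/velero --for=condition=Available --timeout=120s

# DPA should reconcile to reason 'Completed' with message 'Reconcile complete'.
oc -n openshift-adp get dpa ts-dpa -o jsonpath='{.status.conditions[0].reason}{"\t"}{.status.conditions[0].message}{"\n"}'

# With nodeAgent.enable=true, one node-agent pod should be running per node.
oc -n openshift-adp get pods -l name=node-agent -o wide

# The VolumeSnapshotClass the test (re)creates for CSI backups.
oc get volumesnapshotclass example-snapclass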
STEP: Installing application for case cassandra-e2e @ 08/11/25 08:04:11.78 2025/08/11 08:04:11 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] *** changed: [localhost] [WARNING]: unknown field "spec.volumeClaimTemplates[0].labels" TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check pods status (30 retries left). FAILED - RETRYING: [localhost]: Check pods status (29 retries left). FAILED - RETRYING: [localhost]: Check pods status (28 retries left). 
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] *** ok: [localhost] FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left). 
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] *** fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-6fip6j15-6e951.cspilp.interop.ccitredhat.com:6443 --token sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o -n test-oadp-440 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.141231", "end": "2025-08-11 08:07:26.306267", "msg": "non-zero return code", "rc": 1, "start": "2025-08-11 08:07:26.165036", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []} PLAY RECAP ********************************************************************* localhost : ok=21  changed=8  unreachable=0 failed=1  skipped=2  rescued=0 ignored=0 2025/08/11 08:07:26 2025-08-11 08:04:14,340 p=31560 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:04:14,341 p=31560 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:04:14,603 p=31560 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:04:14,603 p=31560 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:04:14,873 p=31560 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:04:14,873 p=31560 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:04:15,153 p=31560 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:04:15,154 p=31560 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:04:15,168 p=31560 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:04:15,169 p=31560 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:04:15,188 p=31560 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:04:15,188 p=31560 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:04:15,200 p=31560 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:04:15,200 p=31560 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:04:15,531 p=31560 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:04:15,531 p=31560 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:04:15,560 p=31560 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:04:15,560 p=31560 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:04:15,579 p=31560 u=1002120000 n=ansible INFO| TASK [set_fact] 
Run the command: oc get event -n test-oadp-440
2025/08/11 08:07:26
LAST SEEN   TYPE      REASON    OBJECT    MESSAGE
3m7s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m7s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m7s Normal Scheduled pod/cassandra-0 Successfully assigned test-oadp-440/cassandra-0 to ip-10-0-114-0.ec2.internal
3m7s Normal SuccessfulAttachVolume pod/cassandra-0 AttachVolume.Attach succeeded for volume "pvc-76d322d9-34ad-47bf-9b97-021f9f6343c2"
2m56s Normal AddedInterface pod/cassandra-0 Add eth0 [10.131.0.126/23] from ovn-kubernetes
60s Normal Pulling pod/cassandra-0 Pulling image "quay.io/migqe/cassandra:multiarch"
2m56s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 697ms (697ms including waiting). Image size: 307783610 bytes.
59s Normal Created pod/cassandra-0 Created container: cassandra
59s Normal Started pod/cassandra-0 Started container cassandra
2m48s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 764ms (764ms including waiting). Image size: 307783610 bytes.
0s Warning BackOff pod/cassandra-0 Back-off restarting failed container cassandra in pod cassandra-0_test-oadp-440(9d4c43f9-65af-433a-8786-5c4615ddd9b7)
2m27s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 598ms (598ms including waiting). Image size: 307783610 bytes.
115s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 610ms (610ms including waiting). Image size: 307783610 bytes.
59s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 685ms (685ms including waiting). Image size: 307783610 bytes.
2m55s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m55s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m55s Normal Scheduled pod/cassandra-1 Successfully assigned test-oadp-440/cassandra-1 to ip-10-0-60-252.ec2.internal
2m55s Normal SuccessfulAttachVolume pod/cassandra-1 AttachVolume.Attach succeeded for volume "pvc-889ee5c0-e4e7-418a-ab9f-b227e11d0580"
2m49s Normal AddedInterface pod/cassandra-1 Add eth0 [10.128.2.80/23] from ovn-kubernetes
63s Normal Pulling pod/cassandra-1 Pulling image "quay.io/migqe/cassandra:multiarch"
2m48s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 706ms (706ms including waiting). Image size: 307783610 bytes.
63s Normal Created pod/cassandra-1 Created container: cassandra
63s Normal Started pod/cassandra-1 Started container cassandra
2m39s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 884ms (884ms including waiting). Image size: 307783610 bytes.
0s Warning BackOff pod/cassandra-1 Back-off restarting failed container cassandra in pod cassandra-1_test-oadp-440(ec34e515-2485-414e-a79c-c16b541880e9)
2m23s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 592ms (592ms including waiting). Image size: 307783610 bytes.
111s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 702ms (702ms including waiting). Image size: 307783610 bytes.
63s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 771ms (771ms including waiting). Image size: 307783610 bytes.
2m47s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m47s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m46s Normal Scheduled pod/cassandra-2 Successfully assigned test-oadp-440/cassandra-2 to ip-10-0-4-228.ec2.internal
2m46s Normal SuccessfulAttachVolume pod/cassandra-2 AttachVolume.Attach succeeded for volume "pvc-97bfe41a-19ce-48a5-9660-dad6b58bd99b"
2m44s Normal AddedInterface pod/cassandra-2 Add eth0 [10.129.2.54/23] from ovn-kubernetes
62s Normal Pulling pod/cassandra-2 Pulling image "quay.io/migqe/cassandra:multiarch"
2m43s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 531ms (531ms including waiting). Image size: 307783610 bytes.
61s Normal Created pod/cassandra-2 Created container: cassandra
61s Normal Started pod/cassandra-2 Started container cassandra
2m38s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 742ms (742ms including waiting). Image size: 307783610 bytes.
5s Warning BackOff pod/cassandra-2 Back-off restarting failed container cassandra in pod cassandra-2_test-oadp-440(70696d17-3889-4cc9-a0f0-1b5290e6565a)
2m20s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 615ms (615ms including waiting). Image size: 307783610 bytes.
110s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 512ms (512ms including waiting). Image size: 307783610 bytes.
61s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 549ms (549ms including waiting). Image size: 307783610 bytes.
3m7s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-0 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
3m7s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-0 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-0"
3m7s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-0 Successfully provisioned volume pvc-76d322d9-34ad-47bf-9b97-021f9f6343c2
2m55s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-1 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m55s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-1 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-1"
2m55s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-1 Successfully provisioned volume pvc-889ee5c0-e4e7-418a-ab9f-b227e11d0580
2m47s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-2 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-2"
2m47s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-2 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m47s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-2 Successfully provisioned volume pvc-97bfe41a-19ce-48a5-9660-dad6b58bd99b
3m7s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success
3m7s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-0 in StatefulSet cassandra successful
2m55s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-1 Pod cassandra-1 in StatefulSet cassandra success
2m55s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-1 in StatefulSet cassandra successful
2m47s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-2 Pod cassandra-2 in StatefulSet cassandra success
2m47s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-2 in StatefulSet cassandra successful
[FAILED] in [It] - /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 08/11/25 08:07:26.488
< Exit [It] [tc-id:OADP-440][interop] Cassandra application @ 08/11/25 08:07:26.489 (3m24.499s)
> Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 08:07:26.489
2025/08/11 08:07:26 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
STEP: Get the failed spec name @ 08/11/25 08:07:26.489
2025/08/11 08:07:26 The failed spec name is: [datamover] DataMover: Backup/Restore stateful application with CSI [tc-id:OADP-440][interop] Cassandra application
STEP: Create a folder for all must-gather files if it doesn't exists already @ 08/11/25 08:07:26.489
STEP: Create a folder for the failed spec if it doesn't exists already @ 08/11/25 08:07:26.489
STEP: Run must-gather because the spec failed @ 08/11/25 08:07:26.489
2025/08/11 08:07:26 Log the present working directory path:- /alabama/cspi/e2e
2025/08/11 08:07:26 [adm must-gather --dest-dir /alabama/cspi/e2e/logs/It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application --image registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0]
2025/08/11 08:08:16 Log all the files present in /alabama/cspi/e2e/logs directory
2025/08/11 08:08:16 It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application
STEP: Find must-gather folder and rename it to a shorter more readable name @ 08/11/25 08:08:16.007
The folder logs/It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application/must-gather already exists, skipping renaming the folder
< Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 08:08:16.007 (49.519s)
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:08:16.007
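The event stream pins down the actual failure mode: the pods schedule once the CSI volumes bind, the image pulls fine, but the cassandra container exits shortly after starting and goes into BackOff, so the nodetool readiness check never passes. The FailedScheduling warnings about unbound immediate PersistentVolumeClaims are transient and clear as soon as provisioning succeeds. A few oc commands that would narrow down why the container keeps dying (a hypothetical triage session, not part of this run):

    # Last state and exit code of the crashing container
    oc -n test-oadp-440 describe pod cassandra-0
    # Logs of the previous (crashed) container instance - usually the real answer
    oc -n test-oadp-440 logs cassandra-0 -c cassandra --previous
    # Confirm the PVCs did bind once the provisioner caught up
    oc -n test-oadp-440 get pvc
    # Restart counters across the StatefulSet
    oc -n test-oadp-440 get pods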
2025/08/11 08:08:16 Cleaning app
2025/08/11 08:08:16 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [include_vars] ************************************************************
ok: [localhost]

TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Remove all the contents from the file] ***********************************
changed: [localhost]

TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]

TASK [Get admin token] *********************************************************
changed: [localhost]

TASK [Get user token] **********************************************************
changed: [localhost]

TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]

TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]

TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}

TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]

TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Execute Task] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-440] ***
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=16   changed=5    unreachable=0    failed=0    skipped=22   rescued=0    ignored=0

2025/08/11 08:08:45 [ansible logger replay (p=32973, 08:08:17 - 08:08:45) elided: it repeats the streamed cleanup output above verbatim, with per-record timestamps]
< Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:08:45.316 (29.309s)
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:08:45.316
2025/08/11 08:08:45 Cleaning setup resources for the backup
2025/08/11 08:08:45 Setting new default StorageClass 'odf-operator-ceph-rbd'
2025/08/11 08:08:45 Checking default storage class count
Skipping creation of StorageClass
The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd
2025/08/11 08:08:45 Deleting VolumeSnapshotClass 'example-snapclass'
< Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:08:45.347 (31ms)
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:08:45.347
< Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:08:45.354 (7ms)
Attempt #2 Failed. Retrying ↺ @ 08/11/25 08:08:45.354
> Enter [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 08/11/25 08:08:45.354
< Exit [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 08/11/25 08:08:45.366 (12ms)
> Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:08:45.366
< Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:08:45.366 (0s)
> Enter [It] [tc-id:OADP-440][interop] Cassandra application @ 08/11/25 08:08:45.366
2025/08/11 08:08:45 Delete all downloadrequest
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-06d3fcca-ad79-469e-8877-eb05a2b44fd6
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-340d7573-c1a3-4728-bc67-80fa2ad961f6
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-3d0cab8e-ec52-475d-ab55-8cb2b86f598d
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-485f956d-5879-4e3f-8d38-b134a0cc4d02
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-644cd91b-04a8-4eab-ad15-b1306550fae8
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-6f08cf33-a40b-4a60-b581-3bebe46adfce
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-c2538cd2-8e18-419a-bd06-4c81652a9c75
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-cc3d877a-7ac3-4cee-a1a8-f9e2abd1db2d
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-e700ab60-2238-4206-a195-78370499bdb7
ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f-301cbaef-8eac-44af-8fb9-34cbed602cde
ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f-6bc93dee-77c5-429b-8b5c-bffc23fc8510
ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f-8fc0348f-b2a1-49c6-a187-20b52c9c7c1e
ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f-bc58bbc2-858f-4ad3-8d08-8c3b16e0a8c0
ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f-0e4bc74a-614e-4fe3-8286-16061920d3ef
ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f-f0d6f093-07f8-47dd-8d2e-7bace4dee5cc
ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-826aa1e9-4f83-41ff-8283-e5d77494fb6e
ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-f764bd3f-4b3b-485a-bbd2-1771d8006a3f
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-2cd88f49-276b-4fc1-8747-7ed68530f827
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-32a2b9c7-ab28-4b6d-b410-6a3078ea4085
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-5619ccac-e42f-4994-af2b-90fafe2960ea
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-5636da0f-e644-4ae0-a8d4-a2e8b6c843e6
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-a51466b9-080f-4869-9529-2157ecb59f4e
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-cf4a4a6e-bca4-4430-8479-e0640131b845
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-d4fb7285-34ec-4dd8-bad8-f6afbe625a5a
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-ed70a106-65fb-41f0-99a0-34232ab52edd
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-0bb4b932-d8ea-45b4-b5b3-4b2ffe6e32de
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-196b1198-5e36-4a9b-8f5a-7a1611246aeb
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-6d01951d-9336-4a35-85f5-27d0dd1f134a
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-779b4f4e-39cb-47ea-a702-6b9d908fd68b
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-77f15b21-989f-453a-be40-ae51f286fa27
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-8df0daa6-bbf4-4c74-b4fe-2d6e08e42c6c
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-c2cc1d8e-004f-4774-a92c-2b7402db49ca
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-ed2b4b38-d08c-429d-ace6-c27eada3f3e8
STEP: Create DPA CR @ 08/11/25 08:08:49.984
2025/08/11 08:08:49 native-datamover
2025/08/11 08:08:49 {
  "metadata": {
    "name": "ts-dpa",
    "namespace": "openshift-adp",
    "uid": "b015692f-8e03-4f06-bbd6-5f93f85ded03",
    "resourceVersion": "100322",
    "generation": 1,
    "creationTimestamp": "2025-08-11T08:08:49Z",
    "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T08:08:49Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ]
  },
  "spec": {
    "backupLocations": [
      {
        "velero": {
          "provider": "aws",
          "config": { "region": "us-east-1" },
          "credential": { "name": "cloud-credentials", "key": "cloud" },
          "objectStorage": {
            "bucket": "ci-op-6fip6j15-interopoadp",
            "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f"
          },
          "default": true
        }
      }
    ],
    "snapshotLocations": [],
    "podDnsConfig": {},
    "configuration": {
      "velero": {
        "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ],
        "disableFsBackup": false
      },
      "nodeAgent": {
        "enable": true,
        "podConfig": { "resourceAllocations": {} },
        "uploaderType": "kopia"
      }
    },
    "features": null,
    "logFormat": "text"
  },
  "status": {}
}
Delete all the backups that remained in the phase InProgress
Deleting backup CRs in progress
Deletion of backup CRs in progress completed
Delete all the restores that remained in the phase InProgress
Deleting restore CRs in progress
Deletion of restore CRs in progress completed
STEP: Verify DPA CR setup @ 08/11/25 08:08:50.006
2025/08/11 08:08:50 Waiting for velero pod to be running
2025/08/11 08:08:55 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete'
STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 08:08:55.023
2025/08/11 08:08:55 Snapclass 'example-snapclass' doesn't exist, creating
2025/08/11 08:08:55 Setting new default StorageClass 'odf-operator-ceph-rbd'
2025/08/11 08:08:55 Checking default storage class count
Skipping creation of StorageClass
The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd
2025/08/11 08:08:55 Checking for correct number of running NodeAgent pods...
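For readability, the DPA custom resource dumped above as JSON corresponds to roughly the following YAML manifest (a transcription of the logged spec, with the kind filled in from the oadp.openshift.io/v1alpha1 API; not separately verified against the cluster):

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: ts-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
        - velero:
            provider: aws
            default: true
            config:
              region: us-east-1
            credential:
              name: cloud-credentials
              key: cloud
            objectStorage:
              bucket: ci-op-6fip6j15-interopoadp
              prefix: velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f
      snapshotLocations: []
      logFormat: text
      configuration:
        velero:
          defaultPlugins: [openshift, aws, kubevirt, csi]
          disableFsBackup: false
        nodeAgent:
          enable: true
          uploaderType: kopia
          podConfig:
            resourceAllocations: {}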
STEP: Installing application for case cassandra-e2e @ 08/11/25 08:08:55.153
2025/08/11 08:08:55 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [include_vars] ************************************************************
ok: [localhost]

TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Remove all the contents from the file] ***********************************
changed: [localhost]

TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]

TASK [Get admin token] *********************************************************
changed: [localhost]

TASK [Get user token] **********************************************************
changed: [localhost]

TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]

TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]

TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}

TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]

TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Execute Task] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] ***
ok: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] ***
changed: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] ***
changed: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] ***
changed: [localhost]
[WARNING]: unknown field "spec.volumeClaimTemplates[0].labels"

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] ***
changed: [localhost]

FAILED - RETRYING: [localhost]: Check pods status (30 retries left).
FAILED - RETRYING: [localhost]: Check pods status (29 retries left).

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] ***
ok: [localhost]

FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left).
[... 27 identical retry messages omitted, counting down from 28 to 2 retries left ...]
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left).
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] ***
fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-6fip6j15-6e951.cspilp.interop.ccitredhat.com:6443 --token sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o -n test-oadp-440 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.167332", "end": "2025-08-11 08:12:06.011830", "msg": "non-zero return code", "rc": 1, "start": "2025-08-11 08:12:05.844498", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []}

PLAY RECAP *********************************************************************
localhost                  : ok=21   changed=8    unreachable=0    failed=1    skipped=2    rescued=0    ignored=0

2025/08/11 08:12:06 [ansible logger replay (p=33203, 08:08:56 - 08:12:06) elided: it repeats the streamed play output above verbatim, with per-record timestamps, ending in the identical fatal nodetool error and PLAY RECAP]
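Both deploy attempts also emit WARNING: unknown field "spec.volumeClaimTemplates[0].labels". The API server silently drops the unknown field, so this is unrelated to the crash loop, but the fix is mechanical: in a StatefulSet, labels on a volume claim template belong under the template's metadata. A sketch of the corrected fragment (the label key/value and the storage request are placeholders; the role's actual values are not shown in the log):

    volumeClaimTemplates:
      - metadata:
          name: cassandra-data        # matches the PVC names cassandra-data-cassandra-N above
          labels:
            app: cassandra            # placeholder label
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi            # placeholder size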
Run the command: oc get event -n test-oadp-440
2025/08/11 08:12:06
LAST SEEN   TYPE      REASON    OBJECT    MESSAGE
12m Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m5s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m4s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m4s Normal Scheduled pod/cassandra-0 Successfully assigned test-oadp-440/cassandra-0 to ip-10-0-114-0.ec2.internal
3m4s Normal SuccessfulAttachVolume pod/cassandra-0 AttachVolume.Attach succeeded for volume "pvc-5e70b72d-515f-4216-943d-e74b732c9274"
3m Normal AddedInterface pod/cassandra-0 Add eth0 [10.131.0.129/23] from ovn-kubernetes
66s Normal Pulling pod/cassandra-0 Pulling image "quay.io/migqe/cassandra:multiarch"
2m59s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 655ms (655ms including waiting). Image size: 307783610 bytes.
65s Normal Created pod/cassandra-0 Created container: cassandra
65s Normal Started pod/cassandra-0 Started container cassandra
2m52s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 650ms (650ms including waiting). Image size: 307783610 bytes.
10s Warning BackOff pod/cassandra-0 Back-off restarting failed container cassandra in pod cassandra-0_test-oadp-440(a14b2eab-6b47-4986-9fad-121c8cbb7428)
2m31s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 642ms (642ms including waiting). Image size: 307783610 bytes.
2m2s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 647ms (647ms including waiting). Image size: 307783610 bytes.
66s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 608ms (608ms including waiting). Image size: 307783610 bytes.
12m Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m58s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m58s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m58s Normal Scheduled pod/cassandra-1 Successfully assigned test-oadp-440/cassandra-1 to ip-10-0-60-252.ec2.internal
2m58s Normal SuccessfulAttachVolume pod/cassandra-1 AttachVolume.Attach succeeded for volume "pvc-54f74bb2-c754-4ec1-a4c2-807cc8aa0b50"
2m53s Normal AddedInterface pod/cassandra-1 Add eth0 [10.128.2.85/23] from ovn-kubernetes
52s Normal Pulling pod/cassandra-1 Pulling image "quay.io/migqe/cassandra:multiarch"
2m52s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 756ms (756ms including waiting). Image size: 307783610 bytes.
51s Normal Created pod/cassandra-1 Created container: cassandra
51s Normal Started pod/cassandra-1 Started container cassandra
2m43s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 611ms (611ms including waiting). Image size: 307783610 bytes.
5s Warning BackOff pod/cassandra-1 Back-off restarting failed container cassandra in pod cassandra-1_test-oadp-440(532b8499-0646-4919-ba98-3a11d56aac92)
2m23s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 725ms (725ms including waiting). Image size: 307783610 bytes.
110s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 846ms (846ms including waiting). Image size: 307783610 bytes.
52s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 951ms (951ms including waiting). Image size: 307783610 bytes.
12m Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m50s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m50s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m50s Normal Scheduled pod/cassandra-2 Successfully assigned test-oadp-440/cassandra-2 to ip-10-0-4-228.ec2.internal
2m50s Normal SuccessfulAttachVolume pod/cassandra-2 AttachVolume.Attach succeeded for volume "pvc-52226b96-a1ca-4f77-b86d-f5fdeb4dcdc4"
2m48s Normal AddedInterface pod/cassandra-2 Add eth0 [10.129.2.56/23] from ovn-kubernetes
66s Normal Pulling pod/cassandra-2 Pulling image "quay.io/migqe/cassandra:multiarch"
2m47s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 572ms (572ms including waiting). Image size: 307783610 bytes.
65s Normal Created pod/cassandra-2 Created container: cassandra
65s Normal Started pod/cassandra-2 Started container cassandra
2m41s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 678ms (678ms including waiting). Image size: 307783610 bytes.
4s Warning BackOff pod/cassandra-2 Back-off restarting failed container cassandra in pod cassandra-2_test-oadp-440(177d34fc-4def-4ecc-b43b-4de74d6a5440)
2m20s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 692ms (692ms including waiting). Image size: 307783610 bytes.
112s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 699ms (699ms including waiting). Image size: 307783610 bytes.
65s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 607ms (607ms including waiting). Image size: 307783610 bytes.
3m5s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-0 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-0"
3m5s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-0 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
3m5s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-0 Successfully provisioned volume pvc-5e70b72d-515f-4216-943d-e74b732c9274
2m59s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-1 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m59s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-1 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-1"
2m58s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-1 Successfully provisioned volume pvc-54f74bb2-c754-4ec1-a4c2-807cc8aa0b50
2m51s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-2 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m51s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-2 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-2"
2m51s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-2 Successfully provisioned volume pvc-52226b96-a1ca-4f77-b86d-f5fdeb4dcdc4
3m5s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success
3m5s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-0 in StatefulSet cassandra successful
2m59s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-1 Pod cassandra-1 in StatefulSet cassandra success
2m59s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-1 in StatefulSet cassandra successful
2m51s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-2 Pod cassandra-2 in StatefulSet cassandra success
2m51s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-2 in StatefulSet cassandra successful
[FAILED] in [It] - /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 08/11/25 08:12:06.184
< Exit [It] [tc-id:OADP-440][interop] Cassandra application @ 08/11/25 08:12:06.184 (3m20.818s)
> Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 08:12:06.184
2025/08/11 08:12:06 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
STEP: Get the failed spec name @ 08/11/25 08:12:06.184
2025/08/11 08:12:06 The failed spec name is: [datamover] DataMover: Backup/Restore stateful application with CSI [tc-id:OADP-440][interop] Cassandra application
STEP: Create a folder for all must-gather files if it doesn't exists already @ 08/11/25 08:12:06.184
STEP: Create a folder for the failed spec if it doesn't exists already @ 08/11/25 08:12:06.184
STEP: Run must-gather because the spec failed @ 08/11/25 08:12:06.184
2025/08/11 08:12:06 Log the present working directory path:- /alabama/cspi/e2e
2025/08/11 08:12:06 [adm must-gather --dest-dir /alabama/cspi/e2e/logs/It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application --image registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0]
2025/08/11 08:12:55 Log all the files present in /alabama/cspi/e2e/logs directory
2025/08/11 08:12:55 It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application
STEP: Find must-gather folder and rename it to a shorter more readable name @ 08/11/25 08:12:55.942
The folder logs/It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application/must-gather already exists, skipping renaming the folder
< Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 08:12:55.942 (49.758s)
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:12:55.942
2025/08/11 08:12:55 Cleaning app
2025/08/11 08:12:55 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available.
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:12:55.942
2025/08/11 08:12:55 Cleaning app
2025/08/11 08:12:55 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-440] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025/08/11 08:13:25 2025-08-11 08:12:57,631 p=34607 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:12:57,631 p=34607 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:12:57,888 p=34607 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:12:57,888 p=34607 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:12:58,165 p=34607 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:12:58,165 p=34607 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:12:58,455 p=34607 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:12:58,456 p=34607 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:12:58,470 p=34607 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:12:58,470 p=34607 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:12:58,488 p=34607 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:12:58,488 p=34607 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:12:58,500 p=34607 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:12:58,500 p=34607 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:12:58,843 p=34607 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:12:58,844 p=34607 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:12:58,876 p=34607 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:12:58,876 p=34607 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:12:58,899 p=34607 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:12:58,899 p=34607 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:12:58,901 p=34607 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:12:59,483 p=34607 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:12:59,484 p=34607 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:13:25,401 p=34607 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-440] *** 2025-08-11 08:13:25,402 p=34607 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 08:13:25,402 p=34607 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:13:25,771 p=34607 u=1002120000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-08-11 08:13:25,771 p=34607 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0
< Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:13:25.831 (29.889s)
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:13:25.831
2025/08/11 08:13:25 Cleaning setup resources for the backup
2025/08/11 08:13:25 Setting new default StorageClass 'odf-operator-ceph-rbd'
2025/08/11 08:13:25 Checking default storage class count
Skipping creation of StorageClass
The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd
2025/08/11 08:13:25 Deleting VolumeSnapshotClass 'example-snapclass'
< Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:13:25.883 (52ms)
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:13:25.883
< Exit [DeferCleanup (Each)] TOP-LEVEL @ 08/11/25 08:13:25.892 (8ms)
• [FAILED] [848.844 seconds]
[datamover] DataMover: Backup/Restore stateful application with CSI
  [It] [tc-id:OADP-440][interop] Cassandra application
  /alabama/cspi/e2e/app_backup/backup_restore_datamover.go:130
[FAILED] Unexpected error:
    <*errors.Error | 0xc000294600>:
    Error during command execution: ansible-playbook error: one or more host failed
    Command executed: /usr/local/bin/ansible-playbook --extra-vars {"admin_kubeconfig":"/home/jenkins/.kube/config","namespace":"test-oadp-440","non_admin_user":false,"use_role":"/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra","user_kubeconfig":"/home/jenkins/.kube/config","with_deploy":true} --connection local /alabama/cspi/sample-applications/ansible/main.yml
    exit status 2
    {
        context: "(DefaultExecute::Execute)",
        message: "Error during command execution: ansible-playbook error: one or more host failed\n\nCommand executed: /usr/local/bin/ansible-playbook --extra-vars {\"admin_kubeconfig\":\"/home/jenkins/.kube/config\",\"namespace\":\"test-oadp-440\",\"non_admin_user\":false,\"use_role\":\"/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra\",\"user_kubeconfig\":\"/home/jenkins/.kube/config\",\"with_deploy\":true} --connection local /alabama/cspi/sample-applications/ansible/main.yml\n\nexit status 2",
        wrappedErrors: nil,
    }
occurred
In [It] at: /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 08/11/25 08:12:06.184
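The failure summary shows exactly how the harness drives application deployment: one ansible-playbook run per action, parameterized entirely through --extra-vars, with the playbook's exit status 2 surfacing as the wrapped Go error (the "(DefaultExecute::Execute)" context points at the go-ansible library). An equivalent invocation through the ansible_runner package installed at the start of this job, offered as a sketch only since the suite itself shells out from Go:

    import ansible_runner

    # Same extra-vars the harness passes on the ansible-playbook command line.
    result = ansible_runner.run(
        private_data_dir="/tmp/runner",  # scratch dir for runner artifacts
        playbook="/alabama/cspi/sample-applications/ansible/main.yml",
        extravars={
            "admin_kubeconfig": "/home/jenkins/.kube/config",
            "user_kubeconfig": "/home/jenkins/.kube/config",
            "namespace": "test-oadp-440",
            "non_admin_user": False,
            "use_role": "/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra",
            "with_deploy": True,
        },
    )
    if result.rc != 0:  # mirrors the "exit status 2" wrapped above
        raise RuntimeError(f"ansible-playbook failed: {result.status}")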
There were additional failures detected. To view them in detail run ginkgo -vv
------------------------------
SSSSSSSSSS
------------------------------
Backup hooks tests Pre exec hook [tc-id:OADP-92][interop][smoke] Cassandra app with Restic
/alabama/cspi/e2e/hooks/backup_hooks.go:113
> Enter [BeforeEach] Backup hooks tests @ 08/11/25 08:13:25.892
< Exit [BeforeEach] Backup hooks tests @ 08/11/25 08:13:25.904 (12ms)
> Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:13:25.904
< Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:13:25.904 (0s)
> Enter [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic @ 08/11/25 08:13:25.904
2025/08/11 08:13:25 Delete all downloadrequest
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-0d062b8f-ea02-44e8-a04c-187acef3b050 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-36b4763b-39d6-4ca4-80aa-afe825e9fdf0 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-4d4b0fa9-1b5e-4b7b-9d87-ff5421c58102 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-5f7c96b8-de34-4f1d-9039-34261e1fcabc mysql-9582604b-7688-11f0-aa2b-0a580a83369f-63bd60c8-e2e9-4bb1-a20d-defcc20aaee8 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-71721b48-3468-4e60-84b8-3224cd91ecf5 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-7c794915-da6e-4015-9774-a56b37ae16f3 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-cf93ea0d-c2d1-42d2-803c-8df77b35eb8d mysql-9582604b-7688-11f0-aa2b-0a580a83369f-e98ab3d5-4a8d-4890-a760-d38eeeccab0f ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f-b00b8c80-0ccb-45e6-a9b4-9dd3f6c47ef8 ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f-fe56b09c-125d-4ca0-a5a2-7744778fa395 ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f-9cd78ba0-7edb-4d88-98ee-b28de1f73ce6 ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f-b6c6f01e-1c0f-42df-ad59-95f786011370 ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f-300927e4-7a12-4fb4-9fc1-383a515fc5cd ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f-f3b6df47-498a-4b56-88fa-8fd034689fec ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-023243b0-a66c-4c87-8ac3-b6c0f3d1227c ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-754cc0a5-8a0c-431d-9815-f20f8dd09148 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-69223fd4-cc16-4a23-989b-d05b9741126d todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-7722ac82-8120-41c6-8dc4-a135c0a23261 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-94351c0c-896f-4cc8-90a6-1cfb0271f9fc todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-95279e00-fb50-498e-9783-bd7ef6371236 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-a6ffbddf-abbe-49fe-93a6-43faec881a42 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-c26ac262-98e4-4b6e-86e5-9a28bf72e693 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-c7b6e39c-5513-4691-b606-6869f5ef2872 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-d5ca3387-5f4e-4577-809c-aa4b1f6ffefd todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-10b013df-da73-4fbd-8783-54b4cea56f78 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-40123e82-974b-48fa-b67e-851bcdaeb477 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-4b36e0ca-14f2-4d08-bfa3-b8ba8b36c48b todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-6bf2ceb3-0902-454a-9be6-a20efc6cac42 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-b61c0631-e85c-42a6-8366-c4bb4f183a90 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-db8c5c9f-a60a-4d9f-8768-8e9cd6f29655 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-dc3d60a6-754c-47ad-a2b3-66c75290fa2d todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-eb7d62ca-7d80-4bf9-9d41-6e5053d3c39c
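The dump above is the pre-test cleanup of Velero DownloadRequest CRs left behind by earlier specs. A sketch of that deletion with the kubernetes Python client, assuming Velero's downloadrequests.velero.io/v1 CRD in the operator namespace (the helper name is illustrative, not the suite's actual implementation):

    from kubernetes import client, config

    def delete_all_downloadrequests(namespace: str = "openshift-adp") -> None:
        # Remove leftover Velero DownloadRequest CRs so stale result URLs
        # from earlier specs cannot leak into this one.
        config.load_kube_config()
        crds = client.CustomObjectsApi()
        items = crds.list_namespaced_custom_object(
            group="velero.io", version="v1",
            namespace=namespace, plural="downloadrequests")["items"]
        for dr in items:
            name = dr["metadata"]["name"]
            crds.delete_namespaced_custom_object(
                group="velero.io", version="v1",
                namespace=namespace, plural="downloadrequests", name=name)
            print("deleted downloadrequest", name)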
STEP: Create DPA CR @ 08/11/25 08:13:30.53
2025/08/11 08:13:30 restic
2025/08/11 08:13:30 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "806114be-9a2d-4b56-9e33-1d896647126d", "resourceVersion": "105233", "generation": 1, "creationTimestamp": "2025-08-11T08:13:30Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T08:13:30Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "restic" } }, "features": null, "logFormat": "text" }, "status": {} }
Delete all the backups that remained in the phase InProgress
Deleting backup CRs in progress
Deletion of backup CRs in progress completed
Delete all the restores that remained in the phase InProgress
Deleting restore CRs in progress
Deletion of restore CRs in progress completed
STEP: Verify DPA CR setup @ 08/11/25 08:13:30.558
2025/08/11 08:13:30 Waiting for velero pod to be running
2025/08/11 08:13:35 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete'
STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 08:13:35.577
2025/08/11 08:13:35 Checking for correct number of running NodeAgent pods...
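The DPA CR dumped above enables the node agent with uploaderType restic and a default AWS backup location; the suite then blocks until the OADP operator reports a completed reconcile. A sketch of that wait, assuming the kubernetes Python client, with the group and version taken from the apiVersion in the dump (the polling helper itself is illustrative):

    import time
    from kubernetes import client, config

    def wait_for_dpa_reconciled(name="ts-dpa", namespace="openshift-adp", timeout=300):
        # Poll the DataProtectionApplication CR until a status condition
        # reports reason 'Completed' (message 'Reconcile complete').
        config.load_kube_config()
        crds = client.CustomObjectsApi()
        deadline = time.time() + timeout
        while time.time() < deadline:
            dpa = crds.get_namespaced_custom_object(
                group="oadp.openshift.io", version="v1alpha1",
                namespace=namespace, plural="dataprotectionapplications", name=name)
            for cond in dpa.get("status", {}).get("conditions", []):
                if cond.get("reason") == "Completed" and cond.get("status") == "True":
                    return dpa
            time.sleep(5)
        raise TimeoutError(f"DPA {name} never reached 'Reconcile complete'")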
STEP: Installing application for case cassandra-hooks-e2e @ 08/11/25 08:13:35.588
2025/08/11 08:13:35 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] ***
changed: [localhost]
[WARNING]: unknown field "spec.volumeClaimTemplates[0].labels"
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] ***
changed: [localhost]
FAILED - RETRYING: [localhost]: Check pods status (30 retries left).
FAILED - RETRYING: [localhost]: Check pods status (29 retries left).
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] ***
ok: [localhost]
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left).
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] ***
fatal: [localhost]: FAILED!
=> {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-6fip6j15-6e951.cspilp.interop.ccitredhat.com:6443 --token sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o -n test-oadp-92 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.188159", "end": "2025-08-11 08:16:52.190404", "msg": "non-zero return code", "rc": 1, "start": "2025-08-11 08:16:52.002245", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []} PLAY RECAP ********************************************************************* localhost : ok=21  changed=8  unreachable=0 failed=1  skipped=2  rescued=0 ignored=0 2025/08/11 08:16:52 2025-08-11 08:13:37,161 p=34835 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:13:37,162 p=34835 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:13:37,482 p=34835 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:13:37,482 p=34835 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:13:37,778 p=34835 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:13:37,778 p=34835 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:13:38,031 p=34835 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:13:38,031 p=34835 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:13:38,047 p=34835 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:13:38,047 p=34835 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:13:38,066 p=34835 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:13:38,066 p=34835 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:13:38,081 p=34835 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:13:38,082 p=34835 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:13:38,436 p=34835 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:13:38,436 p=34835 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:13:38,470 p=34835 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:13:38,471 p=34835 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:13:38,496 p=34835 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:13:38,497 p=34835 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:13:38,499 p=34835 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:13:39,132 p=34835 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:13:39,132 p=34835 u=1002120000 
n=ansible INFO| ok: [localhost] 2025-08-11 08:13:40,295 p=34835 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] *** 2025-08-11 08:13:40,295 p=34835 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 08:13:40,295 p=34835 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:13:40,817 p=34835 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] *** 2025-08-11 08:13:40,817 p=34835 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:13:41,250 p=34835 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] *** 2025-08-11 08:13:41,250 p=34835 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:13:42,372 p=34835 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] *** 2025-08-11 08:13:42,372 p=34835 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:13:43,146 p=34835 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] *** 2025-08-11 08:13:43,146 p=34835 u=1002120000 n=ansible WARNING| [WARNING]: unknown field "spec.volumeClaimTemplates[0].labels" 2025-08-11 08:13:43,146 p=34835 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:13:43,857 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pods status (30 retries left). 2025-08-11 08:13:49,575 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pods status (29 retries left). 2025-08-11 08:13:55,412 p=34835 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] *** 2025-08-11 08:13:55,413 p=34835 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:13:59,452 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left). 2025-08-11 08:14:06,951 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left). 2025-08-11 08:14:12,342 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left). 2025-08-11 08:14:17,787 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left). 2025-08-11 08:14:24,862 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left). 2025-08-11 08:14:30,297 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left). 2025-08-11 08:14:35,804 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left). 
2025-08-11 08:14:41,257 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left). 2025-08-11 08:14:46,703 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left). 2025-08-11 08:14:55,945 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left). 2025-08-11 08:15:01,400 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left). 2025-08-11 08:15:06,927 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left). 2025-08-11 08:15:12,303 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left). 2025-08-11 08:15:17,669 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left). 2025-08-11 08:15:23,171 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left). 2025-08-11 08:15:28,713 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left). 2025-08-11 08:15:34,067 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left). 2025-08-11 08:15:39,406 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left). 2025-08-11 08:15:44,819 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left). 2025-08-11 08:15:52,463 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left). 2025-08-11 08:15:57,923 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left). 2025-08-11 08:16:03,361 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left). 2025-08-11 08:16:08,736 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left). 2025-08-11 08:16:14,129 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left). 2025-08-11 08:16:19,478 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left). 2025-08-11 08:16:24,995 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left). 
2025-08-11 08:16:30,437 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left).
2025-08-11 08:16:35,908 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left).
2025-08-11 08:16:41,373 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left).
2025-08-11 08:16:46,796 p=34835 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left).
2025-08-11 08:16:52,216 p=34835 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] ***
2025-08-11 08:16:52,217 p=34835 u=1002120000 n=ansible INFO| fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-6fip6j15-6e951.cspilp.interop.ccitredhat.com:6443 --token sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o -n test-oadp-92 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.188159", "end": "2025-08-11 08:16:52.190404", "msg": "non-zero return code", "rc": 1, "start": "2025-08-11 08:16:52.002245", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []}
2025-08-11 08:16:52,218 p=34835 u=1002120000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-08-11 08:16:52,218 p=34835 u=1002120000 n=ansible INFO| localhost : ok=21 changed=8 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0
Run the command: oc get event -n test-oadp-92
2025/08/11 08:16:52 LAST SEEN TYPE REASON OBJECT MESSAGE
3m9s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m9s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m9s Normal Scheduled pod/cassandra-0 Successfully assigned test-oadp-92/cassandra-0 to ip-10-0-114-0.ec2.internal
3m9s Normal SuccessfulAttachVolume pod/cassandra-0 AttachVolume.Attach succeeded for volume "pvc-f78ab2ed-6674-4d83-b3e3-4fa09ea836ef"
3m Normal AddedInterface pod/cassandra-0 Add eth0 [10.131.0.134/23] from ovn-kubernetes
67s Normal Pulling pod/cassandra-0 Pulling image "quay.io/migqe/cassandra:multiarch"
2m59s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 866ms (866ms including waiting). Image size: 307783610 bytes.
66s Normal Created pod/cassandra-0 Created container: cassandra
66s Normal Started pod/cassandra-0 Started container cassandra
2m52s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 697ms (697ms including waiting). Image size: 307783610 bytes.
9s Warning BackOff pod/cassandra-0 Back-off restarting failed container cassandra in pod cassandra-0_test-oadp-92(5471b76e-b656-40d9-a58c-16c5d794731d)
2m34s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 652ms (652ms including waiting). Image size: 307783610 bytes.
2m3s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 646ms (646ms including waiting). Image size: 307783610 bytes.
67s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 678ms (678ms including waiting). Image size: 307783610 bytes.
2m58s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m58s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m58s Normal Scheduled pod/cassandra-1 Successfully assigned test-oadp-92/cassandra-1 to ip-10-0-60-252.ec2.internal
2m58s Normal SuccessfulAttachVolume pod/cassandra-1 AttachVolume.Attach succeeded for volume "pvc-d274fe3d-0953-40f8-aa92-5bba32f77e6c"
2m52s Normal AddedInterface pod/cassandra-1 Add eth0 [10.128.2.91/23] from ovn-kubernetes
64s Normal Pulling pod/cassandra-1 Pulling image "quay.io/migqe/cassandra:multiarch"
2m52s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 677ms (677ms including waiting). Image size: 307783610 bytes.
64s Normal Created pod/cassandra-1 Created container: cassandra
64s Normal Started pod/cassandra-1 Started container cassandra
2m43s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 821ms (821ms including waiting). Image size: 307783610 bytes.
7s Warning BackOff pod/cassandra-1 Back-off restarting failed container cassandra in pod cassandra-1_test-oadp-92(43a1b0b5-7bea-4e4a-87b1-c4354cc5f47c)
2m26s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 826ms (826ms including waiting). Image size: 307783610 bytes.
116s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 784ms (784ms including waiting). Image size: 307783610 bytes.
64s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 482ms (482ms including waiting). Image size: 307783610 bytes.
2m51s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m51s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m51s Normal Scheduled pod/cassandra-2 Successfully assigned test-oadp-92/cassandra-2 to ip-10-0-4-228.ec2.internal
2m50s Normal SuccessfulAttachVolume pod/cassandra-2 AttachVolume.Attach succeeded for volume "pvc-80985c38-ceee-4d9e-97aa-d1c3156e4c0a"
2m47s Normal AddedInterface pod/cassandra-2 Add eth0 [10.129.2.58/23] from ovn-kubernetes
54s Normal Pulling pod/cassandra-2 Pulling image "quay.io/migqe/cassandra:multiarch"
2m47s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 656ms (656ms including waiting). Image size: 307783610 bytes.
53s Normal Created pod/cassandra-2 Created container: cassandra
53s Normal Started pod/cassandra-2 Started container cassandra
2m20s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 574ms (574ms including waiting). Image size: 307783610 bytes.
9s Warning BackOff pod/cassandra-2 Back-off restarting failed container cassandra in pod cassandra-2_test-oadp-92(d396f17a-8a6a-4888-bf67-9472924d3788)
110s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 885ms (885ms including waiting). Image size: 307783610 bytes.
53s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 582ms (582ms including waiting). Image size: 307783610 bytes.
3m9s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-0 External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-0"
3m9s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-0 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
3m9s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-0 Successfully provisioned volume pvc-f78ab2ed-6674-4d83-b3e3-4fa09ea836ef
2m59s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-1 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m59s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-1 External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-1"
2m59s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-1 Successfully provisioned volume pvc-d274fe3d-0953-40f8-aa92-5bba32f77e6c
2m51s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-2 External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-2"
2m51s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-2 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m51s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-2 Successfully provisioned volume pvc-80985c38-ceee-4d9e-97aa-d1c3156e4c0a
3m9s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success
3m9s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-0 in StatefulSet cassandra successful
2m59s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-1 Pod cassandra-1 in StatefulSet cassandra success
2m59s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-1 in StatefulSet cassandra successful
2m51s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-2 Pod cassandra-2 in StatefulSet cassandra success
2m51s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-2 in StatefulSet cassandra successful
[FAILED] in [It] - /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 08/11/25 08:16:52.458
< Exit [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic @ 08/11/25 08:16:52.459 (3m26.555s)
> Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 08:16:52.459
2025/08/11 08:16:52 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
STEP: Get the failed spec name @ 08/11/25 08:16:52.459
2025/08/11 08:16:52 The failed spec name is: Backup hooks tests Pre exec hook [tc-id:OADP-92][interop][smoke] Cassandra app with Restic
STEP: Create a folder for all must-gather files if it doesn't exist already @ 08/11/25 08:16:52.459
STEP: Create a folder for the failed spec if it doesn't exist already @ 08/11/25 08:16:52.459
2025/08/11 08:16:52 The folder logs/It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic does not exist, creating new folder with the name: logs/It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic
STEP: Run must-gather because the spec failed @ 08/11/25 08:16:52.459
2025/08/11 08:16:52 Log the present working directory path:- /alabama/cspi/e2e
2025/08/11 08:16:52 [adm must-gather --dest-dir /alabama/cspi/e2e/logs/It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic --image registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0]
2025/08/11 08:17:42 Log all the files present in /alabama/cspi/e2e/logs directory
2025/08/11 08:17:42 It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic
2025/08/11 08:17:42 It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application
STEP: Find must-gather folder and rename it to a shorter more readable name @ 08/11/25 08:17:42.029
< Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 08:17:42.029 (49.571s)
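Both specs fail the same way: the transient FailedScheduling warnings resolve as soon as the PVCs bind, but all three cassandra pods then start and crash-loop (the BackOff events above), so the role's readiness gate, which execs nodetool status in cassandra-0 and retries until every ring member reports UN (status Up, state Normal), keeps hitting 'container not found ("cassandra")' while the container restarts. A sketch of that gate, assuming Python and dropping the -it flags that only produce the TTY warning in CI:

    import subprocess

    def cassandra_ring_ready(namespace: str, expected_nodes: int = 3) -> bool:
        # Exec nodetool in the first replica; ready ring members print
        # lines starting with "UN" (status Up, state Normal).
        proc = subprocess.run(
            ["oc", "-n", namespace, "exec", "cassandra-0", "--",
             "nodetool", "status"],
            capture_output=True, text=True)
        if proc.returncode != 0:
            # e.g. 'container not found ("cassandra")' during CrashLoopBackOff
            return False
        up = [line for line in proc.stdout.splitlines() if line.startswith("UN")]
        return len(up) >= expected_nodes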
> Enter [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:17:42.029
2025/08/11 08:17:42 Cleaning app
2025/08/11 08:17:42 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-92] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025/08/11 08:18:13 2025-08-11 08:17:44,216 p=36128 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:17:44,216 p=36128 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:17:44,563 p=36128 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:17:44,563 p=36128 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:17:44,962 p=36128 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:17:44,963 p=36128 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:17:45,367 p=36128 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:17:45,367 p=36128 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:17:45,388 p=36128 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:17:45,388 p=36128 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:17:45,412 p=36128 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:17:45,413 p=36128 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:17:45,436 p=36128 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:17:45,436 p=36128 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:17:45,857 p=36128 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:17:45,858 p=36128 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:17:45,900 p=36128 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:17:45,901 p=36128 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:17:45,925 p=36128 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:17:45,925 p=36128 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:17:45,928 p=36128 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:17:46,661 p=36128 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:17:46,662 p=36128 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:18:12,920 p=36128 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-92] *** 2025-08-11 08:18:12,920 p=36128 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 08:18:12,920 p=36128 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:18:13,318 p=36128 u=1002120000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-08-11 08:18:13,319 p=36128 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0
< Exit [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:18:13.401 (31.372s)
> Enter [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:18:13.401
2025/08/11 08:18:13 Cleaning setup resources for the backup
< Exit [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:18:13.401 (0s)
> Enter [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:18:13.401
< Exit [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:18:13.418 (17ms)
Attempt #1 Failed. Retrying ↺ @ 08/11/25 08:18:13.418
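Attempt #1 failed, so the runner re-executes the entire spec from scratch: DPA creation, application install, and the readiness gate all run again below. The pattern, as a hedged Python sketch of spec-level flake retry (the suite itself relies on Ginkgo's retry mechanism, not this code):

    def run_with_flake_retry(spec, attempts: int = 2):
        # Re-run a whole spec on failure; a genuine product bug fails every
        # attempt, while an environmental flake usually passes the rerun.
        last_err = None
        for attempt in range(1, attempts + 1):
            try:
                return spec()
            except Exception as err:
                last_err = err
                print(f"Attempt #{attempt} Failed. Retrying")
        raise last_err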
> Enter [BeforeEach] Backup hooks tests @ 08/11/25 08:18:13.418
< Exit [BeforeEach] Backup hooks tests @ 08/11/25 08:18:13.425 (7ms)
> Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:18:13.425
< Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:18:13.425 (0s)
> Enter [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic @ 08/11/25 08:18:13.425
2025/08/11 08:18:13 Delete all downloadrequest
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-07ff5397-4ddb-422a-b566-5115d69cd429 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-3d99d265-638f-42de-a052-eb0b21eeab05 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-444c3a88-c12d-4e4b-8aa2-334f3f5d426f mysql-9582604b-7688-11f0-aa2b-0a580a83369f-66b55e26-a999-4287-ab6f-e8af623d128d mysql-9582604b-7688-11f0-aa2b-0a580a83369f-baad4c21-260a-49e7-bbce-595343aba12f mysql-9582604b-7688-11f0-aa2b-0a580a83369f-c426b5d0-e008-42cf-9778-3831f94857bc mysql-9582604b-7688-11f0-aa2b-0a580a83369f-cec8ae83-f134-4989-9e77-8194e453f69d mysql-9582604b-7688-11f0-aa2b-0a580a83369f-d867554c-03c9-409f-9165-d97f1d51e89b mysql-9582604b-7688-11f0-aa2b-0a580a83369f-ec685534-2cf4-44c1-8280-7ccb5c378554 ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f-01d12d65-d9f0-4cf7-b858-07a95a445293 ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f-a4155b97-ec60-48f7-843f-c2d7c3ce1434 ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f-72c79643-192e-4f81-b482-779a1c8e9ddc ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f-c5d9d9e4-d87a-4635-a642-a01ea955d645 ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f-6c63b981-8c35-422b-9e54-acad19b0f626 ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f-73e4d080-c8d0-4de8-9b9e-b86a5556afcd ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-f28bea8c-8ebf-4854-8d66-f1417427a49e ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-f920d9a9-3a8d-481c-b8dd-aafec3eb3a3a todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-018d19fb-5232-43ec-ba40-49e50c4fe1a5 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-3a58aa80-5646-4d43-9f44-948f98328471 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-3c696609-6a57-42c2-bc6e-a190a7a8f18b todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-9bafc5b6-268d-46e3-844c-61e74fbd0c75 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-9bdc5b7a-2650-4c16-8d57-67ab2d3a7d45 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-b0656d65-dfd8-4aa3-8bcb-8a1f96de0263 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-d0e42799-9075-4511-afb6-b7572451d07c todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-da9d12b7-91f1-4baa-9e3d-e22f2a4ffdf5 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-032c0b8a-19fa-47bb-9fec-a272937a33f1 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-0c89eca2-218e-48ee-bc05-e897f82f8702 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-1b6b7411-f2f0-4082-b569-85397a2dac57 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-8ad68b1b-ec56-4ed3-a834-a879317ed18f todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-8cd1db04-9846-4d69-b13b-4b4d15f40c05 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-9a62e1e1-ea97-4d2a-8147-80a1c9164ee2 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-dbb6525e-050a-41b6-8a05-2495f16fbb70 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-fea9f286-3adc-4d92-8e9d-3cdf09748e5d
STEP: Create DPA CR @ 08/11/25 08:18:18.041
2025/08/11 08:18:18 restic
2025/08/11 08:18:18 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "faea24cc-92a1-42cd-876f-9dc3ad0152cd", "resourceVersion": "110178", "generation": 1, "creationTimestamp": "2025-08-11T08:18:18Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T08:18:18Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "restic" } }, "features": null, "logFormat": "text" }, "status": {} }
Delete all the backups that remained in the phase InProgress
Deleting backup CRs in progress
Deletion of backup CRs in progress completed
Delete all the restores that remained in the phase InProgress
Deleting restore CRs in progress
Deletion of restore CRs in progress completed
STEP: Verify DPA CR setup @ 08/11/25 08:18:18.067
2025/08/11 08:18:18 Waiting for velero pod to be running
2025/08/11 08:18:23 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete'
STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 08:18:23.085
2025/08/11 08:18:23 Checking for correct number of running NodeAgent pods...
STEP: Installing application for case cassandra-hooks-e2e @ 08/11/25 08:18:23.096
2025/08/11 08:18:23 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] ***
changed: [localhost]
[WARNING]: unknown field "spec.volumeClaimTemplates[0].labels"
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] ***
changed: [localhost]
FAILED - RETRYING: [localhost]: Check pods status (30 retries left).
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] ***
ok: [localhost]
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] *** fatal: [localhost]: FAILED! 
=> {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-6fip6j15-6e951.cspilp.interop.ccitredhat.com:6443 --token sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o -n test-oadp-92 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.172300", "end": "2025-08-11 08:21:35.706648", "msg": "non-zero return code", "rc": 1, "start": "2025-08-11 08:21:35.534348", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []} PLAY RECAP ********************************************************************* localhost : ok=21  changed=8  unreachable=0 failed=1  skipped=2  rescued=0 ignored=0 2025/08/11 08:21:35 2025-08-11 08:18:25,046 p=36343 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:18:25,046 p=36343 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:18:25,334 p=36343 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:18:25,335 p=36343 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:18:25,654 p=36343 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:18:25,654 p=36343 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:18:26,044 p=36343 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:18:26,045 p=36343 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:18:26,067 p=36343 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:18:26,068 p=36343 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:18:26,094 p=36343 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:18:26,094 p=36343 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:18:26,115 p=36343 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:18:26,116 p=36343 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:18:26,575 p=36343 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:18:26,575 p=36343 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:18:26,624 p=36343 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:18:26,624 p=36343 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:18:26,652 p=36343 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:18:26,652 p=36343 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:18:26,655 p=36343 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:18:27,343 p=36343 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:18:27,343 p=36343 u=1002120000 
n=ansible INFO| ok: [localhost] 2025-08-11 08:18:28,501 p=36343 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] *** 2025-08-11 08:18:28,501 p=36343 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 08:18:28,501 p=36343 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:18:28,951 p=36343 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] *** 2025-08-11 08:18:28,951 p=36343 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:18:29,305 p=36343 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] *** 2025-08-11 08:18:29,306 p=36343 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:18:30,293 p=36343 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] *** 2025-08-11 08:18:30,293 p=36343 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:18:31,117 p=36343 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] *** 2025-08-11 08:18:31,117 p=36343 u=1002120000 n=ansible WARNING| [WARNING]: unknown field "spec.volumeClaimTemplates[0].labels" 2025-08-11 08:18:31,117 p=36343 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:18:32,084 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pods status (30 retries left). 2025-08-11 08:18:38,063 p=36343 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] *** 2025-08-11 08:18:38,063 p=36343 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:18:42,038 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left). 2025-08-11 08:18:49,752 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left). 2025-08-11 08:18:55,161 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left). 2025-08-11 08:19:00,566 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left). 2025-08-11 08:19:09,939 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left). 2025-08-11 08:19:15,305 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left). 2025-08-11 08:19:20,733 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left). 2025-08-11 08:19:26,214 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left). 
2025-08-11 08:19:31,525 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left). 2025-08-11 08:19:36,858 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left). 2025-08-11 08:19:45,326 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left). 2025-08-11 08:19:50,771 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left). 2025-08-11 08:19:56,270 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left). 2025-08-11 08:20:01,789 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left). 2025-08-11 08:20:07,354 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left). 2025-08-11 08:20:12,955 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left). 2025-08-11 08:20:18,456 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left). 2025-08-11 08:20:23,989 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left). 2025-08-11 08:20:29,598 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left). 2025-08-11 08:20:35,023 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left). 2025-08-11 08:20:40,503 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left). 2025-08-11 08:20:47,349 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left). 2025-08-11 08:20:52,729 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left). 2025-08-11 08:20:58,123 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left). 2025-08-11 08:21:03,474 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left). 2025-08-11 08:21:08,870 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left). 2025-08-11 08:21:14,281 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left). 
2025-08-11 08:21:19,631 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left). 2025-08-11 08:21:24,988 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left). 2025-08-11 08:21:30,371 p=36343 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left). 2025-08-11 08:21:35,731 p=36343 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] *** 2025-08-11 08:21:35,732 p=36343 u=1002120000 n=ansible INFO| fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-6fip6j15-6e951.cspilp.interop.ccitredhat.com:6443 --token sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o -n test-oadp-92 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.172300", "end": "2025-08-11 08:21:35.706648", "msg": "non-zero return code", "rc": 1, "start": "2025-08-11 08:21:35.534348", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []} 2025-08-11 08:21:35,733 p=36343 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:21:35,733 p=36343 u=1002120000 n=ansible INFO| localhost : ok=21 changed=8 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0 Run the command: oc get event -n test-oadp-92 2025/08/11 08:21:35 LAST SEEN TYPE REASON OBJECT MESSAGE 3m4s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 3m4s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 3m4s Normal Scheduled pod/cassandra-0 Successfully assigned test-oadp-92/cassandra-0 to ip-10-0-60-252.ec2.internal 3m4s Normal SuccessfulAttachVolume pod/cassandra-0 AttachVolume.Attach succeeded for volume "pvc-8ad7e1cc-31a2-48b8-9b70-887b7a111e0b" 3m Normal AddedInterface pod/cassandra-0 Add eth0 [10.128.2.94/23] from ovn-kubernetes 55s Normal Pulling pod/cassandra-0 Pulling image "quay.io/migqe/cassandra:multiarch" 2m59s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 592ms (592ms including waiting). Image size: 307783610 bytes. 55s Normal Created pod/cassandra-0 Created container: cassandra 55s Normal Started pod/cassandra-0 Started container cassandra 2m52s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 575ms (575ms including waiting). Image size: 307783610 bytes. 3s Warning BackOff pod/cassandra-0 Back-off restarting failed container cassandra in pod cassandra-0_test-oadp-92(0d5d3494-b722-460e-989e-23239a7b5002) 2m31s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 969ms (969ms including waiting). 
Image size: 307783610 bytes. 116s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 688ms (688ms including waiting). Image size: 307783610 bytes. 55s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 711ms (711ms including waiting). Image size: 307783610 bytes. 2m59s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m59s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m58s Normal Scheduled pod/cassandra-1 Successfully assigned test-oadp-92/cassandra-1 to ip-10-0-114-0.ec2.internal 2m58s Normal SuccessfulAttachVolume pod/cassandra-1 AttachVolume.Attach succeeded for volume "pvc-081fda79-78f3-4d0a-b146-4e5fff4f2ba6" 2m56s Normal AddedInterface pod/cassandra-1 Add eth0 [10.131.0.140/23] from ovn-kubernetes 55s Normal Pulling pod/cassandra-1 Pulling image "quay.io/migqe/cassandra:multiarch" 2m56s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 665ms (665ms including waiting). Image size: 307783610 bytes. 54s Normal Created pod/cassandra-1 Created container: cassandra 54s Normal Started pod/cassandra-1 Started container cassandra 2m47s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 805ms (805ms including waiting). Image size: 307783610 bytes. 8s Warning BackOff pod/cassandra-1 Back-off restarting failed container cassandra in pod cassandra-1_test-oadp-92(caa4ac15-8b8c-41ea-9362-5d48da3d139e) 2m30s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 547ms (547ms including waiting). Image size: 307783610 bytes. 114s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 677ms (677ms including waiting). Image size: 307783610 bytes. 54s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 815ms (815ms including waiting). Image size: 307783610 bytes. 2m55s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m55s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m55s Normal Scheduled pod/cassandra-2 Successfully assigned test-oadp-92/cassandra-2 to ip-10-0-4-228.ec2.internal 2m54s Normal SuccessfulAttachVolume pod/cassandra-2 AttachVolume.Attach succeeded for volume "pvc-af3c7a2c-a745-43db-966c-b76b6983af4b" 2m44s Normal AddedInterface pod/cassandra-2 Add eth0 [10.129.2.61/23] from ovn-kubernetes 60s Normal Pulling pod/cassandra-2 Pulling image "quay.io/migqe/cassandra:multiarch" 2m43s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 748ms (748ms including waiting). Image size: 307783610 bytes. 59s Normal Created pod/cassandra-2 Created container: cassandra 59s Normal Started pod/cassandra-2 Started container cassandra 2m37s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 667ms (667ms including waiting). 
Image size: 307783610 bytes. 6s Warning BackOff pod/cassandra-2 Back-off restarting failed container cassandra in pod cassandra-2_test-oadp-92(5de41e10-4bd2-4163-9d4a-2bb0143b324c) 2m20s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 765ms (766ms including waiting). Image size: 307783610 bytes. 105s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 634ms (634ms including waiting). Image size: 307783610 bytes. 59s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 716ms (716ms including waiting). Image size: 307783610 bytes. 3m4s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-0 External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-0" 3m4s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-0 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. 3m4s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-0 Successfully provisioned volume pvc-8ad7e1cc-31a2-48b8-9b70-887b7a111e0b 2m59s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-1 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. 2m59s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-1 External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-1" 2m59s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-1 Successfully provisioned volume pvc-081fda79-78f3-4d0a-b146-4e5fff4f2ba6 2m55s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-2 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. 
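The FailedScheduling warnings above are transient: each PVC binds as soon as the openshift-storage.rbd.csi.ceph.com provisioner finishes (see the ProvisioningSucceeded events that follow), so storage is not the failure here. The earlier warning, unknown field "spec.volumeClaimTemplates[0].labels", does point at a real issue in the role's StatefulSet manifest, though: on a volume claim template, labels belong under the metadata block. A minimal sketch of the corrected shape, assuming only the template name cassandra-data implied by the PVC names (the label and storage request are placeholders):

  # Hypothetical corrected volumeClaimTemplates stanza for the cassandra
  # StatefulSet; the "cassandra-data" name is taken from the PVC names
  # (cassandra-data-cassandra-N) in the events above, the rest is assumed.
  volumeClaimTemplates:
    - metadata:
        name: cassandra-data
        labels:          # valid here; "spec.volumeClaimTemplates[0].labels" is not
          app: cassandra
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi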
2m55s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-2 External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-2"
2m55s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-2 Successfully provisioned volume pvc-af3c7a2c-a745-43db-966c-b76b6983af4b
3m4s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success
3m4s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-0 in StatefulSet cassandra successful
2m59s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-1 Pod cassandra-1 in StatefulSet cassandra success
2m59s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-1 in StatefulSet cassandra successful
2m55s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-2 Pod cassandra-2 in StatefulSet cassandra success
2m55s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-2 in StatefulSet cassandra successful
[FAILED] in [It] - /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 08/11/25 08:21:35.934
< Exit [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic @ 08/11/25 08:21:35.934 (3m22.509s)
> Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 08:21:35.934
2025/08/11 08:21:35 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
STEP: Get the failed spec name @ 08/11/25 08:21:35.934
2025/08/11 08:21:35 The failed spec name is: Backup hooks tests Pre exec hook [tc-id:OADP-92][interop][smoke] Cassandra app with Restic
STEP: Create a folder for all must-gather files if it doesn't exist already @ 08/11/25 08:21:35.934
STEP: Create a folder for the failed spec if it doesn't exist already @ 08/11/25 08:21:35.934
STEP: Run must-gather because the spec failed @ 08/11/25 08:21:35.934
2025/08/11 08:21:35 Log the present working directory path: /alabama/cspi/e2e
2025/08/11 08:21:35 [adm must-gather --dest-dir /alabama/cspi/e2e/logs/It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic --image registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0]
2025/08/11 08:22:25 Log all the files present in /alabama/cspi/e2e/logs directory
2025/08/11 08:22:25 It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic
2025/08/11 08:22:25 It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application
STEP: Find must-gather folder and rename it to a shorter, more readable name @ 08/11/25 08:22:25.443
The folder logs/It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic/must-gather already exists, skipping renaming the folder
< Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 08:22:25.443 (49.509s)
> Enter [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:22:25.443
2025/08/11 08:22:25 Cleaning app
2025/08/11 08:22:25 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
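The task that follows tears down the application namespace. The role source is not part of this log, but since the kubernetes<24.2.0 warning comes from the kubernetes.core collection, the cleanup is presumably a task of roughly this shape (the module choice, wait flag, and timeout are assumptions):

  # Hypothetical sketch of the namespace cleanup task; only the task name and
  # the namespace variable are taken from the log output around it.
  - name: Remove namespace test-oadp-92
    kubernetes.core.k8s:
      state: absent
      api_version: v1
      kind: Namespace
      name: "{{ namespace }}"
      wait: true          # block until namespace finalizers complete
      wait_timeout: 300

The roughly 26-second gap between Gathering Facts and the task result in the timestamped log below is consistent with such a wait on namespace deletion.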
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-92] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025/08/11 08:22:55 2025-08-11 08:22:26,896 p=37642 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:22:26,896 p=37642 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:22:27,143 p=37642 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:22:27,143 p=37642 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:22:27,388 p=37642 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:22:27,388 p=37642 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:22:27,672 p=37642 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:22:27,672 p=37642 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:22:27,689 p=37642 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:22:27,689 p=37642 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:22:27,709 p=37642 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:22:27,709 p=37642 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:22:27,723 p=37642 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:22:27,724 p=37642 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:22:28,062 p=37642 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:22:28,062 p=37642 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:22:28,091 p=37642 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:22:28,092 p=37642 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:22:28,111 p=37642 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:22:28,111 p=37642 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:22:28,113 p=37642 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:22:28,714 p=37642 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:22:28,714 p=37642 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:22:54,637 p=37642 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-92] *** 2025-08-11 08:22:54,638 p=37642 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 08:22:54,638 p=37642 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:22:55,083 p=37642 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:22:55,083 p=37642 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:22:55.148 (29.705s) > Enter [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:22:55.148 2025/08/11 08:22:55 Cleaning setup resources for the backup < Exit [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:22:55.148 (0s) > Enter [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:22:55.148 < Exit [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:22:55.164 (16ms) Attempt #2 Failed. Retrying ↺ @ 08/11/25 08:22:55.164 > Enter [BeforeEach] Backup hooks tests @ 08/11/25 08:22:55.164 < Exit [BeforeEach] Backup hooks tests @ 08/11/25 08:22:55.171 (7ms) > Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:22:55.171 < Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:22:55.171 (0s) > Enter [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic @ 08/11/25 08:22:55.171 2025/08/11 08:22:55 Delete all downloadrequest mysql-9582604b-7688-11f0-aa2b-0a580a83369f-10725566-058b-4113-b9b6-4b88e542c907 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-25742749-bc8d-432d-a737-8565cc6f7513 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-36cf801c-d9ee-439e-a24b-a36e98a0aef0 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-431ccf69-bec6-47cc-b3f6-e2e65fa53603 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-72278f3c-f1f9-4525-9a4d-6197093c38ed mysql-9582604b-7688-11f0-aa2b-0a580a83369f-75a583a5-b2ff-4c3b-9290-ad398a82678a mysql-9582604b-7688-11f0-aa2b-0a580a83369f-a50ff03f-bdd4-41ea-98b4-b82df3bb7602 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-e3382426-f77e-45f7-a742-891f5cfe7756 mysql-9582604b-7688-11f0-aa2b-0a580a83369f-e3b11401-3a6c-4737-80d6-abd5cd8dff09 ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f-0e8a9572-2d28-4823-a64a-b6701c9856bf ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f-a1245f20-4086-4e62-be39-69bd3d73c02e ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f-a30e1f6c-8deb-481c-9fca-018ee41b252f ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f-fe5bc73e-59db-4a06-905d-d7941a796b00 ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f-d11b1518-49b6-4106-b810-d143b5524fa8 ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f-fb324830-069d-40a1-a250-448aba6bdb89 ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-cb2d62be-9144-420b-adad-fc04c409843b ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-fe065b25-0a69-445d-b0db-e734245dea7f todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-5eaf88ed-2100-416e-9079-7a56c8636847 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-7ca6d45b-ace9-4725-85ce-a7cba4dc850c todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-9839b1b7-679d-482a-b1a1-2cfe682327f7 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-af812ea0-8a88-4de2-97a7-90f03add6e2a todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-d7ffe0f5-4b15-43bd-9137-035fd92ad1eb todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-edd73a7c-159e-4b5b-a57f-b1732836157a todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-ee5b44cf-2fc7-4720-ad93-4a5703ecc2c3 todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-f7b667ce-7d72-4be7-89c1-8d405cad49c6 todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-041dd22d-9484-4438-aa81-5f7451ec4d08 
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-2a68f065-cfc8-4f46-a173-cc33c72a1197
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-6e38c5e0-eb64-4913-9d7f-1e229d4a28ae
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-7b0372d3-9831-4148-bdb4-5051b3877fa5
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-9a2e2ef9-4b0b-4521-ade3-8c411940fa4b
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-c7aa5b1e-35c2-4b1f-b215-2e51b510382b
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-eb5364e8-0b0b-4663-81b8-c0908af46ae5
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-ff238a10-8e9a-4f05-9a8c-6a8de430a227
STEP: Create DPA CR @ 08/11/25 08:22:59.797
2025/08/11 08:22:59 restic
2025/08/11 08:22:59 {
  "metadata": {
    "name": "ts-dpa",
    "namespace": "openshift-adp",
    "uid": "f8addf78-301a-45f8-a758-6ec30d82e95b",
    "resourceVersion": "115041",
    "generation": 1,
    "creationTimestamp": "2025-08-11T08:22:59Z",
    "managedFields": [
      {
        "manager": "e2e.test",
        "operation": "Update",
        "apiVersion": "oadp.openshift.io/v1alpha1",
        "time": "2025-08-11T08:22:59Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:spec": {
            ".": {},
            "f:backupLocations": {},
            "f:configuration": {
              ".": {},
              "f:nodeAgent": {
                ".": {},
                "f:enable": {},
                "f:podConfig": {
                  ".": {},
                  "f:resourceAllocations": {}
                },
                "f:uploaderType": {}
              },
              "f:velero": {
                ".": {},
                "f:defaultPlugins": {},
                "f:disableFsBackup": {}
              }
            },
            "f:logFormat": {},
            "f:podDnsConfig": {},
            "f:snapshotLocations": {}
          }
        }
      }
    ]
  },
  "spec": {
    "backupLocations": [
      {
        "velero": {
          "provider": "aws",
          "config": {
            "region": "us-east-1"
          },
          "credential": {
            "name": "cloud-credentials",
            "key": "cloud"
          },
          "objectStorage": {
            "bucket": "ci-op-6fip6j15-interopoadp",
            "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f"
          },
          "default": true
        }
      }
    ],
    "snapshotLocations": [],
    "podDnsConfig": {},
    "configuration": {
      "velero": {
        "defaultPlugins": [
          "openshift",
          "aws",
          "kubevirt"
        ],
        "disableFsBackup": false
      },
      "nodeAgent": {
        "enable": true,
        "podConfig": {
          "resourceAllocations": {}
        },
        "uploaderType": "restic"
      }
    },
    "features": null,
    "logFormat": "text"
  },
  "status": {}
}
Delete all the backups that remained in the phase InProgress
Deleting backup CRs in progress
Deletion of backup CRs in progress completed
Delete all the restores that remained in the phase InProgress
Deleting restore CRs in progress
Deletion of restore CRs in progress completed
STEP: Verify DPA CR setup @ 08/11/25 08:22:59.819
2025/08/11 08:22:59 Waiting for velero pod to be running
2025/08/11 08:23:04 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete'
STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 08:23:04.832
2025/08/11 08:23:04 Checking for correct number of running NodeAgent pods...
STEP: Installing application for case cassandra-hooks-e2e @ 08/11/25 08:23:04.843
2025/08/11 08:23:04 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] *** changed: [localhost] [WARNING]: unknown field "spec.volumeClaimTemplates[0].labels" TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check pods status (30 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] *** ok: [localhost] FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left). 
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] *** fatal: [localhost]: FAILED! 
=> {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-6fip6j15-6e951.cspilp.interop.ccitredhat.com:6443 --token sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o -n test-oadp-92 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.154409", "end": "2025-08-11 08:26:13.678456", "msg": "non-zero return code", "rc": 1, "start": "2025-08-11 08:26:13.524047", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []} PLAY RECAP ********************************************************************* localhost : ok=21  changed=8  unreachable=0 failed=1  skipped=2  rescued=0 ignored=0 2025/08/11 08:26:13 2025-08-11 08:23:06,408 p=37870 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:23:06,408 p=37870 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:23:06,680 p=37870 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:23:06,681 p=37870 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:23:06,950 p=37870 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:23:06,950 p=37870 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:23:07,205 p=37870 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:23:07,205 p=37870 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:23:07,220 p=37870 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:23:07,220 p=37870 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:23:07,240 p=37870 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:23:07,240 p=37870 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:23:07,255 p=37870 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:23:07,256 p=37870 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:23:07,564 p=37870 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:23:07,565 p=37870 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:23:07,592 p=37870 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:23:07,592 p=37870 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:23:07,610 p=37870 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:23:07,610 p=37870 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:23:07,612 p=37870 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:23:08,164 p=37870 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:23:08,164 p=37870 u=1002120000 
n=ansible INFO| ok: [localhost] 2025-08-11 08:23:08,957 p=37870 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] *** 2025-08-11 08:23:08,958 p=37870 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 08:23:08,958 p=37870 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:23:09,345 p=37870 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] *** 2025-08-11 08:23:09,345 p=37870 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:23:09,624 p=37870 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] *** 2025-08-11 08:23:09,624 p=37870 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:23:10,423 p=37870 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] *** 2025-08-11 08:23:10,423 p=37870 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:23:11,088 p=37870 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] *** 2025-08-11 08:23:11,088 p=37870 u=1002120000 n=ansible WARNING| [WARNING]: unknown field "spec.volumeClaimTemplates[0].labels" 2025-08-11 08:23:11,088 p=37870 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:23:11,719 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pods status (30 retries left). 2025-08-11 08:23:17,333 p=37870 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] *** 2025-08-11 08:23:17,334 p=37870 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:23:21,452 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left). 2025-08-11 08:23:28,351 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left). 2025-08-11 08:23:33,694 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left). 2025-08-11 08:23:38,998 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left). 2025-08-11 08:23:47,758 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left). 2025-08-11 08:23:53,085 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left). 2025-08-11 08:23:58,427 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left). 2025-08-11 08:24:03,797 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left). 
2025-08-11 08:24:09,309 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left). 2025-08-11 08:24:14,756 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left). 2025-08-11 08:24:23,150 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left). 2025-08-11 08:24:28,662 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left). 2025-08-11 08:24:34,106 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left). 2025-08-11 08:24:39,419 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left). 2025-08-11 08:24:44,749 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left). 2025-08-11 08:24:50,156 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left). 2025-08-11 08:24:55,505 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left). 2025-08-11 08:25:00,862 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left). 2025-08-11 08:25:09,747 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left). 2025-08-11 08:25:15,076 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left). 2025-08-11 08:25:20,409 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left). 2025-08-11 08:25:25,754 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left). 2025-08-11 08:25:31,110 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left). 2025-08-11 08:25:36,452 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left). 2025-08-11 08:25:41,794 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left). 2025-08-11 08:25:47,101 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left). 2025-08-11 08:25:52,416 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left). 
2025-08-11 08:25:57,762 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left).
2025-08-11 08:26:03,080 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left).
2025-08-11 08:26:08,384 p=37870 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left).
2025-08-11 08:26:13,698 p=37870 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] ***
2025-08-11 08:26:13,699 p=37870 u=1002120000 n=ansible INFO| fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-6fip6j15-6e951.cspilp.interop.ccitredhat.com:6443 --token sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o -n test-oadp-92 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.154409", "end": "2025-08-11 08:26:13.678456", "msg": "non-zero return code", "rc": 1, "start": "2025-08-11 08:26:13.524047", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []}
2025-08-11 08:26:13,699 p=37870 u=1002120000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-08-11 08:26:13,700 p=37870 u=1002120000 n=ansible INFO| localhost : ok=21 changed=8 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0
Run the command: oc get event -n test-oadp-92
2025/08/11 08:26:13 LAST SEEN TYPE REASON OBJECT MESSAGE
3m2s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m2s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m2s Normal Scheduled pod/cassandra-0 Successfully assigned test-oadp-92/cassandra-0 to ip-10-0-114-0.ec2.internal
3m2s Normal SuccessfulAttachVolume pod/cassandra-0 AttachVolume.Attach succeeded for volume "pvc-2f71150c-d9df-4cac-b21a-54bcae26319f"
2m59s Normal AddedInterface pod/cassandra-0 Add eth0 [10.131.0.146/23] from ovn-kubernetes
69s Normal Pulling pod/cassandra-0 Pulling image "quay.io/migqe/cassandra:multiarch"
2m58s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 640ms (640ms including waiting). Image size: 307783610 bytes.
68s Normal Created pod/cassandra-0 Created container: cassandra
68s Normal Started pod/cassandra-0 Started container cassandra
2m51s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 661ms (661ms including waiting). Image size: 307783610 bytes.
9s Warning BackOff pod/cassandra-0 Back-off restarting failed container cassandra in pod cassandra-0_test-oadp-92(fa1469e3-476d-40cb-989f-4726f881519e)
2m30s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 554ms (554ms including waiting). Image size: 307783610 bytes.
116s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 819ms (819ms including waiting). Image size: 307783610 bytes.
68s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 723ms (723ms including waiting). Image size: 307783610 bytes.
2m57s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m57s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m57s Normal Scheduled pod/cassandra-1 Successfully assigned test-oadp-92/cassandra-1 to ip-10-0-60-252.ec2.internal
2m57s Normal SuccessfulAttachVolume pod/cassandra-1 AttachVolume.Attach succeeded for volume "pvc-ac5ddc6b-b4c9-4884-b54e-ab970bec8b33"
2m52s Normal AddedInterface pod/cassandra-1 Add eth0 [10.128.2.99/23] from ovn-kubernetes
55s Normal Pulling pod/cassandra-1 Pulling image "quay.io/migqe/cassandra:multiarch"
2m51s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 648ms (648ms including waiting). Image size: 307783610 bytes.
55s Normal Created pod/cassandra-1 Created container: cassandra
55s Normal Started pod/cassandra-1 Started container cassandra
2m43s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 776ms (776ms including waiting). Image size: 307783610 bytes.
1s Warning BackOff pod/cassandra-1 Back-off restarting failed container cassandra in pod cassandra-1_test-oadp-92(f428e9a6-2dcc-488b-b15f-0af215fbd0b7)
2m25s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 665ms (665ms including waiting). Image size: 307783610 bytes.
112s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 529ms (529ms including waiting). Image size: 307783610 bytes.
55s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 670ms (670ms including waiting). Image size: 307783610 bytes.
2m50s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m50s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m50s Normal Scheduled pod/cassandra-2 Successfully assigned test-oadp-92/cassandra-2 to ip-10-0-4-228.ec2.internal
2m50s Normal SuccessfulAttachVolume pod/cassandra-2 AttachVolume.Attach succeeded for volume "pvc-2a9560a8-beba-406f-a752-0dc199b0d547"
2m46s Normal AddedInterface pod/cassandra-2 Add eth0 [10.129.2.63/23] from ovn-kubernetes
49s Normal Pulling pod/cassandra-2 Pulling image "quay.io/migqe/cassandra:multiarch"
2m45s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 632ms (632ms including waiting). Image size: 307783610 bytes.
48s Normal Created pod/cassandra-2 Created container: cassandra
48s Normal Started pod/cassandra-2 Started container cassandra
2m39s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 723ms (723ms including waiting). Image size: 307783610 bytes.
5s Warning BackOff pod/cassandra-2 Back-off restarting failed container cassandra in pod cassandra-2_test-oadp-92(855bf54d-d3d3-4a5a-8797-bd6fb00cc661)
2m19s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 623ms (623ms including waiting). Image size: 307783610 bytes.
106s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 663ms (663ms including waiting). Image size: 307783610 bytes.
48s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 566ms (566ms including waiting). Image size: 307783610 bytes.
3m2s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-0 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
3m2s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-0 External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-0"
3m2s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-0 Successfully provisioned volume pvc-2f71150c-d9df-4cac-b21a-54bcae26319f
2m57s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-1 External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-1"
2m57s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-1 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m57s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-1 Successfully provisioned volume pvc-ac5ddc6b-b4c9-4884-b54e-ab970bec8b33
2m50s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-2 External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-2"
2m50s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-2 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m50s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-2 Successfully provisioned volume pvc-2a9560a8-beba-406f-a752-0dc199b0d547
3m2s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success
3m2s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-0 in StatefulSet cassandra successful
2m57s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-1 Pod cassandra-1 in StatefulSet cassandra success
2m57s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-1 in StatefulSet cassandra successful
2m50s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-2 Pod cassandra-2 in StatefulSet cassandra success
2m50s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-2 in StatefulSet cassandra successful
[FAILED] in [It] - /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 08/11/25 08:26:13.845
< Exit [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic @ 08/11/25 08:26:13.845 (3m18.674s)
> Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 08:26:13.845
2025/08/11 08:26:13 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
STEP: Get the failed spec name @ 08/11/25 08:26:13.845
2025/08/11 08:26:13 The failed spec name is: Backup hooks tests Pre exec hook [tc-id:OADP-92][interop][smoke] Cassandra app with Restic
STEP: Create a folder for all must-gather files if it doesn't exist already @ 08/11/25 08:26:13.845
STEP: Create a folder for the failed spec if it doesn't exist already @ 08/11/25 08:26:13.845
STEP: Run must-gather because the spec failed @ 08/11/25 08:26:13.845
2025/08/11 08:26:13 Log the present working directory path:- /alabama/cspi/e2e
2025/08/11 08:26:13 [adm must-gather --dest-dir /alabama/cspi/e2e/logs/It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic --image registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0]
2025/08/11 08:27:03 Log all the files present in /alabama/cspi/e2e/logs directory
2025/08/11 08:27:03 It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic
2025/08/11 08:27:03 It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application
STEP: Find must-gather folder and rename it to a shorter more readable name @ 08/11/25 08:27:03.634
The folder logs/It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic/must-gather already exists, skipping renaming the folder
< Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 08:27:03.634 (49.789s)
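The exec probe above could not have passed: the BackOff events show all three cassandra containers crash-looping, so oc exec fails with "container not found" before nodetool ever runs. For triage, the gate the role keeps retrying can be reproduced by hand; a minimal Python sketch, assuming a logged-in oc on PATH and the pod/namespace names from this log (the role's actual until: condition is not shown here, so the parse below is an assumption):

    import subprocess, time

    def cassandra_ready(namespace="test-oadp-92", pod="cassandra-0"):
        # No -t flag: CI has no TTY, which is what produced the
        # "Unable to use a TTY" warning in the failed task above.
        res = subprocess.run(
            ["oc", "-n", namespace, "exec", pod, "--", "nodetool", "status"],
            capture_output=True, text=True)
        if res.returncode != 0:
            return False
        # nodetool prefixes each node line with Status/State codes, e.g. "UN" = Up/Normal.
        nodes = [l for l in res.stdout.splitlines()
                 if l[:2] in ("UN", "DN", "UJ", "UL", "UM", "DJ", "DL", "DM")]
        return bool(nodes) and all(l.startswith("UN") for l in nodes)

    for attempt in range(30):   # the role allowed 30 attempts a few seconds apart
        if cassandra_ready():
            break
        time.sleep(5)
    else:
        raise SystemExit("cassandra nodes never reached Status=Up, State=Normal")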
> Enter [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:27:03.634
2025/08/11 08:27:03 Cleaning app
2025/08/11 08:27:03 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-92] ***
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0
2025/08/11 08:27:33
2025-08-11 08:27:05,339 p=39243 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] ***********************************
2025-08-11 08:27:05,339 p=39243 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:27:05,622 p=39243 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
2025-08-11 08:27:05,623 p=39243 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:27:05,873 p=39243 u=1002120000 n=ansible INFO| TASK [Get admin token] *********************************************************
2025-08-11 08:27:05,873 p=39243 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:27:06,142 p=39243 u=1002120000 n=ansible INFO| TASK [Get user token] **********************************************************
2025-08-11 08:27:06,142 p=39243 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:27:06,158 p=39243 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] *************************************
2025-08-11 08:27:06,158 p=39243 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:27:06,182 p=39243 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************
2025-08-11 08:27:06,182 p=39243 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:27:06,197 p=39243 u=1002120000 n=ansible INFO| TASK [Print token] *************************************************************
2025-08-11 08:27:06,197 p=39243 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" }
2025-08-11 08:27:06,511 p=39243 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] ***************************
2025-08-11 08:27:06,511 p=39243 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:27:06,540 p=39243 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] *************************************
2025-08-11 08:27:06,540 p=39243 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:27:06,559 p=39243 u=1002120000 n=ansible INFO| TASK [set_fact] ****************************************************************
2025-08-11 08:27:06,560 p=39243 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:27:06,561 p=39243 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************
2025-08-11 08:27:07,146 p=39243 u=1002120000 n=ansible INFO| TASK [Gathering Facts] *********************************************************
2025-08-11 08:27:07,146 p=39243 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:27:32,959 p=39243 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-92] ***
2025-08-11 08:27:32,960 p=39243 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
2025-08-11 08:27:32,960 p=39243 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:27:33,341 p=39243 u=1002120000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-08-11 08:27:33,341 p=39243 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0
< Exit [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:27:33.4 (29.766s)
> Enter [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:27:33.4
2025/08/11 08:27:33 Cleaning setup resources for the backup
< Exit [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:27:33.4 (0s)
> Enter [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:27:33.4
< Exit [DeferCleanup (Each)] Pre exec hook @ 08/11/25 08:27:33.415 (15ms)
• [FAILED] [847.523 seconds]
Backup hooks tests Pre exec hook [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic
/alabama/cspi/e2e/hooks/backup_hooks.go:113
[FAILED] Unexpected error:
<*errors.Error | 0xc000f88080>:
Error during command execution: ansible-playbook error: one or more host failed
Command executed: /usr/local/bin/ansible-playbook --extra-vars {"admin_kubeconfig":"/home/jenkins/.kube/config","namespace":"test-oadp-92","non_admin_user":false,"use_role":"/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra","user_kubeconfig":"/home/jenkins/.kube/config","with_deploy":true} --connection local /alabama/cspi/sample-applications/ansible/main.yml
exit status 2
{
    context: "(DefaultExecute::Execute)",
    message: "Error during command execution: ansible-playbook error: one or more host failed\n\nCommand executed: /usr/local/bin/ansible-playbook --extra-vars {\"admin_kubeconfig\":\"/home/jenkins/.kube/config\",\"namespace\":\"test-oadp-92\",\"non_admin_user\":false,\"use_role\":\"/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra\",\"user_kubeconfig\":\"/home/jenkins/.kube/config\",\"with_deploy\":true} --connection local /alabama/cspi/sample-applications/ansible/main.yml\n\nexit status 2",
    wrappedErrors: nil,
}
occurred
In [It] at: /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 08/11/25 08:26:13.845
There were additional failures detected. To view them in detail run ginkgo -vv
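The failure report quotes the exact ansible-playbook invocation, so the deploy can be replayed outside the Go suite when triaging. A sketch using the ansible_runner package from the harness venv (the scratch directory is an assumption; the playbook path and extra-vars are copied from the error above):

    import ansible_runner

    r = ansible_runner.run(
        private_data_dir="/tmp/oadp-triage",   # scratch dir for run artifacts
        playbook="/alabama/cspi/sample-applications/ansible/main.yml",
        cmdline="--connection local",
        extravars={
            "admin_kubeconfig": "/home/jenkins/.kube/config",
            "user_kubeconfig": "/home/jenkins/.kube/config",
            "namespace": "test-oadp-92",
            "non_admin_user": False,
            "use_role": "/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra",
            "with_deploy": True,
        },
    )
    # A failed host yields status "failed" and rc 2, matching "exit status 2" above.
    print(r.status, r.rc)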
------------------------------
SSSSSSSSSSSSSSSSSSSSS
> Enter [ReportAfterEach] [upstream-velero] Credentials suite @ 08/11/25 08:27:33.415
< Exit [ReportAfterEach] [upstream-velero] Credentials suite @ 08/11/25 08:27:33.415 (0s)
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[skip-disconnected] Restore hooks tests Successful Init hook [tc-id:OADP-164][interop][smoke] MySQL app with Restic
/alabama/cspi/e2e/hooks/restore_hooks.go:132
> Enter [BeforeEach] [skip-disconnected] Restore hooks tests @ 08/11/25 08:27:33.416
< Exit [BeforeEach] [skip-disconnected] Restore hooks tests @ 08/11/25 08:27:33.423 (7ms)
> Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:27:33.423
< Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:27:33.423 (0s)
> Enter [It] [tc-id:OADP-164][interop][smoke] MySQL app with Restic @ 08/11/25 08:27:33.423
2025/08/11 08:27:33 Delete all downloadrequest
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-02dc6025-4f93-4ed5-b545-8292d1404b4f
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-0fd5d3be-9905-42d8-a92c-51c3f1b7d2eb
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-1b000c87-6c30-4b67-87da-3a817433a9f6
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-2d697380-fa27-458e-bf23-1fb0da58d6b3
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-320d26c0-b1b6-422c-bd1f-9ebcd8c09ee7
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-5a74db36-092d-43ae-85e5-1fdb984268ba
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-8bd87051-3fb2-42a9-ac16-eab2975591de
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-a54cfd96-d2f9-47a5-b667-fb07ccd22205
mysql-9582604b-7688-11f0-aa2b-0a580a83369f-b08bd2a7-bbf3-46b9-8081-154223704365
ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f-0c461a5a-c38a-4ba3-a7c9-7983d76cd896
ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f-8d596ccf-523e-4bbe-816c-08b37b7f1b84
ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f-a2453c02-b7ae-4ac5-9484-8d0e1f6bc80e
ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f-c59392ad-1530-46a9-9003-3793854f10f0
ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f-8453c51d-cea7-48c5-bb03-1f05d73074d2
ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f-be49146b-48c1-4a4d-8b58-2afd74d2808a
ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-6aad2137-8312-4571-a5ea-13fd0719f96b
ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f-b7d3d72e-4296-4df1-a29b-6a134a5a3a71
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-08752a03-93ee-4572-87ea-9123e57544ac
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-0f268195-2add-4159-bc6c-976779797309
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-264aee38-fe36-42d6-8e3f-80958d0e7269
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-378f8bd8-8d89-4f7c-86c9-e7ecae2326e5
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-56252c60-bdef-4416-9ee7-e07adfb09d5a
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-71d88bde-5310-4b48-8227-f19c2fa2a5f1
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-80cfd828-d230-48e5-aeaa-68231a321862
todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f-d6a35132-b679-400f-9f0d-123db016bcff
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-1cdc1fb6-5574-48d9-b166-247b2d6d119a
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-2603979d-c790-41a2-bee0-52a6ad7c63da
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-33605e18-46af-4434-ad18-b82607b9351b
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-33dd477d-c4be-4a03-b32d-298924b9eb89
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-63b79983-3bcd-4ca6-b1cf-de5d892ed442
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-7e56df3b-d398-4055-bf72-eedb3765b165
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-b68f138e-8de0-4513-be28-b13f249ffe9b
todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-f0d62222-f6f8-44d0-99ac-7fd927b5979e
STEP: Create DPA CR @ 08/11/25 08:27:38.038
2025/08/11 08:27:38 restic
2025/08/11 08:27:38 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "602d47a0-f576-48cd-a937-e24ae3a26bd3", "resourceVersion": "119932", "generation": 1, "creationTimestamp": "2025-08-11T08:27:38Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T08:27:38Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "restic" } }, "features": null, "logFormat": "text" }, "status": {} }
Delete all the backups that remained in the phase InProgress
Deleting backup CRs in progress
Deletion of backup CRs in progress completed
Delete all the restores that remained in the phase InProgress
Deleting restore CRs in progress
Deletion of restore CRs in progress completed
STEP: Verify DPA CR setup @ 08/11/25 08:27:38.06
2025/08/11 08:27:38 Waiting for velero pod to be running
2025/08/11 08:27:43 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete'
STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 08:27:43.078
2025/08/11 08:27:43 Checking for correct number of running NodeAgent pods...
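These setup waits (velero pod running, DPA reconciled, node-agent head count) all reduce to polling operator status with oc. A sketch of the DPA condition check, with the resource name and namespace taken from the CR dump above (the dpa short name is assumed to be registered by the operator's CRD):

    import json, subprocess, time

    def dpa_reconciled(name="ts-dpa", ns="openshift-adp"):
        raw = subprocess.run(
            ["oc", "-n", ns, "get", "dpa", name, "-o", "json"],
            capture_output=True, text=True, check=True).stdout
        conds = json.loads(raw).get("status", {}).get("conditions", [])
        # The suite waits for reason 'Completed' and message 'Reconcile complete'.
        return any(c.get("reason") == "Completed"
                   and c.get("message") == "Reconcile complete" for c in conds)

    for _ in range(24):          # poll up to ~2 minutes
        if dpa_reconciled():
            break
        time.sleep(5)
    else:
        raise SystemExit("DPA never reported Reconcile complete")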
STEP: Installing application for case mysql-hooks-e2e @ 08/11/25 08:27:43.176
2025/08/11 08:27:43 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace test-oadp-164] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] ***
changed: [localhost]
FAILED - RETRYING: [localhost]: Check pod status (30 retries left).
FAILED - RETRYING: [localhost]: Check pod status (29 retries left).
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] ***
changed: [localhost]
FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left).
FAILED - RETRYING: [localhost]: Wait until service ready for connections (29 retries left).
FAILED - RETRYING: [localhost]: Wait until service ready for connections (28 retries left).
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Add dummy data into mysql-data1 pvc] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create md5 hashes for the files] ***
changed: [localhost]
Pausing for 30 seconds
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Pause After Create md5 hashes for the files] ***
ok: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=25 changed=11 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0
2025/08/11 08:28:49
2025-08-11 08:27:44,571 p=39471 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] ***********************************
2025-08-11 08:27:44,571 p=39471 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:27:44,804 p=39471 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
2025-08-11 08:27:44,804 p=39471 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:27:45,043 p=39471 u=1002120000 n=ansible INFO| TASK [Get admin token] *********************************************************
2025-08-11 08:27:45,044 p=39471 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:27:45,286 p=39471 u=1002120000 n=ansible INFO| TASK [Get user token] **********************************************************
2025-08-11 08:27:45,286 p=39471 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:27:45,300 p=39471 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] *************************************
2025-08-11 08:27:45,300 p=39471 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:27:45,318 p=39471 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************
2025-08-11 08:27:45,318 p=39471 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:27:45,331 p=39471 u=1002120000 n=ansible INFO| TASK [Print token] *************************************************************
2025-08-11 08:27:45,332 p=39471 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" }
2025-08-11 08:27:45,638 p=39471 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] ***************************
2025-08-11 08:27:45,638 p=39471 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:27:45,670 p=39471 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] *************************************
2025-08-11 08:27:45,670 p=39471 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:27:45,687 p=39471 u=1002120000 n=ansible INFO| TASK [set_fact] ****************************************************************
2025-08-11 08:27:45,687 p=39471 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:27:45,688 p=39471 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************
2025-08-11 08:27:46,239 p=39471 u=1002120000 n=ansible INFO| TASK [Gathering Facts] *********************************************************
2025-08-11 08:27:46,240 p=39471 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:27:47,033 p=39471 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace test-oadp-164] ***
2025-08-11 08:27:47,033 p=39471 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
2025-08-11 08:27:47,034 p=39471 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:27:47,396 p=39471 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] ***
2025-08-11 08:27:47,396 p=39471 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:27:48,249 p=39471 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] ***
2025-08-11 08:27:48,249 p=39471 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:27:48,873 p=39471 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pod status (30 retries left).
2025-08-11 08:27:54,464 p=39471 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pod status (29 retries left).
2025-08-11 08:28:00,065 p=39471 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] ***
2025-08-11 08:28:00,066 p=39471 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:28:00,474 p=39471 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] ***
2025-08-11 08:28:00,474 p=39471 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:28:00,764 p=39471 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left).
2025-08-11 08:28:06,017 p=39471 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (29 retries left).
2025-08-11 08:28:11,276 p=39471 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (28 retries left).
2025-08-11 08:28:16,544 p=39471 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] ***
2025-08-11 08:28:16,544 p=39471 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:28:18,245 p=39471 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] ***
2025-08-11 08:28:18,245 p=39471 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:28:18,990 p=39471 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Add dummy data into mysql-data1 pvc] ***
2025-08-11 08:28:18,991 p=39471 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:28:19,519 p=39471 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create md5 hashes for the files] ***
2025-08-11 08:28:19,519 p=39471 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:28:19,535 p=39471 u=1002120000 n=ansible INFO| Pausing for 30 seconds
2025-08-11 08:28:49,537 p=39471 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Pause After Create md5 hashes for the files] ***
2025-08-11 08:28:49,538 p=39471 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:28:49,667 p=39471 u=1002120000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-08-11 08:28:49,667 p=39471 u=1002120000 n=ansible INFO| localhost : ok=25 changed=11 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0
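The md5 tasks above are the integrity check for the whole round trip: hashes recorded when the data is seeded are re-verified by the validate pass after restore. The core of that check is small; a Python sketch with an illustrative data directory (the role's real script and file layout are not shown in this log):

    import hashlib, pathlib

    def md5sum(path):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    data = pathlib.Path("/var/lib/mysql-files")   # illustrative mount point
    manifest = {p.name: md5sum(p) for p in data.iterdir() if p.is_file()}
    # ... backup, delete the namespace, restore ...
    for name, digest in manifest.items():
        assert md5sum(data / name) == digest, f"{name} changed across restore"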
STEP: Verify Application deployment @ 08/11/25 08:28:49.736
2025/08/11 08:28:49 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] ***
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0
2025/08/11 08:28:56
2025-08-11 08:28:51,489 p=40044 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] ***********************************
2025-08-11 08:28:51,490 p=40044 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:28:51,837 p=40044 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
2025-08-11 08:28:51,837 p=40044 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:28:52,136 p=40044 u=1002120000 n=ansible INFO| TASK [Get admin token] *********************************************************
2025-08-11 08:28:52,137 p=40044 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:28:52,437 p=40044 u=1002120000 n=ansible INFO| TASK [Get user token] **********************************************************
2025-08-11 08:28:52,437 p=40044 u=1002120000 n=ansible
INFO| changed: [localhost] 2025-08-11 08:28:52,453 p=40044 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:28:52,453 p=40044 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:28:52,473 p=40044 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:28:52,474 p=40044 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:28:52,488 p=40044 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:28:52,489 p=40044 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:28:52,858 p=40044 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:28:52,858 p=40044 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:28:52,886 p=40044 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:28:52,886 p=40044 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:28:52,904 p=40044 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:28:52,904 p=40044 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:28:52,905 p=40044 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:28:53,568 p=40044 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:28:53,569 p=40044 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:28:54,732 p=40044 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** 2025-08-11 08:28:54,733 p=40044 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:28:55,200 p=40044 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-08-11 08:28:55,201 p=40044 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:28:55,610 p=40044 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** 2025-08-11 08:28:55,610 p=40044 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:28:56,200 p=40044 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** 2025-08-11 08:28:56,200 p=40044 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:28:56,205 p=40044 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:28:56,205 p=40044 u=1002120000 n=ansible INFO| localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0 2025/08/11 08:28:56 ExtractTarGz: Create file /tmp/tempDir2037500683/world-db/world.sql 2025/08/11 08:28:56 2025/08/11 08:28:56 {{ } { } [{{ } {mysql-data test-oadp-164 b60414e6-6325-4bca-8f00-684db4f5589b 120258 0 2025-08-11 08:27:48 +0000 UTC map[app:mysql testlabel:selectors testlabel2:foo] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-data-1754900868 
reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-08-11 08:27:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:testlabel":{},"f:testlabel2":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-08-11 08:27:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-08-11 08:27:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-08-11 08:27:48 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status}]} {[ReadWriteOnce] nil {map[] map[storage:{{2147483648 0} {} 2Gi BinarySI}]} pvc-b60414e6-6325-4bca-8f00-684db4f5589b 0xc000dccf00 0xc000dccf10 nil nil } {Bound [ReadWriteOnce] map[storage:{{2147483648 0} {} 2Gi BinarySI}] [] map[] map[] nil}} {{ } {mysql-data1 test-oadp-164 10dd764d-ac6e-46f7-84e9-b172e9916dcf 120261 0 2025-08-11 08:27:48 +0000 UTC map[app:mysql testlabel:selectors testlabel2:foo] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-data1-1754900868 reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-08-11 08:27:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:testlabel":{},"f:testlabel2":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-08-11 08:27:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-08-11 08:27:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-08-11 08:27:48 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status}]} {[ReadWriteOnce] nil {map[] map[storage:{{2147483648 0} {} 2Gi BinarySI}]} pvc-10dd764d-ac6e-46f7-84e9-b172e9916dcf 0xc000dcd0a0 0xc000dcd0b0 nil nil } {Bound [ReadWriteOnce] map[storage:{{2147483648 0} {} 2Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f @ 08/11/25 08:28:56.74 2025/08/11 08:28:56 Wait until backup mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f is completed backup phase: Completed 2025/08/11 08:29:16 Verify the PodVolumeBackup is completed successfully and BackupRepository type is matching with 
DPA.nodeAgent.uploaderType 2025/08/11 08:29:16 apiVersion: velero.io/v1 kind: PodVolumeBackup metadata: annotations: velero.io/pvc-name: mysql-data1 creationTimestamp: "2025-08-11T08:29:01Z" generateName: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f- generation: 4 labels: velero.io/backup-name: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f velero.io/backup-uid: 8db6e4c4-b905-49c4-a857-8cb9179b65ac velero.io/pvc-uid: 10dd764d-ac6e-46f7-84e9-b172e9916dcf managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:velero.io/pvc-name: {} f:generateName: {} f:labels: .: {} f:velero.io/backup-name: {} f:velero.io/backup-uid: {} f:velero.io/pvc-uid: {} f:ownerReferences: .: {} k:{"uid":"8db6e4c4-b905-49c4-a857-8cb9179b65ac"}: {} f:spec: .: {} f:backupStorageLocation: {} f:node: {} f:pod: {} f:repoIdentifier: {} f:tags: .: {} f:backup: {} f:backup-uid: {} f:ns: {} f:pod: {} f:pod-uid: {} f:pvc-uid: {} f:volume: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-08-11T08:29:01Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:path: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:snapshotID: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-08-11T08:29:09Z" name: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f-x6d8b namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Backup name: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f uid: 8db6e4c4-b905-49c4-a857-8cb9179b65ac resourceVersion: "121421" uid: e30253b8-20b6-461b-9f30-799cbcf770b5 spec: backupStorageLocation: ts-dpa-1 node: ip-10-0-60-252.ec2.internal pod: kind: Pod name: mysql-64c9d6466-mkhxs namespace: test-oadp-164 uid: b6c281f5-d426-47fa-81ee-da91fbe61aaa repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-6fip6j15-interopoadp/velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f/restic/test-oadp-164 tags: backup: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f backup-uid: 8db6e4c4-b905-49c4-a857-8cb9179b65ac ns: test-oadp-164 pod: mysql-64c9d6466-mkhxs pod-uid: b6c281f5-d426-47fa-81ee-da91fbe61aaa pvc-uid: 10dd764d-ac6e-46f7-84e9-b172e9916dcf volume: mysql-data1 uploaderType: restic volume: mysql-data1 status: completionTimestamp: "2025-08-11T08:29:09Z" path: /host_pods/b6c281f5-d426-47fa-81ee-da91fbe61aaa/volumes/kubernetes.io~csi/pvc-10dd764d-ac6e-46f7-84e9-b172e9916dcf/mount phase: Completed progress: bytesDone: 105256269 totalBytes: 105256269 snapshotID: 77b3ffad startTimestamp: "2025-08-11T08:29:06Z" 2025/08/11 08:29:16 apiVersion: velero.io/v1 kind: PodVolumeBackup metadata: annotations: velero.io/pvc-name: mysql-data creationTimestamp: "2025-08-11T08:29:01Z" generateName: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f- generation: 4 labels: velero.io/backup-name: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f velero.io/backup-uid: 8db6e4c4-b905-49c4-a857-8cb9179b65ac velero.io/pvc-uid: b60414e6-6325-4bca-8f00-684db4f5589b managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:velero.io/pvc-name: {} f:generateName: {} f:labels: .: {} f:velero.io/backup-name: {} f:velero.io/backup-uid: {} f:velero.io/pvc-uid: {} f:ownerReferences: .: {} k:{"uid":"8db6e4c4-b905-49c4-a857-8cb9179b65ac"}: {} f:spec: .: {} f:backupStorageLocation: {} f:node: {} f:pod: {} f:repoIdentifier: {} f:tags: .: {} f:backup: {} 
f:backup-uid: {} f:ns: {} f:pod: {} f:pod-uid: {} f:pvc-uid: {} f:volume: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-08-11T08:29:01Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:path: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:snapshotID: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-08-11T08:29:03Z" name: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f-zx7nk namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Backup name: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f uid: 8db6e4c4-b905-49c4-a857-8cb9179b65ac resourceVersion: "121343" uid: 7375d3e2-86b1-4471-921a-b532157adeba spec: backupStorageLocation: ts-dpa-1 node: ip-10-0-60-252.ec2.internal pod: kind: Pod name: mysql-64c9d6466-mkhxs namespace: test-oadp-164 uid: b6c281f5-d426-47fa-81ee-da91fbe61aaa repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-6fip6j15-interopoadp/velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f/restic/test-oadp-164 tags: backup: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f backup-uid: 8db6e4c4-b905-49c4-a857-8cb9179b65ac ns: test-oadp-164 pod: mysql-64c9d6466-mkhxs pod-uid: b6c281f5-d426-47fa-81ee-da91fbe61aaa pvc-uid: b60414e6-6325-4bca-8f00-684db4f5589b volume: mysql-data uploaderType: restic volume: mysql-data status: completionTimestamp: "2025-08-11T08:29:03Z" path: /host_pods/b6c281f5-d426-47fa-81ee-da91fbe61aaa/volumes/kubernetes.io~csi/pvc-b60414e6-6325-4bca-8f00-684db4f5589b/mount phase: Completed progress: bytesDone: 107854713 totalBytes: 107854713 startTimestamp: "2025-08-11T08:29:01Z"
STEP: Verify backup mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f has completed successfully @ 08/11/25 08:29:16.8
2025/08/11 08:29:16 Backup for case mysql-hooks-e2e succeeded
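The PodVolumeBackup verification just logged (phase Completed, uploaderType matching DPA.nodeAgent.uploaderType, bytesDone equal to totalBytes) can be repeated by hand; a sketch over oc and JSON, with the backup name and label taken from the two dumps above:

    import json, subprocess

    backup = "mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f"
    raw = subprocess.run(
        ["oc", "-n", "openshift-adp", "get", "podvolumebackups",
         "-l", f"velero.io/backup-name={backup}", "-o", "json"],
        capture_output=True, text=True, check=True).stdout
    for pvb in json.loads(raw)["items"]:
        assert pvb["status"]["phase"] == "Completed"
        # Must match the DPA's nodeAgent.uploaderType ("restic" in this run).
        assert pvb["spec"]["uploaderType"] == "restic"
        prog = pvb["status"]["progress"]
        assert prog["bytesDone"] == prog["totalBytes"]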
STEP: Delete the application resources mysql-hooks-e2e @ 08/11/25 08:29:16.837
STEP: Cleanup Application for case mysql-hooks-e2e @ 08/11/25 08:29:16.837
2025/08/11 08:29:16 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-164] ***
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0
2025/08/11 08:29:46
2025-08-11 08:29:18,504 p=40361 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] ***********************************
2025-08-11 08:29:18,505 p=40361 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:29:18,787 p=40361 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
2025-08-11 08:29:18,787 p=40361 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:29:19,080 p=40361 u=1002120000 n=ansible INFO| TASK [Get admin token] *********************************************************
2025-08-11 08:29:19,080 p=40361 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:29:19,392 p=40361 u=1002120000 n=ansible INFO| TASK [Get user token] **********************************************************
2025-08-11 08:29:19,392 p=40361 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:29:19,406 p=40361 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] *************************************
2025-08-11 08:29:19,406 p=40361 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:29:19,423 p=40361 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************
2025-08-11 08:29:19,424 p=40361 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:29:19,435 p=40361 u=1002120000 n=ansible INFO| TASK [Print token] *************************************************************
2025-08-11 08:29:19,435 p=40361 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" }
2025-08-11 08:29:19,762 p=40361 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] ***************************
2025-08-11 08:29:19,762 p=40361 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:29:19,787 p=40361 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] *************************************
2025-08-11 08:29:19,788 p=40361 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:29:19,806 p=40361 u=1002120000 n=ansible INFO| TASK [set_fact] ****************************************************************
2025-08-11 08:29:19,806 p=40361 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:29:19,808 p=40361 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************
2025-08-11 08:29:20,403 p=40361 u=1002120000 n=ansible INFO| TASK [Gathering Facts] *********************************************************
2025-08-11 08:29:20,403 p=40361 u=1002120000 n=ansible INFO| ok: [localhost]
2025-08-11 08:29:46,372 p=40361 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-164] ***
2025-08-11 08:29:46,372 p=40361 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
2025-08-11 08:29:46,372 p=40361 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:29:46,649 p=40361 u=1002120000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-08-11 08:29:46,649 p=40361 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0
2025/08/11 08:29:46 Creating restore mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f for case mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f
STEP: Create restore mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f from backup mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f @ 08/11/25 08:29:46.702
2025/08/11 08:29:46 Wait until restore mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f is complete
restore phase: InProgress
restore phase: InProgress
restore phase: Completed
2025/08/11 08:30:16 Verify the PodVolumeBackup and PodVolumeRestore count is equal
2025/08/11 08:30:16 Verify the PodVolumeRestore is completed successfully and uploaderType is matching
2025/08/11 08:30:16 apiVersion: velero.io/v1 kind: PodVolumeRestore metadata: creationTimestamp: "2025-08-11T08:29:48Z" generateName: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f- generation: 5 labels: velero.io/pod-uid: 834d6edb-bcd9-4fe3-bb51-478ad6d9f2a8 velero.io/pvc-uid: 1eafd976-6a49-4acf-9140-500c54ecc0db velero.io/restore-name: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f velero.io/restore-uid: a7220c29-086f-40e0-95bd-357106b1006a managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:velero.io/pod-uid: {} f:velero.io/pvc-uid: {} f:velero.io/restore-name: {} f:velero.io/restore-uid: {} f:ownerReferences: .: {} k:{"uid":"a7220c29-086f-40e0-95bd-357106b1006a"}: {} f:spec: .: {} f:backupStorageLocation: {} f:pod: {} f:repoIdentifier: {} f:snapshotID: {} f:sourceNamespace: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-08-11T08:29:48Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-08-11T08:30:08Z" name: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f-5ttfl namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Restore name: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f uid: a7220c29-086f-40e0-95bd-357106b1006a resourceVersion: "122486" uid: 5e4198ae-7a63-431c-bbd6-0d72b0a70df5 spec: backupStorageLocation: ts-dpa-1 pod: kind: Pod name: mysql-64c9d6466-mkhxs namespace: test-oadp-164 uid: 834d6edb-bcd9-4fe3-bb51-478ad6d9f2a8 repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-6fip6j15-interopoadp/velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f/restic/test-oadp-164 snapshotID: 77b3ffad sourceNamespace: test-oadp-164 uploaderType: restic volume: mysql-data1 status: completionTimestamp: "2025-08-11T08:30:08Z" phase: Completed progress: bytesDone: 105256269 totalBytes: 105256269 startTimestamp: "2025-08-11T08:30:06Z"
2025/08/11 08:30:16 apiVersion: velero.io/v1 kind: PodVolumeRestore metadata: creationTimestamp: "2025-08-11T08:29:48Z" generateName: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f- generation: 5 labels: velero.io/pod-uid: 834d6edb-bcd9-4fe3-bb51-478ad6d9f2a8 velero.io/pvc-uid: 31650acb-1eaf-4b1e-81f7-3d08fd024452 velero.io/restore-name:
mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f velero.io/restore-uid: a7220c29-086f-40e0-95bd-357106b1006a managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:velero.io/pod-uid: {} f:velero.io/pvc-uid: {} f:velero.io/restore-name: {} f:velero.io/restore-uid: {} f:ownerReferences: .: {} k:{"uid":"a7220c29-086f-40e0-95bd-357106b1006a"}: {} f:spec: .: {} f:backupStorageLocation: {} f:pod: {} f:repoIdentifier: {} f:snapshotID: {} f:sourceNamespace: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-08-11T08:29:48Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-08-11T08:30:03Z" name: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f-nn99t namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Restore name: mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f uid: a7220c29-086f-40e0-95bd-357106b1006a resourceVersion: "122399" uid: b86c5a8e-57b7-4205-bd53-dfce4d7b89e6 spec: backupStorageLocation: ts-dpa-1 pod: kind: Pod name: mysql-64c9d6466-mkhxs namespace: test-oadp-164 uid: 834d6edb-bcd9-4fe3-bb51-478ad6d9f2a8 repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-6fip6j15-interopoadp/velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f/restic/test-oadp-164 snapshotID: b983fa13 sourceNamespace: test-oadp-164 uploaderType: restic volume: mysql-data status: completionTimestamp: "2025-08-11T08:30:03Z" phase: Completed progress: bytesDone: 107854713 totalBytes: 107854713 startTimestamp: "2025-08-11T08:30:01Z" STEP: Verify restore mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369fhas completed successfully @ 08/11/25 08:30:16.762 STEP: Verify Application restore @ 08/11/25 08:30:16.765 STEP: Verify Application deployment for case mysql-hooks-e2e @ 08/11/25 08:30:16.765 2025/08/11 08:30:16 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025/08/11 08:30:22 2025-08-11 08:30:18,300 p=40583 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:30:18,300 p=40583 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:30:18,583 p=40583 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:30:18,583 p=40583 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:30:18,838 p=40583 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:30:18,838 p=40583 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:30:19,102 p=40583 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:30:19,102 p=40583 u=1002120000 n=ansible 
INFO| changed: [localhost] 2025-08-11 08:30:19,120 p=40583 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:30:19,120 p=40583 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:19,141 p=40583 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:30:19,141 p=40583 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:19,154 p=40583 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:30:19,154 p=40583 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:30:19,461 p=40583 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:30:19,461 p=40583 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:19,490 p=40583 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:30:19,490 p=40583 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:19,509 p=40583 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:30:19,509 p=40583 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:19,510 p=40583 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:30:20,062 p=40583 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:30:20,062 p=40583 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:21,072 p=40583 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** 2025-08-11 08:30:21,072 p=40583 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:21,536 p=40583 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-08-11 08:30:21,536 p=40583 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:30:21,941 p=40583 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** 2025-08-11 08:30:21,941 p=40583 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:30:22,495 p=40583 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** 2025-08-11 08:30:22,495 p=40583 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:30:22,499 p=40583 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:30:22,500 p=40583 u=1002120000 n=ansible INFO| localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0 2025/08/11 08:30:22 stderr: ERROR 1049 (42000): Unknown database 'world' < Exit [It] [tc-id:OADP-164][interop][smoke] MySQL app with Restic @ 08/11/25 08:30:22.609 (2m49.186s) > Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 08:30:22.609 2025/08/11 08:30:22 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 08:30:22.609 (0s) > Enter [DeferCleanup (Each)] Successful Init hook @ 08/11/25 
08:30:22.609 < Exit [DeferCleanup (Each)] Successful Init hook @ 08/11/25 08:30:22.612 (3ms) > Enter [DeferCleanup (Each)] Successful Init hook @ 08/11/25 08:30:22.612 < Exit [DeferCleanup (Each)] Successful Init hook @ 08/11/25 08:30:22.612 (0s) > Enter [DeferCleanup (Each)] Successful Init hook @ 08/11/25 08:30:22.612 2025/08/11 08:30:22 Cleaning app 2025/08/11 08:30:22 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
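The app cleanup that follows deletes the test namespace and blocks until it is actually gone; the ~26 s gap before the "changed" result in the replay below is that wait. A rough client-go equivalent, assuming a kubernetes.Interface client (helper name and timeouts are illustrative):

    package sketch

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // deleteNamespaceAndWait issues the delete, then polls until the namespace
    // object disappears (finalizers keep it in Terminating for a while).
    func deleteNamespaceAndWait(ctx context.Context, cs kubernetes.Interface, ns string) error {
        if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
            return err
        }
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return true, nil // fully removed
                }
                return false, nil // still Terminating, or a transient error; keep polling
            })
    }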
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-164] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025/08/11 08:30:52 2025-08-11 08:30:24,096 p=40906 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:30:24,096 p=40906 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:30:24,347 p=40906 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:30:24,347 p=40906 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:30:24,604 p=40906 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:30:24,604 p=40906 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:30:24,851 p=40906 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:30:24,852 p=40906 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:30:24,865 p=40906 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:30:24,865 p=40906 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:24,883 p=40906 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:30:24,883 p=40906 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:24,895 p=40906 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:30:24,895 p=40906 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:30:25,192 p=40906 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:30:25,192 p=40906 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:25,222 p=40906 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:30:25,223 p=40906 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:25,245 p=40906 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:30:25,245 p=40906 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:25,247 p=40906 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:30:25,832 p=40906 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:30:25,832 p=40906 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:51,693 p=40906 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-164] *** 2025-08-11 08:30:51,693 p=40906 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 08:30:51,693 p=40906 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:30:51,971 p=40906 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:30:51,971 p=40906 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] Successful Init hook @ 08/11/25 08:30:52.028 (29.415s) > Enter [DeferCleanup (Each)] Successful Init hook @ 08/11/25 08:30:52.028 2025/08/11 08:30:52 Cleaning setup resources for the backup < Exit [DeferCleanup (Each)] Successful Init hook @ 08/11/25 08:30:52.028 (0s) > Enter [DeferCleanup (Each)] Successful Init hook @ 08/11/25 08:30:52.028 < Exit [DeferCleanup (Each)] Successful Init hook @ 08/11/25 08:30:52.034 (6ms) • [198.618 seconds] ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ Backup restore tests Application backup [tc-id:OADP-371] [interop] [smoke] MySQL application with Restic [mr-check] /alabama/cspi/e2e/app_backup/backup_restore.go:48 > Enter [BeforeEach] Backup restore tests @ 08/11/25 08:30:52.034 < Exit [BeforeEach] Backup restore tests @ 08/11/25 08:30:52.041 (6ms) > Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:30:52.041 < Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:30:52.041 (0s) > Enter [It] [tc-id:OADP-371] [interop] [smoke] MySQL application with Restic @ 08/11/25 08:30:52.041 2025/08/11 08:30:52 Delete all downloadrequest No download requests are found STEP: Create DPA CR @ 08/11/25 08:30:52.046 2025/08/11 08:30:52 restic 2025/08/11 08:30:52 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "7b417d49-ef35-4836-bf36-fd0d1f2e7066", "resourceVersion": "123177", "generation": 1, "creationTimestamp": "2025-08-11T08:30:52Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T08:30:52Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "restic" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 08/11/25 08:30:52.147 2025/08/11 08:30:52 Waiting for velero pod to be running 2025/08/11 08:30:52 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' 2025/08/11 08:30:52 { "metadata": { "name": "ts-dpa", 
"namespace": "openshift-adp", "uid": "7b417d49-ef35-4836-bf36-fd0d1f2e7066", "resourceVersion": "123177", "generation": 1, "creationTimestamp": "2025-08-11T08:30:52Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T08:30:52Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "restic" } }, "features": null, "logFormat": "text" }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 08:30:57.168 2025/08/11 08:30:57 Checking for correct number of running NodeAgent pods... STEP: Installing application for case mysql @ 08/11/25 08:30:57.178 2025/08/11 08:30:57 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace test-oadp-1077] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check pod status (30 retries left). FAILED - RETRYING: [localhost]: Check pod status (29 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left). FAILED - RETRYING: [localhost]: Wait until service ready for connections (29 retries left). FAILED - RETRYING: [localhost]: Wait until service ready for connections (28 retries left). 
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Add dummy data into mysql-data1 pvc] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create md5 hashes for the files] *** changed: [localhost] Pausing for 30 seconds TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Pause After Create md5 hashes for the files] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/08/11 08:32:04 2025-08-11 08:30:58,720 p=41132 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:30:58,720 p=41132 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:30:58,980 p=41132 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:30:58,981 p=41132 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:30:59,249 p=41132 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:30:59,249 p=41132 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:30:59,506 p=41132 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:30:59,506 p=41132 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:30:59,522 p=41132 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:30:59,522 p=41132 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:59,543 p=41132 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:30:59,544 p=41132 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:59,557 p=41132 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:30:59,557 p=41132 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:30:59,907 p=41132 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:30:59,907 p=41132 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:59,936 p=41132 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:30:59,936 p=41132 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:59,954 p=41132 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:30:59,954 p=41132 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:30:59,956 p=41132 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:31:00,516 p=41132 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:31:00,516 p=41132 u=1002120000 n=ansible INFO| ok: [localhost] 
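The "Check pod status" retries above are Ansible's retries/until loop (30 attempts, roughly 5 s apart judging by the replay timestamps) waiting for the freshly deployed mysql pod to become Ready. An equivalent Go poll; the `app=mysql` selector is an assumption taken from the PVC labels dumped later in this case:

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForAppPodReady polls until some pod matching the selector is Running
    // and reports the Ready condition, mirroring retries: 30 / delay: 5.
    func waitForAppPodReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 5*time.Second, 150*time.Second, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, nil // transient API error; retry
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        continue
                    }
                    for _, cond := range p.Status.Conditions {
                        if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                            return true, nil
                        }
                    }
                }
                return false, nil
            })
    }

    // e.g. waitForAppPodReady(ctx, cs, "test-oadp-1077", "app=mysql")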
2025-08-11 08:31:01,377 p=41132 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace test-oadp-1077] *** 2025-08-11 08:31:01,378 p=41132 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 08:31:01,378 p=41132 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:31:01,755 p=41132 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] *** 2025-08-11 08:31:01,756 p=41132 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:31:02,711 p=41132 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] *** 2025-08-11 08:31:02,711 p=41132 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:31:03,447 p=41132 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pod status (30 retries left). 2025-08-11 08:31:09,052 p=41132 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pod status (29 retries left). 2025-08-11 08:31:14,670 p=41132 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] *** 2025-08-11 08:31:14,670 p=41132 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:31:15,112 p=41132 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] *** 2025-08-11 08:31:15,112 p=41132 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:31:15,421 p=41132 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left). 2025-08-11 08:31:20,715 p=41132 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (29 retries left). 2025-08-11 08:31:26,026 p=41132 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (28 retries left). 
2025-08-11 08:31:31,300 p=41132 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-08-11 08:31:31,300 p=41132 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:31:32,889 p=41132 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] *** 2025-08-11 08:31:32,890 p=41132 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:31:33,890 p=41132 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Add dummy data into mysql-data1 pvc] *** 2025-08-11 08:31:33,891 p=41132 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:31:34,449 p=41132 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create md5 hashes for the files] *** 2025-08-11 08:31:34,449 p=41132 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:31:34,465 p=41132 u=1002120000 n=ansible INFO| Pausing for 30 seconds 2025-08-11 08:32:04,468 p=41132 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Pause After Create md5 hashes for the files] *** 2025-08-11 08:32:04,468 p=41132 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:32:04,580 p=41132 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:32:04,580 p=41132 u=1002120000 n=ansible INFO| localhost : ok=25 changed=11 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 STEP: Verify Application deployment @ 08/11/25 08:32:04.645 2025/08/11 08:32:04 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025/08/11 08:32:10 2025-08-11 08:32:06,134 p=41696 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:32:06,134 p=41696 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:32:06,379 p=41696 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:32:06,379 p=41696 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:32:06,629 p=41696 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:32:06,630 p=41696 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:32:06,875 p=41696 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:32:06,876 p=41696 u=1002120000 n=ansible 
INFO| changed: [localhost] 2025-08-11 08:32:06,890 p=41696 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:32:06,890 p=41696 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:32:06,908 p=41696 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:32:06,909 p=41696 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:32:06,921 p=41696 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:32:06,922 p=41696 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:32:07,239 p=41696 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:32:07,239 p=41696 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:32:07,268 p=41696 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:32:07,268 p=41696 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:32:07,287 p=41696 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:32:07,287 p=41696 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:32:07,289 p=41696 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:32:07,885 p=41696 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:32:07,885 p=41696 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:32:08,902 p=41696 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** 2025-08-11 08:32:08,902 p=41696 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:32:09,323 p=41696 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-08-11 08:32:09,323 p=41696 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:32:09,718 p=41696 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** 2025-08-11 08:32:09,718 p=41696 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:32:10,282 p=41696 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** 2025-08-11 08:32:10,282 p=41696 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:32:10,288 p=41696 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:32:10,289 p=41696 u=1002120000 n=ansible INFO| localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0 2025/08/11 08:32:10 {{ } { } [{{ } {mysql-data test-oadp-1077 01c78bc8-6d35-44b4-81b3-0fdf1b537c33 123546 0 2025-08-11 08:31:02 +0000 UTC map[app:mysql testlabel:selectors testlabel2:foo] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-data-1754901062 reclaimspace.csiaddons.openshift.io/schedule:@weekly 
volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-08-11 08:31:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:testlabel":{},"f:testlabel2":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-08-11 08:31:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-08-11 08:31:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-08-11 08:31:02 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status}]} {[ReadWriteOnce] nil {map[] map[storage:{{2147483648 0} {} 2Gi BinarySI}]} pvc-01c78bc8-6d35-44b4-81b3-0fdf1b537c33 0xc000113630 0xc000113640 nil nil } {Bound [ReadWriteOnce] map[storage:{{2147483648 0} {} 2Gi BinarySI}] [] map[] map[] nil}} {{ } {mysql-data1 test-oadp-1077 5b0733b9-ad75-4cf0-b32d-171cc8c7cad9 123547 0 2025-08-11 08:31:02 +0000 UTC map[app:mysql testlabel:selectors testlabel2:foo] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-data1-1754901062 reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-08-11 08:31:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:testlabel":{},"f:testlabel2":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-08-11 08:31:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-08-11 08:31:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-08-11 08:31:02 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status}]} {[ReadWriteOnce] nil {map[] map[storage:{{2147483648 0} {} 2Gi BinarySI}]} pvc-5b0733b9-ad75-4cf0-b32d-171cc8c7cad9 0xc0001138b0 0xc0001138c0 nil nil } {Bound [ReadWriteOnce] map[storage:{{2147483648 0} {} 2Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f @ 08/11/25 08:32:10.355 2025/08/11 08:32:10 Wait until backup mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f is completed backup phase: Completed 2025/08/11 08:32:30 Verify the PodVolumeBackup is completed successfully and BackupRepository type is matching with DPA.nodeAgent.uploaderType 2025/08/11 08:32:30 apiVersion: velero.io/v1 
kind: PodVolumeBackup metadata: annotations: velero.io/pvc-name: mysql-data1 creationTimestamp: "2025-08-11T08:32:14Z" generateName: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f- generation: 4 labels: velero.io/backup-name: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f velero.io/backup-uid: 8214413d-3ef0-492c-b6e0-fbdb8dfb8f59 velero.io/pvc-uid: 5b0733b9-ad75-4cf0-b32d-171cc8c7cad9 managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:velero.io/pvc-name: {} f:generateName: {} f:labels: .: {} f:velero.io/backup-name: {} f:velero.io/backup-uid: {} f:velero.io/pvc-uid: {} f:ownerReferences: .: {} k:{"uid":"8214413d-3ef0-492c-b6e0-fbdb8dfb8f59"}: {} f:spec: .: {} f:backupStorageLocation: {} f:node: {} f:pod: {} f:repoIdentifier: {} f:tags: .: {} f:backup: {} f:backup-uid: {} f:ns: {} f:pod: {} f:pod-uid: {} f:pvc-uid: {} f:volume: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-08-11T08:32:14Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:path: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:snapshotID: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-08-11T08:32:22Z" name: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f-s2h6d namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Backup name: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f uid: 8214413d-3ef0-492c-b6e0-fbdb8dfb8f59 resourceVersion: "124731" uid: 87b143ea-27c5-4e62-9dc2-3d680e674dda spec: backupStorageLocation: ts-dpa-1 node: ip-10-0-114-0.ec2.internal pod: kind: Pod name: mysql-64c9d6466-m99jt namespace: test-oadp-1077 uid: b8bf0bc5-a9cb-41aa-b05f-c166b5ef4259 repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-6fip6j15-interopoadp/velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f/restic/test-oadp-1077 tags: backup: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f backup-uid: 8214413d-3ef0-492c-b6e0-fbdb8dfb8f59 ns: test-oadp-1077 pod: mysql-64c9d6466-m99jt pod-uid: b8bf0bc5-a9cb-41aa-b05f-c166b5ef4259 pvc-uid: 5b0733b9-ad75-4cf0-b32d-171cc8c7cad9 volume: mysql-data1 uploaderType: restic volume: mysql-data1 status: completionTimestamp: "2025-08-11T08:32:22Z" path: /host_pods/b8bf0bc5-a9cb-41aa-b05f-c166b5ef4259/volumes/kubernetes.io~csi/pvc-5b0733b9-ad75-4cf0-b32d-171cc8c7cad9/mount phase: Completed progress: bytesDone: 104857640 totalBytes: 104857640 snapshotID: b7cfce11 startTimestamp: "2025-08-11T08:32:20Z" 2025/08/11 08:32:30 apiVersion: velero.io/v1 kind: PodVolumeBackup metadata: annotations: velero.io/pvc-name: mysql-data creationTimestamp: "2025-08-11T08:32:14Z" generateName: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f- generation: 4 labels: velero.io/backup-name: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f velero.io/backup-uid: 8214413d-3ef0-492c-b6e0-fbdb8dfb8f59 velero.io/pvc-uid: 01c78bc8-6d35-44b4-81b3-0fdf1b537c33 managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:velero.io/pvc-name: {} f:generateName: {} f:labels: .: {} f:velero.io/backup-name: {} f:velero.io/backup-uid: {} f:velero.io/pvc-uid: {} f:ownerReferences: .: {} k:{"uid":"8214413d-3ef0-492c-b6e0-fbdb8dfb8f59"}: {} f:spec: .: {} f:backupStorageLocation: {} f:node: {} f:pod: {} f:repoIdentifier: {} f:tags: .: {} f:backup: {} f:backup-uid: {} f:ns: {} f:pod: {} f:pod-uid: {} f:pvc-uid: {} f:volume: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} 
manager: velero-server operation: Update time: "2025-08-11T08:32:14Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:path: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:snapshotID: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-08-11T08:32:17Z" name: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f-vcscx namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Backup name: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f uid: 8214413d-3ef0-492c-b6e0-fbdb8dfb8f59 resourceVersion: "124645" uid: 8c8c99ec-e381-42d3-82f8-341537e799db spec: backupStorageLocation: ts-dpa-1 node: ip-10-0-114-0.ec2.internal pod: kind: Pod name: mysql-64c9d6466-m99jt namespace: test-oadp-1077 uid: b8bf0bc5-a9cb-41aa-b05f-c166b5ef4259 repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-6fip6j15-interopoadp/velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f/restic/test-oadp-1077 tags: backup: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f backup-uid: 8214413d-3ef0-492c-b6e0-fbdb8dfb8f59 ns: test-oadp-1077 pod: mysql-64c9d6466-m99jt pod-uid: b8bf0bc5-a9cb-41aa-b05f-c166b5ef4259 pvc-uid: 01c78bc8-6d35-44b4-81b3-0fdf1b537c33 volume: mysql-data uploaderType: restic volume: mysql-data status: completionTimestamp: "2025-08-11T08:32:17Z" path: /host_pods/b8bf0bc5-a9cb-41aa-b05f-c166b5ef4259/volumes/kubernetes.io~csi/pvc-01c78bc8-6d35-44b4-81b3-0fdf1b537c33/mount phase: Completed progress: bytesDone: 107854713 totalBytes: 107854713 snapshotID: 09cd76b5 startTimestamp: "2025-08-11T08:32:14Z" STEP: Verify backup mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f has completed successfully @ 08/11/25 08:32:30.38 2025/08/11 08:32:30 Backup for case mysql succeeded STEP: Delete the appplication resources mysql @ 08/11/25 08:32:30.415 STEP: Cleanup Application for case mysql @ 08/11/25 08:32:30.415 2025/08/11 08:32:30 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
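The PodVolumeBackup verification earlier in this case ("completed successfully and BackupRepository type is matching with DPA.nodeAgent.uploaderType") reduces to listing the PVBs labeled with the backup name and checking two fields. A sketch with the velero.io/v1 types; the helper name is hypothetical, while the label key is the one visible in the CRs dumped above:

    package sketch

    import (
        "context"
        "fmt"

        velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // verifyPodVolumeBackups asserts every PVB owned by the backup is Completed
    // and used the uploader configured in the DPA (restic in this run).
    func verifyPodVolumeBackups(ctx context.Context, c client.Client, backupName, uploader string) error {
        pvbs := &velerov1.PodVolumeBackupList{}
        if err := c.List(ctx, pvbs,
            client.InNamespace("openshift-adp"),
            client.MatchingLabels{"velero.io/backup-name": backupName}); err != nil {
            return err
        }
        for _, pvb := range pvbs.Items {
            if pvb.Status.Phase != velerov1.PodVolumeBackupPhaseCompleted {
                return fmt.Errorf("%s: phase %s", pvb.Name, pvb.Status.Phase)
            }
            if pvb.Spec.UploaderType != uploader {
                return fmt.Errorf("%s: uploaderType %s, want %s", pvb.Name, pvb.Spec.UploaderType, uploader)
            }
        }
        return nil
    }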
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-1077] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025/08/11 08:32:59 2025-08-11 08:32:31,921 p=42019 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:32:31,921 p=42019 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:32:32,178 p=42019 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:32:32,178 p=42019 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:32:32,430 p=42019 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:32:32,430 p=42019 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:32:32,674 p=42019 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:32:32,674 p=42019 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:32:32,687 p=42019 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:32:32,688 p=42019 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:32:32,705 p=42019 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:32:32,705 p=42019 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:32:32,716 p=42019 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:32:32,716 p=42019 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:32:33,025 p=42019 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:32:33,025 p=42019 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:32:33,052 p=42019 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:32:33,052 p=42019 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:32:33,069 p=42019 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:32:33,069 p=42019 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:32:33,071 p=42019 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:32:33,627 p=42019 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:32:33,627 p=42019 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:32:59,470 p=42019 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-1077] *** 2025-08-11 08:32:59,470 p=42019 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
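The restore step that follows ends with "Verify the PodVolumeBackup and PodVolumeRestore count is equal": each per-volume backup should have produced exactly one per-volume restore. That comparison is two label-filtered list calls (sketch; helper hypothetical, label keys as in the CRs dumped in this log):

    package sketch

    import (
        "context"
        "fmt"

        velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // pvbPvrCountsMatch compares per-volume backup and restore counts for a
    // backup/restore pair (they share a name in this harness).
    func pvbPvrCountsMatch(ctx context.Context, c client.Client, name string) error {
        pvbs := &velerov1.PodVolumeBackupList{}
        if err := c.List(ctx, pvbs, client.InNamespace("openshift-adp"),
            client.MatchingLabels{"velero.io/backup-name": name}); err != nil {
            return err
        }
        pvrs := &velerov1.PodVolumeRestoreList{}
        if err := c.List(ctx, pvrs, client.InNamespace("openshift-adp"),
            client.MatchingLabels{"velero.io/restore-name": name}); err != nil {
            return err
        }
        if len(pvbs.Items) != len(pvrs.Items) {
            return fmt.Errorf("PodVolumeBackups: %d, PodVolumeRestores: %d", len(pvbs.Items), len(pvrs.Items))
        }
        return nil
    }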
2025-08-11 08:32:59,471 p=42019 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:32:59,758 p=42019 u=1002120000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-08-11 08:32:59,758 p=42019 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0
2025/08/11 08:32:59 Creating restore mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f for case mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f
STEP: Create restore mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f from backup mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f @ 08/11/25 08:32:59.817
2025/08/11 08:32:59 Wait until restore mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f is complete
restore phase: InProgress
restore phase: InProgress
restore phase: Completed
2025/08/11 08:33:29 Verify the PodVolumeBackup and PodVolumeRestore count is equal
2025/08/11 08:33:29 Verify the PodVolumeRestore is completed successfully and uploaderType is matching
2025/08/11 08:33:29 apiVersion: velero.io/v1
kind: PodVolumeRestore
metadata:
  creationTimestamp: "2025-08-11T08:33:02Z"
  generateName: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f-
  generation: 5
  labels:
    velero.io/pod-uid: 16e455a3-9f19-4b9d-a7a6-4b9032bee81f
    velero.io/pvc-uid: f4e7a1ba-9f4b-456c-a554-c37a16a952d4
    velero.io/restore-name: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f
    velero.io/restore-uid: 3f080114-cc20-4481-9688-89c2d19ab931
  managedFields:
  - apiVersion: velero.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName: {}
        f:labels:
          .: {}
          f:velero.io/pod-uid: {}
          f:velero.io/pvc-uid: {}
          f:velero.io/restore-name: {}
          f:velero.io/restore-uid: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"3f080114-cc20-4481-9688-89c2d19ab931"}: {}
      f:spec:
        .: {}
        f:backupStorageLocation: {}
        f:pod: {}
        f:repoIdentifier: {}
        f:snapshotID: {}
        f:sourceNamespace: {}
        f:uploaderType: {}
        f:volume: {}
      f:status:
        .: {}
        f:progress: {}
    manager: velero-server
    operation: Update
    time: "2025-08-11T08:33:02Z"
  - apiVersion: velero.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:completionTimestamp: {}
        f:phase: {}
        f:progress:
          f:bytesDone: {}
          f:totalBytes: {}
        f:startTimestamp: {}
    manager: node-agent-server
    operation: Update
    time: "2025-08-11T08:33:18Z"
  name: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f-6fsmd
  namespace: openshift-adp
  ownerReferences:
  - apiVersion: velero.io/v1
    controller: true
    kind: Restore
    name: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f
    uid: 3f080114-cc20-4481-9688-89c2d19ab931
  resourceVersion: "125724"
  uid: b30b65d0-835b-49e0-9d1f-81b375f8e9f3
spec:
  backupStorageLocation: ts-dpa-1
  pod:
    kind: Pod
    name: mysql-64c9d6466-m99jt
    namespace: test-oadp-1077
    uid: 16e455a3-9f19-4b9d-a7a6-4b9032bee81f
  repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-6fip6j15-interopoadp/velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f/restic/test-oadp-1077
  snapshotID: 09cd76b5
  sourceNamespace: test-oadp-1077
  uploaderType: restic
  volume: mysql-data
status:
  completionTimestamp: "2025-08-11T08:33:18Z"
  phase: Completed
  progress:
    bytesDone: 107854713
    totalBytes: 107854713
  startTimestamp: "2025-08-11T08:33:16Z"
2025/08/11 08:33:29 apiVersion: velero.io/v1
kind: PodVolumeRestore
metadata:
  creationTimestamp: "2025-08-11T08:33:02Z"
  generateName: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f-
  generation: 5
  labels:
    velero.io/pod-uid: 16e455a3-9f19-4b9d-a7a6-4b9032bee81f
    velero.io/pvc-uid: 818204ed-9574-4c5e-9b7c-c3de96abd72d
    velero.io/restore-name: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f
    velero.io/restore-uid: 3f080114-cc20-4481-9688-89c2d19ab931
  managedFields:
  - apiVersion: velero.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName: {}
        f:labels:
          .: {}
          f:velero.io/pod-uid: {}
          f:velero.io/pvc-uid: {}
          f:velero.io/restore-name: {}
          f:velero.io/restore-uid: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"3f080114-cc20-4481-9688-89c2d19ab931"}: {}
      f:spec:
        .: {}
        f:backupStorageLocation: {}
        f:pod: {}
        f:repoIdentifier: {}
        f:snapshotID: {}
        f:sourceNamespace: {}
        f:uploaderType: {}
        f:volume: {}
      f:status:
        .: {}
        f:progress: {}
    manager: velero-server
    operation: Update
    time: "2025-08-11T08:33:02Z"
  - apiVersion: velero.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:completionTimestamp: {}
        f:phase: {}
        f:progress:
          f:bytesDone: {}
          f:totalBytes: {}
        f:startTimestamp: {}
    manager: node-agent-server
    operation: Update
    time: "2025-08-11T08:33:24Z"
  name: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f-6zvhn
  namespace: openshift-adp
  ownerReferences:
  - apiVersion: velero.io/v1
    controller: true
    kind: Restore
    name: mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f
    uid: 3f080114-cc20-4481-9688-89c2d19ab931
  resourceVersion: "125824"
  uid: 20658ad2-6b1e-4233-9fa9-cca2ede22765
spec:
  backupStorageLocation: ts-dpa-1
  pod:
    kind: Pod
    name: mysql-64c9d6466-m99jt
    namespace: test-oadp-1077
    uid: 16e455a3-9f19-4b9d-a7a6-4b9032bee81f
  repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-6fip6j15-interopoadp/velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f/restic/test-oadp-1077
  snapshotID: b7cfce11
  sourceNamespace: test-oadp-1077
  uploaderType: restic
  volume: mysql-data1
status:
  completionTimestamp: "2025-08-11T08:33:24Z"
  phase: Completed
  progress:
    bytesDone: 104857640
    totalBytes: 104857640
  startTimestamp: "2025-08-11T08:33:21Z"
STEP: Verify restore mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f has completed successfully @ 08/11/25 08:33:29.889
STEP: Verify Application restore @ 08/11/25 08:33:29.892
STEP: Verify Application deployment for case mysql @ 08/11/25 08:33:29.892
2025/08/11 08:33:29 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025/08/11 08:33:35 2025-08-11 08:33:31,398 p=42244 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:33:31,398 p=42244 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:33:31,644 p=42244 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:33:31,645 p=42244 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:33:31,900 p=42244 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:33:31,901 p=42244 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:33:32,169 p=42244 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:33:32,169 p=42244 u=1002120000 n=ansible 
INFO| changed: [localhost] 2025-08-11 08:33:32,182 p=42244 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:33:32,183 p=42244 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:33:32,200 p=42244 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:33:32,200 p=42244 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:33:32,211 p=42244 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:33:32,211 p=42244 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:33:32,508 p=42244 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:33:32,508 p=42244 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:33:32,538 p=42244 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:33:32,538 p=42244 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:33:32,556 p=42244 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:33:32,557 p=42244 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:33:32,559 p=42244 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:33:33,127 p=42244 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:33:33,127 p=42244 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:33:34,132 p=42244 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** 2025-08-11 08:33:34,133 p=42244 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:33:34,574 p=42244 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-08-11 08:33:34,575 p=42244 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:33:34,943 p=42244 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** 2025-08-11 08:33:34,943 p=42244 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:33:35,506 p=42244 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** 2025-08-11 08:33:35,506 p=42244 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:33:35,510 p=42244 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:33:35,510 p=42244 u=1002120000 n=ansible INFO| localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0 < Exit [It] [tc-id:OADP-371] [interop] [smoke] MySQL application with Restic @ 08/11/25 08:33:35.569 (2m43.528s) > Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 08:33:35.569 2025/08/11 08:33:35 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 08:33:35.569 (0s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:33:35.569 < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 
08:33:35.573 (3ms) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:33:35.573 < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:33:35.573 (0s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:33:35.573 2025/08/11 08:33:35 Cleaning app 2025/08/11 08:33:35 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
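Every play in this run repeats the same bootstrap sequence before doing real work: read the cluster endpoint from the admin kubeconfig, mint admin and user tokens, and map the Kubernetes minor version to an OCP release. A minimal shell sketch of what those tasks appear to boil down to; the role internals are not shown in this log, so the exact commands are an assumption:

# Hypothetical equivalents of the bootstrap tasks above; the real role may differ.
ENDPOINT=$(oc config view --minify -o jsonpath='{.clusters[0].cluster.server}')  # "Get cluster endpoint"
TOKEN=$(oc whoami -t)                                                            # "Get admin token"
MINOR=$(oc version -o json | python3 -c 'import json,sys; print(json.load(sys.stdin)["serverVersion"]["minor"])')
echo "endpoint=${ENDPOINT} k8s-minor=${MINOR}"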
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-1077] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025/08/11 08:34:04 2025-08-11 08:33:37,036 p=42567 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:33:37,036 p=42567 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:33:37,278 p=42567 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:33:37,278 p=42567 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:33:37,522 p=42567 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:33:37,523 p=42567 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:33:37,767 p=42567 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:33:37,767 p=42567 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:33:37,781 p=42567 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:33:37,781 p=42567 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:33:37,801 p=42567 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:33:37,801 p=42567 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:33:37,814 p=42567 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:33:37,814 p=42567 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:33:38,124 p=42567 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:33:38,124 p=42567 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:33:38,155 p=42567 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:33:38,156 p=42567 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:33:38,174 p=42567 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:33:38,174 p=42567 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:33:38,176 p=42567 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:33:38,723 p=42567 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:33:38,724 p=42567 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:34:04,509 p=42567 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-1077] *** 2025-08-11 08:34:04,509 p=42567 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
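Cleanup for the restic case is a single destructive step: removing the test-oadp-1077 namespace tears down the pod, service, and PVCs together, which is what lets the restore re-create them from the backup. A sketch of the same step with plain oc, assuming the role does nothing more than delete the namespace and wait for it to disappear:

# Hypothetical stand-in for the "Remove namespace test-oadp-1077" task.
oc delete namespace test-oadp-1077 --ignore-not-found --wait=true --timeout=5m
oc get namespace test-oadp-1077 2>/dev/null || echo "namespace gone; ready to restore"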
2025-08-11 08:34:04,509 p=42567 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:34:04,774 p=42567 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:34:04,774 p=42567 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:34:04.824 (29.252s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:34:04.824 2025/08/11 08:34:04 Cleaning setup resources for the backup < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:34:04.825 (0s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:34:04.825 < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:34:04.833 (9ms) • [192.799 seconds] ------------------------------ Backup restore tests Application backup [tc-id:OADP-437][interop][smoke] MySQL application with filesystem, Kopia [mr-check] /alabama/cspi/e2e/app_backup/backup_restore.go:62 > Enter [BeforeEach] Backup restore tests @ 08/11/25 08:34:04.834 < Exit [BeforeEach] Backup restore tests @ 08/11/25 08:34:04.84 (7ms) > Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:34:04.84 < Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:34:04.84 (0s) > Enter [It] [tc-id:OADP-437][interop][smoke] MySQL application with filesystem, Kopia @ 08/11/25 08:34:04.84 2025/08/11 08:34:04 Delete all downloadrequest No download requests are found STEP: Create DPA CR @ 08/11/25 08:34:04.843 2025/08/11 08:34:04 kopia 2025/08/11 08:34:04 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "09284afa-b35a-4eb5-b8cf-4e28b8c9f418", "resourceVersion": "126463", "generation": 1, "creationTimestamp": "2025-08-11T08:34:04Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T08:34:04Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 08/11/25 08:34:04.91 2025/08/11 08:34:04 Waiting for velero pod to be running 2025/08/11 08:34:04 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' 2025/08/11 08:34:04 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": 
"09284afa-b35a-4eb5-b8cf-4e28b8c9f418", "resourceVersion": "126463", "generation": 1, "creationTimestamp": "2025-08-11T08:34:04Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T08:34:04Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 08:34:09.946 2025/08/11 08:34:09 Checking for correct number of running NodeAgent pods... STEP: Installing application for case mysql @ 08/11/25 08:34:09.957 2025/08/11 08:34:09 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY 
[Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace test-oadp-437-kopia] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check pod status (30 retries left). FAILED - RETRYING: [localhost]: Check pod status (29 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left). FAILED - RETRYING: [localhost]: Wait until service ready for connections (29 retries left). FAILED - RETRYING: [localhost]: Wait until service ready for connections (28 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Add dummy data into mysql-data1 pvc] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create md5 hashes for the files] *** changed: [localhost] Pausing for 30 seconds TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Pause After Create md5 hashes for the files] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/08/11 08:35:16 2025-08-11 08:34:11,448 p=42795 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:34:11,448 p=42795 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:34:11,714 p=42795 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:34:11,714 p=42795 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:34:11,967 p=42795 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:34:11,967 p=42795 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:34:12,219 p=42795 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:34:12,219 p=42795 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:34:12,233 p=42795 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:34:12,233 p=42795 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:34:12,249 p=42795 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:34:12,249 p=42795 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 
08:34:12,261 p=42795 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:34:12,261 p=42795 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:34:12,561 p=42795 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:34:12,561 p=42795 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:34:12,587 p=42795 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:34:12,588 p=42795 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:34:12,608 p=42795 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:34:12,608 p=42795 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:34:12,610 p=42795 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:34:13,164 p=42795 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:34:13,165 p=42795 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:34:13,912 p=42795 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace test-oadp-437-kopia] *** 2025-08-11 08:34:13,913 p=42795 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 08:34:13,913 p=42795 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:34:14,281 p=42795 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] *** 2025-08-11 08:34:14,281 p=42795 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:34:15,158 p=42795 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] *** 2025-08-11 08:34:15,158 p=42795 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:34:15,787 p=42795 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pod status (30 retries left). 2025-08-11 08:34:21,382 p=42795 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pod status (29 retries left). 2025-08-11 08:34:26,972 p=42795 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] *** 2025-08-11 08:34:26,972 p=42795 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:34:27,390 p=42795 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] *** 2025-08-11 08:34:27,390 p=42795 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:34:27,674 p=42795 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left). 2025-08-11 08:34:32,943 p=42795 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (29 retries left). 2025-08-11 08:34:38,214 p=42795 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (28 retries left). 
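The FAILED - RETRYING lines above are ansible's retries/until pattern: the role probes pod status up to 30 times, and the mysql socket separately, treating each failed probe as a retry rather than a task failure. A rough shell equivalent of the "Check pod status" wait, assuming the pod carries the app=mysql label that appears on the PVCs later in this log:

# Hypothetical readiness poll mirroring the role's 30-retry loop.
for i in $(seq 1 30); do
  phase=$(oc get pod -n test-oadp-437-kopia -l app=mysql \
    -o jsonpath='{.items[0].status.phase}' 2>/dev/null)
  [ "$phase" = "Running" ] && break
  echo "retry $i: pod phase=${phase:-<none>}"
  sleep 5
done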
2025-08-11 08:34:43,480 p=42795 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-08-11 08:34:43,480 p=42795 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:34:45,107 p=42795 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] *** 2025-08-11 08:34:45,107 p=42795 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:34:45,855 p=42795 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Add dummy data into mysql-data1 pvc] *** 2025-08-11 08:34:45,856 p=42795 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:34:46,401 p=42795 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create md5 hashes for the files] *** 2025-08-11 08:34:46,402 p=42795 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:34:46,420 p=42795 u=1002120000 n=ansible INFO| Pausing for 30 seconds 2025-08-11 08:35:16,422 p=42795 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Pause After Create md5 hashes for the files] *** 2025-08-11 08:35:16,422 p=42795 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:35:16,524 p=42795 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:35:16,524 p=42795 u=1002120000 n=ansible INFO| localhost : ok=25 changed=11 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 STEP: Verify Application deployment @ 08/11/25 08:35:16.567 2025/08/11 08:35:16 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025/08/11 08:35:21 2025-08-11 08:35:17,969 p=43370 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:35:17,969 p=43370 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:35:18,204 p=43370 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:35:18,204 p=43370 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:35:18,448 p=43370 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:35:18,448 p=43370 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:35:18,691 p=43370 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:35:18,691 p=43370 u=1002120000 n=ansible 
INFO| changed: [localhost] 2025-08-11 08:35:18,706 p=43370 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:35:18,706 p=43370 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:35:18,725 p=43370 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:35:18,725 p=43370 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:35:18,739 p=43370 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:35:18,739 p=43370 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:35:19,033 p=43370 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:35:19,033 p=43370 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:35:19,060 p=43370 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:35:19,060 p=43370 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:35:19,077 p=43370 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:35:19,077 p=43370 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:35:19,079 p=43370 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:35:19,615 p=43370 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:35:19,615 p=43370 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:35:20,559 p=43370 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** 2025-08-11 08:35:20,559 p=43370 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:35:20,955 p=43370 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-08-11 08:35:20,955 p=43370 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:35:21,314 p=43370 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** 2025-08-11 08:35:21,314 p=43370 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:35:21,855 p=43370 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** 2025-08-11 08:35:21,855 p=43370 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:35:21,859 p=43370 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:35:21,859 p=43370 u=1002120000 n=ansible INFO| localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0 2025/08/11 08:35:21 {{ } { } [{{ } {mysql-data test-oadp-437-kopia 60f73c36-c4d0-4fc8-9e68-8ab96e918115 126824 0 2025-08-11 08:34:15 +0000 UTC map[app:mysql testlabel:selectors testlabel2:foo] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-data-1754901255 reclaimspace.csiaddons.openshift.io/schedule:@weekly 
volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-08-11 08:34:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:testlabel":{},"f:testlabel2":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-08-11 08:34:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-08-11 08:34:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-08-11 08:34:15 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status}]} {[ReadWriteOnce] nil {map[] map[storage:{{2147483648 0} {} 2Gi BinarySI}]} pvc-60f73c36-c4d0-4fc8-9e68-8ab96e918115 0xc000c80cf0 0xc000c80d10 nil nil } {Bound [ReadWriteOnce] map[storage:{{2147483648 0} {} 2Gi BinarySI}] [] map[] map[] nil}} {{ } {mysql-data1 test-oadp-437-kopia aa7ad8d0-3748-492a-bb72-e9b105769e00 126827 0 2025-08-11 08:34:15 +0000 UTC map[app:mysql testlabel:selectors testlabel2:foo] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-data1-1754901255 reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-08-11 08:34:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:testlabel":{},"f:testlabel2":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-08-11 08:34:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-08-11 08:34:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-08-11 08:34:15 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status}]} {[ReadWriteOnce] nil {map[] map[storage:{{2147483648 0} {} 2Gi BinarySI}]} pvc-aa7ad8d0-3748-492a-bb72-e9b105769e00 0xc000c80e90 0xc000c80ea0 nil nil } {Bound [ReadWriteOnce] map[storage:{{2147483648 0} {} 2Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f @ 08/11/25 08:35:21.904 2025/08/11 08:35:21 Wait until backup mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f is completed backup phase: Completed 2025/08/11 08:35:41 Verify the PodVolumeBackup is completed successfully and BackupRepository type is matching with DPA.nodeAgent.uploaderType 2025/08/11 08:35:41 apiVersion: 
velero.io/v1
kind: PodVolumeBackup
metadata:
  annotations:
    velero.io/pvc-name: mysql-data1
  creationTimestamp: "2025-08-11T08:35:25Z"
  generateName: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f-
  generation: 5
  labels:
    velero.io/backup-name: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f
    velero.io/backup-uid: 7d70a8b5-281f-480b-b0f3-1a427f828911
    velero.io/pvc-uid: aa7ad8d0-3748-492a-bb72-e9b105769e00
  managedFields:
  - apiVersion: velero.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:velero.io/pvc-name: {}
        f:generateName: {}
        f:labels:
          .: {}
          f:velero.io/backup-name: {}
          f:velero.io/backup-uid: {}
          f:velero.io/pvc-uid: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"7d70a8b5-281f-480b-b0f3-1a427f828911"}: {}
      f:spec:
        .: {}
        f:backupStorageLocation: {}
        f:node: {}
        f:pod: {}
        f:repoIdentifier: {}
        f:tags:
          .: {}
          f:backup: {}
          f:backup-uid: {}
          f:ns: {}
          f:pod: {}
          f:pod-uid: {}
          f:pvc-uid: {}
          f:volume: {}
        f:uploaderType: {}
        f:volume: {}
      f:status:
        .: {}
        f:progress: {}
    manager: velero-server
    operation: Update
    time: "2025-08-11T08:35:25Z"
  - apiVersion: velero.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:completionTimestamp: {}
        f:path: {}
        f:phase: {}
        f:progress:
          f:bytesDone: {}
          f:totalBytes: {}
        f:snapshotID: {}
        f:startTimestamp: {}
    manager: node-agent-server
    operation: Update
    time: "2025-08-11T08:35:34Z"
  name: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f-6rvk7
  namespace: openshift-adp
  ownerReferences:
  - apiVersion: velero.io/v1
    controller: true
    kind: Backup
    name: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f
    uid: 7d70a8b5-281f-480b-b0f3-1a427f828911
  resourceVersion: "127935"
  uid: 057422d0-5cc0-4752-86c4-5ce50e7db98c
spec:
  backupStorageLocation: ts-dpa-1
  node: ip-10-0-60-252.ec2.internal
  pod:
    kind: Pod
    name: mysql-64c9d6466-6pkrj
    namespace: test-oadp-437-kopia
    uid: b27943fd-3611-4dbd-a866-f576b1183555
  repoIdentifier: ""
  tags:
    backup: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f
    backup-uid: 7d70a8b5-281f-480b-b0f3-1a427f828911
    ns: test-oadp-437-kopia
    pod: mysql-64c9d6466-6pkrj
    pod-uid: b27943fd-3611-4dbd-a866-f576b1183555
    pvc-uid: aa7ad8d0-3748-492a-bb72-e9b105769e00
    volume: mysql-data1
  uploaderType: kopia
  volume: mysql-data1
status:
  completionTimestamp: "2025-08-11T08:35:34Z"
  path: /host_pods/b27943fd-3611-4dbd-a866-f576b1183555/volumes/kubernetes.io~csi/pvc-aa7ad8d0-3748-492a-bb72-e9b105769e00/mount
  phase: Completed
  progress:
    bytesDone: 104857640
    totalBytes: 104857640
  snapshotID: be6ae6b6fd8ec254d7ffd029383745b7
  startTimestamp: "2025-08-11T08:35:31Z"
2025/08/11 08:35:41 apiVersion: velero.io/v1
kind: PodVolumeBackup
metadata:
  annotations:
    velero.io/pvc-name: mysql-data
  creationTimestamp: "2025-08-11T08:35:25Z"
  generateName: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f-
  generation: 5
  labels:
    velero.io/backup-name: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f
    velero.io/backup-uid: 7d70a8b5-281f-480b-b0f3-1a427f828911
    velero.io/pvc-uid: 60f73c36-c4d0-4fc8-9e68-8ab96e918115
  managedFields:
  - apiVersion: velero.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:velero.io/pvc-name: {}
        f:generateName: {}
        f:labels:
          .: {}
          f:velero.io/backup-name: {}
          f:velero.io/backup-uid: {}
          f:velero.io/pvc-uid: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"7d70a8b5-281f-480b-b0f3-1a427f828911"}: {}
      f:spec:
        .: {}
        f:backupStorageLocation: {}
        f:node: {}
        f:pod: {}
        f:repoIdentifier: {}
        f:tags:
          .: {}
          f:backup: {}
          f:backup-uid: {}
          f:ns: {}
          f:pod: {}
          f:pod-uid: {}
          f:pvc-uid: {}
          f:volume: {}
        f:uploaderType: {}
        f:volume: {}
      f:status:
        .: {}
        f:progress: {}
    manager: velero-server
    operation: Update
    time: "2025-08-11T08:35:25Z"
  - apiVersion: velero.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:completionTimestamp: {}
        f:path: {}
        f:phase: {}
        f:progress:
          f:bytesDone: {}
          f:totalBytes: {}
        f:snapshotID: {}
        f:startTimestamp: {}
    manager: node-agent-server
    operation: Update
    time: "2025-08-11T08:35:27Z"
  name: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f-vwbhs
  namespace: openshift-adp
  ownerReferences:
  - apiVersion: velero.io/v1
    controller: true
    kind: Backup
    name: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f
    uid: 7d70a8b5-281f-480b-b0f3-1a427f828911
  resourceVersion: "127841"
  uid: 3b62fead-1b99-4fb1-94a3-bb91eae5c6b3
spec:
  backupStorageLocation: ts-dpa-1
  node: ip-10-0-60-252.ec2.internal
  pod:
    kind: Pod
    name: mysql-64c9d6466-6pkrj
    namespace: test-oadp-437-kopia
    uid: b27943fd-3611-4dbd-a866-f576b1183555
  repoIdentifier: ""
  tags:
    backup: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f
    backup-uid: 7d70a8b5-281f-480b-b0f3-1a427f828911
    ns: test-oadp-437-kopia
    pod: mysql-64c9d6466-6pkrj
    pod-uid: b27943fd-3611-4dbd-a866-f576b1183555
    pvc-uid: 60f73c36-c4d0-4fc8-9e68-8ab96e918115
    volume: mysql-data
  uploaderType: kopia
  volume: mysql-data
status:
  completionTimestamp: "2025-08-11T08:35:27Z"
  path: /host_pods/b27943fd-3611-4dbd-a866-f576b1183555/volumes/kubernetes.io~csi/pvc-60f73c36-c4d0-4fc8-9e68-8ab96e918115/mount
  phase: Completed
  progress:
    bytesDone: 107854713
    totalBytes: 107854713
  snapshotID: cb219de062bcfb8444db8c34c97febb5
  startTimestamp: "2025-08-11T08:35:25Z"
STEP: Verify backup mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f has completed successfully @ 08/11/25 08:35:41.971
2025/08/11 08:35:42 Backup for case mysql succeeded
STEP: Delete the application resources mysql @ 08/11/25 08:35:42.012
STEP: Cleanup Application for case mysql @ 08/11/25 08:35:42.012
2025/08/11 08:35:42 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
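The recurring "[WARNING]: kubernetes<24.2.0 is not supported or tested" is emitted by the ansible kubernetes.core collection when the Python kubernetes client in the harness virtualenv is older than the collection's tested floor; it is cosmetic here, since every task that triggers it still succeeds. If it were worth silencing, one plausible fix is upgrading the client inside the same venv (an assumption; the harness may pin this version deliberately):

# Hypothetical fix for the kubernetes.core version warning.
python3 -c 'import kubernetes; print(kubernetes.__version__)'  # confirm the old client
pip install --upgrade 'kubernetes>=24.2.0'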
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-437-kopia] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025/08/11 08:36:11 2025-08-11 08:35:43,476 p=43695 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:35:43,477 p=43695 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:35:43,756 p=43695 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:35:43,756 p=43695 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:35:44,006 p=43695 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:35:44,006 p=43695 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:35:44,257 p=43695 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:35:44,257 p=43695 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:35:44,271 p=43695 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:35:44,272 p=43695 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:35:44,291 p=43695 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:35:44,291 p=43695 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:35:44,305 p=43695 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:35:44,305 p=43695 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:35:44,606 p=43695 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:35:44,606 p=43695 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:35:44,634 p=43695 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:35:44,634 p=43695 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:35:44,652 p=43695 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:35:44,652 p=43695 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:35:44,654 p=43695 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:35:45,204 p=43695 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:35:45,204 p=43695 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:36:11,022 p=43695 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-437-kopia] *** 2025-08-11 08:36:11,023 p=43695 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
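The restore step that follows is driven by a velero.io/v1 Restore CR pointing back at the backup by name; the suite then polls status.phase, which is what produces the "restore phase: InProgress ... Completed" lines below. A minimal manual equivalent, using the backup name from this run (the exact spec the suite submits is not printed, so only the required field is shown):

# Hypothetical manual equivalent of "Create restore ... from backup ...".
cat <<'EOF' | oc apply -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f
  namespace: openshift-adp
spec:
  backupName: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f
EOF
# Poll until the phase settles; a real check would also bail out on failed phases.
until [ "$(oc get restore -n openshift-adp mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f \
  -o jsonpath='{.status.phase}')" = "Completed" ]; do sleep 10; done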
2025-08-11 08:36:11,023 p=43695 u=1002120000 n=ansible INFO| changed: [localhost]
2025-08-11 08:36:11,299 p=43695 u=1002120000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-08-11 08:36:11,299 p=43695 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0
2025/08/11 08:36:11 Creating restore mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f for case mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f
STEP: Create restore mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f from backup mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f @ 08/11/25 08:36:11.351
2025/08/11 08:36:11 Wait until restore mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f is complete
restore phase: InProgress
restore phase: InProgress
restore phase: Completed
2025/08/11 08:36:41 Verify the PodVolumeBackup and PodVolumeRestore count is equal
2025/08/11 08:36:41 Verify the PodVolumeRestore is completed successfully and uploaderType is matching
2025/08/11 08:36:41 apiVersion: velero.io/v1
kind: PodVolumeRestore
metadata:
  creationTimestamp: "2025-08-11T08:36:13Z"
  generateName: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f-
  generation: 4
  labels:
    velero.io/pod-uid: 3a5d41fd-6d58-49a2-9e24-7ac364632076
    velero.io/pvc-uid: 1e493a51-3a56-4952-9e21-59fdf0cd182c
    velero.io/restore-name: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f
    velero.io/restore-uid: ad8f2222-f18d-485f-9f28-79742a6f7cc2
  managedFields:
  - apiVersion: velero.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName: {}
        f:labels:
          .: {}
          f:velero.io/pod-uid: {}
          f:velero.io/pvc-uid: {}
          f:velero.io/restore-name: {}
          f:velero.io/restore-uid: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"ad8f2222-f18d-485f-9f28-79742a6f7cc2"}: {}
      f:spec:
        .: {}
        f:backupStorageLocation: {}
        f:pod: {}
        f:repoIdentifier: {}
        f:snapshotID: {}
        f:sourceNamespace: {}
        f:uploaderType: {}
        f:volume: {}
      f:status:
        .: {}
        f:progress: {}
    manager: velero-server
    operation: Update
    time: "2025-08-11T08:36:13Z"
  - apiVersion: velero.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:completionTimestamp: {}
        f:phase: {}
        f:progress:
          f:bytesDone: {}
          f:totalBytes: {}
        f:startTimestamp: {}
    manager: node-agent-server
    operation: Update
    time: "2025-08-11T08:36:30Z"
  name: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f-pm56s
  namespace: openshift-adp
  ownerReferences:
  - apiVersion: velero.io/v1
    controller: true
    kind: Restore
    name: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f
    uid: ad8f2222-f18d-485f-9f28-79742a6f7cc2
  resourceVersion: "128987"
  uid: 0c3b0170-8b0f-4a31-8d71-125cdc511900
spec:
  backupStorageLocation: ts-dpa-1
  pod:
    kind: Pod
    name: mysql-64c9d6466-6pkrj
    namespace: test-oadp-437-kopia
    uid: 3a5d41fd-6d58-49a2-9e24-7ac364632076
  repoIdentifier: ""
  snapshotID: cb219de062bcfb8444db8c34c97febb5
  sourceNamespace: test-oadp-437-kopia
  uploaderType: kopia
  volume: mysql-data
status:
  completionTimestamp: "2025-08-11T08:36:30Z"
  phase: Completed
  progress:
    bytesDone: 107854713
    totalBytes: 107854713
  startTimestamp: "2025-08-11T08:36:29Z"
2025/08/11 08:36:41 apiVersion: velero.io/v1
kind: PodVolumeRestore
metadata:
  creationTimestamp: "2025-08-11T08:36:13Z"
  generateName: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f-
  generation: 4
  labels:
    velero.io/pod-uid: 3a5d41fd-6d58-49a2-9e24-7ac364632076
    velero.io/pvc-uid: 3e5ea32b-97af-4a3c-9a4c-b6ec814029bc
    velero.io/restore-name: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f
    velero.io/restore-uid: ad8f2222-f18d-485f-9f28-79742a6f7cc2
  managedFields:
  - apiVersion: velero.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName: {}
        f:labels:
          .: {}
          f:velero.io/pod-uid: {}
          f:velero.io/pvc-uid: {}
          f:velero.io/restore-name: {}
          f:velero.io/restore-uid: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"ad8f2222-f18d-485f-9f28-79742a6f7cc2"}: {}
      f:spec:
        .: {}
        f:backupStorageLocation: {}
        f:pod: {}
        f:repoIdentifier: {}
        f:snapshotID: {}
        f:sourceNamespace: {}
        f:uploaderType: {}
        f:volume: {}
      f:status:
        .: {}
        f:progress: {}
    manager: velero-server
    operation: Update
    time: "2025-08-11T08:36:13Z"
  - apiVersion: velero.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:completionTimestamp: {}
        f:phase: {}
        f:progress:
          f:bytesDone: {}
          f:totalBytes: {}
        f:startTimestamp: {}
    manager: node-agent-server
    operation: Update
    time: "2025-08-11T08:36:37Z"
  name: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f-pz7xn
  namespace: openshift-adp
  ownerReferences:
  - apiVersion: velero.io/v1
    controller: true
    kind: Restore
    name: mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f
    uid: ad8f2222-f18d-485f-9f28-79742a6f7cc2
  resourceVersion: "129071"
  uid: 7901afe4-391f-4f06-91bf-7a8b75e99ae8
spec:
  backupStorageLocation: ts-dpa-1
  pod:
    kind: Pod
    name: mysql-64c9d6466-6pkrj
    namespace: test-oadp-437-kopia
    uid: 3a5d41fd-6d58-49a2-9e24-7ac364632076
  repoIdentifier: ""
  snapshotID: be6ae6b6fd8ec254d7ffd029383745b7
  sourceNamespace: test-oadp-437-kopia
  uploaderType: kopia
  volume: mysql-data1
status:
  completionTimestamp: "2025-08-11T08:36:37Z"
  phase: Completed
  progress:
    bytesDone: 104857640
    totalBytes: 104857640
  startTimestamp: "2025-08-11T08:36:34Z"
STEP: Verify restore mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f has completed successfully @ 08/11/25 08:36:41.403
STEP: Verify Application restore @ 08/11/25 08:36:41.407
STEP: Verify Application deployment for case mysql @ 08/11/25 08:36:41.407
2025/08/11 08:36:41 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025/08/11 08:36:46 2025-08-11 08:36:42,802 p=43920 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:36:42,802 p=43920 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:36:43,045 p=43920 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:36:43,045 p=43920 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:36:43,280 p=43920 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:36:43,281 p=43920 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:36:43,513 p=43920 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:36:43,513 p=43920 u=1002120000 n=ansible 
INFO| changed: [localhost] 2025-08-11 08:36:43,526 p=43920 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:36:43,527 p=43920 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:36:43,544 p=43920 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:36:43,544 p=43920 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:36:43,557 p=43920 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:36:43,558 p=43920 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:36:43,855 p=43920 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:36:43,855 p=43920 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:36:43,881 p=43920 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:36:43,881 p=43920 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:36:43,897 p=43920 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:36:43,897 p=43920 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:36:43,898 p=43920 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:36:44,433 p=43920 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:36:44,433 p=43920 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:36:45,366 p=43920 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** 2025-08-11 08:36:45,366 p=43920 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:36:45,743 p=43920 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-08-11 08:36:45,743 p=43920 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:36:46,131 p=43920 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** 2025-08-11 08:36:46,131 p=43920 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:36:46,670 p=43920 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** 2025-08-11 08:36:46,671 p=43920 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:36:46,674 p=43920 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:36:46,675 p=43920 u=1002120000 n=ansible INFO| localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0 < Exit [It] [tc-id:OADP-437][interop][smoke] MySQL application with filesystem, Kopia @ 08/11/25 08:36:46.718 (2m41.877s) > Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 08:36:46.718 2025/08/11 08:36:46 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 08:36:46.718 (0s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:36:46.718 < Exit [DeferCleanup (Each)] Application backup @ 
08/11/25 08:36:46.721 (4ms) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:36:46.721 < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:36:46.721 (0s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:36:46.721 2025/08/11 08:36:46 Cleaning app 2025/08/11 08:36:46 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
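The endpoint/token bootstrap tasks printed above repeat at the top of every play in this suite; the log never shows how the role implements them, so the following is only a minimal sketch, assuming it wraps the oc CLI (all variable names here are hypothetical):

    export KUBECONFIG=/home/jenkins/.kube/config
    # "Get cluster endpoint (from admin kubeconfig)"
    API_URL=$(oc whoami --show-server)
    # "Get admin token" / "Get user token"
    ADMIN_TOKEN=$(oc whoami --show-token)
    # "Choose token based on non_admin flag"
    if [ "${NON_ADMIN:-false}" = "true" ]; then TOKEN="$USER_TOKEN"; else TOKEN="$ADMIN_TOKEN"; fi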
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-437-kopia] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025/08/11 08:37:15 2025-08-11 08:36:48,126 p=44242 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:36:48,126 p=44242 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:36:48,360 p=44242 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:36:48,361 p=44242 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:36:48,593 p=44242 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:36:48,593 p=44242 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:36:48,835 p=44242 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:36:48,835 p=44242 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:36:48,849 p=44242 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:36:48,849 p=44242 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:36:48,865 p=44242 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:36:48,865 p=44242 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:36:48,876 p=44242 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:36:48,877 p=44242 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:36:49,171 p=44242 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:36:49,171 p=44242 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:36:49,197 p=44242 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:36:49,197 p=44242 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:36:49,214 p=44242 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:36:49,214 p=44242 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:36:49,215 p=44242 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:36:49,760 p=44242 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:36:49,761 p=44242 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:37:15,521 p=44242 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-437-kopia] *** 2025-08-11 08:37:15,521 p=44242 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
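While this cleanup runs, the PodVolumeRestore dumped at the top of this case can still be inspected directly; a minimal oc sketch, using the status fields and the velero.io/restore-name label visible in that dump (the column layout is an illustration, not suite output):

    oc -n openshift-adp get podvolumerestores \
      -l velero.io/restore-name=mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f \
      -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,DONE:.status.progress.bytesDone,TOTAL:.status.progress.totalBytes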
2025-08-11 08:37:15,521 p=44242 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:37:15,774 p=44242 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:37:15,774 p=44242 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:37:15.816 (29.095s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:37:15.816 2025/08/11 08:37:15 Cleaning setup resources for the backup < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:37:15.816 (0s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:37:15.816 < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:37:15.823 (7ms) • [190.989 seconds] ------------------------------ S ------------------------------ Backup restore tests Application backup [tc-id:OADP-122] [interop] [skip-disconnected] Django application with BSL&CSI [exclude_aro-4] /alabama/cspi/e2e/app_backup/backup_restore.go:93 > Enter [BeforeEach] Backup restore tests @ 08/11/25 08:37:15.823 < Exit [BeforeEach] Backup restore tests @ 08/11/25 08:37:15.831 (8ms) > Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:37:15.831 < Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:37:15.831 (0s) > Enter [It] [tc-id:OADP-122] [interop] [skip-disconnected] Django application with BSL&CSI @ 08/11/25 08:37:15.831 2025/08/11 08:37:15 Delete all downloadrequest No download requests are found STEP: Create DPA CR @ 08/11/25 08:37:15.835 2025/08/11 08:37:15 csi 2025/08/11 08:37:15 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "363df90f-069d-4ed1-bf79-e05c3e05a834", "resourceVersion": "129744", "generation": 1, "creationTimestamp": "2025-08-11T08:37:15Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T08:37:15Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 08/11/25 08:37:15.985 2025/08/11 08:37:15 Waiting for velero pod to be running 2025/08/11 08:37:15 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' 2025/08/11 08:37:15 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "363df90f-069d-4ed1-bf79-e05c3e05a834", "resourceVersion": "129744", "generation": 1, "creationTimestamp": "2025-08-11T08:37:15Z", "managedFields": [ { "manager": "e2e.test", "operation":
"Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T08:37:15Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false } }, "features": null, "logFormat": "text" }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 08:37:21.005 Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false 2025/08/11 08:37:21 The 'openshift-storage' namespace exists 2025/08/11 08:37:21 Checking default storage class count 2025/08/11 08:37:21 Using the CSI driver: openshift-storage.rbd.csi.ceph.com 2025/08/11 08:37:21 Snapclass 'example-snapclass' doesn't exist, creating 2025/08/11 08:37:21 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/08/11 08:37:21 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd STEP: Installing application for case django-persistent @ 08/11/25 08:37:21.304 2025/08/11 08:37:21 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
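The "Prepare backup resources" step logged just before this playbook created the VolumeSnapshotClass 'example-snapclass' and made 'odf-operator-ceph-rbd' the default StorageClass; a minimal sketch of equivalent commands follows, where the deletionPolicy and the velero.io/csi-volumesnapshot-class label are assumptions rather than values shown in the log:

    cat <<'EOF' | oc apply -f -
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: example-snapclass
      labels:
        velero.io/csi-volumesnapshot-class: "true"   # assumed: lets Velero select this class
    driver: openshift-storage.rbd.csi.ceph.com
    deletionPolicy: Retain                            # assumed
    EOF
    # "Setting new default StorageClass 'odf-operator-ceph-rbd'"
    oc annotate storageclass odf-operator-ceph-rbd \
      storageclass.kubernetes.io/is-default-class=true --overwrite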
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Check namespace test-oadp-122] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Create namespace test-oadp-122] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Create the mtc test django psql persistent template] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Create openshift django psql persistent application from openshift templates] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025/08/11 08:37:26 2025-08-11 08:37:22,696 p=44489 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:37:22,696 p=44489 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:37:22,936 p=44489 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:37:22,936 p=44489 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:37:23,179 p=44489 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:37:23,179 p=44489 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:37:23,416 p=44489 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:37:23,417 p=44489 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:37:23,430 p=44489 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:37:23,430 p=44489 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:37:23,446 p=44489 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:37:23,447 p=44489 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:37:23,457 p=44489 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:37:23,457 p=44489 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:37:23,754 p=44489 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:37:23,754 p=44489 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:37:23,781 p=44489 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:37:23,782 p=44489 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:37:23,797 p=44489 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:37:23,797 p=44489 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:37:23,799 p=44489 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:37:24,335 p=44489 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:37:24,335 p=44489 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:37:25,077 p=44489 u=1002120000 n=ansible INFO| TASK 
[/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Check namespace test-oadp-122] *** 2025-08-11 08:37:25,077 p=44489 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 08:37:25,077 p=44489 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:37:25,416 p=44489 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Create namespace test-oadp-122] *** 2025-08-11 08:37:25,417 p=44489 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:37:26,208 p=44489 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Create the mtc test django psql persistent template] *** 2025-08-11 08:37:26,209 p=44489 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:37:26,666 p=44489 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Create openshift django psql persistent application from openshift templates] *** 2025-08-11 08:37:26,667 p=44489 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:37:26,878 p=44489 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:37:26,879 p=44489 u=1002120000 n=ansible INFO| localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=16 rescued=0 ignored=0 STEP: Verify Application deployment @ 08/11/25 08:37:26.925 2025/08/11 08:37:26 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute 
Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] FAILED - RETRYING: [localhost]: Check postgresql pod status (30 retries left). FAILED - RETRYING: [localhost]: Check postgresql pod status (29 retries left). FAILED - RETRYING: [localhost]: Check postgresql pod status (28 retries left). FAILED - RETRYING: [localhost]: Check postgresql pod status (27 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Check postgresql pod status] *** ok: [localhost] FAILED - RETRYING: [localhost]: Check application pod status (30 retries left). FAILED - RETRYING: [localhost]: Check application pod status (29 retries left). FAILED - RETRYING: [localhost]: Check application pod status (28 retries left). FAILED - RETRYING: [localhost]: Check application pod status (27 retries left). FAILED - RETRYING: [localhost]: Check application pod status (26 retries left). FAILED - RETRYING: [localhost]: Check application pod status (25 retries left). FAILED - RETRYING: [localhost]: Check application pod status (24 retries left). FAILED - RETRYING: [localhost]: Check application pod status (23 retries left). FAILED - RETRYING: [localhost]: Check application pod status (22 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Check application pod status] *** ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Get route] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Access the html file] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : set_fact] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Get num visits up to now] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Print num of visits] *** ok: [localhost] => {  "msg": "PASS: # of visits should be 1; actual 1" } PLAY RECAP ********************************************************************* localhost : ok=22  changed=4  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025/08/11 08:38:46 2025-08-11 08:37:28,403 p=44794 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:37:28,403 p=44794 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:37:28,651 p=44794 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:37:28,651 p=44794 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:37:28,900 p=44794 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:37:28,901 p=44794 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:37:29,146 p=44794 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:37:29,147 p=44794 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:37:29,161 p=44794 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:37:29,161 p=44794 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:37:29,178 p=44794 u=1002120000 n=ansible INFO| TASK 
[Choose token based on non_admin flag] ************************************ 2025-08-11 08:37:29,178 p=44794 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:37:29,189 p=44794 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:37:29,189 p=44794 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:37:29,485 p=44794 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:37:29,485 p=44794 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:37:29,511 p=44794 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:37:29,511 p=44794 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:37:29,527 p=44794 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:37:29,527 p=44794 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:37:29,529 p=44794 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:37:30,072 p=44794 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:37:30,072 p=44794 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:37:30,910 p=44794 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check postgresql pod status (30 retries left). 2025-08-11 08:37:36,538 p=44794 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check postgresql pod status (29 retries left). 2025-08-11 08:37:42,114 p=44794 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check postgresql pod status (28 retries left). 2025-08-11 08:37:47,720 p=44794 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check postgresql pod status (27 retries left). 2025-08-11 08:37:53,350 p=44794 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Check postgresql pod status] *** 2025-08-11 08:37:53,350 p=44794 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:37:54,012 p=44794 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check application pod status (30 retries left). 2025-08-11 08:37:59,645 p=44794 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check application pod status (29 retries left). 2025-08-11 08:38:05,244 p=44794 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check application pod status (28 retries left). 2025-08-11 08:38:10,845 p=44794 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check application pod status (27 retries left). 2025-08-11 08:38:16,434 p=44794 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check application pod status (26 retries left). 2025-08-11 08:38:22,026 p=44794 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check application pod status (25 retries left). 2025-08-11 08:38:27,613 p=44794 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check application pod status (24 retries left). 2025-08-11 08:38:33,197 p=44794 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check application pod status (23 retries left). 2025-08-11 08:38:38,785 p=44794 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check application pod status (22 retries left). 
2025-08-11 08:38:44,364 p=44794 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Check application pod status] *** 2025-08-11 08:38:44,364 p=44794 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:38:45,334 p=44794 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Get route] *** 2025-08-11 08:38:45,335 p=44794 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 08:38:45,335 p=44794 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:38:45,747 p=44794 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Access the html file] *** 2025-08-11 08:38:45,747 p=44794 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:38:45,770 p=44794 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : set_fact] *** 2025-08-11 08:38:45,770 p=44794 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:38:46,078 p=44794 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Get num visits up to now] *** 2025-08-11 08:38:46,078 p=44794 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:38:46,111 p=44794 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Print num of visits] *** 2025-08-11 08:38:46,112 p=44794 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "PASS: # of visits should be 1; actual 1" } 2025-08-11 08:38:46,115 p=44794 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:38:46,115 p=44794 u=1002120000 n=ansible INFO| localhost : ok=22 changed=4 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025/08/11 08:38:46 {{ } { } [{{ } {postgresql test-oadp-122 f4c35499-432f-447a-bf59-aa0dfb575da1 130066 0 2025-08-11 08:37:26 +0000 UTC map[app:django-psql-persistent template:django-psql-persistent] map[openshift.io/generated-by:OpenShiftNewApp pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:postgresql-1754901446 reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{csi-addons-manager Update v1 2025-08-11 08:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-08-11 08:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-08-11 08:37:26 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status} {oc Update v1 2025-08-11 08:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:openshift.io/generated-by":{}},"f:labels":{".":{},"f:app":{},"f:template":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} }]} 
{[ReadWriteOnce] nil {map[] map[storage:{{1073741824 0} {} 1Gi BinarySI}]} pvc-f4c35499-432f-447a-bf59-aa0dfb575da1 0xc000510950 0xc000510960 nil nil } {Bound [ReadWriteOnce] map[storage:{{1073741824 0} {} 1Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f @ 08/11/25 08:38:46.161 2025/08/11 08:38:46 Wait until backup django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f is completed backup phase: Completed 2025/08/11 08:39:06 Verify the Backup has CSIVolumeSnapshotsAttempted and CSIVolumeSnapshotsCompleted field on status 2025/08/11 08:39:06 Run velero describe on the backup 2025/08/11 08:39:06 [./velero describe backup django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f -n openshift-adp --details --insecure-skip-tls-verify] 2025/08/11 08:39:06 Exec stderr: "" 2025/08/11 08:39:06 Name: django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f Namespace: openshift-adp Labels: velero.io/storage-location=ts-dpa-1 Annotations: velero.io/resource-timeout=10m0s velero.io/source-cluster-k8s-gitversion=v1.33.2 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=33 Phase: Completed Namespaces: Included: test-oadp-122 Excluded: Resources: Included: * Excluded: Cluster-scoped: auto Label selector: Or label selector: Storage Location: ts-dpa-1 Velero-Native Snapshot PVs: auto Snapshot Move Data: false Data Mover: velero TTL: 720h0m0s CSISnapshotTimeout: 10m0s ItemOperationTimeout: 4h0m0s Hooks: Backup Format Version: 1.1.0 Started: 2025-08-11 08:38:46 +0000 UTC Completed: 2025-08-11 08:39:01 +0000 UTC Expiration: 2025-09-10 08:38:46 +0000 UTC Total items to be backed up: 95 Items backed up: 95 Backup Item Operations: Operation for volumesnapshots.snapshot.storage.k8s.io test-oadp-122/velero-postgresql-gr47l: Backup Item Action Plugin: velero.io/csi-volumesnapshot-backupper Operation ID: test-oadp-122/velero-postgresql-gr47l/2025-08-11T08:38:53Z Items to Update: volumesnapshots.snapshot.storage.k8s.io test-oadp-122/velero-postgresql-gr47l volumesnapshotcontents.snapshot.storage.k8s.io /snapcontent-1dcb1d1f-ca5c-4de6-a810-5ca0ae6baa05 Phase: Completed Created: 2025-08-11 08:38:53 +0000 UTC Started: 2025-08-11 08:38:53 +0000 UTC Updated: 2025-08-11 08:38:59 +0000 UTC Resource List: apiextensions.k8s.io/v1/CustomResourceDefinition: - reclaimspacecronjobs.csiaddons.openshift.io apps.openshift.io/v1/DeploymentConfig: - test-oadp-122/django-psql-persistent - test-oadp-122/postgresql authorization.openshift.io/v1/RoleBinding: - test-oadp-122/admin - test-oadp-122/system:deployers - test-oadp-122/system:image-builders - test-oadp-122/system:image-pullers build.openshift.io/v1/Build: - test-oadp-122/django-psql-persistent-1 build.openshift.io/v1/BuildConfig: - test-oadp-122/django-psql-persistent csiaddons.openshift.io/v1alpha1/ReclaimSpaceCronJob: - test-oadp-122/postgresql-1754901446 discovery.k8s.io/v1/EndpointSlice: - test-oadp-122/django-psql-persistent-cfs5f - test-oadp-122/postgresql-q4ws2 image.openshift.io/v1/ImageStream: - test-oadp-122/django-psql-persistent image.openshift.io/v1/ImageStreamTag: - test-oadp-122/django-psql-persistent:latest image.openshift.io/v1/ImageTag: - test-oadp-122/django-psql-persistent:latest rbac.authorization.k8s.io/v1/RoleBinding: - test-oadp-122/admin - test-oadp-122/system:deployers - test-oadp-122/system:image-builders - test-oadp-122/system:image-pullers route.openshift.io/v1/Route: - test-oadp-122/django-psql-persistent snapshot.storage.k8s.io/v1/VolumeSnapshot: - 
test-oadp-122/velero-postgresql-gr47l snapshot.storage.k8s.io/v1/VolumeSnapshotClass: - example-snapclass snapshot.storage.k8s.io/v1/VolumeSnapshotContent: - snapcontent-1dcb1d1f-ca5c-4de6-a810-5ca0ae6baa05 template.openshift.io/v1/Template: - test-oadp-122/mtc-test-django-psql-persistent v1/ConfigMap: - test-oadp-122/django-psql-persistent-1-ca - test-oadp-122/django-psql-persistent-1-global-ca - test-oadp-122/django-psql-persistent-1-sys-config - test-oadp-122/kube-root-ca.crt - test-oadp-122/openshift-service-ca.crt v1/Endpoints: - test-oadp-122/django-psql-persistent - test-oadp-122/postgresql v1/Event: - test-oadp-122/django-psql-persistent-1-build.185aa9916520be39 - test-oadp-122/django-psql-persistent-1-build.185aa9918c8c395b - test-oadp-122/django-psql-persistent-1-build.185aa9918e25f9b5 - test-oadp-122/django-psql-persistent-1-build.185aa992bb628035 - test-oadp-122/django-psql-persistent-1-build.185aa992c49ad310 - test-oadp-122/django-psql-persistent-1-build.185aa992c516c787 - test-oadp-122/django-psql-persistent-1-build.185aa992e7379a9d - test-oadp-122/django-psql-persistent-1-build.185aa992f0e598c9 - test-oadp-122/django-psql-persistent-1-build.185aa992f15a7250 - test-oadp-122/django-psql-persistent-1-build.185aa993c7a87309 - test-oadp-122/django-psql-persistent-1-build.185aa993d23bec09 - test-oadp-122/django-psql-persistent-1-build.185aa993d2cf820e - test-oadp-122/django-psql-persistent-1-deploy.185aa99dad588798 - test-oadp-122/django-psql-persistent-1-deploy.185aa99dd58ca014 - test-oadp-122/django-psql-persistent-1-deploy.185aa99dd6edd68f - test-oadp-122/django-psql-persistent-1-deploy.185aa99ddbb4b9c9 - test-oadp-122/django-psql-persistent-1-deploy.185aa99ddc284dbf - test-oadp-122/django-psql-persistent-1-vzpbf.185aa99de3e926aa - test-oadp-122/django-psql-persistent-1-vzpbf.185aa99e09ff03e9 - test-oadp-122/django-psql-persistent-1-vzpbf.185aa99e0b703558 - test-oadp-122/django-psql-persistent-1-vzpbf.185aa99fc99af2cd - test-oadp-122/django-psql-persistent-1-vzpbf.185aa99fcf5bcb1c - test-oadp-122/django-psql-persistent-1-vzpbf.185aa99fcfd3f44d - test-oadp-122/django-psql-persistent-1.185aa992e8fb42c0 - test-oadp-122/django-psql-persistent-1.185aa99de3330eb2 - test-oadp-122/django-psql-persistent-1.185aa99df53874ac - test-oadp-122/django-psql-persistent.185aa99daac1dd04 - test-oadp-122/postgresql-1-deploy.185aa991626b2931 - test-oadp-122/postgresql-1-deploy.185aa991891f1aa7 - test-oadp-122/postgresql-1-deploy.185aa9918a5e4524 - test-oadp-122/postgresql-1-deploy.185aa991bce4cc99 - test-oadp-122/postgresql-1-deploy.185aa991c1ad17c5 - test-oadp-122/postgresql-1-deploy.185aa991c2310244 - test-oadp-122/postgresql-1-t9nnd.185aa991c9fb87a4 - test-oadp-122/postgresql-1-t9nnd.185aa991ebf1bf19 - test-oadp-122/postgresql-1-t9nnd.185aa99249e09a85 - test-oadp-122/postgresql-1-t9nnd.185aa9924b322143 - test-oadp-122/postgresql-1-t9nnd.185aa993cc25878c - test-oadp-122/postgresql-1-t9nnd.185aa993d1fe0367 - test-oadp-122/postgresql-1-t9nnd.185aa993d2801dde - test-oadp-122/postgresql-1.185aa991c984f0dc - test-oadp-122/postgresql.185aa9915c9f1b26 - test-oadp-122/postgresql.185aa9915cb0451f - test-oadp-122/postgresql.185aa9915f8bca2e - test-oadp-122/postgresql.185aa99167abcf42 v1/Namespace: - test-oadp-122 v1/PersistentVolume: - pvc-f4c35499-432f-447a-bf59-aa0dfb575da1 v1/PersistentVolumeClaim: - test-oadp-122/postgresql v1/Pod: - test-oadp-122/django-psql-persistent-1-build - test-oadp-122/django-psql-persistent-1-deploy - test-oadp-122/django-psql-persistent-1-vzpbf - 
test-oadp-122/postgresql-1-deploy - test-oadp-122/postgresql-1-t9nnd v1/ReplicationController: - test-oadp-122/django-psql-persistent-1 - test-oadp-122/postgresql-1 v1/Secret: - test-oadp-122/builder-dockercfg-pgz77 - test-oadp-122/default-dockercfg-6sft6 - test-oadp-122/deployer-dockercfg-nwls2 - test-oadp-122/django-psql-persistent v1/Service: - test-oadp-122/django-psql-persistent - test-oadp-122/postgresql v1/ServiceAccount: - test-oadp-122/builder - test-oadp-122/default - test-oadp-122/deployer Backup Volumes: Velero-Native Snapshots: CSI Snapshots: test-oadp-122/postgresql: Snapshot: Operation ID: test-oadp-122/velero-postgresql-gr47l/2025-08-11T08:38:53Z Snapshot Content Name: snapcontent-1dcb1d1f-ca5c-4de6-a810-5ca0ae6baa05 Storage Snapshot ID: 0001-0011-openshift-storage-0000000000000003-532d4455-ff07-4cf2-a13b-e8fd19f1a95b Snapshot Size (bytes): 1073741824 CSI Driver: openshift-storage.rbd.csi.ceph.com Result: succeeded Pod Volume Backups: HooksAttempted: 0 HooksFailed: 0 STEP: Verify backup django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f has completed successfully @ 08/11/25 08:39:06.695 2025/08/11 08:39:06 Backup for case django-persistent succeeded STEP: Delete the application resources django-persistent @ 08/11/25 08:39:06.743 STEP: Cleanup Application for case django-persistent @ 08/11/25 08:39:06.743 2025/08/11 08:39:06 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]:
kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Remove namespace test-oadp-122] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025/08/11 08:39:36 2025-08-11 08:39:08,261 p=45242 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:39:08,261 p=45242 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:39:08,529 p=45242 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:39:08,530 p=45242 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:39:08,803 p=45242 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:39:08,803 p=45242 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:39:09,081 p=45242 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:39:09,082 p=45242 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:39:09,097 p=45242 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:39:09,097 p=45242 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:39:09,116 p=45242 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:39:09,116 p=45242 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:39:09,128 p=45242 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:39:09,128 p=45242 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:39:09,471 p=45242 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:39:09,472 p=45242 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:39:09,499 p=45242 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:39:09,499 p=45242 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:39:09,516 p=45242 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:39:09,516 p=45242 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:39:09,518 p=45242 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:39:10,081 p=45242 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:39:10,082 p=45242 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:39:35,950 p=45242 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Remove namespace test-oadp-122] *** 2025-08-11 08:39:35,951 p=45242 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
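The backup verification a few steps back (wait for phase Completed, check the CSIVolumeSnapshotsAttempted/CSIVolumeSnapshotsCompleted status fields, then run velero describe) can be reproduced from the CLI; a minimal sketch using the names from this run, with the jsonpath keys following Velero's Backup API:

    B=django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f
    oc -n openshift-adp get backup "$B" -o jsonpath='{.status.phase}'
    oc -n openshift-adp get backup "$B" \
      -o jsonpath='{.status.csiVolumeSnapshotsAttempted}/{.status.csiVolumeSnapshotsCompleted}'
    ./velero describe backup "$B" -n openshift-adp --details --insecure-skip-tls-verify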
2025-08-11 08:39:35,951 p=45242 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:39:36,306 p=45242 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:39:36,306 p=45242 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=19 rescued=0 ignored=0 2025/08/11 08:39:36 Creating restore django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f for case django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f STEP: Create restore django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f from backup django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f @ 08/11/25 08:39:36.38 2025/08/11 08:39:36 Wait until restore django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f is complete restore phase: Finalizing restore phase: Finalizing restore phase: Completed STEP: Verify restore django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f has completed successfully @ 08/11/25 08:40:06.43 STEP: Verify Application restore @ 08/11/25 08:40:06.432 STEP: Verify Application deployment for case django-persistent @ 08/11/25 08:40:06.432 2025/08/11 08:40:06 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Check postgresql pod status] *** ok: [localhost] FAILED - RETRYING: [localhost]: Check application pod status (30 retries left).
FAILED - RETRYING: [localhost]: Check application pod status (29 retries left). FAILED - RETRYING: [localhost]: Check application pod status (28 retries left). FAILED - RETRYING: [localhost]: Check application pod status (27 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Check application pod status] *** ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Get route] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Access the html file] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : set_fact] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Get num visits up to now] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Print num of visits] *** ok: [localhost] => {  "msg": "PASS: # of visits should be 2; actual 2" } PLAY RECAP ********************************************************************* localhost : ok=22  changed=4  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025/08/11 08:40:35 2025-08-11 08:40:07,964 p=45467 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:40:07,965 p=45467 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:40:08,205 p=45467 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:40:08,205 p=45467 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:40:08,445 p=45467 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:40:08,446 p=45467 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:40:08,702 p=45467 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:40:08,702 p=45467 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:40:08,717 p=45467 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:40:08,718 p=45467 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:08,739 p=45467 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:40:08,740 p=45467 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:08,755 p=45467 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:40:08,755 p=45467 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:40:09,077 p=45467 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:40:09,077 p=45467 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:09,104 p=45467 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:40:09,104 p=45467 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:09,122 p=45467 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:40:09,123 p=45467 u=1002120000 n=ansible INFO| ok: 
[localhost] 2025-08-11 08:40:09,124 p=45467 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:40:09,668 p=45467 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:40:09,668 p=45467 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:10,592 p=45467 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Check postgresql pod status] *** 2025-08-11 08:40:10,592 p=45467 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:11,238 p=45467 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check application pod status (30 retries left). 2025-08-11 08:40:16,849 p=45467 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check application pod status (29 retries left). 2025-08-11 08:40:22,496 p=45467 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check application pod status (28 retries left). 2025-08-11 08:40:28,144 p=45467 u=1002120000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check application pod status (27 retries left). 2025-08-11 08:40:33,777 p=45467 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Check application pod status] *** 2025-08-11 08:40:33,777 p=45467 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:34,806 p=45467 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Get route] *** 2025-08-11 08:40:34,806 p=45467 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-08-11 08:40:34,806 p=45467 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:35,227 p=45467 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Access the html file] *** 2025-08-11 08:40:35,227 p=45467 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:35,248 p=45467 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : set_fact] *** 2025-08-11 08:40:35,248 p=45467 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:35,595 p=45467 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Get num visits up to now] *** 2025-08-11 08:40:35,595 p=45467 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:35,634 p=45467 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Print num of visits] *** 2025-08-11 08:40:35,634 p=45467 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "PASS: # of visits should be 2; actual 2" } 2025-08-11 08:40:35,638 p=45467 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:40:35,638 p=45467 u=1002120000 n=ansible INFO| localhost : ok=22 changed=4 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 < Exit [It] [tc-id:OADP-122] [interop] [skip-disconnected] Django application with BSL&CSI @ 08/11/25 08:40:35.683 (3m19.853s) > Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 08:40:35.683 2025/08/11 08:40:35 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 08:40:35.684 (0s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:40:35.684 < Exit 
[DeferCleanup (Each)] Application backup @ 08/11/25 08:40:35.688 (4ms) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:40:35.688 < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:40:35.688 (0s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:40:35.688 2025/08/11 08:40:35 Reset number of visits to 0 2025/08/11 08:40:35 Cleaning app 2025/08/11 08:40:35 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
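The recurring "Get cluster endpoint (from admin kubeconfig)" and "Get admin token" tasks in these plays reduce to standard oc calls. A minimal equivalent sketch (the role's actual implementation is not shown in this log):

  KUBECONFIG=/home/jenkins/.kube/config oc whoami --show-server   # cluster API endpoint
  KUBECONFIG=/home/jenkins/.kube/config oc whoami -t              # current session token (sha256~...)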
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Remove namespace test-oadp-122] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025/08/11 08:41:04 2025-08-11 08:40:37,100 p=45799 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:40:37,101 p=45799 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:40:37,338 p=45799 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:40:37,339 p=45799 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:40:37,581 p=45799 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:40:37,581 p=45799 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:40:37,841 p=45799 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:40:37,841 p=45799 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:40:37,856 p=45799 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:40:37,856 p=45799 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:37,875 p=45799 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:40:37,875 p=45799 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:37,887 p=45799 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:40:37,888 p=45799 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:40:38,209 p=45799 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:40:38,209 p=45799 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:38,237 p=45799 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:40:38,237 p=45799 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:38,257 p=45799 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:40:38,257 p=45799 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:40:38,259 p=45799 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:40:38,831 p=45799 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:40:38,831 p=45799 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:04,616 p=45799 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-django : Remove namespace test-oadp-122] *** 2025-08-11 08:41:04,617 p=45799 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 08:41:04,617 p=45799 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:41:04,920 p=45799 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:41:04,920 p=45799 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=19 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:41:04.963 (29.275s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:41:04.963 2025/08/11 08:41:04 Cleaning setup resources for the backup 2025/08/11 08:41:04 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/08/11 08:41:04 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/08/11 08:41:04 Deleting VolumeSnapshotClass 'example-snapclass' < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:41:04.984 (21ms) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:41:04.984 < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:41:04.989 (5ms) • [229.166 seconds] ------------------------------ SS ------------------------------ Backup restore tests Application backup [tc-id:OADP-352][interop][skip-disconnected][smoke] Django application with BSL&VSL [vsl] /alabama/cspi/e2e/app_backup/backup_restore.go:145 > Enter [BeforeEach] Backup restore tests @ 08/11/25 08:41:04.99 < Exit [BeforeEach] Backup restore tests @ 08/11/25 08:41:04.998 (8ms) > Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:41:04.998 < Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:41:04.998 (0s) > Enter [It] [tc-id:OADP-352][interop][skip-disconnected][smoke] Django application with BSL&VSL @ 08/11/25 08:41:04.998 2025/08/11 08:41:04 Check if VSL custom credentials exist 2025/08/11 08:41:05 Check if the cloud provider is AWS 2025/08/11 08:41:05 Delete all downloadrequest django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f-7b0252e1-1088-497b-ac5d-0fc72a4e5859 django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f-adfee4ca-27c3-4f71-aba8-63dcb6827b98 django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f-b7525954-af2c-4e1b-b46b-0c6e92fc8ddf STEP: Create DPA CR @ 08/11/25 08:41:05.081 2025/08/11 08:41:05 vsl 2025/08/11 08:41:05 Check if VSL custom credentials exist 2025/08/11 08:41:05 Check if the cloud provider is AWS 2025/08/11 08:41:05 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "6180a964-0165-481c-8605-4bda70ad3b17", "resourceVersion": "133975", "generation": 1, "creationTimestamp": "2025-08-11T08:41:05Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T08:41:05Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [ { "velero": { "provider": "aws", "config": { "profile": "default", "region": "us-east-1" } } } ], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ 
"openshift", "aws", "kubevirt" ], "disableFsBackup": false } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 08/11/25 08:41:05.117 2025/08/11 08:41:05 Waiting for velero pod to be running 2025/08/11 08:41:05 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' 2025/08/11 08:41:05 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "6180a964-0165-481c-8605-4bda70ad3b17", "resourceVersion": "133975", "generation": 1, "creationTimestamp": "2025-08-11T08:41:05Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T08:41:05Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [ { "velero": { "provider": "aws", "config": { "profile": "default", "region": "us-east-1" } } } ], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false } }, "features": null, "logFormat": "text" }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 08:41:10.135 2025/08/11 08:41:10 Checking default storage class count [SKIPPED] in [It] - /alabama/cspi/lib/backup.go:404 @ 08/11/25 08:41:10.148 < Exit [It] [tc-id:OADP-352][interop][skip-disconnected][smoke] Django application with BSL&VSL @ 08/11/25 08:41:10.148 (5.15s) > Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 08:41:10.148 2025/08/11 08:41:10 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 08:41:10.148 (0s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:41:10.148 < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:41:10.157 (9ms) S [SKIPPED] [5.168 seconds] Backup restore tests Application backup [It] [tc-id:OADP-352][interop][skip-disconnected][smoke] Django application with BSL&VSL [vsl] /alabama/cspi/e2e/app_backup/backup_restore.go:145 [SKIPPED] Skipping VSL test because the default StorageClass provisioner openshift-storage.rbd.csi.ceph.com is not supported In [It] at: /alabama/cspi/lib/backup.go:404 @ 08/11/25 08:41:10.148 ------------------------------ SS ------------------------------ Backup restore tests Application backup [tc-id:OADP-97][interop] Empty-project application with Restic /alabama/cspi/e2e/app_backup/backup_restore.go:191 > Enter [BeforeEach] Backup restore tests @ 08/11/25 08:41:10.158 < Exit [BeforeEach] Backup restore tests @ 08/11/25 08:41:10.164 (7ms) > Enter [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:41:10.164 < Exit [JustBeforeEach] TOP-LEVEL @ 08/11/25 08:41:10.164 (0s) > Enter [It] [tc-id:OADP-97][interop] Empty-project application 
with Restic @ 08/11/25 08:41:10.164 2025/08/11 08:41:10 Delete all downloadrequest No download requests are found STEP: Create DPA CR @ 08/11/25 08:41:10.185 2025/08/11 08:41:10 restic 2025/08/11 08:41:10 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "0ac859b2-c959-4a8d-936a-86b538e99beb", "resourceVersion": "134152", "generation": 1, "creationTimestamp": "2025-08-11T08:41:10Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-08-11T08:41:10Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-6fip6j15-interopoadp", "prefix": "velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "restic" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 08/11/25 08:41:10.225 2025/08/11 08:41:10 Waiting for velero pod to be running 2025/08/11 08:41:10 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' STEP: Prepare backup resources, depending on the volumes backup type @ 08/11/25 08:41:15.245
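The DPA CR dumped above is created programmatically by the e2e suite; written by hand, a roughly equivalent manifest would look like the sketch below (reconstructed from the JSON above, not the suite's actual template):

  cat <<'EOF' | oc apply -f -
  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: ts-dpa
    namespace: openshift-adp
  spec:
    configuration:
      velero:
        defaultPlugins: [openshift, aws, kubevirt]
        disableFsBackup: false
      nodeAgent:
        enable: true
        uploaderType: restic
    backupLocations:
      - velero:
          provider: aws
          default: true
          config:
            region: us-east-1
          credential:
            name: cloud-credentials
            key: cloud
          objectStorage:
            bucket: ci-op-6fip6j15-interopoadp
            prefix: velero-e2e-04fccd2c-7688-11f0-aa2b-0a580a83369f
  EOF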
2025/08/11 08:41:15 Checking for correct number of running NodeAgent pods... STEP: Installing application for case empty-project-e2e @ 08/11/25 08:41:15.255 2025/08/11 08:41:15 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
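The "Checking for correct number of running NodeAgent pods" step above amounts to comparing the node-agent DaemonSet's pod count against the schedulable nodes; a quick manual check (DaemonSet and label names assumed from upstream Velero, where the node agent ships as a DaemonSet named node-agent):

  oc -n openshift-adp get daemonset node-agent
  oc -n openshift-adp get pods -l name=node-agent -o wide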
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Deploy project with labels and selectors] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/08/11 08:41:19 2025-08-11 08:41:16,732 p=46026 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:41:16,732 p=46026 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:41:16,997 p=46026 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:41:16,997 p=46026 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:41:17,261 p=46026 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:41:17,262 p=46026 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:41:17,506 p=46026 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:41:17,506 p=46026 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:41:17,520 p=46026 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:41:17,520 p=46026 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:17,538 p=46026 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:41:17,538 p=46026 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:17,551 p=46026 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:41:17,551 p=46026 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:41:17,851 p=46026 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:41:17,851 p=46026 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:17,880 p=46026 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:41:17,881 p=46026 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:17,899 p=46026 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:41:17,899 p=46026 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:17,901 p=46026 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:41:18,455 p=46026 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:41:18,456 p=46026 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:19,289 p=46026 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Deploy project with labels and selectors] *** 2025-08-11 08:41:19,290 p=46026 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 08:41:19,290 p=46026 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:41:19,325 p=46026 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:41:19,325 p=46026 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 STEP: Verify Application deployment @ 08/11/25 08:41:19.376 2025/08/11 08:41:19 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Check project status] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/08/11 08:41:23 2025-08-11 08:41:20,890 p=46239 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:41:20,891 p=46239 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:41:21,153 p=46239 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:41:21,154 p=46239 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:41:21,434 p=46239 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:41:21,435 p=46239 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:41:21,682 p=46239 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:41:21,682 p=46239 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:41:21,697 p=46239 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:41:21,698 p=46239 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:21,716 p=46239 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:41:21,716 p=46239 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:21,729 p=46239 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:41:21,729 p=46239 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:41:22,077 p=46239 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:41:22,077 p=46239 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:22,103 p=46239 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:41:22,104 p=46239 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:22,121 p=46239 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:41:22,121 p=46239 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:22,122 p=46239 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:41:22,670 p=46239 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:41:22,670 p=46239 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:23,511 p=46239 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Check project status] *** 2025-08-11 08:41:23,511 p=46239 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
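The "Extract Kubernetes minor version from cluster" / "Map Kubernetes minor to OCP release" pair that shows up in every play exploits the fixed offset between the two version lines: OCP 4.y ships Kubernetes 1.(y+13), so the 1.33 on this cluster maps to release 4.20. A hypothetical shell equivalent (assumes jq is available; not the role's actual code):

  minor=$(oc version -o json | jq -r '.serverVersion.minor' | tr -d '+')
  echo "OCP release: 4.$((minor - 13))"   # 33 -> 4.20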
2025-08-11 08:41:23,511 p=46239 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:23,534 p=46239 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:41:23,534 p=46239 u=1002120000 n=ansible INFO| localhost : ok=16 changed=4 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2025/08/11 08:41:23 {{ } { } []} STEP: Creating backup empty-project-e2e-ee29ffc3-768e-11f0-aa2b-0a580a83369f @ 08/11/25 08:41:23.583 2025/08/11 08:41:23 Wait until backup empty-project-e2e-ee29ffc3-768e-11f0-aa2b-0a580a83369f is completed backup phase: Completed STEP: Verify backup empty-project-e2e-ee29ffc3-768e-11f0-aa2b-0a580a83369f has completed successfully @ 08/11/25 08:41:43.601 2025/08/11 08:41:43 Backup for case empty-project-e2e succeeded STEP: Delete the application resources empty-project-e2e @ 08/11/25 08:41:43.639 STEP: Cleanup Application for case empty-project-e2e @ 08/11/25 08:41:43.639 2025/08/11 08:41:43 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
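The "Wait until backup ... is completed" loop above polls the Backup CR's status.phase until it leaves InProgress; the same check by hand:

  oc -n openshift-adp get backup empty-project-e2e-ee29ffc3-768e-11f0-aa2b-0a580a83369f \
    -o jsonpath='{.status.phase}'   # expect: Completed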
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Remove namespace test-oadp-97] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/08/11 08:41:58 2025-08-11 08:41:45,243 p=46452 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:41:45,243 p=46452 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:41:45,516 p=46452 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:41:45,516 p=46452 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:41:45,804 p=46452 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:41:45,804 p=46452 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:41:46,077 p=46452 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:41:46,077 p=46452 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:41:46,093 p=46452 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:41:46,094 p=46452 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:46,112 p=46452 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:41:46,112 p=46452 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:46,125 p=46452 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:41:46,125 p=46452 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:41:46,474 p=46452 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:41:46,474 p=46452 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:46,503 p=46452 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:41:46,503 p=46452 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:46,524 p=46452 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:41:46,524 p=46452 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:46,525 p=46452 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:41:47,100 p=46452 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:41:47,100 p=46452 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:41:57,904 p=46452 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Remove namespace test-oadp-97] *** 2025-08-11 08:41:57,905 p=46452 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 08:41:57,905 p=46452 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:41:57,966 p=46452 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:41:57,966 p=46452 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2025/08/11 08:41:58 Creating restore empty-project-e2e-ee29ffc3-768e-11f0-aa2b-0a580a83369f for case empty-project-e2e-ee29ffc3-768e-11f0-aa2b-0a580a83369f STEP: Create restore empty-project-e2e-ee29ffc3-768e-11f0-aa2b-0a580a83369f from backup empty-project-e2e-ee29ffc3-768e-11f0-aa2b-0a580a83369f @ 08/11/25 08:41:58.012 2025/08/11 08:41:58 Wait until restore empty-project-e2e-ee29ffc3-768e-11f0-aa2b-0a580a83369f is complete restore phase: Completed 2025/08/11 08:42:08 No PodVolumeBackup CR found for the Restore STEP: Verify restore empty-project-e2e-ee29ffc3-768e-11f0-aa2b-0a580a83369f has completed successfully @ 08/11/25 08:42:08.057 STEP: Verify Application restore @ 08/11/25 08:42:08.061 STEP: Verify Application deployment for case empty-project-e2e @ 08/11/25 08:42:08.061 2025/08/11 08:42:08 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
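The restore created above is polled the same way, and the "No PodVolumeBackup CR found" line is expected here: restic only produces PodVolumeBackup CRs for pod volumes it actually backs up, and the empty project has none. Manual equivalents (the velero.io/backup-name label is standard Velero labeling, assumed here):

  oc -n openshift-adp get restore empty-project-e2e-ee29ffc3-768e-11f0-aa2b-0a580a83369f \
    -o jsonpath='{.status.phase}'   # expect: Completed
  oc -n openshift-adp get podvolumebackups \
    -l velero.io/backup-name=empty-project-e2e-ee29ffc3-768e-11f0-aa2b-0a580a83369f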
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Check project status] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/08/11 08:42:12 2025-08-11 08:42:09,671 p=46664 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:42:09,671 p=46664 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:42:09,958 p=46664 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:42:09,958 p=46664 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:42:10,260 p=46664 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:42:10,260 p=46664 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:42:10,570 p=46664 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:42:10,571 p=46664 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:42:10,586 p=46664 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:42:10,587 p=46664 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:42:10,606 p=46664 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:42:10,607 p=46664 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:42:10,621 p=46664 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:42:10,621 p=46664 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:42:11,015 p=46664 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:42:11,016 p=46664 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:42:11,051 p=46664 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:42:11,052 p=46664 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:42:11,075 p=46664 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:42:11,075 p=46664 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:42:11,077 p=46664 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:42:11,691 p=46664 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:42:11,691 p=46664 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:42:12,603 p=46664 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Check project status] *** 2025-08-11 08:42:12,604 p=46664 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 08:42:12,604 p=46664 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:42:12,625 p=46664 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:42:12,625 p=46664 u=1002120000 n=ansible INFO| localhost : ok=16 changed=4 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 < Exit [It] [tc-id:OADP-97][interop] Empty-project application with Restic @ 08/11/25 08:42:12.665 (1m2.501s) > Enter [JustAfterEach] TOP-LEVEL @ 08/11/25 08:42:12.665 2025/08/11 08:42:12 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 08/11/25 08:42:12.665 (0s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:42:12.665 < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:42:12.669 (4ms) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:42:12.669 < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:42:12.669 (0s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:42:12.669 2025/08/11 08:42:12 Cleaning app 2025/08/11 08:42:12 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
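The "Remove namespace test-oadp-97" task that follows is, in effect, a namespace deletion that blocks until finalizers clear (hence the roughly ten-second gap in the timestamped replay); approximately:

  oc delete namespace test-oadp-97 --wait=true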
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Remove namespace test-oadp-97] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/08/11 08:42:26 2025-08-11 08:42:14,212 p=46874 u=1002120000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-08-11 08:42:14,212 p=46874 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:42:14,522 p=46874 u=1002120000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-08-11 08:42:14,522 p=46874 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:42:14,789 p=46874 u=1002120000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-08-11 08:42:14,789 p=46874 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:42:15,050 p=46874 u=1002120000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-08-11 08:42:15,050 p=46874 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:42:15,064 p=46874 u=1002120000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-08-11 08:42:15,064 p=46874 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:42:15,082 p=46874 u=1002120000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-08-11 08:42:15,082 p=46874 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:42:15,094 p=46874 u=1002120000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-08-11 08:42:15,094 p=46874 u=1002120000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~DeNK-r31osfYXTqVI2uua-eHpiwW5DvSukWTl9NPs7o" } 2025-08-11 08:42:15,411 p=46874 u=1002120000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-08-11 08:42:15,412 p=46874 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:42:15,441 p=46874 u=1002120000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-08-11 08:42:15,441 p=46874 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:42:15,462 p=46874 u=1002120000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-08-11 08:42:15,462 p=46874 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:42:15,464 p=46874 u=1002120000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-08-11 08:42:16,019 p=46874 u=1002120000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-08-11 08:42:16,019 p=46874 u=1002120000 n=ansible INFO| ok: [localhost] 2025-08-11 08:42:26,849 p=46874 u=1002120000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Remove namespace test-oadp-97] *** 2025-08-11 08:42:26,849 p=46874 u=1002120000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-08-11 08:42:26,849 p=46874 u=1002120000 n=ansible INFO| changed: [localhost] 2025-08-11 08:42:26,914 p=46874 u=1002120000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-08-11 08:42:26,914 p=46874 u=1002120000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:42:26.958 (14.289s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:42:26.958 2025/08/11 08:42:26 Cleaning setup resources for the backup < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:42:26.958 (0s) > Enter [DeferCleanup (Each)] Application backup @ 08/11/25 08:42:26.958 < Exit [DeferCleanup (Each)] Application backup @ 08/11/25 08:42:26.966 (8ms) • [76.809 seconds] ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [SynchronizedAfterSuite]  /alabama/cspi/e2e/e2e_suite_test.go:230 > Enter [SynchronizedAfterSuite] TOP-LEVEL @ 08/11/25 08:42:26.966 2025/08/11 08:42:26 Deleting Velero CR < Exit [SynchronizedAfterSuite] TOP-LEVEL @ 08/11/25 08:42:26.973 (6ms) > Enter [SynchronizedAfterSuite] TOP-LEVEL @ 08/11/25 08:42:26.973 < Exit [SynchronizedAfterSuite] TOP-LEVEL @ 08/11/25 08:42:26.973 (0s) [SynchronizedAfterSuite] PASSED [0.006 seconds] ------------------------------ [ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report autogenerated by Ginkgo > Enter [ReportAfterSuite] TOP-LEVEL @ 08/11/25 08:42:26.973 < Exit [ReportAfterSuite] TOP-LEVEL @ 08/11/25 08:42:26.986 (13ms) [ReportAfterSuite] PASSED [0.013 seconds] ------------------------------ Summarizing 2 Failures: [FAIL] [datamover] DataMover: Backup/Restore stateful application with CSI  [It] [tc-id:OADP-440][interop] Cassandra application /alabama/cspi/test_common/backup_restore_app_case.go:46 [FAIL] Backup hooks tests Pre exec hook [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic /alabama/cspi/test_common/backup_restore_app_case.go:46 Ran 9 of 227 Specs in 3045.002 seconds FAIL! -- 7 Passed | 2 Failed | 0 Pending | 218 Skipped --- FAIL: TestOADPE2E (3045.13s) FAIL Ginkgo ran 1 suite in 50m55.135278102s Test Suite Failed [must-gather ] OUT 2025-08-11T08:42:57.085721954Z Using must-gather plug-in image: registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: cbda4714-fb9a-4786-bbb6-8eb5fbf3394a ClientVersion: 4.17.10 ClusterVersion: Stable at "4.20.0-0.nightly-2025-07-31-063120" ClusterOperators: clusteroperator/operator-lifecycle-manager is not upgradeable because ClusterServiceVersions blocking minor version upgrades to 4.21.0 or higher: - maximum supported OCP version for openshift-storage/odf-dependencies.v4.19.1-rhodf is 4.20 - maximum supported OCP version for openshift-storage/odf-operator.v4.19.1-rhodf is 4.20 [must-gather ] OUT 2025-08-11T08:42:57.116105984Z namespace/openshift-must-gather-5989m created [must-gather ] OUT 2025-08-11T08:42:57.122985652Z clusterrolebinding.rbac.authorization.k8s.io/must-gather-82pmn created Warning: spec.nodeSelector[node-role.kubernetes.io/master]: use "node-role.kubernetes.io/control-plane" instead [must-gather ] OUT 2025-08-11T08:42:57.252936517Z pod for plug-in image registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 created [must-gather-kbswd] POD 2025-08-11T08:43:08.061316290Z volume percentage checker started..... 
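The must-gather collection below is the standard oc adm flow, pointed at the OADP plug-in image named above; to reproduce it, and to chase the download-request timeouts that follow, checking the BackupStorageLocation status is the first step:

  oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4
  oc -n openshift-adp get backupstoragelocations   # Phase should be Available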
[must-gather-kbswd] POD 2025-08-11T08:43:08.070467899Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:43:08.767522937Z W0811 08:43:08.767484 3 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ [must-gather-kbswd] POD 2025-08-11T08:43:08.799887714Z W0811 08:43:08.799850 3 warnings.go:70] kubevirt.io/v1 VirtualMachineInstancePresets is now deprecated and will be removed in v2. [must-gather-kbswd] POD 2025-08-11T08:43:09.234766880Z W0811 08:43:09.234721 3 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice [must-gather-kbswd] POD 2025-08-11T08:43:11.641119407Z W0811 08:43:11.641057 3 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ [must-gather-kbswd] POD 2025-08-11T08:43:11.672191080Z W0811 08:43:11.672160 3 warnings.go:70] kubevirt.io/v1 VirtualMachineInstancePresets is now deprecated and will be removed in v2. [must-gather-kbswd] POD 2025-08-11T08:43:12.226042439Z W0811 08:43:12.226001 3 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice [must-gather-kbswd] POD 2025-08-11T08:43:13.080441308Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:43:18.090110121Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:43:22.613667864Z Get "https://172.30.0.1:443/apis/velero.io/v1/namespaces/openshift-adp/downloadrequests/django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f-163eace0-7021-4273-863f-795bee888893": context deadline exceeded [must-gather-kbswd] POD 2025-08-11T08:43:23.099590072Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:43:28.108906993Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:43:32.614885197Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-kbswd] POD 2025-08-11T08:43:33.119679257Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:43:38.130045054Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:43:42.616514523Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-kbswd] POD 2025-08-11T08:43:43.139184723Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:43:48.157586816Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:43:52.618567915Z Get "https://172.30.0.1:443/apis/velero.io/v1/namespaces/openshift-adp/downloadrequests/mysql-9582604b-7688-11f0-aa2b-0a580a83369f-28f779fc-3605-4134-8981-df65f13089e7": context deadline exceeded [must-gather-kbswd] POD 2025-08-11T08:43:53.167977202Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:43:58.177362397Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:44:02.620865250Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-kbswd] POD 2025-08-11T08:44:03.186864071Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:44:08.196781533Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:44:12.622518629Z download request download url timeout, check velero server logs for errors. 
backup storage location may not be available [must-gather-kbswd] POD 2025-08-11T08:44:13.209597798Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:44:18.219720339Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:44:22.624627136Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-kbswd] POD 2025-08-11T08:44:23.231861856Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:44:28.241601180Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:44:32.626151410Z Get "https://172.30.0.1:443/apis/velero.io/v1/namespaces/openshift-adp/downloadrequests/todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f-e40712ba-f333-4f78-9fa0-551c15400ed8": context deadline exceeded [must-gather-kbswd] POD 2025-08-11T08:44:33.250868470Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:44:38.260412267Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:44:42.634358353Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-kbswd] POD 2025-08-11T08:44:43.269982584Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:44:48.279531521Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:44:52.636237473Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-kbswd] POD 2025-08-11T08:44:53.288790144Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:44:58.299066496Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:45:02.638703529Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-kbswd] POD 2025-08-11T08:45:03.308725730Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:45:08.318323449Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:45:12.639780622Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-kbswd] POD 2025-08-11T08:45:13.328040529Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:45:18.337256661Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:45:22.641579935Z Get "https://172.30.0.1:443/apis/velero.io/v1/namespaces/openshift-adp/downloadrequests/mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f-3a8dd558-b07b-4ae8-a2a2-93cb9cdb607a": context deadline exceeded [must-gather-kbswd] POD 2025-08-11T08:45:23.346861117Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:45:28.357250559Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:45:32.643645936Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-kbswd] POD 2025-08-11T08:45:33.366816884Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:45:38.376416336Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:45:42.644903626Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-kbswd] POD 2025-08-11T08:45:43.385599562Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:45:48.396293697Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:45:52.646302306Z download request download url timeout, check velero server logs for errors. 
backup storage location may not be available [must-gather-kbswd] POD 2025-08-11T08:45:53.406138956Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:45:58.416992414Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:46:02.648475680Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-kbswd] POD 2025-08-11T08:46:03.427133698Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:46:08.437841234Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:46:12.650658314Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-kbswd] POD 2025-08-11T08:46:13.448036508Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:46:18.457881106Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:46:22.652128535Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-kbswd] POD 2025-08-11T08:46:23.468110850Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:46:28.480191851Z volume usage percentage 0 [must-gather-kbswd] POD 2025-08-11T08:46:32.653697518Z download request download url timeout, check velero server logs for errors. backup storage location may not be available
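The recurring "download request download url timeout ... backup storage location may not be available" entries above mean each DownloadRequest the gather created against the API at 172.30.0.1:443 timed out before Velero published a signed download URL, which usually points at an unreachable BackupStorageLocation rather than at the gather itself. One way to follow up on a live cluster, assuming the usual OADP layout (the openshift-adp namespace is confirmed by the request URLs above; the velero deployment name is the OADP default and is assumed here):

  $ oc get backupstoragelocation -n openshift-adp
  $ oc logs deployment/velero -n openshift-adp | grep -i error

The first command should report the BSL phase (Available/Unavailable); the second surfaces the object-storage errors the log tells you to check for.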
[must-gather-kbswd] OUT 2025-08-11T08:46:34.062761453Z waiting for gather to complete [must-gather-kbswd] OUT 2025-08-11T08:46:34.262332736Z downloading gather output [must-gather-kbswd] OUT 2025-08-11T08:46:34.843091229Z receiving incremental file list [must-gather-kbswd] OUT 2025-08-11T08:46:34.852367249Z ./ [must-gather-kbswd] OUT 2025-08-11T08:46:34.852415939Z version [must-gather-kbswd] OUT 2025-08-11T08:46:34.868263776Z clusters/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.868282177Z clusters/cbda4714/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.868318127Z clusters/cbda4714/event-filter.html [must-gather-kbswd] OUT 2025-08-11T08:46:34.87002809Z clusters/cbda4714/oadp-must-gather-summary.md [must-gather-kbswd] OUT 2025-08-11T08:46:34.870199714Z clusters/cbda4714/timestamp [must-gather-kbswd] OUT 2025-08-11T08:46:34.870245195Z clusters/cbda4714/cluster-scoped-resources/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.870254205Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.870259455Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.870310496Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/backuprepositories.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.870437428Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/backups.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.870665483Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/backupstoragelocations.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.870790165Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/cloudstorages.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.870904557Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/clusterserviceversions.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.872893696Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/datadownloads.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.873082199Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/dataprotectionapplications.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.87364651Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/datauploads.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.873779143Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/deletebackuprequests.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.873893935Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/downloadrequests.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.874011887Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/podvolumebackups.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.87414199Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/podvolumerestores.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.874279753Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/restores.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.874448196Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/schedules.yaml [must-gather-kbswd] OUT
2025-08-11T08:46:34.87466663Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/serverstatusrequests.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.874774302Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/subscriptions.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.875458815Z clusters/cbda4714/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/volumesnapshotlocations.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.875569018Z clusters/cbda4714/cluster-scoped-resources/config.openshift.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.875636939Z clusters/cbda4714/cluster-scoped-resources/config.openshift.io/clusterversions.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.875745111Z clusters/cbda4714/cluster-scoped-resources/migrations.kubevirt.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.875801812Z clusters/cbda4714/cluster-scoped-resources/migrations.kubevirt.io/migrationpolicies.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.875861813Z clusters/cbda4714/cluster-scoped-resources/snapshot.storage.k8s.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.875872953Z clusters/cbda4714/cluster-scoped-resources/snapshot.storage.k8s.io/volumesnapshotclasses/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.875911734Z clusters/cbda4714/cluster-scoped-resources/snapshot.storage.k8s.io/volumesnapshotclasses/volumesnapshotclasses.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.875992276Z clusters/cbda4714/cluster-scoped-resources/storage.k8s.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.876000746Z clusters/cbda4714/cluster-scoped-resources/storage.k8s.io/csidrivers/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.876043177Z clusters/cbda4714/cluster-scoped-resources/storage.k8s.io/csidrivers/csidrivers.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.876110068Z clusters/cbda4714/cluster-scoped-resources/storage.k8s.io/storageclasses/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.876173219Z clusters/cbda4714/cluster-scoped-resources/storage.k8s.io/storageclasses/storageclasses.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.876252941Z clusters/cbda4714/namespaces/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.876262921Z clusters/cbda4714/namespaces/openshift-adp/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.876308582Z clusters/cbda4714/namespaces/openshift-adp/openshift-adp.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.876434054Z clusters/cbda4714/namespaces/openshift-adp/apps.openshift.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.876497065Z clusters/cbda4714/namespaces/openshift-adp/apps.openshift.io/deploymentconfigs.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.876654879Z clusters/cbda4714/namespaces/openshift-adp/apps/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.876792801Z clusters/cbda4714/namespaces/openshift-adp/apps/daemonsets.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.876913574Z clusters/cbda4714/namespaces/openshift-adp/apps/deployments.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.877128908Z clusters/cbda4714/namespaces/openshift-adp/apps/replicasets.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.877308591Z clusters/cbda4714/namespaces/openshift-adp/apps/statefulsets.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.877386173Z clusters/cbda4714/namespaces/openshift-adp/autoscaling/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.877452054Z clusters/cbda4714/namespaces/openshift-adp/autoscaling/horizontalpodautoscalers.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.877600937Z 
clusters/cbda4714/namespaces/openshift-adp/batch/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.877678638Z clusters/cbda4714/namespaces/openshift-adp/batch/cronjobs.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.877805871Z clusters/cbda4714/namespaces/openshift-adp/batch/jobs.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.877856262Z clusters/cbda4714/namespaces/openshift-adp/build.openshift.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.877945924Z clusters/cbda4714/namespaces/openshift-adp/build.openshift.io/buildconfigs.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.878071866Z clusters/cbda4714/namespaces/openshift-adp/build.openshift.io/builds.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.878135027Z clusters/cbda4714/namespaces/openshift-adp/cdi.kubevirt.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.878219839Z clusters/cbda4714/namespaces/openshift-adp/cdi.kubevirt.io/dataimportcrons.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.878390692Z clusters/cbda4714/namespaces/openshift-adp/cdi.kubevirt.io/datasources.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.878507824Z clusters/cbda4714/namespaces/openshift-adp/cdi.kubevirt.io/datavolumes.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.878599826Z clusters/cbda4714/namespaces/openshift-adp/clone.kubevirt.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.878672568Z clusters/cbda4714/namespaces/openshift-adp/clone.kubevirt.io/virtualmachineclones.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.87879109Z clusters/cbda4714/namespaces/openshift-adp/core/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.878869391Z clusters/cbda4714/namespaces/openshift-adp/core/configmaps.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.879033155Z clusters/cbda4714/namespaces/openshift-adp/core/endpoints.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.879166507Z clusters/cbda4714/namespaces/openshift-adp/core/events.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.881516233Z clusters/cbda4714/namespaces/openshift-adp/core/persistentvolumeclaims.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.881662276Z clusters/cbda4714/namespaces/openshift-adp/core/pods.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.881852779Z clusters/cbda4714/namespaces/openshift-adp/core/replicationcontrollers.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.882015842Z clusters/cbda4714/namespaces/openshift-adp/core/secrets.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.882230556Z clusters/cbda4714/namespaces/openshift-adp/core/services.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.882303298Z clusters/cbda4714/namespaces/openshift-adp/discovery.k8s.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.88237777Z clusters/cbda4714/namespaces/openshift-adp/discovery.k8s.io/endpointslices.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.882460411Z clusters/cbda4714/namespaces/openshift-adp/export.kubevirt.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.882548083Z clusters/cbda4714/namespaces/openshift-adp/export.kubevirt.io/virtualmachineexports.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.882629814Z clusters/cbda4714/namespaces/openshift-adp/hco.kubevirt.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.882698876Z clusters/cbda4714/namespaces/openshift-adp/hco.kubevirt.io/hyperconvergeds.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.882761927Z clusters/cbda4714/namespaces/openshift-adp/image.openshift.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.882839768Z clusters/cbda4714/namespaces/openshift-adp/image.openshift.io/imagestreams.yaml [must-gather-kbswd] OUT 
2025-08-11T08:46:34.88290463Z clusters/cbda4714/namespaces/openshift-adp/instancetype.kubevirt.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.882978041Z clusters/cbda4714/namespaces/openshift-adp/instancetype.kubevirt.io/virtualmachineinstancetypes.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.883111654Z clusters/cbda4714/namespaces/openshift-adp/instancetype.kubevirt.io/virtualmachinepreferences.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.883181515Z clusters/cbda4714/namespaces/openshift-adp/k8s.ovn.org/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.883253116Z clusters/cbda4714/namespaces/openshift-adp/k8s.ovn.org/egressfirewalls.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.883374439Z clusters/cbda4714/namespaces/openshift-adp/k8s.ovn.org/egressqoses.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.88344154Z clusters/cbda4714/namespaces/openshift-adp/kubevirt.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.883508021Z clusters/cbda4714/namespaces/openshift-adp/kubevirt.io/kubevirts.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.883654564Z clusters/cbda4714/namespaces/openshift-adp/kubevirt.io/virtualmachineinstancemigrations.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.883778466Z clusters/cbda4714/namespaces/openshift-adp/kubevirt.io/virtualmachineinstancepresets.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.883897579Z clusters/cbda4714/namespaces/openshift-adp/kubevirt.io/virtualmachineinstancereplicasets.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.884072952Z clusters/cbda4714/namespaces/openshift-adp/kubevirt.io/virtualmachineinstances.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.884201905Z clusters/cbda4714/namespaces/openshift-adp/kubevirt.io/virtualmachines.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.884273596Z clusters/cbda4714/namespaces/openshift-adp/monitoring.coreos.com/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.884348168Z clusters/cbda4714/namespaces/openshift-adp/monitoring.coreos.com/servicemonitors.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.884421459Z clusters/cbda4714/namespaces/openshift-adp/networking.k8s.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.88449189Z clusters/cbda4714/namespaces/openshift-adp/networking.k8s.io/networkpolicies.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.884604552Z clusters/cbda4714/namespaces/openshift-adp/operators.coreos.com/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.884615513Z clusters/cbda4714/namespaces/openshift-adp/operators.coreos.com/clusterserviceversions/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.884677404Z clusters/cbda4714/namespaces/openshift-adp/operators.coreos.com/clusterserviceversions/clusterserviceversions.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.884960859Z clusters/cbda4714/namespaces/openshift-adp/operators.coreos.com/subscriptions/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.885027571Z clusters/cbda4714/namespaces/openshift-adp/operators.coreos.com/subscriptions/subscriptions.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.885113052Z clusters/cbda4714/namespaces/openshift-adp/pods/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.885123492Z clusters/cbda4714/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-6dtbw/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.885186994Z clusters/cbda4714/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-6dtbw/openshift-adp-controller-manager-5c466f74-6dtbw.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.885325436Z 
clusters/cbda4714/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-6dtbw/manager/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.885335977Z clusters/cbda4714/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-6dtbw/manager/manager/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.885341297Z clusters/cbda4714/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-6dtbw/manager/manager/logs/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.885410258Z clusters/cbda4714/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-6dtbw/manager/manager/logs/current.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.885954479Z clusters/cbda4714/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-6dtbw/manager/manager/logs/previous.insecure.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.886075221Z clusters/cbda4714/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-6dtbw/manager/manager/logs/previous.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.886118282Z clusters/cbda4714/namespaces/openshift-adp/policy/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.886189793Z clusters/cbda4714/namespaces/openshift-adp/policy/poddisruptionbudgets.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.886250654Z clusters/cbda4714/namespaces/openshift-adp/pool.kubevirt.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.886318806Z clusters/cbda4714/namespaces/openshift-adp/pool.kubevirt.io/virtualmachinepools.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.886382907Z clusters/cbda4714/namespaces/openshift-adp/route.openshift.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.886458258Z clusters/cbda4714/namespaces/openshift-adp/route.openshift.io/routes.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.88654247Z clusters/cbda4714/namespaces/openshift-adp/snapshot.kubevirt.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.886628942Z clusters/cbda4714/namespaces/openshift-adp/snapshot.kubevirt.io/virtualmachinerestores.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.886757564Z clusters/cbda4714/namespaces/openshift-adp/snapshot.kubevirt.io/virtualmachinesnapshotcontents.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.886875976Z clusters/cbda4714/namespaces/openshift-adp/snapshot.kubevirt.io/virtualmachinesnapshots.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.886940858Z clusters/cbda4714/namespaces/openshift-adp/velero.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.886948788Z clusters/cbda4714/namespaces/openshift-adp/velero.io/backuprepositories/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.887014779Z clusters/cbda4714/namespaces/openshift-adp/velero.io/backuprepositories/backuprepositories.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.887113531Z clusters/cbda4714/namespaces/openshift-adp/velero.io/backups/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.887184632Z clusters/cbda4714/namespaces/openshift-adp/velero.io/backups/backups.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.887352706Z clusters/cbda4714/namespaces/openshift-adp/velero.io/backups/describe-django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.887487128Z clusters/cbda4714/namespaces/openshift-adp/velero.io/backups/describe-empty-project-e2e-ee29ffc3-768e-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.887690432Z clusters/cbda4714/namespaces/openshift-adp/velero.io/backups/describe-mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.887845395Z 
clusters/cbda4714/namespaces/openshift-adp/velero.io/backups/describe-mysql-9582604b-7688-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.887973538Z clusters/cbda4714/namespaces/openshift-adp/velero.io/backups/describe-mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.88810859Z clusters/cbda4714/namespaces/openshift-adp/velero.io/backups/describe-mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.888237953Z clusters/cbda4714/namespaces/openshift-adp/velero.io/backups/describe-todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.888372485Z clusters/cbda4714/namespaces/openshift-adp/velero.io/backups/describe-todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.888441517Z clusters/cbda4714/namespaces/openshift-adp/velero.io/datadownloads/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.888545679Z clusters/cbda4714/namespaces/openshift-adp/velero.io/datadownloads/datadownloads.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.888713382Z clusters/cbda4714/namespaces/openshift-adp/velero.io/datauploads/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.888774263Z clusters/cbda4714/namespaces/openshift-adp/velero.io/datauploads/datauploads.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.888854795Z clusters/cbda4714/namespaces/openshift-adp/velero.io/downloadrequests/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.888912526Z clusters/cbda4714/namespaces/openshift-adp/velero.io/downloadrequests/downloadrequests.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.889050079Z clusters/cbda4714/namespaces/openshift-adp/velero.io/podvolumebackups/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.88911046Z clusters/cbda4714/namespaces/openshift-adp/velero.io/podvolumebackups/podvolumebackups.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.889237572Z clusters/cbda4714/namespaces/openshift-adp/velero.io/podvolumerestores/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.889296153Z clusters/cbda4714/namespaces/openshift-adp/velero.io/podvolumerestores/podvolumerestores.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.889391565Z clusters/cbda4714/namespaces/openshift-adp/velero.io/restores/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.889446136Z clusters/cbda4714/namespaces/openshift-adp/velero.io/restores/describe-django-persistent-627d891f-768e-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.889595379Z clusters/cbda4714/namespaces/openshift-adp/velero.io/restores/describe-empty-project-e2e-ee29ffc3-768e-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.889756412Z clusters/cbda4714/namespaces/openshift-adp/velero.io/restores/describe-mysql-7dbbd427-768d-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.889866014Z clusters/cbda4714/namespaces/openshift-adp/velero.io/restores/describe-mysql-9582604b-7688-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.889975926Z clusters/cbda4714/namespaces/openshift-adp/velero.io/restores/describe-mysql-f0a6b867-768d-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.890084479Z clusters/cbda4714/namespaces/openshift-adp/velero.io/restores/describe-mysql-hooks-e2e-07592143-768d-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.890192521Z clusters/cbda4714/namespaces/openshift-adp/velero.io/restores/describe-ocp-datavolume-ad7ddbf7-7686-11f0-8ee3-0a580a83369f.txt 
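Alongside the raw list dumps (backups.yaml, restores.yaml, datauploads.yaml, and so on), the gather stores one describe-<name>.txt per backup and restore CR, which is typically the quickest place to read a failure reason. To skim the phase of every gathered backup once the archive is downloaded, a plain grep over the list dump is one option, assuming the archive root as working directory:

  $ grep -h 'phase:' clusters/cbda4714/namespaces/openshift-adp/velero.io/backups/backups.yaml

cbda4714 is this run's cluster directory as shown in the listing; substitute as needed for other gathers.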
[must-gather-kbswd] OUT 2025-08-11T08:46:34.890299423Z clusters/cbda4714/namespaces/openshift-adp/velero.io/restores/describe-ocp-kubevirt-5adfc177-7686-11f0-8ee3-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.890405485Z clusters/cbda4714/namespaces/openshift-adp/velero.io/restores/describe-ocp-kubevirt-d3c2877a-7685-11f0-8ee3-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.890511417Z clusters/cbda4714/namespaces/openshift-adp/velero.io/restores/describe-ocp-kubevirt-f5811afb-7686-11f0-8ee3-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.890648699Z clusters/cbda4714/namespaces/openshift-adp/velero.io/restores/describe-todolist-backup-323e437f-7688-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.890758792Z clusters/cbda4714/namespaces/openshift-adp/velero.io/restores/describe-todolist-backup-44acdcad-7688-11f0-aa2b-0a580a83369f.txt [must-gather-kbswd] OUT 2025-08-11T08:46:34.890863544Z clusters/cbda4714/namespaces/openshift-adp/velero.io/restores/restores.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.890975996Z clusters/cbda4714/namespaces/openshift-cnv/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.891024477Z clusters/cbda4714/namespaces/openshift-cnv/openshift-cnv.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.891096808Z clusters/cbda4714/namespaces/openshift-cnv/apps.openshift.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.891156959Z clusters/cbda4714/namespaces/openshift-cnv/apps.openshift.io/deploymentconfigs.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.89121042Z clusters/cbda4714/namespaces/openshift-cnv/apps/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.891265251Z clusters/cbda4714/namespaces/openshift-cnv/apps/daemonsets.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.891448425Z clusters/cbda4714/namespaces/openshift-cnv/apps/deployments.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.892312822Z clusters/cbda4714/namespaces/openshift-cnv/apps/replicasets.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.893178608Z clusters/cbda4714/namespaces/openshift-cnv/apps/statefulsets.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.893223129Z clusters/cbda4714/namespaces/openshift-cnv/autoscaling/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.89327344Z clusters/cbda4714/namespaces/openshift-cnv/autoscaling/horizontalpodautoscalers.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.893327601Z clusters/cbda4714/namespaces/openshift-cnv/batch/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.893383312Z clusters/cbda4714/namespaces/openshift-cnv/batch/cronjobs.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.893536575Z clusters/cbda4714/namespaces/openshift-cnv/batch/jobs.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.893596356Z clusters/cbda4714/namespaces/openshift-cnv/build.openshift.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.893654878Z clusters/cbda4714/namespaces/openshift-cnv/build.openshift.io/buildconfigs.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.8937531Z clusters/cbda4714/namespaces/openshift-cnv/build.openshift.io/builds.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.89379928Z clusters/cbda4714/namespaces/openshift-cnv/cdi.kubevirt.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.893855521Z clusters/cbda4714/namespaces/openshift-cnv/cdi.kubevirt.io/dataimportcrons.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.893957823Z clusters/cbda4714/namespaces/openshift-cnv/cdi.kubevirt.io/datasources.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.894054636Z 
clusters/cbda4714/namespaces/openshift-cnv/cdi.kubevirt.io/datavolumes.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.894108297Z clusters/cbda4714/namespaces/openshift-cnv/clone.kubevirt.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.894159187Z clusters/cbda4714/namespaces/openshift-cnv/clone.kubevirt.io/virtualmachineclones.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.894204858Z clusters/cbda4714/namespaces/openshift-cnv/core/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.894262629Z clusters/cbda4714/namespaces/openshift-cnv/core/configmaps.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.896046264Z clusters/cbda4714/namespaces/openshift-cnv/core/endpoints.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.896276048Z clusters/cbda4714/namespaces/openshift-cnv/core/events.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.897697946Z clusters/cbda4714/namespaces/openshift-cnv/core/persistentvolumeclaims.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.897804718Z clusters/cbda4714/namespaces/openshift-cnv/core/pods.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.899200775Z clusters/cbda4714/namespaces/openshift-cnv/core/replicationcontrollers.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.899330007Z clusters/cbda4714/namespaces/openshift-cnv/core/secrets.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.899832277Z clusters/cbda4714/namespaces/openshift-cnv/core/services.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.8999843Z clusters/cbda4714/namespaces/openshift-cnv/discovery.k8s.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.900036581Z clusters/cbda4714/namespaces/openshift-cnv/discovery.k8s.io/endpointslices.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.900235235Z clusters/cbda4714/namespaces/openshift-cnv/export.kubevirt.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.900291716Z clusters/cbda4714/namespaces/openshift-cnv/export.kubevirt.io/virtualmachineexports.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.900346017Z clusters/cbda4714/namespaces/openshift-cnv/hco.kubevirt.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.900393318Z clusters/cbda4714/namespaces/openshift-cnv/hco.kubevirt.io/hyperconvergeds.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.90051099Z clusters/cbda4714/namespaces/openshift-cnv/image.openshift.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.900593092Z clusters/cbda4714/namespaces/openshift-cnv/image.openshift.io/imagestreams.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.900646243Z clusters/cbda4714/namespaces/openshift-cnv/instancetype.kubevirt.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.900699514Z clusters/cbda4714/namespaces/openshift-cnv/instancetype.kubevirt.io/virtualmachineinstancetypes.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.900793316Z clusters/cbda4714/namespaces/openshift-cnv/instancetype.kubevirt.io/virtualmachinepreferences.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.900843097Z clusters/cbda4714/namespaces/openshift-cnv/k8s.ovn.org/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.900899208Z clusters/cbda4714/namespaces/openshift-cnv/k8s.ovn.org/egressfirewalls.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.90099351Z clusters/cbda4714/namespaces/openshift-cnv/k8s.ovn.org/egressqoses.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.9010377Z clusters/cbda4714/namespaces/openshift-cnv/kubevirt.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.901094562Z clusters/cbda4714/namespaces/openshift-cnv/kubevirt.io/kubevirts.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.901251295Z 
clusters/cbda4714/namespaces/openshift-cnv/kubevirt.io/virtualmachineinstancemigrations.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.901345517Z clusters/cbda4714/namespaces/openshift-cnv/kubevirt.io/virtualmachineinstancepresets.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.901441868Z clusters/cbda4714/namespaces/openshift-cnv/kubevirt.io/virtualmachineinstancereplicasets.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.901622292Z clusters/cbda4714/namespaces/openshift-cnv/kubevirt.io/virtualmachineinstances.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.901730794Z clusters/cbda4714/namespaces/openshift-cnv/kubevirt.io/virtualmachines.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.901778215Z clusters/cbda4714/namespaces/openshift-cnv/monitoring.coreos.com/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.901837236Z clusters/cbda4714/namespaces/openshift-cnv/monitoring.coreos.com/servicemonitors.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.901915748Z clusters/cbda4714/namespaces/openshift-cnv/networking.k8s.io/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.901968758Z clusters/cbda4714/namespaces/openshift-cnv/networking.k8s.io/networkpolicies.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.90201647Z clusters/cbda4714/namespaces/openshift-cnv/operators.coreos.com/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.90202477Z clusters/cbda4714/namespaces/openshift-cnv/operators.coreos.com/clusterserviceversions/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.902076491Z clusters/cbda4714/namespaces/openshift-cnv/operators.coreos.com/clusterserviceversions/clusterserviceversions.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.902735144Z clusters/cbda4714/namespaces/openshift-cnv/operators.coreos.com/subscriptions/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.902790065Z clusters/cbda4714/namespaces/openshift-cnv/operators.coreos.com/subscriptions/subscriptions.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.902862496Z clusters/cbda4714/namespaces/openshift-cnv/pods/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.902873006Z clusters/cbda4714/namespaces/openshift-cnv/pods/aaq-operator-6f6547cd7b-x759k/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.902923437Z clusters/cbda4714/namespaces/openshift-cnv/pods/aaq-operator-6f6547cd7b-x759k/aaq-operator-6f6547cd7b-x759k.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.903023529Z clusters/cbda4714/namespaces/openshift-cnv/pods/aaq-operator-6f6547cd7b-x759k/aaq-operator/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.903033879Z clusters/cbda4714/namespaces/openshift-cnv/pods/aaq-operator-6f6547cd7b-x759k/aaq-operator/aaq-operator/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.903039579Z clusters/cbda4714/namespaces/openshift-cnv/pods/aaq-operator-6f6547cd7b-x759k/aaq-operator/aaq-operator/logs/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.90309878Z clusters/cbda4714/namespaces/openshift-cnv/pods/aaq-operator-6f6547cd7b-x759k/aaq-operator/aaq-operator/logs/current.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.903250983Z clusters/cbda4714/namespaces/openshift-cnv/pods/aaq-operator-6f6547cd7b-x759k/aaq-operator/aaq-operator/logs/previous.insecure.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.903348435Z clusters/cbda4714/namespaces/openshift-cnv/pods/aaq-operator-6f6547cd7b-x759k/aaq-operator/aaq-operator/logs/previous.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.903381546Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-4zjn9/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.903434627Z 
clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-4zjn9/bridge-marker-4zjn9.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.903549659Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-4zjn9/bridge-marker/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.90356604Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-4zjn9/bridge-marker/bridge-marker/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.90357234Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-4zjn9/bridge-marker/bridge-marker/logs/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.903644331Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-4zjn9/bridge-marker/bridge-marker/logs/current.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.903743053Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-4zjn9/bridge-marker/bridge-marker/logs/previous.insecure.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.903835545Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-4zjn9/bridge-marker/bridge-marker/logs/previous.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.903872486Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-dmdgf/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.903933307Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-dmdgf/bridge-marker-dmdgf.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.904000218Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-dmdgf/bridge-marker/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.904007678Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-dmdgf/bridge-marker/bridge-marker/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.904011288Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-dmdgf/bridge-marker/bridge-marker/logs/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.904064019Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-dmdgf/bridge-marker/bridge-marker/logs/current.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.904165991Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-dmdgf/bridge-marker/bridge-marker/logs/previous.insecure.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.904261073Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-dmdgf/bridge-marker/bridge-marker/logs/previous.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.904297004Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-q42dw/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.904351995Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-q42dw/bridge-marker-q42dw.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.904420226Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-q42dw/bridge-marker/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.904427706Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-q42dw/bridge-marker/bridge-marker/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.904431176Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-q42dw/bridge-marker/bridge-marker/logs/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.904483237Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-q42dw/bridge-marker/bridge-marker/logs/current.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.90461557Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-q42dw/bridge-marker/bridge-marker/logs/previous.insecure.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.904715422Z clusters/cbda4714/namespaces/openshift-cnv/pods/bridge-marker-q42dw/bridge-marker/bridge-marker/logs/previous.log [must-gather-kbswd] OUT 
2025-08-11T08:46:34.904761023Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-apiserver-864b699b5f-s7t58/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.904810964Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-apiserver-864b699b5f-s7t58/cdi-apiserver-864b699b5f-s7t58.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.904892445Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-apiserver-864b699b5f-s7t58/cdi-apiserver/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.904902665Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-apiserver-864b699b5f-s7t58/cdi-apiserver/cdi-apiserver/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.904913696Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-apiserver-864b699b5f-s7t58/cdi-apiserver/cdi-apiserver/logs/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.904960007Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-apiserver-864b699b5f-s7t58/cdi-apiserver/cdi-apiserver/logs/current.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.905080469Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-apiserver-864b699b5f-s7t58/cdi-apiserver/cdi-apiserver/logs/previous.insecure.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.905173551Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-apiserver-864b699b5f-s7t58/cdi-apiserver/cdi-apiserver/logs/previous.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.905222161Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-deployment-7bb96d5c-qhjck/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.905279463Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-deployment-7bb96d5c-qhjck/cdi-deployment-7bb96d5c-qhjck.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.905359584Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-deployment-7bb96d5c-qhjck/cdi-deployment/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.905367144Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-deployment-7bb96d5c-qhjck/cdi-deployment/cdi-deployment/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.905370675Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-deployment-7bb96d5c-qhjck/cdi-deployment/cdi-deployment/logs/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.905427545Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-deployment-7bb96d5c-qhjck/cdi-deployment/cdi-deployment/logs/current.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.920378655Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-deployment-7bb96d5c-qhjck/cdi-deployment/cdi-deployment/logs/previous.insecure.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.920481077Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-deployment-7bb96d5c-qhjck/cdi-deployment/cdi-deployment/logs/previous.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.920517298Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-operator-5c9dc456fb-55nnp/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.92061663Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-operator-5c9dc456fb-55nnp/cdi-operator-5c9dc456fb-55nnp.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.920727452Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-operator-5c9dc456fb-55nnp/cdi-operator/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.920738372Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-operator-5c9dc456fb-55nnp/cdi-operator/cdi-operator/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.920743902Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-operator-5c9dc456fb-55nnp/cdi-operator/cdi-operator/logs/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.920797153Z 
clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-operator-5c9dc456fb-55nnp/cdi-operator/cdi-operator/logs/current.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.92164433Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-operator-5c9dc456fb-55nnp/cdi-operator/cdi-operator/logs/previous.insecure.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.921741361Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-operator-5c9dc456fb-55nnp/cdi-operator/cdi-operator/logs/previous.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.921778712Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-uploadproxy-86f64c7f75-bksn4/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.921839163Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-uploadproxy-86f64c7f75-bksn4/cdi-uploadproxy-86f64c7f75-bksn4.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.921917195Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-uploadproxy-86f64c7f75-bksn4/cdi-uploadproxy/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.921924395Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-uploadproxy-86f64c7f75-bksn4/cdi-uploadproxy/cdi-uploadproxy/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.921928895Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-uploadproxy-86f64c7f75-bksn4/cdi-uploadproxy/cdi-uploadproxy/logs/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.921990466Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-uploadproxy-86f64c7f75-bksn4/cdi-uploadproxy/cdi-uploadproxy/logs/current.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.922101968Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-uploadproxy-86f64c7f75-bksn4/cdi-uploadproxy/cdi-uploadproxy/logs/previous.insecure.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.92220028Z clusters/cbda4714/namespaces/openshift-cnv/pods/cdi-uploadproxy-86f64c7f75-bksn4/cdi-uploadproxy/cdi-uploadproxy/logs/previous.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.922236031Z clusters/cbda4714/namespaces/openshift-cnv/pods/cluster-network-addons-operator-5ccc4d978-fpk67/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.922300752Z clusters/cbda4714/namespaces/openshift-cnv/pods/cluster-network-addons-operator-5ccc4d978-fpk67/cluster-network-addons-operator-5ccc4d978-fpk67.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.922450925Z clusters/cbda4714/namespaces/openshift-cnv/pods/cluster-network-addons-operator-5ccc4d978-fpk67/cluster-network-addons-operator/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.922460785Z clusters/cbda4714/namespaces/openshift-cnv/pods/cluster-network-addons-operator-5ccc4d978-fpk67/cluster-network-addons-operator/cluster-network-addons-operator/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.922464915Z clusters/cbda4714/namespaces/openshift-cnv/pods/cluster-network-addons-operator-5ccc4d978-fpk67/cluster-network-addons-operator/cluster-network-addons-operator/logs/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.922545657Z clusters/cbda4714/namespaces/openshift-cnv/pods/cluster-network-addons-operator-5ccc4d978-fpk67/cluster-network-addons-operator/cluster-network-addons-operator/logs/current.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.924285811Z clusters/cbda4714/namespaces/openshift-cnv/pods/cluster-network-addons-operator-5ccc4d978-fpk67/cluster-network-addons-operator/cluster-network-addons-operator/logs/previous.insecure.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.924387572Z clusters/cbda4714/namespaces/openshift-cnv/pods/cluster-network-addons-operator-5ccc4d978-fpk67/cluster-network-addons-operator/cluster-network-addons-operator/logs/previous.log 
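As the entries above show, container logs land in the gather under pods/<pod>/<container>/<container>/logs/ as current.log, previous.log, and previous.insecure.log (the doubled <container> directory is simply how this gather lays files out). To sweep every current container log in the archive for errors, something along these lines works from the archive root; the find/grep combination is just one convenient option:

  $ find clusters -path '*/pods/*/logs/current.log' -exec grep -il error {} +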
[must-gather-kbswd] OUT 2025-08-11T08:46:34.924428233Z clusters/cbda4714/namespaces/openshift-cnv/pods/cluster-network-addons-operator-5ccc4d978-fpk67/kube-rbac-proxy/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.924439113Z clusters/cbda4714/namespaces/openshift-cnv/pods/cluster-network-addons-operator-5ccc4d978-fpk67/kube-rbac-proxy/kube-rbac-proxy/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.924445134Z clusters/cbda4714/namespaces/openshift-cnv/pods/cluster-network-addons-operator-5ccc4d978-fpk67/kube-rbac-proxy/kube-rbac-proxy/logs/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.924496705Z clusters/cbda4714/namespaces/openshift-cnv/pods/cluster-network-addons-operator-5ccc4d978-fpk67/kube-rbac-proxy/kube-rbac-proxy/logs/current.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.924651588Z clusters/cbda4714/namespaces/openshift-cnv/pods/cluster-network-addons-operator-5ccc4d978-fpk67/kube-rbac-proxy/kube-rbac-proxy/logs/previous.insecure.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.92475135Z clusters/cbda4714/namespaces/openshift-cnv/pods/cluster-network-addons-operator-5ccc4d978-fpk67/kube-rbac-proxy/kube-rbac-proxy/logs/previous.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.92478388Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-operator-689b8d6f5c-hgxwd/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.924840951Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-operator-689b8d6f5c-hgxwd/hco-operator-689b8d6f5c-hgxwd.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.924943633Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-operator-689b8d6f5c-hgxwd/hyperconverged-cluster-operator/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.924950723Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-operator-689b8d6f5c-hgxwd/hyperconverged-cluster-operator/hyperconverged-cluster-operator/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.924961884Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-operator-689b8d6f5c-hgxwd/hyperconverged-cluster-operator/hyperconverged-cluster-operator/logs/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.925019185Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-operator-689b8d6f5c-hgxwd/hyperconverged-cluster-operator/hyperconverged-cluster-operator/logs/current.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.931367098Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-operator-689b8d6f5c-hgxwd/hyperconverged-cluster-operator/hyperconverged-cluster-operator/logs/previous.insecure.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.93146883Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-operator-689b8d6f5c-hgxwd/hyperconverged-cluster-operator/hyperconverged-cluster-operator/logs/previous.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.93148603Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-webhook-fc758b8b6-6hr4z/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.931581752Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-webhook-fc758b8b6-6hr4z/hco-webhook-fc758b8b6-6hr4z.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.931682504Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-webhook-fc758b8b6-6hr4z/hyperconverged-cluster-webhook/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.931695754Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-webhook-fc758b8b6-6hr4z/hyperconverged-cluster-webhook/hyperconverged-cluster-webhook/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.931701014Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-webhook-fc758b8b6-6hr4z/hyperconverged-cluster-webhook/hyperconverged-cluster-webhook/logs/ [must-gather-kbswd] OUT 
2025-08-11T08:46:34.931751885Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-webhook-fc758b8b6-6hr4z/hyperconverged-cluster-webhook/hyperconverged-cluster-webhook/logs/current.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.931899288Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-webhook-fc758b8b6-6hr4z/hyperconverged-cluster-webhook/hyperconverged-cluster-webhook/logs/previous.insecure.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.93199853Z clusters/cbda4714/namespaces/openshift-cnv/pods/hco-webhook-fc758b8b6-6hr4z/hyperconverged-cluster-webhook/hyperconverged-cluster-webhook/logs/previous.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.932036131Z clusters/cbda4714/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-56fdf65678-9kgrk/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.932091412Z clusters/cbda4714/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-56fdf65678-9kgrk/hostpath-provisioner-operator-56fdf65678-9kgrk.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.932195354Z clusters/cbda4714/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-56fdf65678-9kgrk/hostpath-provisioner-operator/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.932203084Z clusters/cbda4714/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-56fdf65678-9kgrk/hostpath-provisioner-operator/hostpath-provisioner-operator/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.932206634Z clusters/cbda4714/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-56fdf65678-9kgrk/hostpath-provisioner-operator/hostpath-provisioner-operator/logs/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.932266965Z clusters/cbda4714/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-56fdf65678-9kgrk/hostpath-provisioner-operator/hostpath-provisioner-operator/logs/current.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.932396348Z clusters/cbda4714/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-56fdf65678-9kgrk/hostpath-provisioner-operator/hostpath-provisioner-operator/logs/previous.insecure.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.932494349Z clusters/cbda4714/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-56fdf65678-9kgrk/hostpath-provisioner-operator/hostpath-provisioner-operator/logs/previous.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.932561571Z clusters/cbda4714/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-6b976499f-gl44k/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.932632312Z clusters/cbda4714/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-6b976499f-gl44k/hyperconverged-cluster-cli-download-6b976499f-gl44k.yaml [must-gather-kbswd] OUT 2025-08-11T08:46:34.932723664Z clusters/cbda4714/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-6b976499f-gl44k/server/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.932731114Z clusters/cbda4714/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-6b976499f-gl44k/server/server/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.932735864Z clusters/cbda4714/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-6b976499f-gl44k/server/server/logs/ [must-gather-kbswd] OUT 2025-08-11T08:46:34.932791815Z clusters/cbda4714/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-6b976499f-gl44k/server/server/logs/current.log [must-gather-kbswd] OUT 2025-08-11T08:46:34.932932388Z clusters/cbda4714/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-6b976499f-gl44k/server/server/logs/previous.insecure.log [must-gather-kbswd] OUT 
2025-08-11T08:46:34.93303431Z clusters/cbda4714/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-6b976499f-gl44k/server/server/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.933070131Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-bndl7/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.933131052Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-bndl7/kube-cni-linux-bridge-plugin-bndl7.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.933205203Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-bndl7/cni-plugins/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.933213074Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-bndl7/cni-plugins/cni-plugins/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.933219114Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-bndl7/cni-plugins/cni-plugins/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.933278325Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-bndl7/cni-plugins/cni-plugins/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.933393667Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-bndl7/cni-plugins/cni-plugins/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.933489689Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-bndl7/cni-plugins/cni-plugins/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.93355005Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-gh65h/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.933604341Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-gh65h/kube-cni-linux-bridge-plugin-gh65h.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.933683432Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-gh65h/cni-plugins/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.933691233Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-gh65h/cni-plugins/cni-plugins/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.933694993Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-gh65h/cni-plugins/cni-plugins/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.933749654Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-gh65h/cni-plugins/cni-plugins/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.933863356Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-gh65h/cni-plugins/cni-plugins/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.933960178Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-gh65h/cni-plugins/cni-plugins/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.934001129Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-pcvsr/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.93405906Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-pcvsr/kube-cni-linux-bridge-plugin-pcvsr.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.934130751Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-pcvsr/cni-plugins/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.934137591Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-pcvsr/cni-plugins/cni-plugins/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.934141081Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-pcvsr/cni-plugins/cni-plugins/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.934202463Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-pcvsr/cni-plugins/cni-plugins/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.934320825Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-pcvsr/cni-plugins/cni-plugins/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.934413917Z clusters/cbda4714/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-pcvsr/cni-plugins/cni-plugins/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.934450987Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-77db56cf5f-zkgn9/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.934538099Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-77db56cf5f-zkgn9/kubemacpool-cert-manager-77db56cf5f-zkgn9.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.934623671Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-77db56cf5f-zkgn9/manager/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.934632031Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-77db56cf5f-zkgn9/manager/manager/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.934637861Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-77db56cf5f-zkgn9/manager/manager/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.934700742Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-77db56cf5f-zkgn9/manager/manager/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.934917436Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-77db56cf5f-zkgn9/manager/manager/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.935015748Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-77db56cf5f-zkgn9/manager/manager/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.935057699Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-579f84888c-26vnt/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.935131971Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-579f84888c-26vnt/kubemacpool-mac-controller-manager-579f84888c-26vnt.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.935224682Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-579f84888c-26vnt/kube-rbac-proxy/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.935234913Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-579f84888c-26vnt/kube-rbac-proxy/kube-rbac-proxy/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.935240353Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-579f84888c-26vnt/kube-rbac-proxy/kube-rbac-proxy/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.935295544Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-579f84888c-26vnt/kube-rbac-proxy/kube-rbac-proxy/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.935414676Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-579f84888c-26vnt/kube-rbac-proxy/kube-rbac-proxy/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.935513828Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-579f84888c-26vnt/kube-rbac-proxy/kube-rbac-proxy/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.935582039Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-579f84888c-26vnt/manager/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.9355984Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-579f84888c-26vnt/manager/manager/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.93560451Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-579f84888c-26vnt/manager/manager/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.93564955Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-579f84888c-26vnt/manager/manager/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.935992447Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-579f84888c-26vnt/manager/manager/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.936061309Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-f4qwl/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.93611928Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-f4qwl/kubevirt-apiserver-proxy-5d78fd98f8-f4qwl.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.936200491Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-f4qwl/kubevirt-apiserver-proxy/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.936208852Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-f4qwl/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.936214552Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-f4qwl/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.936266562Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-f4qwl/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.936387715Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-f4qwl/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.936485147Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-f4qwl/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.936547958Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-h7qn6/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.936601629Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-h7qn6/kubevirt-apiserver-proxy-5d78fd98f8-h7qn6.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.93667447Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-h7qn6/kubevirt-apiserver-proxy/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.936688011Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-h7qn6/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.936693791Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-h7qn6/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.936748042Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-h7qn6/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.936858314Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-h7qn6/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.936956836Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-5d78fd98f8-h7qn6/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.936995587Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-lngk9/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937053028Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-lngk9/kubevirt-console-plugin-7965f85889-lngk9.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937126759Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-lngk9/kubevirt-console-plugin/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937137029Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-lngk9/kubevirt-console-plugin/kubevirt-console-plugin/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.93714283Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-lngk9/kubevirt-console-plugin/kubevirt-console-plugin/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937197551Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-lngk9/kubevirt-console-plugin/kubevirt-console-plugin/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937315973Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-lngk9/kubevirt-console-plugin/kubevirt-console-plugin/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937405324Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-lngk9/kubevirt-console-plugin/kubevirt-console-plugin/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937438775Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-x6wfx/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937498746Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-x6wfx/kubevirt-console-plugin-7965f85889-x6wfx.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937592168Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-x6wfx/kubevirt-console-plugin/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937606588Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-x6wfx/kubevirt-console-plugin/kubevirt-console-plugin/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937612388Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-x6wfx/kubevirt-console-plugin/kubevirt-console-plugin/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937668Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-x6wfx/kubevirt-console-plugin/kubevirt-console-plugin/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937780172Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-x6wfx/kubevirt-console-plugin/kubevirt-console-plugin/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937877024Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-console-plugin-7965f85889-x6wfx/kubevirt-console-plugin/kubevirt-console-plugin/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937904204Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-54c79cd5bb-rvkc4/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.937969506Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-54c79cd5bb-rvkc4/kubevirt-ipam-controller-manager-54c79cd5bb-rvkc4.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.938077038Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-54c79cd5bb-rvkc4/manager/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.938084098Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-54c79cd5bb-rvkc4/manager/manager/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.938087658Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-54c79cd5bb-rvkc4/manager/manager/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.938158119Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-54c79cd5bb-rvkc4/manager/manager/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.938318622Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-54c79cd5bb-rvkc4/manager/manager/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.938417854Z clusters/cbda4714/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-54c79cd5bb-rvkc4/manager/manager/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.938455445Z clusters/cbda4714/namespaces/openshift-cnv/pods/ssp-operator-dd857499f-78hgw/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.938511496Z clusters/cbda4714/namespaces/openshift-cnv/pods/ssp-operator-dd857499f-78hgw/ssp-operator-dd857499f-78hgw.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.938640988Z clusters/cbda4714/namespaces/openshift-cnv/pods/ssp-operator-dd857499f-78hgw/manager/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.938651999Z clusters/cbda4714/namespaces/openshift-cnv/pods/ssp-operator-dd857499f-78hgw/manager/manager/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.938655719Z clusters/cbda4714/namespaces/openshift-cnv/pods/ssp-operator-dd857499f-78hgw/manager/manager/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.93871524Z clusters/cbda4714/namespaces/openshift-cnv/pods/ssp-operator-dd857499f-78hgw/manager/manager/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.940164898Z clusters/cbda4714/namespaces/openshift-cnv/pods/ssp-operator-dd857499f-78hgw/manager/manager/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.940324131Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-pks7f/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.940373702Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-pks7f/virt-api-5cdf848c59-pks7f.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.940496275Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-pks7f/virt-api/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.940509245Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-pks7f/virt-api/virt-api/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.940513755Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-pks7f/virt-api/virt-api/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.940630867Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-pks7f/virt-api/virt-api/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.944075484Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-pks7f/virt-api/virt-api/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.944204456Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-pks7f/virt-api/virt-api/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.944249917Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-zwpqw/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.944330479Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-zwpqw/virt-api-5cdf848c59-zwpqw.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.944435611Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-zwpqw/virt-api/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.944445001Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-zwpqw/virt-api/virt-api/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.944450951Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-zwpqw/virt-api/virt-api/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.944535782Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-zwpqw/virt-api/virt-api/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.948177503Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-zwpqw/virt-api/virt-api/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.948303495Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-api-5cdf848c59-zwpqw/virt-api/virt-api/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.948348916Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-58sn9/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.948441258Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-58sn9/virt-controller-6446d96d6b-58sn9.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.948565191Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-58sn9/virt-controller/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.948576961Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-58sn9/virt-controller/virt-controller/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.948580811Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-58sn9/virt-controller/virt-controller/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.948651942Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-58sn9/virt-controller/virt-controller/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.948800925Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-58sn9/virt-controller/virt-controller/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.948923767Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-58sn9/virt-controller/virt-controller/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.948970708Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-pcl5j/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.94905206Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-pcl5j/virt-controller-6446d96d6b-pcl5j.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.949151522Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-pcl5j/virt-controller/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.949158752Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-pcl5j/virt-controller/virt-controller/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.949162282Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-pcl5j/virt-controller/virt-controller/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.949244664Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-pcl5j/virt-controller/virt-controller/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.949765374Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-pcl5j/virt-controller/virt-controller/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.949889396Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-controller-6446d96d6b-pcl5j/virt-controller/virt-controller/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.949934097Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-ft59h/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.950012999Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-ft59h/virt-exportproxy-578c957c96-ft59h.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.950111381Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-ft59h/exportproxy/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.950119781Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-ft59h/exportproxy/exportproxy/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.950123641Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-ft59h/exportproxy/exportproxy/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.950191242Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-ft59h/exportproxy/exportproxy/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.950331755Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-ft59h/exportproxy/exportproxy/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.950452307Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-ft59h/exportproxy/exportproxy/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.950507748Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-jln7s/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.95059523Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-jln7s/virt-exportproxy-578c957c96-jln7s.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.950679552Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-jln7s/exportproxy/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.950686912Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-jln7s/exportproxy/exportproxy/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.950690552Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-jln7s/exportproxy/exportproxy/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.950754843Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-jln7s/exportproxy/exportproxy/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.950875725Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-jln7s/exportproxy/exportproxy/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.950980637Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-exportproxy-578c957c96-jln7s/exportproxy/exportproxy/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.951017268Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-5mwkv/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.951073469Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-5mwkv/virt-handler-5mwkv.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.951184281Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-5mwkv/virt-handler/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.951192251Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-5mwkv/virt-handler/virt-handler/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.951195911Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-5mwkv/virt-handler/virt-handler/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.951255343Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-5mwkv/virt-handler/virt-handler/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.951747952Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-5mwkv/virt-handler/virt-handler/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.951855774Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-5mwkv/virt-handler/virt-handler/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.951894135Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-5mwkv/virt-launcher/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.951903265Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-5mwkv/virt-launcher/virt-launcher/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.951907165Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-5mwkv/virt-launcher/virt-launcher/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.951973327Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-5mwkv/virt-launcher/virt-launcher/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.952097409Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-5mwkv/virt-launcher/virt-launcher/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.952192951Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-5mwkv/virt-launcher/virt-launcher/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.952236412Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-cdz7f/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.952290753Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-cdz7f/virt-handler-cdz7f.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.952392615Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-cdz7f/virt-handler/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.952403185Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-cdz7f/virt-handler/virt-handler/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.952407295Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-cdz7f/virt-handler/virt-handler/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.952460936Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-cdz7f/virt-handler/virt-handler/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.952880274Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-cdz7f/virt-handler/virt-handler/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.952980036Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-cdz7f/virt-handler/virt-handler/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953019167Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-cdz7f/virt-launcher/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953029877Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-cdz7f/virt-launcher/virt-launcher/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953035637Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-cdz7f/virt-launcher/virt-launcher/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953087108Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-cdz7f/virt-launcher/virt-launcher/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953216471Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-cdz7f/virt-launcher/virt-launcher/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953313873Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-cdz7f/virt-launcher/virt-launcher/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953352163Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-zwr8g/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953412985Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-zwr8g/virt-handler-zwr8g.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953504496Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-zwr8g/virt-handler/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953511647Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-zwr8g/virt-handler/virt-handler/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953538707Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-zwr8g/virt-handler/virt-handler/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953604008Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-zwr8g/virt-handler/virt-handler/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953833103Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-zwr8g/virt-handler/virt-handler/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953933164Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-zwr8g/virt-handler/virt-handler/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953969165Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-zwr8g/virt-launcher/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953980185Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-zwr8g/virt-launcher/virt-launcher/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.953985816Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-zwr8g/virt-launcher/virt-launcher/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.954044787Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-zwr8g/virt-launcher/virt-launcher/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.954171169Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-zwr8g/virt-launcher/virt-launcher/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.954271281Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-handler-zwr8g/virt-launcher/virt-launcher/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.954310242Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-85j7c/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.954375123Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-85j7c/virt-operator-57d97484b4-85j7c.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.954480795Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-85j7c/virt-operator/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.954494285Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-85j7c/virt-operator/virt-operator/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.954498195Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-85j7c/virt-operator/virt-operator/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.954574947Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-85j7c/virt-operator/virt-operator/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.95474316Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-85j7c/virt-operator/virt-operator/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.954846272Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-85j7c/virt-operator/virt-operator/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.954879343Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-mkz6p/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.954941444Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-mkz6p/virt-operator-57d97484b4-mkz6p.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.955031606Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-mkz6p/virt-operator/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.955040046Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-mkz6p/virt-operator/virt-operator/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.955043726Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-mkz6p/virt-operator/virt-operator/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.955116118Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-mkz6p/virt-operator/virt-operator/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.956383602Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-mkz6p/virt-operator/virt-operator/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.956482254Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-operator-57d97484b4-mkz6p/virt-operator/virt-operator/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.956534455Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-75nt8/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.956601866Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-75nt8/virt-template-validator-9488bd8cb-75nt8.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.956683538Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-75nt8/webhook/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.956692168Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-75nt8/webhook/webhook/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.956695628Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-75nt8/webhook/webhook/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.956750509Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-75nt8/webhook/webhook/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.956877052Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-75nt8/webhook/webhook/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.956984014Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-75nt8/webhook/webhook/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.956999694Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-g47mb/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.957065345Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-g47mb/virt-template-validator-9488bd8cb-g47mb.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.957143857Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-g47mb/webhook/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.957152357Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-g47mb/webhook/webhook/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.957159987Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-g47mb/webhook/webhook/logs/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.957231738Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-g47mb/webhook/webhook/logs/current.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.957352721Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-g47mb/webhook/webhook/logs/previous.insecure.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.957450312Z clusters/cbda4714/namespaces/openshift-cnv/pods/virt-template-validator-9488bd8cb-g47mb/webhook/webhook/logs/previous.log
[must-gather-kbswd] OUT 2025-08-11T08:46:34.957487043Z clusters/cbda4714/namespaces/openshift-cnv/policy/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.957573175Z clusters/cbda4714/namespaces/openshift-cnv/policy/poddisruptionbudgets.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.957648126Z clusters/cbda4714/namespaces/openshift-cnv/pool.kubevirt.io/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.957708757Z clusters/cbda4714/namespaces/openshift-cnv/pool.kubevirt.io/virtualmachinepools.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.957753929Z clusters/cbda4714/namespaces/openshift-cnv/route.openshift.io/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.95780805Z clusters/cbda4714/namespaces/openshift-cnv/route.openshift.io/routes.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.957888401Z clusters/cbda4714/namespaces/openshift-cnv/snapshot.kubevirt.io/
[must-gather-kbswd] OUT 2025-08-11T08:46:34.957938242Z clusters/cbda4714/namespaces/openshift-cnv/snapshot.kubevirt.io/virtualmachinerestores.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.958038394Z clusters/cbda4714/namespaces/openshift-cnv/snapshot.kubevirt.io/virtualmachinesnapshotcontents.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.958133886Z clusters/cbda4714/namespaces/openshift-cnv/snapshot.kubevirt.io/virtualmachinesnapshots.yaml
[must-gather-kbswd] OUT 2025-08-11T08:46:34.962652293Z
[must-gather-kbswd] OUT 2025-08-11T08:46:34.962674014Z sent 7,030 bytes received 1,630,644 bytes 3,275,348.00 bytes/sec
[must-gather-kbswd] OUT 2025-08-11T08:46:34.962681764Z total size is 22,706,787 speedup is 13.87
[must-gather ] OUT 2025-08-11T08:46:35.142158668Z namespace/openshift-must-gather-5989m deleted
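The rsync summary above is internally consistent: rsync's speedup is the total size of the tree divided by the bytes actually transferred, i.e. 22,706,787 / (7,030 + 1,630,644) ≈ 13.87. The listing also shows the layout a downloaded must-gather tree follows: clusters/<cluster-id>/namespaces/<namespace>/pods/<pod>/<container>/<container>/logs/{current,previous,previous.insecure}.log. A minimal Python triage sketch is below; the root path is hypothetical, and only the directory layout visible above is assumed:

    #!/usr/bin/env python3
    # Minimal triage sketch for a must-gather dump, assuming only the layout
    # seen in the listing above; ROOT is a hypothetical download location.
    from pathlib import Path

    ROOT = Path("/logs/artifacts/must-gather")  # hypothetical

    # rsync reports speedup = total size / bytes actually transferred.
    sent, received, total = 7_030, 1_630_644, 22_706_787
    assert round(total / (sent + received), 2) == 13.87

    # A non-empty previous.log means the container restarted at least once,
    # so those logs are usually the first thing worth reading.
    pattern = "clusters/*/namespaces/*/pods/*/*/*/logs/previous.log"
    for prev in sorted(ROOT.glob(pattern)):
        if prev.stat().st_size > 0:
            pod = prev.parents[3].name        # .../pods/<pod>/<container>/<container>/logs/
            container = prev.parents[2].name
            print(f"{pod}/{container}: {prev.stat().st_size} bytes in previous.log")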
Reprinting Cluster State:
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
ClusterID: cbda4714-fb9a-4786-bbb6-8eb5fbf3394a
ClientVersion: 4.17.10
ClusterVersion: Stable at "4.20.0-0.nightly-2025-07-31-063120"
ClusterOperators:
	clusteroperator/operator-lifecycle-manager is not upgradeable because ClusterServiceVersions blocking minor version upgrades to 4.21.0 or higher:
	- maximum supported OCP version for openshift-storage/odf-dependencies.v4.19.1-rhodf is 4.20
	- maximum supported OCP version for openshift-storage/odf-operator.v4.19.1-rhodf is 4.20

Checking for additional logs in /alabama/cspi/e2e/logs
Copying /alabama/cspi/e2e/logs to /logs/artifacts...
It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic
It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application
artifacts
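The Upgradeable=False condition reported in the Cluster State summary above is OLM's olm.maxOpenShiftVersion gate: when an installed ClusterServiceVersion declares a maximum supported OpenShift version equal to the current minor (4.20 for both ODF CSVs here), OLM blocks the upgrade to the next minor (4.21). A hedged sketch of that comparison, where the CSV-to-max-version map is transcribed from the message above but the logic is an illustration, not OLM's actual implementation:

    # Illustrative version gate behind the operator-lifecycle-manager message.
    def blocks_upgrade(max_ocp: str, target: str) -> bool:
        minor = lambda v: tuple(int(x) for x in v.split(".")[:2])
        return minor(max_ocp) < minor(target)

    csvs = {
        "openshift-storage/odf-dependencies.v4.19.1-rhodf": "4.20",
        "openshift-storage/odf-operator.v4.19.1-rhodf": "4.20",
    }
    target = "4.21.0"  # the next minor after the running 4.20 nightly
    for name, max_ocp in csvs.items():
        if blocks_upgrade(max_ocp, target):
            print(f"maximum supported OCP version for {name} is {max_ocp}")

Under this reading the cluster can still take 4.20.z patch updates, but stays off 4.21 until ODF ships CSVs whose maximum supported version is at least 4.21.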