Extract /home/jenkins/oadp-e2e-qe.tar.gz to /alabama/cspi
Extract /home/jenkins/oadp-apps-deployer.tar.gz to /alabama/oadpApps
Extract /home/jenkins/mtc-python-client.tar.gz to /alabama/pyclient
Create and populate /tmp/test-settings...
Login as Kubeadmin to the test cluster at https://api.ci-op-cl9vhfrj-b2a90.cspilp.interop.ccitredhat.com:6443...
WARNING: Using insecure TLS client config. Setting this option is not supported!
Login successful.
You have access to 78 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
Create virtual environment and install required packages...
Collecting ansible_runner
  Downloading ansible_runner-2.4.1-py3-none-any.whl.metadata (3.2 kB)
Collecting pexpect>=4.5 (from ansible_runner)
  Downloading pexpect-4.9.0-py2.py3-none-any.whl.metadata (2.5 kB)
Collecting packaging (from ansible_runner)
  Downloading packaging-25.0-py3-none-any.whl.metadata (3.3 kB)
Collecting python-daemon (from ansible_runner)
  Downloading python_daemon-3.1.2-py3-none-any.whl.metadata (4.8 kB)
Collecting pyyaml (from ansible_runner)
  Downloading PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB)
Collecting ptyprocess>=0.5 (from pexpect>=4.5->ansible_runner)
  Downloading ptyprocess-0.7.0-py2.py3-none-any.whl.metadata (1.3 kB)
Collecting lockfile>=0.10 (from python-daemon->ansible_runner)
  Downloading lockfile-0.12.2-py2.py3-none-any.whl.metadata (2.4 kB)
Downloading ansible_runner-2.4.1-py3-none-any.whl (79 kB)
Downloading pexpect-4.9.0-py2.py3-none-any.whl (63 kB)
Downloading packaging-25.0-py3-none-any.whl (66 kB)
Downloading python_daemon-3.1.2-py3-none-any.whl (30 kB)
Downloading PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (767 kB)
Downloading lockfile-0.12.2-py2.py3-none-any.whl (13 kB)
Downloading ptyprocess-0.7.0-py2.py3-none-any.whl (13 kB)
Installing collected packages: ptyprocess, lockfile, pyyaml, python-daemon, pexpect, packaging, ansible_runner
Successfully installed ansible_runner-2.4.1 lockfile-0.12.2 packaging-25.0 pexpect-4.9.0 ptyprocess-0.7.0 python-daemon-3.1.2 pyyaml-6.0.2

[notice] A new release of pip is available: 23.3.2 -> 25.2
[notice] To update, run: pip install --upgrade pip

Processing /alabama/oadpApps
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Building wheels for collected packages: ocpdeployer
  Building wheel for ocpdeployer (pyproject.toml): started
  Building wheel for ocpdeployer (pyproject.toml): finished with status 'done'
  Created wheel for ocpdeployer: filename=ocpdeployer-0.0.1-py2.py3-none-any.whl size=235616 sha256=658e2cf47a9203a90257111cca056250649241982ea9d47a442a8f5be0c442f3
  Stored in directory: /tmp/pip-ephem-wheel-cache-00n4wqp6/wheels/55/c3/15/eb89266a7928fafe53678a24892891bbfb18405fbd475eb4c6
Successfully built ocpdeployer
Installing collected packages: ocpdeployer
Successfully installed ocpdeployer-0.0.1

[notice] A new release of pip is available: 23.3.2 -> 25.2
[notice] To update, run: pip install --upgrade pip
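For anyone reproducing this bootstrap by hand, the steps above amount to roughly the following. This is a sketch based on the paths in the log; the venv location /alabama/venv is inferred from the "Requirement already satisfied" path in the pip output further down, and KUBEADMIN_PASSWORD stands in for however the job supplies the kubeadmin credentials:

# extract the three test bundles (archive names and targets taken from the log above)
mkdir -p /alabama/cspi /alabama/oadpApps /alabama/pyclient
tar -xzf /home/jenkins/oadp-e2e-qe.tar.gz      -C /alabama/cspi
tar -xzf /home/jenkins/oadp-apps-deployer.tar.gz -C /alabama/oadpApps
tar -xzf /home/jenkins/mtc-python-client.tar.gz  -C /alabama/pyclient

# log in to the test cluster (insecure TLS, as the warning above notes)
oc login https://api.ci-op-cl9vhfrj-b2a90.cspilp.interop.ccitredhat.com:6443 \
  -u kubeadmin -p "$KUBEADMIN_PASSWORD" --insecure-skip-tls-verify

# create the virtualenv and install the runner plus the two local packages
python3.12 -m venv /alabama/venv
source /alabama/venv/bin/activate
pip install ansible_runner /alabama/oadpApps /alabama/pyclient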
Processing /alabama/pyclient
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Collecting suds-py3 (from mtc==0.0.1)
  Downloading suds_py3-1.4.5.0-py3-none-any.whl.metadata (778 bytes)
Collecting requests (from mtc==0.0.1)
  Downloading requests-2.32.5-py3-none-any.whl.metadata (4.9 kB)
Collecting jinja2 (from mtc==0.0.1)
  Downloading jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)
Collecting kubernetes==11.0.0 (from mtc==0.0.1)
  Downloading kubernetes-11.0.0-py3-none-any.whl.metadata (1.5 kB)
Collecting openshift==0.11.2 (from mtc==0.0.1)
  Downloading openshift-0.11.2.tar.gz (19 kB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Collecting certifi>=14.05.14 (from kubernetes==11.0.0->mtc==0.0.1)
  Downloading certifi-2025.8.3-py3-none-any.whl.metadata (2.4 kB)
Collecting six>=1.9.0 (from kubernetes==11.0.0->mtc==0.0.1)
  Downloading six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting python-dateutil>=2.5.3 (from kubernetes==11.0.0->mtc==0.0.1)
  Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting setuptools>=21.0.0 (from kubernetes==11.0.0->mtc==0.0.1)
  Using cached setuptools-80.9.0-py3-none-any.whl.metadata (6.6 kB)
Requirement already satisfied: pyyaml>=3.12 in /alabama/venv/lib64/python3.12/site-packages (from kubernetes==11.0.0->mtc==0.0.1) (6.0.2)
Collecting google-auth>=1.0.1 (from kubernetes==11.0.0->mtc==0.0.1)
  Downloading google_auth-2.40.3-py2.py3-none-any.whl.metadata (6.2 kB)
Collecting websocket-client!=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0 (from kubernetes==11.0.0->mtc==0.0.1)
  Downloading websocket_client-1.8.0-py3-none-any.whl.metadata (8.0 kB)
Collecting requests-oauthlib (from kubernetes==11.0.0->mtc==0.0.1)
  Downloading requests_oauthlib-2.0.0-py2.py3-none-any.whl.metadata (11 kB)
Collecting urllib3>=1.24.2 (from kubernetes==11.0.0->mtc==0.0.1)
  Downloading urllib3-2.5.0-py3-none-any.whl.metadata (6.5 kB)
Collecting python-string-utils (from openshift==0.11.2->mtc==0.0.1)
  Downloading python_string_utils-1.0.0-py3-none-any.whl.metadata (12 kB)
Collecting ruamel.yaml>=0.15 (from openshift==0.11.2->mtc==0.0.1)
  Downloading ruamel.yaml-0.18.15-py3-none-any.whl.metadata (25 kB)
Collecting MarkupSafe>=2.0 (from jinja2->mtc==0.0.1)
  Downloading MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.0 kB)
Collecting charset_normalizer<4,>=2 (from requests->mtc==0.0.1)
  Downloading charset_normalizer-3.4.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl.metadata (36 kB)
Collecting idna<4,>=2.5 (from requests->mtc==0.0.1)
  Downloading idna-3.10-py3-none-any.whl.metadata (10 kB)
Collecting cachetools<6.0,>=2.0.0 (from google-auth>=1.0.1->kubernetes==11.0.0->mtc==0.0.1)
  Downloading cachetools-5.5.2-py3-none-any.whl.metadata (5.4 kB)
Collecting pyasn1-modules>=0.2.1 (from google-auth>=1.0.1->kubernetes==11.0.0->mtc==0.0.1)
  Downloading pyasn1_modules-0.4.2-py3-none-any.whl.metadata (3.5 kB)
Collecting rsa<5,>=3.1.4 (from google-auth>=1.0.1->kubernetes==11.0.0->mtc==0.0.1)
  Downloading rsa-4.9.1-py3-none-any.whl.metadata (5.6 kB)
Collecting ruamel.yaml.clib>=0.2.7 (from ruamel.yaml>=0.15->openshift==0.11.2->mtc==0.0.1)
  Downloading ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.7 kB)
Collecting oauthlib>=3.0.0 (from requests-oauthlib->kubernetes==11.0.0->mtc==0.0.1)
  Downloading oauthlib-3.3.1-py3-none-any.whl.metadata (7.9 kB)
Collecting pyasn1<0.7.0,>=0.6.1 (from pyasn1-modules>=0.2.1->google-auth>=1.0.1->kubernetes==11.0.0->mtc==0.0.1)
  Downloading pyasn1-0.6.1-py3-none-any.whl.metadata (8.4 kB)
Downloading kubernetes-11.0.0-py3-none-any.whl (1.5 MB)
Downloading jinja2-3.1.6-py3-none-any.whl (134 kB)
Downloading requests-2.32.5-py3-none-any.whl (64 kB)
Downloading suds_py3-1.4.5.0-py3-none-any.whl (298 kB)
Downloading certifi-2025.8.3-py3-none-any.whl (161 kB)
Downloading charset_normalizer-3.4.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (151 kB)
Downloading google_auth-2.40.3-py2.py3-none-any.whl (216 kB)
Downloading idna-3.10-py3-none-any.whl (70 kB)
Downloading MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23 kB)
Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
Downloading ruamel.yaml-0.18.15-py3-none-any.whl (119 kB)
Using cached setuptools-80.9.0-py3-none-any.whl (1.2 MB)
Downloading six-1.17.0-py2.py3-none-any.whl (11 kB)
Downloading urllib3-2.5.0-py3-none-any.whl (129 kB)
Downloading websocket_client-1.8.0-py3-none-any.whl (58 kB)
Downloading python_string_utils-1.0.0-py3-none-any.whl (26 kB)
Downloading requests_oauthlib-2.0.0-py2.py3-none-any.whl (24 kB)
Downloading cachetools-5.5.2-py3-none-any.whl (10 kB)
Downloading oauthlib-3.3.1-py3-none-any.whl (160 kB)
Downloading pyasn1_modules-0.4.2-py3-none-any.whl (181 kB)
Downloading rsa-4.9.1-py3-none-any.whl (34 kB)
Downloading ruamel.yaml.clib-0.2.12-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (754 kB)
Downloading pyasn1-0.6.1-py3-none-any.whl (83 kB)
Building wheels for collected packages: mtc, openshift
  Building wheel for mtc (pyproject.toml): started
  Building wheel for mtc (pyproject.toml): finished with status 'done'
  Created wheel for mtc: filename=mtc-0.0.1-py3-none-any.whl size=31146 sha256=0184d40b4d566bcd2fb003860028b76ed51516f57aa5aa55bc4a4c2fdfe3ff26
  Stored in directory: /tmp/pip-ephem-wheel-cache-htudjpr4/wheels/f1/2c/83/c09cb54cb0e821a8186cf5320758c27e7227ec862045210509
  Building wheel for openshift (pyproject.toml): started
  Building wheel for openshift (pyproject.toml): finished with status 'done'
  Created wheel for openshift: filename=openshift-0.11.2-py3-none-any.whl size=19881 sha256=b13b6df624658e48fdd9b7d1cb9b758af05ac4e9a339b134de5528dc87cf6f34
  Stored in directory: /alabama/.cache/pip/wheels/34/b7/02/4eb142942314b119c5fb3d4e595ac59486c1f3d79ff665397d
Successfully built mtc openshift
Installing collected packages: suds-py3, websocket-client, urllib3, six, setuptools, ruamel.yaml.clib, python-string-utils, pyasn1, oauthlib, MarkupSafe, idna, charset_normalizer, certifi, cachetools, ruamel.yaml, rsa, requests, python-dateutil, pyasn1-modules, jinja2, requests-oauthlib, google-auth, kubernetes, openshift, mtc
Successfully installed MarkupSafe-3.0.2 cachetools-5.5.2 certifi-2025.8.3 charset_normalizer-3.4.3 google-auth-2.40.3 idna-3.10 jinja2-3.1.6 kubernetes-11.0.0 mtc-0.0.1 oauthlib-3.3.1 openshift-0.11.2 pyasn1-0.6.1 pyasn1-modules-0.4.2 python-dateutil-2.9.0.post0 python-string-utils-1.0.0 requests-2.32.5 requests-oauthlib-2.0.0 rsa-4.9.1 ruamel.yaml-0.18.15 ruamel.yaml.clib-0.2.12 setuptools-80.9.0 six-1.17.0 suds-py3-1.4.5.0 urllib3-2.5.0 websocket-client-1.8.0

[notice] A new release of pip is available: 23.3.2 -> 25.2
[notice] To update, run: pip install --upgrade pip
go: downloading go1.24.1 (linux/amd64)
go: downloading github.com/onsi/ginkgo/v2 v2.23.4
go: downloading github.com/onsi/gomega v1.36.3
go: downloading github.com/vmware-tanzu/velero v1.16.0
go: downloading k8s.io/apimachinery v0.31.3
go: downloading k8s.io/api v0.31.3
go: downloading sigs.k8s.io/controller-runtime v0.19.3
go: downloading github.com/migtools/oadp-non-admin v0.0.0-20250409143544-08533a6c302d
go: downloading k8s.io/utils v0.0.0-20240711033017-18e509b52bc8
go: downloading github.com/openshift/oadp-operator v1.0.2-0.20250530205020-5a814a098127
go: downloading k8s.io/client-go v0.31.3
go: downloading github.com/operator-framework/api v0.14.1-0.20220413143725-33310d6154f3
go: downloading github.com/andygrunwald/go-jira v1.16.0
go: downloading github.com/apenella/go-ansible v1.1.5
go: downloading github.com/aws/aws-sdk-go v1.44.253
go: downloading github.com/kubernetes-csi/external-snapshotter/client/v4 v4.2.0
go: downloading github.com/openshift/api v0.0.0-20230414143018-3367bc7e6ac7
go: downloading github.com/google/uuid v1.6.0
go: downloading github.com/openshift/client-go v0.0.0-20211209144617-7385dd6338e3
go: downloading k8s.io/kubectl v0.30.5
go: downloading sigs.k8s.io/yaml v1.4.0
go: downloading gopkg.in/yaml.v2 v2.4.0
go: downloading github.com/google/go-cmp v0.7.0
go: downloading github.com/go-logr/logr v1.4.2
go: downloading github.com/evanphx/json-patch/v5 v5.9.0
go: downloading k8s.io/klog/v2 v2.130.1
go: downloading github.com/fatih/structs v1.1.0
go: downloading github.com/golang-jwt/jwt/v4 v4.5.0
go: downloading github.com/google/go-querystring v1.1.0
go: downloading github.com/evanphx/json-patch v5.6.0+incompatible
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/trivago/tgo v1.0.7
go: downloading github.com/gogo/protobuf v1.3.2
go: downloading github.com/google/gofuzz v1.2.0
go: downloading gopkg.in/inf.v0 v0.9.1
go: downloading github.com/spf13/pflag v1.0.6-0.20210604193023-d5e0c0615ace
go: downloading github.com/apenella/go-common-utils/data v0.0.0-20210528133155-34ba915e28c8
go: downloading github.com/apenella/go-common-utils/error v0.0.0-20210528133155-34ba915e28c8
go: downloading github.com/sirupsen/logrus v1.9.3
go: downloading github.com/imdario/mergo v0.3.13
go: downloading golang.org/x/term v0.30.0
go: downloading golang.org/x/net v0.37.0
go: downloading github.com/gorilla/websocket v1.5.0
go: downloading sigs.k8s.io/structured-merge-diff/v4 v4.4.1
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading go.uber.org/automaxprocs v1.6.0
go: downloading golang.org/x/sys v0.32.0
go: downloading sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd
go: downloading k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340
go: downloading github.com/stretchr/testify v1.10.0
go: downloading gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c
go: downloading k8s.io/apiextensions-apiserver v0.31.3
go: downloading golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56
go: downloading gopkg.in/evanphx/json-patch.v4 v4.12.0
go: downloading github.com/go-logr/zapr v1.3.0
go: downloading go.uber.org/zap v1.27.0
go: downloading github.com/blang/semver/v4 v4.0.0
go: downloading github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da
go: downloading go.uber.org/goleak v1.3.0
go: downloading github.com/google/gnostic-models v0.6.8
go: downloading github.com/golang/protobuf v1.5.4
go: downloading google.golang.org/protobuf v1.36.5
go: downloading golang.org/x/time v0.9.0
go: downloading golang.org/x/oauth2 v0.27.0
go: downloading github.com/spf13/cobra v1.8.1
go: downloading k8s.io/cli-runtime v0.31.3
go: downloading k8s.io/component-base v0.31.3
go: downloading github.com/moby/spdystream v0.4.0
go: downloading github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5
go: downloading github.com/json-iterator/go v1.1.12
go: downloading github.com/go-task/slim-sprig/v3 v3.0.0
go: downloading golang.org/x/tools v0.31.0
go: downloading golang.org/x/text v0.23.0
go: downloading github.com/aws/aws-sdk-go-v2 v1.30.3
go: downloading github.com/aws/aws-sdk-go-v2/config v1.26.3
go: downloading github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.15.11
go: downloading github.com/aws/aws-sdk-go-v2/service/s3 v1.48.0
go: downloading github.com/kr/pretty v0.3.1
go: downloading github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
go: downloading github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2
go: downloading github.com/go-openapi/jsonreference v0.20.2
go: downloading github.com/go-openapi/swag v0.22.4
go: downloading go.uber.org/multierr v1.11.0
go: downloading github.com/fxamacker/cbor/v2 v2.7.0
go: downloading github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822
go: downloading github.com/inconshreveable/mousetrap v1.1.0
go: downloading github.com/jonboulle/clockwork v0.2.2
go: downloading k8s.io/component-helpers v0.30.5
go: downloading github.com/daviddengcn/go-colortext v1.0.0
go: downloading github.com/distribution/reference v0.5.0
go: downloading github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de
go: downloading github.com/moby/term v0.5.0
go: downloading github.com/fvbommel/sortorder v1.1.0
go: downloading sigs.k8s.io/kustomize/kustomize/v5 v5.0.4-0.20230601165947-6ce0bf390ce3
go: downloading sigs.k8s.io/kustomize/kyaml v0.17.1
go: downloading github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d
go: downloading github.com/lithammer/dedent v1.1.0
go: downloading k8s.io/metrics v0.31.3
go: downloading github.com/chai2010/gettext-go v1.0.2
go: downloading github.com/MakeNowJust/heredoc v1.0.0
go: downloading github.com/mitchellh/go-wordwrap v1.0.1
go: downloading github.com/russross/blackfriday/v2 v2.1.0
go: downloading github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f
go: downloading github.com/google/pprof v0.0.0-20250403155104-27863c87afa6
go: downloading github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd
go: downloading github.com/modern-go/reflect2 v1.0.2
go: downloading github.com/kubernetes-csi/external-snapshotter/client/v7 v7.0.0
go: downloading github.com/aws/smithy-go v1.20.3
go: downloading github.com/aws/aws-sdk-go-v2/credentials v1.17.26
go: downloading github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.11
go: downloading github.com/aws/aws-sdk-go-v2/internal/ini v1.8.0
go: downloading github.com/aws/aws-sdk-go-v2/service/sso v1.22.3
go: downloading github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.4
go: downloading github.com/aws/aws-sdk-go-v2/service/sts v1.30.3
go: downloading github.com/kr/text v0.2.0
go: downloading github.com/rogpeppe/go-internal v1.12.0
go: downloading github.com/go-openapi/jsonpointer v0.19.6
go: downloading github.com/mailru/easyjson v0.7.7
go: downloading github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.4
go: downloading github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.15
go: downloading github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.10
go: downloading github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.11.3
go: downloading github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.10
go: downloading github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.17
go: downloading github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.10
go: downloading github.com/emicklei/go-restful/v3 v3.11.0
go: downloading github.com/x448/float16 v0.8.4
go: downloading golang.org/x/sync v0.12.0
go: downloading sigs.k8s.io/kustomize/api v0.17.2
go: downloading github.com/fatih/camelcase v1.0.0
go: downloading github.com/golangplus/testing v1.0.0
go: downloading github.com/opencontainers/go-digest v1.0.0
go: downloading github.com/creack/pty v1.1.18
go: downloading github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7
go: downloading github.com/peterbourgon/diskv v2.0.1+incompatible
go: downloading github.com/prashantv/gostub v1.1.0
go: downloading github.com/spf13/afero v1.10.0
go: downloading github.com/josharian/intern v1.0.0
go: downloading github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.15
go: downloading github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1
go: downloading github.com/go-errors/errors v1.4.2
go: downloading github.com/prometheus/client_golang v1.20.5
go: downloading github.com/google/btree v1.0.1
go: downloading github.com/stretchr/objx v0.5.2
go: downloading gomodules.xyz/jsonpatch/v2 v2.4.0
go: downloading github.com/prometheus/client_model v0.6.1
go: downloading github.com/sergi/go-diff v1.2.0
go: downloading github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00
go: downloading github.com/xlab/treeprint v1.2.0
go: downloading github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
go: downloading go.starlark.net v0.0.0-20230525235612-a134d8f9ddca
go: downloading github.com/cespare/xxhash/v2 v2.3.0
go: downloading github.com/prometheus/common v0.62.0
go: downloading github.com/prometheus/procfs v0.15.1
go: downloading github.com/beorn7/perks v1.0.1
go: downloading github.com/klauspost/compress v1.17.11
go: downloading github.com/kylelemons/godebug v1.1.0
go: downloading github.com/jmespath/go-jmespath v0.4.0
go: downloading github.com/jmespath/go-jmespath/internal/testify v1.5.1
storageclass.storage.k8s.io/gp2-csi annotated
storageclass.storage.k8s.io/gp3-csi annotated
storageclass.storage.k8s.io/odf-operator-ceph-rbd annotated
storageclass.storage.k8s.io/odf-operator-ceph-rbd-virtualization annotated
storageclass.storage.k8s.io/odf-operator-cephfs annotated
storageclass.storage.k8s.io/openshift-storage.noobaa.io annotated
storageclass.storage.k8s.io/odf-operator-ceph-rbd annotated
+ readonly 'RED=\e[31m'
+ RED='\e[31m'
+ readonly 'BLUE=\033[34m'
+ BLUE='\033[34m'
+ readonly 'CLEAR=\e[39m'
+ CLEAR='\e[39m'
++ oc get infrastructures cluster -o 'jsonpath={.status.platform}'
++ awk '{print tolower($0)}'
+ CLOUD_PROVIDER=aws
+ [[ '' == \t\r\u\e ]]
+ echo /home/jenkins/.kube/config
/home/jenkins/.kube/config
+ [[ aws == *-arm* ]]
+ [[ aws == *-fips* ]]
+ E2E_TIMEOUT_MULTIPLIER=2
+ export NAMESPACE=openshift-adp
+ NAMESPACE=openshift-adp
+ export PROVIDER=aws
+ PROVIDER=aws
++ echo aws
++ awk '{print tolower($0)}'
+ BACKUP_LOCATION=aws
+ export BACKUP_LOCATION=aws
+ BACKUP_LOCATION=aws
+ export BUCKET=ci-op-cl9vhfrj-interopoadp
+ BUCKET=ci-op-cl9vhfrj-interopoadp
+ OADP_CREDS_FILE=/tmp/test-settings/credentials
+ OADP_VSL_CREDS_FILE=/tmp/test-settings/aws_vsl_creds
+++ readlink -f /alabama/cspi/test_settings/scripts/test_runner.sh
++ dirname /alabama/cspi/test_settings/scripts/test_runner.sh
+ readonly SCRIPT_DIR=/alabama/cspi/test_settings/scripts
+ SCRIPT_DIR=/alabama/cspi/test_settings/scripts
++ cd /alabama/cspi/test_settings/scripts
++ git rev-parse --show-toplevel
+ readonly TOP_DIR=/alabama/cspi
+ TOP_DIR=/alabama/cspi
+ echo /alabama/cspi
/alabama/cspi
+ TESTS_FOLDER=/alabama/cspi/e2e/kubevirt-plugin
++ oc get nodes -o 'jsonpath={.items[*].metadata.labels.kubernetes\.io/arch}'
++ tr ' ' '\n'
++ sort -u
++ xargs
+ export NODES_ARCHITECTURE=amd64
+ NODES_ARCHITECTURE=amd64
+ export OADP_REPOSITORY=redhat
+ OADP_REPOSITORY=redhat
+ SKIP_DPA_CREATION=false
++ oc get ns openshift-storage
++ echo true
+ OPENSHIFT_STORAGE=true
+ '[' redhat == upstream-velero ']'
+ '[' true == true ']'
++ oc get sc
++ awk '$1 ~ /^.+ceph-rbd$/ {print $1}'
++ tail -1
+ CEPH_RBD_STORAGE_CLASS=odf-operator-ceph-rbd
+ '[' -n odf-operator-ceph-rbd ']'
+ export CEPH_RBD_STORAGE_CLASS
+ echo 'ceph-rbd StorageClass found: odf-operator-ceph-rbd'
ceph-rbd StorageClass found: odf-operator-ceph-rbd
++ oc get storageclass -o 'jsonpath={range .items[*]}{@.metadata.name}{" "}{@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}{"\n"}{end}'
++ awk '$2=="true"{print $1}'
++ wc -l
+ NUM_DEFAULT_STORAGE_CLASS=1
+ '[' 1 -ne 1 ']'
++ oc get storageclass -o 'jsonpath={.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=='\''true'\'')].metadata.name}'
+ DEFAULT_SC=odf-operator-ceph-rbd
+ export STORAGE_CLASS=odf-operator-ceph-rbd
+ STORAGE_CLASS=odf-operator-ceph-rbd
+ '[' -n odf-operator-ceph-rbd ']'
+ '[' odf-operator-ceph-rbd '!=' odf-operator-ceph-rbd ']'
+ export STORAGE_CLASS_OPENSHIFT_STORAGE=odf-operator-ceph-rbd
+ STORAGE_CLASS_OPENSHIFT_STORAGE=odf-operator-ceph-rbd
+ echo 'Using the StorageClass for openshift-storage: odf-operator-ceph-rbd'
Using the StorageClass for openshift-storage: odf-operator-ceph-rbd
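The trace above discovers the ceph-rbd class and the annotated default StorageClass. Pulled out of the trace, the same two lookups can be run standalone (these are the exact commands the script uses, just de-escaped):

# list the StorageClass marked default via the is-default-class annotation
oc get storageclass -o jsonpath='{range .items[*]}{@.metadata.name}{" "}{@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}{"\n"}{end}' \
  | awk '$2=="true"{print $1}'

# pick the ceph-rbd class the suite prefers when ODF is installed
oc get sc | awk '$1 ~ /^.+ceph-rbd$/ {print $1}' | tail -1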
+ [[ amd64 != \a\m\d\6\4 ]]
+ TEST_FILTER='!// || (// && !exclude_aws && (!/target/ || target_aws) ) '
+ [[ aws =~ ^osp ]]
+ [[ aws =~ ^vsphere ]]
+ [[ aws =~ ^gcp-wif ]]
+ [[ aws =~ ^ibmcloud ]]
++ oc config current-context
++ awk -F / '{print $2}'
+ SETTINGS_TMP=/alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443
+ mkdir -p /alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443
++ oc get authentication cluster -o 'jsonpath={.spec.serviceAccountIssuer}'
+ IS_OIDC=
+ '[' '!' -z ']'
+ [[ aws == \a\w\s ]]
+ export PROVIDER=aws
+ PROVIDER=aws
+ export CREDS_SECRET_REF=cloud-credentials
+ CREDS_SECRET_REF=cloud-credentials
++ oc get infrastructures cluster -o 'jsonpath={.status.platformStatus.aws.region}' --allow-missing-template-keys=false
+ export REGION=us-east-1
+ REGION=us-east-1
+ settings_script=aws_settings.sh
+ '[' aws == aws-sts ']'
+ BUCKET=ci-op-cl9vhfrj-interopoadp
+ TMP_DIR=/alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443
+ source /alabama/cspi/test_settings/scripts/aws_settings.sh
++ cat
++ [[ aws == *aws* ]]
++ cat
++ echo -e '\n }\n}'
+++ cat /alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443/settings.json
++ x='{ "metadata": { "namespace": "openshift-adp" }, "spec": { "configuration":{ "velero":{ "defaultPlugins": [ "openshift", "aws" ] } }, "backupLocations": [ { "velero": { "provider": "aws", "default": true, "config": { "region": "us-east-1" }, "credential":{ "name": "cloud-credentials", "key": "cloud" }, "objectStorage":{ "bucket": "ci-op-cl9vhfrj-interopoadp" } } } ] , "snapshotLocations": [ { "velero": { "provider": "aws", "config": { "profile": "default", "region": "us-east-1" } } } ] } }'
++ echo '{ "metadata": { "namespace": "openshift-adp" }, "spec": { "configuration":{ "velero":{ "defaultPlugins": [ "openshift", "aws" ] } }, "backupLocations": [ { "velero": { "provider": "aws", "default": true, "config": { "region": "us-east-1" }, "credential":{ "name": "cloud-credentials", "key": "cloud" }, "objectStorage":{ "bucket": "ci-op-cl9vhfrj-interopoadp" } } } ] , "snapshotLocations": [ { "velero": { "provider": "aws", "config": { "profile": "default", "region": "us-east-1" } } } ] } }'
++ grep -o '^[^#]*'
+ FILE_SETTINGS_NAME=settings.json
+ printf '\033[34mGenerated settings file under /alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443/settings.json\e[39m\n'
Generated settings file under /alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443/settings.json
+ cat /alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443/settings.json
++ oc get volumesnapshotclass -o name
+ for i in $(oc get volumesnapshotclass -o name)
+ oc annotate volumesnapshotclass.snapshot.storage.k8s.io/csi-aws-vsc snapshot.storage.kubernetes.io/is-default-class-
volumesnapshotclass.snapshot.storage.k8s.io/csi-aws-vsc annotated
+ for i in $(oc get volumesnapshotclass -o name)
+ oc annotate volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-cephfsplugin-snapclass snapshot.storage.kubernetes.io/is-default-class-
volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-cephfsplugin-snapclass annotated
+ for i in $(oc get volumesnapshotclass -o name)
+ oc annotate volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-rbdplugin-snapclass snapshot.storage.kubernetes.io/is-default-class-
volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-rbdplugin-snapclass annotated
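The loop above clears the default-class marker from every VolumeSnapshotClass so the suite can later set its own. As a standalone snippet (the same loop the trace shows, written out; the trailing dash on oc annotate removes the annotation rather than setting it):

for i in $(oc get volumesnapshotclass -o name); do
  # trailing '-' = remove the annotation if present
  oc annotate "$i" snapshot.storage.kubernetes.io/is-default-class-
done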
++ ./e2e/must-gather/get-latest-build.sh
+ oc get configmaps -n default must-gather-image
+ UPSTREAM_VERSION=99.0.0
++ oc get OperatorCondition -n openshift-adp -o 'jsonpath={.items[*].metadata.name}'
++ awk -F v '{print $2}'
+ OADP_VERSION=1.5.0
+ '[' -z 1.5.0 ']'
+ '[' 1.5.0 == 99.0.0 ']'
++ oc get sub redhat-oadp-operator -n openshift-adp -o 'jsonpath={.spec.source}'
+ OADP_REPO=redhat-operators
+ '[' -z redhat-operators ']'
+ '[' redhat-operators == redhat-operators ']'
+ REGISTRY_PATH=registry.redhat.io/oadp/oadp-mustgather-rhel9:
+ TAG=1.5.0
+ export MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
+ MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
+ echo registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
+ export MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
+ MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
+ '[' -z registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 ']'
+ export NUM_OF_OADP_INSTANCES=1
+ NUM_OF_OADP_INSTANCES=1
++ echo --skip=tc-id:OADP-555
++ tr ' ' '\n'
++ grep '^--'
++ tr '\n' ' '
+ GINKO_PARAM='--skip=tc-id:OADP-555 '
++ echo --skip=tc-id:OADP-555
++ tr ' ' '\n'
++ grep '^-'
++ grep -v '^--'
++ tr '\n' ' '
+ TEST_PARAM=
+ ginkgo run --nodes=1 -mod=mod --show-node-events --flake-attempts 3 --junit-report=/logs/artifacts/junit_oadp_cnv_results.xml '--label-filter=!// || (// && !exclude_aws && (!/target/ || target_aws) ) ' --skip=tc-id:OADP-555 -p /alabama/cspi/e2e/kubevirt-plugin/ -- -credentials_file=/tmp/test-settings/credentials -vsl_credentials_file=/tmp/test-settings/aws_vsl_creds -oadp_namespace=openshift-adp -settings=/alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443/settings.json -must_gather_image=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 -timeout_multiplier=2 -skip_dpa_creation=false
2025/09/01 07:53:41 maxprocs: Leaving GOMAXPROCS=16: CPU quota undefined

Ginkgo detected a version mismatch between the Ginkgo CLI and the version of Ginkgo imported by your packages:
  Ginkgo CLI Version: 2.25.2
  Mismatched package versions found: 2.23.4 used by kubevirt-plugin

Ginkgo will continue to attempt to run but you may see errors (including flag parsing errors) and should either update your go.mod or your version of the Ginkgo CLI to match.

To install the matching version of the CLI run
  go install github.com/onsi/ginkgo/v2/ginkgo
from a path that contains a go.mod file. Alternatively you can use
  go run github.com/onsi/ginkgo/v2/ginkgo
from a path that contains a go.mod file to invoke the matching version of the Ginkgo CLI.

If you are attempting to test multiple packages that each have a different version of the Ginkgo library with a single Ginkgo CLI that is currently unsupported.
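The mismatch is benign here (the run continues), but to silence it one would pin the CLI to the library version the suite imports before invoking ginkgo. A sketch, run from a directory containing the suite's go.mod:

cd /alabama/cspi
# pin the CLI to the imported library version reported above
go install github.com/onsi/ginkgo/v2/ginkgo@v2.23.4
# or let go resolve the version straight from go.mod
go run github.com/onsi/ginkgo/v2/ginkgo version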
2025/09/01 07:54:42 Setting up clients
2025/09/01 07:54:42 Getting default StorageClass...
2025/09/01 07:54:42 Checking default storage class count
Run the command: oc get sc
2025/09/01 07:54:42 Got default StorageClass odf-operator-ceph-rbd
2025/09/01 07:54:42 oc get sc
NAME                                   PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2-csi                                ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   69m
gp3-csi                                ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   69m
odf-operator-ceph-rbd (default)        openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   6m42s
odf-operator-ceph-rbd-virtualization   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   6m42s
odf-operator-cephfs                    openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   6m42s
openshift-storage.noobaa.io            openshift-storage.noobaa.io/obc         Delete          Immediate              false                  2m51s
2025/09/01 07:54:42 Using velero prefix: velero-e2e-kubevirt-eb7aa119-8708-11f0-90d3-0a580a81b6e7
Running Suite: OADP E2E Virtualization Workloads Suite - /alabama/cspi/e2e/kubevirt-plugin
==========================================================================================
Random Seed: 1756713221

Will run 4 of 5 specs
------------------------------
[BeforeSuite]
/alabama/cspi/e2e/kubevirt-plugin/kubevirt_suite_test.go:62
> Enter [BeforeSuite] TOP-LEVEL @ 09/01/25 07:54:42.867
< Exit [BeforeSuite] TOP-LEVEL @ 09/01/25 07:54:42.891 (24ms)
[BeforeSuite] PASSED [0.024 seconds]
------------------------------
CSI: Backup/Restore Openshift Virtualization Workloads
  [tc-id:OADP-185] [kubevirt] Backing up started VM should succeed
  /alabama/cspi/e2e/kubevirt-plugin/backup_restore_csi.go:35
> Enter [BeforeEach] CSI: Backup/Restore Openshift Virtualization Workloads @ 09/01/25 07:54:42.891
< Exit [BeforeEach] CSI: Backup/Restore Openshift Virtualization Workloads @ 09/01/25 07:54:42.899 (8ms)
> Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 07:54:42.899
< Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 07:54:42.899 (0s)
> Enter [It] [tc-id:OADP-185] [kubevirt] Backing up started VM should succeed @ 09/01/25 07:54:42.899
2025/09/01 07:54:42 Delete all downloadrequest
No download requests are found
STEP: Create DPA CR @ 09/01/25 07:54:43.922
2025/09/01 07:54:43 csi
2025/09/01 07:54:43 {
  "metadata": {
    "name": "ts-dpa",
    "namespace": "openshift-adp",
    "uid": "fe6518fc-090c-48d0-b993-abe23de23ce8",
    "resourceVersion": "69568",
    "generation": 1,
    "creationTimestamp": "2025-09-01T07:54:43Z",
    "managedFields": [
      {
        "manager": "kubevirt-plugin.test",
        "operation": "Update",
        "apiVersion": "oadp.openshift.io/v1alpha1",
        "time": "2025-09-01T07:54:43Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:spec": {
            ".": {},
            "f:backupLocations": {},
            "f:configuration": {
              ".": {},
              "f:velero": {
                ".": {},
                "f:defaultPlugins": {},
                "f:disableFsBackup": {}
              }
            },
            "f:logFormat": {},
            "f:podDnsConfig": {},
            "f:snapshotLocations": {}
          }
        }
      }
    ]
  },
  "spec": {
    "backupLocations": [
      {
        "velero": {
          "provider": "aws",
          "config": {
            "region": "us-east-1"
          },
          "credential": {
            "name": "cloud-credentials",
            "key": "cloud"
          },
          "objectStorage": {
            "bucket": "ci-op-cl9vhfrj-interopoadp",
            "prefix": "kubevirt"
          },
          "default": true
        }
      }
    ],
    "snapshotLocations": [],
    "podDnsConfig": {},
    "configuration": {
      "velero": {
        "defaultPlugins": [
          "openshift",
          "aws",
          "kubevirt",
          "csi"
        ],
        "disableFsBackup": false
      }
    },
    "features": null,
    "logFormat": "text"
  },
  "status": {}
}
Delete all the backups that remained in the phase InProgress
Deleting backup CRs in progress
Deletion of backup CRs in progress completed
Delete all the restores that remained in the phase InProgress
Deleting restore CRs in progress
Deletion of restore CRs in progress completed
STEP: Verify DPA
CR setup @ 09/01/25 07:54:43.997 2025/09/01 07:54:43 Waiting for velero pod to be running 2025/09/01 07:54:49 pod: velero-86964b4444-gqj52 is not yet running with status: {Pending [{PodReadyToStartContainers True 0001-01-01 00:00:00 +0000 UTC 2025-09-01 07:54:47 +0000 UTC } {Initialized False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 07:54:44 +0000 UTC ContainersNotInitialized containers with incomplete status: [velero-plugin-for-aws kubevirt-velero-plugin]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 07:54:44 +0000 UTC ContainersNotReady containers with unready status: [velero]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 07:54:44 +0000 UTC ContainersNotReady containers with unready status: [velero]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2025-09-01 07:54:44 +0000 UTC }] 10.0.99.76 [{10.0.99.76}] 10.128.2.70 [{10.128.2.70}] 2025-09-01 07:54:44 +0000 UTC [{openshift-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 07:54:47 +0000 UTC,FinishedAt:2025-09-01 07:54:47 +0000 UTC,ContainerID:cri-o://7e5d0684c235a1d4eed4b26b5d0de473785b6473eb0ecbdeceff7ae3184cfdd6,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd cri-o://7e5d0684c235a1d4eed4b26b5d0de473785b6473eb0ecbdeceff7ae3184cfdd6 0xc000be0929 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-mprsk /var/run/secrets/kubernetes.io/serviceaccount true 0xc000810860}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []} {velero-plugin-for-aws {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 0xc000be0988 map[] nil [{plugins /target false } {kube-api-access-mprsk /var/run/secrets/kubernetes.io/serviceaccount true 0xc000810870}] nil []} {kubevirt-velero-plugin {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel9@sha256:2b63055e8e681f8d20194d9aa00f667ac4e38cb1247442287b8cc273f05b587d 0xc000be09df map[] nil [{plugins /target false } {kube-api-access-mprsk /var/run/secrets/kubernetes.io/serviceaccount true 0xc000810880}] nil []}] [{velero {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-rhel9@sha256:e22092c4769ece2dd36b99cb84fcbe6da99d6c0e175fca38f00f436de0ba7a62 0xc000be09fe map[] nil [{plugins /plugins false } {scratch /scratch false } {certs /etc/ssl/certs false } {bound-sa-token /var/run/secrets/openshift/serviceaccount true 0xc000810890} {kube-api-access-mprsk /var/run/secrets/kubernetes.io/serviceaccount true 0xc0008108a0}] nil []}] Burstable [] []} 2025/09/01 07:54:54 pod: velero-86964b4444-gqj52 is not yet running with status: {Pending [{PodReadyToStartContainers True 0001-01-01 00:00:00 +0000 UTC 2025-09-01 07:54:47 +0000 UTC } {Initialized False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 07:54:44 +0000 UTC 
ContainersNotInitialized containers with incomplete status: [kubevirt-velero-plugin]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 07:54:44 +0000 UTC ContainersNotReady containers with unready status: [velero]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 07:54:44 +0000 UTC ContainersNotReady containers with unready status: [velero]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2025-09-01 07:54:44 +0000 UTC }] 10.0.99.76 [{10.0.99.76}] 10.128.2.70 [{10.128.2.70}] 2025-09-01 07:54:44 +0000 UTC [{openshift-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 07:54:47 +0000 UTC,FinishedAt:2025-09-01 07:54:47 +0000 UTC,ContainerID:cri-o://7e5d0684c235a1d4eed4b26b5d0de473785b6473eb0ecbdeceff7ae3184cfdd6,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd cri-o://7e5d0684c235a1d4eed4b26b5d0de473785b6473eb0ecbdeceff7ae3184cfdd6 0xc000be12d9 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-mprsk /var/run/secrets/kubernetes.io/serviceaccount true 0xc0008112d0}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []} {velero-plugin-for-aws {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 07:54:50 +0000 UTC,FinishedAt:2025-09-01 07:54:50 +0000 UTC,ContainerID:cri-o://849fe0ae86f065f04ae44be483a77ef402578a7e222ef8efc7b883431e582e6c,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 cri-o://849fe0ae86f065f04ae44be483a77ef402578a7e222ef8efc7b883431e582e6c 0xc000be1338 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-mprsk /var/run/secrets/kubernetes.io/serviceaccount true 0xc000811340}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []} {kubevirt-velero-plugin {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel9@sha256:2b63055e8e681f8d20194d9aa00f667ac4e38cb1247442287b8cc273f05b587d 0xc000be13ca map[] nil [{plugins /target false } {kube-api-access-mprsk /var/run/secrets/kubernetes.io/serviceaccount true 0xc000811350}] nil []}] [{velero {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-rhel9@sha256:e22092c4769ece2dd36b99cb84fcbe6da99d6c0e175fca38f00f436de0ba7a62 0xc000be140e map[] nil [{plugins /plugins false } {scratch /scratch false } {certs /etc/ssl/certs false } {bound-sa-token /var/run/secrets/openshift/serviceaccount true 0xc000811360} {kube-api-access-mprsk 
/var/run/secrets/kubernetes.io/serviceaccount true 0xc000811370}] nil []}] Burstable [] []}
2025/09/01 07:54:59 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete'
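A minimal standalone equivalent of that readiness gate, as a sketch: it assumes oc 4.11+ (for --for=jsonpath), the DPA name ts-dpa from the CR above, and that the Reconciled condition is the first entry in .status.conditions, which the operator does not guarantee:

# wait until the DPA reports reason=Completed, then show the message
oc wait dataprotectionapplication/ts-dpa -n openshift-adp \
  --for=jsonpath='{.status.conditions[0].reason}'=Completed --timeout=5m
oc get dataprotectionapplication/ts-dpa -n openshift-adp \
  -o jsonpath='{.status.conditions[0].message}{"\n"}'   # expect: Reconcile complete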
STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 07:54:59.035
Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false
2025/09/01 07:54:59 The 'openshift-storage' namespace exists
2025/09/01 07:54:59 Checking default storage class count
2025/09/01 07:54:59 Using the CSI driver: openshift-storage.rbd.csi.ceph.com
2025/09/01 07:54:59 Snapclass 'example-snapclass' doesn't exist, creating
2025/09/01 07:54:59 Setting new default StorageClass 'odf-operator-ceph-rbd'
2025/09/01 07:54:59 Checking default storage class count
Skipping creation of StorageClass
The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd
STEP: Installing application for case ocp-kubevirt @ 09/01/25 07:54:59.411
2025/09/01 07:54:59 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [include_vars] ************************************************************
ok: [localhost]

TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Remove all the contents from the file] ***********************************
changed: [localhost]

TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]

TASK [Get admin token] *********************************************************
changed: [localhost]

TASK [Get user token] **********************************************************
changed: [localhost]

TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]

TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]

TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E"
}
/usr/local/lib/python3.12/site-packages/urllib3/connectionpool.py:1013: InsecureRequestWarning: Unverified HTTPS request is being made to host 'api.ci-op-cl9vhfrj-b2a90.cspilp.interop.ccitredhat.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(

TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]

TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Execute Task] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Create namespace] ***
changed: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Deploy vm test-vm] ***
changed: [localhost]
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (59 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (58 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (57 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (56 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (55 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (54 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (53 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (52 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (51 retries left).

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] ***
ok: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=18   changed=6    unreachable=0    failed=0    skipped=7    rescued=0    ignored=0
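Outside of Ansible, the same "Running & Ready" gate can be expressed with oc wait. A sketch, using the VM name and test namespace from the play above and KubeVirt's standard Ready condition:

# block until the KubeVirt VM reports Ready, then confirm the VMI phase
oc wait vm/test-vm -n test-oadp-185 --for=condition=Ready --timeout=10m
oc get vmi test-vm -n test-oadp-185 -o jsonpath='{.status.phase}{"\n"}'   # expect: Running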
2025/09/01 07:56:10
2025-09-01 07:55:04,030 p=20274 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] ***********************************
2025-09-01 07:55:04,030 p=20274 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 07:55:04,288 p=20274 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
2025-09-01 07:55:04,288 p=20274 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 07:55:04,599 p=20274 u=1002790000 n=ansible INFO| TASK [Get admin token] *********************************************************
2025-09-01 07:55:04,599 p=20274 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 07:55:04,874 p=20274 u=1002790000 n=ansible INFO| TASK [Get user token] **********************************************************
2025-09-01 07:55:04,874 p=20274 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 07:55:04,892 p=20274 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] *************************************
2025-09-01 07:55:04,892 p=20274 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 07:55:04,920 p=20274 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************
2025-09-01 07:55:04,920 p=20274 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 07:55:04,931 p=20274 u=1002790000 n=ansible INFO| TASK [Print token] *************************************************************
2025-09-01 07:55:04,932 p=20274 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" }
2025-09-01 07:55:10,969 p=20274 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] ***************************
2025-09-01 07:55:10,969 p=20274 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 07:55:11,011 p=20274 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] *************************************
2025-09-01 07:55:11,011 p=20274 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 07:55:11,028 p=20274 u=1002790000 n=ansible INFO| TASK [set_fact] ****************************************************************
2025-09-01 07:55:11,028 p=20274 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 07:55:11,029 p=20274 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************
2025-09-01 07:55:11,589 p=20274 u=1002790000 n=ansible INFO| TASK [Gathering Facts] *********************************************************
2025-09-01 07:55:11,589 p=20274 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 07:55:12,596 p=20274 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Create namespace] ***
2025-09-01 07:55:12,596 p=20274 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
2025-09-01 07:55:12,597 p=20274 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 07:55:13,380 p=20274 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Deploy vm test-vm] ***
2025-09-01 07:55:13,381 p=20274 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 07:55:14,197 p=20274 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left).
2025-09-01 07:55:19,805 p=20274 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (59 retries left).
2025-09-01 07:55:25,441 p=20274 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (58 retries left).
2025-09-01 07:55:31,085 p=20274 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (57 retries left).
2025-09-01 07:55:36,720 p=20274 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (56 retries left).
2025-09-01 07:55:42,375 p=20274 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (55 retries left).
2025-09-01 07:55:48,024 p=20274 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (54 retries left).
2025-09-01 07:55:53,641 p=20274 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (53 retries left).
2025-09-01 07:55:59,263 p=20274 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (52 retries left).
2025-09-01 07:56:04,883 p=20274 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (51 retries left).
2025-09-01 07:56:10,516 p=20274 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] ***
2025-09-01 07:56:10,516 p=20274 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 07:56:10,618 p=20274 u=1002790000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-09-01 07:56:10,618 p=20274 u=1002790000 n=ansible INFO| localhost : ok=18 changed=6 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
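The validation play that follows polls the same VM until the guest agent reports in. A rough one-line equivalent, assuming KubeVirt surfaces AgentConnected as a condition on the VM object (it is primarily set on the VMI, so the vmi form is the safer target):

oc wait vmi/test-vm -n test-oadp-185 --for=condition=AgentConnected --timeout=10m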
STEP: Verify Application deployment @ 09/01/25 07:56:10.667
2025/09/01 07:56:10 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [include_vars] ************************************************************
ok: [localhost]

TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Remove all the contents from the file] ***********************************
changed: [localhost]

TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]

TASK [Get admin token] *********************************************************
changed: [localhost]

TASK [Get user token] **********************************************************
changed: [localhost]

TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]

TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]

TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E"
}

TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]

TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Execute Task] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] ***
ok: [localhost]
FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left).
FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left).

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] ***
ok: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=17   changed=4    unreachable=0    failed=0    skipped=8    rescued=0    ignored=0
2025/09/01 07:56:26
2025-09-01 07:56:12,156 p=20649 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] ***********************************
2025-09-01 07:56:12,156 p=20649 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 07:56:12,416 p=20649 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
2025-09-01 07:56:12,416 p=20649 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 07:56:12,669 p=20649 u=1002790000 n=ansible INFO| TASK [Get admin token] *********************************************************
2025-09-01 07:56:12,669 p=20649 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 07:56:12,929 p=20649 u=1002790000 n=ansible INFO| TASK [Get user token] **********************************************************
2025-09-01 07:56:12,929 p=20649 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 07:56:12,944 p=20649 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] *************************************
2025-09-01 07:56:12,944 p=20649 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 07:56:12,961 p=20649 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************
2025-09-01 07:56:12,962 p=20649 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 07:56:12,973 p=20649 u=1002790000 n=ansible INFO| TASK [Print token] *************************************************************
2025-09-01 07:56:12,974 p=20649 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" }
2025-09-01 07:56:13,287 p=20649 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] ***************************
2025-09-01 07:56:13,287 p=20649 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 07:56:13,315 p=20649 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] *************************************
2025-09-01 07:56:13,316 p=20649 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 07:56:13,335 p=20649 u=1002790000 n=ansible INFO| TASK [set_fact] ****************************************************************
2025-09-01 07:56:13,335 p=20649 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 07:56:13,337 p=20649 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************
2025-09-01 07:56:13,894 p=20649 u=1002790000 n=ansible INFO| TASK [Gathering Facts] *********************************************************
2025-09-01 07:56:13,894 p=20649 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 07:56:14,832 p=20649 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] ***
2025-09-01 07:56:14,832 p=20649 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
2025-09-01 07:56:14,832 p=20649 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:56:15,495 p=20649 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left). 2025-09-01 07:56:21,133 p=20649 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left). 2025-09-01 07:56:26,748 p=20649 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] *** 2025-09-01 07:56:26,749 p=20649 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:56:26,753 p=20649 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 07:56:26,753 p=20649 u=1002790000 n=ansible INFO| localhost : ok=17 changed=4 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2025/09/01 07:56:26 {{ } { } [{{ } {test-vm-dv test-oadp-185 fef59522-451b-4e53-ad19-ad00c9763488 71412 0 2025-09-01 07:55:13 +0000 UTC map[app:containerized-data-importer app.kubernetes.io/component:storage app.kubernetes.io/managed-by:cdi-controller app.kubernetes.io/part-of:hyperconverged-cluster app.kubernetes.io/version:4.19.0 kubevirt.io/created-by:15e4152e-bb9e-4af3-964b-5053660fd2d2] map[cdi.kubevirt.io/allowClaimAdoption:true cdi.kubevirt.io/createdForDataVolume:b918574f-6a99-49c4-a631-9ee05e4976de cdi.kubevirt.io/storage.condition.running:false cdi.kubevirt.io/storage.condition.running.message:Import Complete cdi.kubevirt.io/storage.condition.running.reason:Completed cdi.kubevirt.io/storage.contentType:kubevirt cdi.kubevirt.io/storage.deleteAfterCompletion:false cdi.kubevirt.io/storage.pod.phase:Succeeded cdi.kubevirt.io/storage.pod.restarts:0 cdi.kubevirt.io/storage.populator.progress:100.0% cdi.kubevirt.io/storage.preallocation.requested:false cdi.kubevirt.io/storage.usePopulator:true pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [{cdi.kubevirt.io/v1beta1 DataVolume test-vm-dv b918574f-6a99-49c4-a631-9ee05e4976de 0xc000eadd2a 0xc000eadd2b}] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2025-09-01 07:55:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-09-01 07:55:58 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status} {virt-cdi-controller Update v1 2025-09-01 07:55:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:cdi.kubevirt.io/allowClaimAdoption":{},"f:cdi.kubevirt.io/createdForDataVolume":{},"f:cdi.kubevirt.io/storage.condition.running":{},"f:cdi.kubevirt.io/storage.condition.running.message":{},"f:cdi.kubevirt.io/storage.condition.running.reason":{},"f:cdi.kubevirt.io/storage.contentType":{},"f:cdi.kubevirt.io/storage.deleteAfterCompletion":{},"f:cdi.kubevirt.io/storage.pod.phase":{},"f:cdi.kubevirt.io/storage.pod.restarts":{},"f:cdi.kubevirt.io/storage.populator.progress":{},"f:cdi.kubevirt.io/storage.preallocation.requested":{},"f:cdi.kubevirt.io/storage.usePopulator":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:kubevirt.io/created-by":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b918574f-6a99-49c4-a631-9ee05e4976de\"}":{}}},"f:spec":{"f:accessModes":{},"f:dataSourceRef":{".":{},"f:apiGroup":{},"f:kind":{},"f:name":{}},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} }]} {[ReadWriteOnce] nil {map[] map[storage:{{5368709120 0} {} 5Gi BinarySI}]} pvc-c1a0f678-94f7-4b59-bf15-a61fdc32c546 0xc00040a890 0xc00040a8a0 &TypedLocalObjectReference{APIGroup:*cdi.kubevirt.io,Kind:VolumeImportSource,Name:volume-import-source-b918574f-6a99-49c4-a631-9ee05e4976de,} &TypedObjectReference{APIGroup:*cdi.kubevirt.io,Kind:VolumeImportSource,Name:volume-import-source-b918574f-6a99-49c4-a631-9ee05e4976de,Namespace:nil,} } {Bound [ReadWriteOnce] map[storage:{{5368709120 0} {} 5Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7 @ 09/01/25 07:56:26.806 2025/09/01 07:56:26 Wait until backup ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7 is completed backup phase: Completed 2025/09/01 07:56:46 Verify the Backup has CSIVolumeSnapshotsAttempted and CSIVolumeSnapshotsCompleted field on status 2025/09/01 07:56:46 Run velero describe on the backup 2025/09/01 07:56:46 [./velero describe backup ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7 -n openshift-adp --details --insecure-skip-tls-verify] 2025/09/01 07:56:47 Exec stderr: "" 2025/09/01 07:56:47 Name: ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7 Namespace: openshift-adp Labels: velero.io/storage-location=ts-dpa-1 Annotations: velero.io/resource-timeout=10m0s velero.io/source-cluster-k8s-gitversion=v1.33.3 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=33 Phase: Completed Namespaces: Included: test-oadp-185 Excluded: Resources: Included: * Excluded: Cluster-scoped: auto Label selector: Or label selector: Storage Location: ts-dpa-1 Velero-Native Snapshot PVs: auto Snapshot Move Data: false Data Mover: velero TTL: 720h0m0s CSISnapshotTimeout: 10m0s ItemOperationTimeout: 4h0m0s Hooks: Backup Format Version: 1.1.0 Started: 2025-09-01 07:56:27 +0000 UTC Completed: 2025-09-01 07:56:36 +0000 UTC Expiration: 2025-10-01 07:56:26 +0000 UTC Total items to be backed up: 83 Items backed up: 83 Backup Item Operations: Operation for volumesnapshots.snapshot.storage.k8s.io test-oadp-185/velero-test-vm-dv-v7vbf: Backup Item Action Plugin: velero.io/csi-volumesnapshot-backupper Operation ID: test-oadp-185/velero-test-vm-dv-v7vbf/2025-09-01T07:56:34Z Items to Update: volumesnapshots.snapshot.storage.k8s.io test-oadp-185/velero-test-vm-dv-v7vbf volumesnapshotcontents.snapshot.storage.k8s.io /snapcontent-28b0ab49-fefe-47ca-b7e1-e68abf4e0113 Phase: Completed Created: 2025-09-01 07:56:34 +0000 
UTC Started: 2025-09-01 07:56:34 +0000 UTC Updated: 2025-09-01 07:56:34 +0000 UTC Resource List: apiextensions.k8s.io/v1/CustomResourceDefinition: - datavolumes.cdi.kubevirt.io - virtualmachineinstances.kubevirt.io - virtualmachines.kubevirt.io apps/v1/ControllerRevision: - test-oadp-185/revision-start-vm-15e4152e-bb9e-4af3-964b-5053660fd2d2-1 authorization.openshift.io/v1/RoleBinding: - test-oadp-185/system:deployers - test-oadp-185/system:image-builders - test-oadp-185/system:image-pullers cdi.kubevirt.io/v1beta1/DataVolume: - test-oadp-185/test-vm-dv kubevirt.io/v1/VirtualMachine: - test-oadp-185/test-vm kubevirt.io/v1/VirtualMachineInstance: - test-oadp-185/test-vm policy/v1/PodDisruptionBudget: - test-oadp-185/kubevirt-disruption-budget-2h8qx rbac.authorization.k8s.io/v1/RoleBinding: - test-oadp-185/system:deployers - test-oadp-185/system:image-builders - test-oadp-185/system:image-pullers snapshot.storage.k8s.io/v1/VolumeSnapshot: - test-oadp-185/velero-test-vm-dv-v7vbf snapshot.storage.k8s.io/v1/VolumeSnapshotClass: - example-snapclass snapshot.storage.k8s.io/v1/VolumeSnapshotContent: - snapcontent-28b0ab49-fefe-47ca-b7e1-e68abf4e0113 v1/ConfigMap: - test-oadp-185/kube-root-ca.crt - test-oadp-185/openshift-service-ca.crt v1/Event: - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.186119738144ae82 - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.18611973824eee6b - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.18611974fc1c3b95 - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.18611975211d9978 - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.18611977780c2995 - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.18611977780c731a - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.1861197799d82e31 - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.186119779b7cba82 - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.18611977a6525d99 - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.18611977b27071c6 - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.18611977de97c312 - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.18611977e9c3112e - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.18611977eebd8d3e - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.18611977eec9cf06 - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.18611979cd39fcbb - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.18611979d86ee80b - test-oadp-185/importer-prime-fef59522-451b-4e53-ad19-ad00c9763488.18611979dd72d609 - test-oadp-185/kubevirt-disruption-budget-2h8qx.1861197e0d2d6065 - test-oadp-185/prime-fef59522-451b-4e53-ad19-ad00c9763488.186119737e7272bb - test-oadp-185/prime-fef59522-451b-4e53-ad19-ad00c9763488.18611974eb5b4c89 - test-oadp-185/prime-fef59522-451b-4e53-ad19-ad00c9763488.18611974eb5cf653 - test-oadp-185/prime-fef59522-451b-4e53-ad19-ad00c9763488.18611974f9f0d6bf - test-oadp-185/prime-fef59522-451b-4e53-ad19-ad00c9763488.1861197e08c6e003 - test-oadp-185/prime-fef59522-451b-4e53-ad19-ad00c9763488.1861197f65b18230 - test-oadp-185/test-vm-dv.186119737d1efacc - test-oadp-185/test-vm-dv.186119737d8c952a - test-oadp-185/test-vm-dv.186119737dbb5e72 - test-oadp-185/test-vm-dv.186119737ea8b0ea - test-oadp-185/test-vm-dv.18611974eb2dd1e1 - test-oadp-185/test-vm-dv.18611974eb443756 - test-oadp-185/test-vm-dv.18611974eb4466dd - 
test-oadp-185/test-vm-dv.18611974ff37ff00 - test-oadp-185/test-vm-dv.18611979ff63a590 - test-oadp-185/test-vm-dv.1861197dbdb0ec4a - test-oadp-185/test-vm-dv.1861197e0996d653 - test-oadp-185/test-vm-dv.1861197e0a13ac99 - test-oadp-185/test-vm-dv.1861197e0bc90de9 - test-oadp-185/test-vm.186119737ad7f95a - test-oadp-185/test-vm.1861197e0c8d8bc8 - test-oadp-185/test-vm.1861197e0d1ae2e3 - test-oadp-185/test-vm.1861197e0fa7da50 - test-oadp-185/test-vm.1861197f6b2ff8e3 - test-oadp-185/test-vm.1861197f6ce8c369 - test-oadp-185/test-vm.1861197f719b940a - test-oadp-185/virt-launcher-test-vm-vbjx5.1861197e103f7461 - test-oadp-185/virt-launcher-test-vm-vbjx5.1861197e42254ed2 - test-oadp-185/virt-launcher-test-vm-vbjx5.1861197e4225900b - test-oadp-185/virt-launcher-test-vm-vbjx5.1861197e58a0776c - test-oadp-185/virt-launcher-test-vm-vbjx5.1861197e5a8aa528 - test-oadp-185/virt-launcher-test-vm-vbjx5.1861197e6f9d6b49 - test-oadp-185/virt-launcher-test-vm-vbjx5.1861197e75247f98 - test-oadp-185/virt-launcher-test-vm-vbjx5.1861197e75335b20 - test-oadp-185/virt-launcher-test-vm-vbjx5.1861197ea9c570d3 - test-oadp-185/virt-launcher-test-vm-vbjx5.1861197eaec64f26 v1/Namespace: - test-oadp-185 v1/PersistentVolume: - pvc-c1a0f678-94f7-4b59-bf15-a61fdc32c546 v1/PersistentVolumeClaim: - test-oadp-185/test-vm-dv v1/Pod: - test-oadp-185/virt-launcher-test-vm-vbjx5 v1/Secret: - test-oadp-185/builder-dockercfg-gn7nq - test-oadp-185/default-dockercfg-254x7 - test-oadp-185/deployer-dockercfg-b7hkz v1/ServiceAccount: - test-oadp-185/builder - test-oadp-185/default - test-oadp-185/deployer Backup Volumes: Velero-Native Snapshots: CSI Snapshots: test-oadp-185/test-vm-dv: Snapshot: Operation ID: test-oadp-185/velero-test-vm-dv-v7vbf/2025-09-01T07:56:34Z Snapshot Content Name: snapcontent-28b0ab49-fefe-47ca-b7e1-e68abf4e0113 Storage Snapshot ID: 0001-0011-openshift-storage-0000000000000002-71c0fadf-6c77-4ed8-9063-999b60806c30 Snapshot Size (bytes): 5368709120 CSI Driver: openshift-storage.rbd.csi.ceph.com Result: succeeded Pod Volume Backups: HooksAttempted: 2 HooksFailed: 0 STEP: Verify backup ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7 has completed successfully @ 09/01/25 07:56:47.613 2025/09/01 07:56:47 Backup for case ocp-kubevirt succeeded STEP: Delete the application resources ocp-kubevirt @ 09/01/25 07:56:47.672 STEP: Cleanup Application for case ocp-kubevirt @ 09/01/25 07:56:47.672 2025/09/01 07:56:47 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
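The cleanup playbook's only substantive step is deleting the test namespace and blocking until Kubernetes finishes terminating it. A minimal sketch of that task, assuming kubernetes.core (the wait_timeout value is illustrative):

- name: Remove namespace test-oadp-185
  kubernetes.core.k8s:
    api_version: v1
    kind: Namespace
    name: test-oadp-185
    state: absent
    wait: true          # block until the namespace is fully gone
    wait_timeout: 300   # illustrative; deletion takes about 25 seconds in the timestamped log below

Waiting matters here: the next test case re-creates resources with the same names, so returning before termination completes would cause spurious conflicts.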
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-185] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/09/01 07:57:16 2025-09-01 07:56:49,159 p=20906 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 07:56:49,160 p=20906 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:56:49,412 p=20906 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 07:56:49,413 p=20906 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:56:49,662 p=20906 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 07:56:49,662 p=20906 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:56:49,910 p=20906 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 07:56:49,910 p=20906 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:56:49,924 p=20906 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 07:56:49,924 p=20906 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:56:49,942 p=20906 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 07:56:49,943 p=20906 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:56:49,954 p=20906 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 07:56:49,954 p=20906 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 07:56:50,266 p=20906 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 07:56:50,267 p=20906 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:56:50,295 p=20906 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 07:56:50,295 p=20906 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:56:50,312 p=20906 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 07:56:50,312 p=20906 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:56:50,314 p=20906 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 07:56:50,879 p=20906 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 07:56:50,879 p=20906 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:57:16,695 p=20906 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-185] *** 2025-09-01 07:57:16,696 p=20906 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 07:57:16,696 p=20906 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:57:16,865 p=20906 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 07:57:16,865 p=20906 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 2025/09/01 07:57:16 Creating restore ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7 for case ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7 STEP: Create restore ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7 from backup ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7 @ 09/01/25 07:57:16.911 2025/09/01 07:57:16 Wait until restore ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7 is complete restore phase: Completed STEP: Verify restore ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7 has completed successfully @ 09/01/25 07:57:26.938 STEP: Verify Application restore @ 09/01/25 07:57:26.941 STEP: Verify Application deployment for case ocp-kubevirt @ 09/01/25 07:57:26.941 2025/09/01 07:57:26 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => { "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => { "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
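The restore created a few lines up is an ordinary Velero Restore CR that points back at the backup by name. A sketch with names taken from this run; restorePVs is an assumption, since the harness never prints the restore spec:

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7
  namespace: openshift-adp
spec:
  backupName: ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7
  restorePVs: true   # assumption; not shown in this log

Because the backup used CSI snapshots with Snapshot Move Data: false, the restored PVC is provisioned from the VolumeSnapshot on the cluster rather than pulled from object storage.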
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** ok: [localhost] FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left). FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left). FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (58 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=4  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025/09/01 07:57:48 2025-09-01 07:57:28,416 p=21123 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 07:57:28,417 p=21123 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:57:28,663 p=21123 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 07:57:28,663 p=21123 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:57:28,914 p=21123 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 07:57:28,914 p=21123 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:57:29,171 p=21123 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 07:57:29,171 p=21123 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:57:29,184 p=21123 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 07:57:29,184 p=21123 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:57:29,203 p=21123 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 07:57:29,203 p=21123 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:57:29,214 p=21123 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 07:57:29,215 p=21123 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 07:57:29,515 p=21123 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 07:57:29,515 p=21123 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:57:29,543 p=21123 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 07:57:29,543 p=21123 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:57:29,560 p=21123 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 07:57:29,560 p=21123 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:57:29,562 p=21123 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 07:57:30,118 p=21123 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 07:57:30,118 p=21123 u=1002790000 n=ansible INFO| ok: 
[localhost] 2025-09-01 07:57:31,045 p=21123 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** 2025-09-01 07:57:31,046 p=21123 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-09-01 07:57:31,046 p=21123 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:57:31,703 p=21123 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left). 2025-09-01 07:57:37,365 p=21123 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left). 2025-09-01 07:57:43,019 p=21123 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (58 retries left). 2025-09-01 07:57:48,672 p=21123 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] *** 2025-09-01 07:57:48,672 p=21123 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:57:48,677 p=21123 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 07:57:48,677 p=21123 u=1002790000 n=ansible INFO| localhost : ok=17 changed=4 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 < Exit [It] [tc-id:OADP-185] [kubevirt] Backing up started VM should succeed @ 09/01/25 07:57:48.723 (3m5.824s) > Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 07:57:48.724 2025/09/01 07:57:48 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 07:57:48.724 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 07:57:48.724 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 07:57:48.727 (4ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 07:57:48.727 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 07:57:48.727 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 07:57:48.727 2025/09/01 07:57:48 Cleaning app 2025/09/01 07:57:48 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
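The JustAfterEach hook earlier printed the must-gather image it would use for diagnostics (registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0). The underlying invocation is oc adm must-gather; expressed as an Ansible task for consistency with the rest of this log, with an illustrative dest-dir:

- name: Collect OADP diagnostics with must-gather (sketch)
  ansible.builtin.command: >-
    oc adm must-gather
    --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
    --dest-dir=/tmp/oadp-must-gather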
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-185] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/09/01 07:58:12 2025-09-01 07:57:50,210 p=21392 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 07:57:50,211 p=21392 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:57:50,474 p=21392 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 07:57:50,474 p=21392 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:57:50,726 p=21392 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 07:57:50,726 p=21392 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:57:50,974 p=21392 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 07:57:50,974 p=21392 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:57:50,988 p=21392 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 07:57:50,988 p=21392 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:57:51,007 p=21392 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 07:57:51,007 p=21392 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:57:51,019 p=21392 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 07:57:51,019 p=21392 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 07:57:51,326 p=21392 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 07:57:51,327 p=21392 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:57:51,352 p=21392 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 07:57:51,352 p=21392 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:57:51,370 p=21392 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 07:57:51,370 p=21392 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:57:51,372 p=21392 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 07:57:51,928 p=21392 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 07:57:51,928 p=21392 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:58:12,760 p=21392 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-185] *** 2025-09-01 07:58:12,760 p=21392 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 07:58:12,760 p=21392 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:58:12,928 p=21392 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 07:58:12,928 p=21392 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 07:58:12.975 (24.247s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 07:58:12.975 2025/09/01 07:58:12 Cleaning setup resources for the backup 2025/09/01 07:58:12 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/09/01 07:58:12 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/09/01 07:58:12 Deleting VolumeSnapshotClass 'example-snapclass' < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 07:58:12.995 (20ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 07:58:12.995 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 07:58:13.005 (10ms) • [210.114 seconds] ------------------------------ CSI: Backup/Restore Openshift Virtualization Workloads  [tc-id:OADP-186] [kubevirt] Stopped VM should be restored /alabama/cspi/e2e/kubevirt-plugin/backup_restore_csi.go:52 > Enter [BeforeEach] CSI: Backup/Restore Openshift Virtualization Workloads @ 09/01/25 07:58:13.005 < Exit [BeforeEach] CSI: Backup/Restore Openshift Virtualization Workloads @ 09/01/25 07:58:13.033 (28ms) > Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 07:58:13.033 < Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 07:58:13.033 (0s) > Enter [It] [tc-id:OADP-186] [kubevirt] Stopped VM should be restored @ 09/01/25 07:58:13.033 2025/09/01 07:58:13 Delete all downloadrequest ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7-0f611485-3eed-421b-acac-0a8b7d5cff12 ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7-5695bda9-a8d7-4983-9804-fa83b78ec059 ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7-7987f3e6-da24-45b3-bcbd-2e9ac4415895 STEP: Create DPA CR @ 09/01/25 07:58:13.165 2025/09/01 07:58:13 csi 2025/09/01 07:58:13 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "c61f5c6a-aad9-4843-b740-aea02f359515", "resourceVersion": "73912", "generation": 1, "creationTimestamp": "2025-09-01T07:58:13Z", "managedFields": [ { "manager": "kubevirt-plugin.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T07:58:13Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "kubevirt" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: 
Verify DPA CR setup @ 09/01/25 07:58:13.205 2025/09/01 07:58:13 Waiting for velero pod to be running 2025/09/01 07:58:13 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' 2025/09/01 07:58:13 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "c61f5c6a-aad9-4843-b740-aea02f359515", "resourceVersion": "73912", "generation": 1, "creationTimestamp": "2025-09-01T07:58:13Z", "managedFields": [ { "manager": "kubevirt-plugin.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T07:58:13Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "kubevirt" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false } }, "features": null, "logFormat": "text" }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 07:58:18.221 Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false 2025/09/01 07:58:18 The 'openshift-storage' namespace exists 2025/09/01 07:58:18 Checking default storage class count 2025/09/01 07:58:18 Using the CSI driver: openshift-storage.rbd.csi.ceph.com 2025/09/01 07:58:18 Snapclass 'example-snapclass' doesn't exist, creating 2025/09/01 07:58:18 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/09/01 07:58:18 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd STEP: Installing application for case ocp-kubevirt @ 09/01/25 07:58:18.537 2025/09/01 07:58:18 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Deploy vm test-vm] *** changed: [localhost] FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (59 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (58 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (57 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (56 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (55 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (54 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (53 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (52 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (51 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (50 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (49 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (48 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (47 retries left). 
FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (46 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (45 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Shutdown the VM if required] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM status to become 'Stopped'] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=20  changed=7  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025/09/01 07:59:55 2025-09-01 07:58:20,011 p=21630 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 07:58:20,011 p=21630 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:58:20,265 p=21630 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 07:58:20,266 p=21630 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:58:20,513 p=21630 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 07:58:20,513 p=21630 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:58:20,763 p=21630 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 07:58:20,763 p=21630 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:58:20,778 p=21630 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 07:58:20,778 p=21630 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:58:20,796 p=21630 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 07:58:20,796 p=21630 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:58:20,809 p=21630 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 07:58:20,809 p=21630 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 07:58:21,119 p=21630 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 07:58:21,120 p=21630 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:58:21,147 p=21630 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 07:58:21,147 p=21630 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:58:21,165 p=21630 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 07:58:21,165 p=21630 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:58:21,167 p=21630 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 07:58:21,723 p=21630 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 07:58:21,723 p=21630 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:58:22,557 p=21630 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : 
Create namespace] *** 2025-09-01 07:58:22,557 p=21630 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-09-01 07:58:22,557 p=21630 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:58:23,245 p=21630 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Deploy vm test-vm] *** 2025-09-01 07:58:23,245 p=21630 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:58:24,017 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left). 2025-09-01 07:58:29,628 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (59 retries left). 2025-09-01 07:58:35,248 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (58 retries left). 2025-09-01 07:58:40,871 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (57 retries left). 2025-09-01 07:58:46,493 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (56 retries left). 2025-09-01 07:58:52,111 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (55 retries left). 2025-09-01 07:58:57,746 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (54 retries left). 2025-09-01 07:59:03,370 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (53 retries left). 2025-09-01 07:59:08,993 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (52 retries left). 2025-09-01 07:59:14,627 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (51 retries left). 2025-09-01 07:59:20,256 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (50 retries left). 2025-09-01 07:59:25,923 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (49 retries left). 2025-09-01 07:59:31,548 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (48 retries left). 2025-09-01 07:59:37,179 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (47 retries left). 2025-09-01 07:59:42,838 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (46 retries left). 2025-09-01 07:59:48,475 p=21630 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (45 retries left). 
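The deploy phase above creates the namespace, applies the VM, and polls until it is Running & Ready; the roughly 90 seconds of retries cover the CDI import performed by the importer-prime pods seen in the backup's event list. A rough sketch of such a VirtualMachine manifest; the disk source, image, bus, and memory size are assumptions (the role's template is not shown), while the DataVolume name, size, and access mode match the PVC dumps in this log:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: test-vm
  namespace: test-oadp-186
spec:
  runStrategy: Always   # the test stops the VM in a later step
  dataVolumeTemplates:
    - metadata:
        name: test-vm-dv
      spec:
        storage:
          accessModes: [ReadWriteOnce]   # matches the PVC dump
          resources:
            requests:
              storage: 5Gi               # matches the PVC dump
        source:
          registry:   # assumption; the log only shows importer pods
            url: docker://quay.io/containerdisks/fedora:latest   # illustrative image
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio   # assumption
        resources:
          requests:
            memory: 2Gi   # assumption
      volumes:
        - name: rootdisk
          dataVolume:
            name: test-vm-dv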
2025-09-01 07:59:54,162 p=21630 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** 2025-09-01 07:59:54,162 p=21630 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:59:54,917 p=21630 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Shutdown the VM if required] *** 2025-09-01 07:59:54,918 p=21630 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:59:55,595 p=21630 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM status to become 'Stopped'] *** 2025-09-01 07:59:55,595 p=21630 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:59:55,657 p=21630 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 07:59:55,657 p=21630 u=1002790000 n=ansible INFO| localhost : ok=20 changed=7 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 STEP: Verify Application deployment @ 09/01/25 07:59:55.711 2025/09/01 07:59:55 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
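For this stopped-VM case, "Shutdown the VM if required" amounts to halting the VM's run strategy and then waiting for status 'Stopped' (virtctl stop test-vm is the CLI equivalent). A minimal sketch of the patch variant, assuming kubernetes.core:

- name: Shutdown the VM if required (sketch)
  kubernetes.core.k8s:
    state: patched
    api_version: kubevirt.io/v1
    kind: VirtualMachine
    name: test-vm
    namespace: test-oadp-186
    definition:
      spec:
        runStrategy: Halted   # assumption; the role may use virtctl or the stop subresource instead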
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Verify VM is not in running state] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=4  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/09/01 07:59:59 2025-09-01 07:59:57,198 p=22107 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 07:59:57,199 p=22107 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:59:57,457 p=22107 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 07:59:57,457 p=22107 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:59:57,708 p=22107 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 07:59:57,708 p=22107 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:59:57,960 p=22107 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 07:59:57,961 p=22107 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 07:59:57,976 p=22107 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 07:59:57,976 p=22107 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:59:57,994 p=22107 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 07:59:57,994 p=22107 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:59:58,007 p=22107 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 07:59:58,007 p=22107 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 07:59:58,328 p=22107 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 07:59:58,329 p=22107 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:59:58,357 p=22107 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 07:59:58,357 p=22107 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:59:58,374 p=22107 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 07:59:58,375 p=22107 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:59:58,376 p=22107 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 07:59:58,937 p=22107 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 07:59:58,937 p=22107 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:59:59,853 p=22107 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Verify VM is not in running state] *** 2025-09-01 07:59:59,853 p=22107 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 07:59:59,853 p=22107 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 07:59:59,898 p=22107 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 07:59:59,898 p=22107 u=1002790000 n=ansible INFO| localhost : ok=16 changed=4 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 2025/09/01 07:59:59 {{ } { } [{{ } {test-vm-dv test-oadp-186 6fd61f56-d299-43a1-adee-c5a37f3b295f 75627 0 2025-09-01 07:58:23 +0000 UTC map[app:containerized-data-importer app.kubernetes.io/component:storage app.kubernetes.io/managed-by:cdi-controller app.kubernetes.io/part-of:hyperconverged-cluster app.kubernetes.io/version:4.19.0 kubevirt.io/created-by:c419fb6f-9bb0-4c48-aa1c-e77781e3c473] map[cdi.kubevirt.io/allowClaimAdoption:true cdi.kubevirt.io/createdForDataVolume:e75274d4-2258-465e-945c-bd9e328f6d18 cdi.kubevirt.io/storage.condition.running:false cdi.kubevirt.io/storage.condition.running.message:Import Complete cdi.kubevirt.io/storage.condition.running.reason:Completed cdi.kubevirt.io/storage.contentType:kubevirt cdi.kubevirt.io/storage.deleteAfterCompletion:false cdi.kubevirt.io/storage.pod.phase:Succeeded cdi.kubevirt.io/storage.pod.restarts:0 cdi.kubevirt.io/storage.populator.progress:100.0% cdi.kubevirt.io/storage.preallocation.requested:false cdi.kubevirt.io/storage.usePopulator:true pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:test-vm-dv-1756713585 reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [{cdi.kubevirt.io/v1beta1 DataVolume test-vm-dv e75274d4-2258-465e-945c-bd9e328f6d18 0xc00110872a 0xc00110872b}] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2025-09-01 07:59:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-09-01 07:59:35 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status} {virt-cdi-controller Update v1 2025-09-01 07:59:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cdi.kubevirt.io/allowClaimAdoption":{},"f:cdi.kubevirt.io/createdForDataVolume":{},"f:cdi.kubevirt.io/storage.condition.running":{},"f:cdi.kubevirt.io/storage.condition.running.message":{},"f:cdi.kubevirt.io/storage.condition.running.reason":{},"f:cdi.kubevirt.io/storage.contentType":{},"f:cdi.kubevirt.io/storage.deleteAfterCompletion":{},"f:cdi.kubevirt.io/storage.pod.phase":{},"f:cdi.kubevirt.io/storage.pod.restarts":{},"f:cdi.kubevirt.io/storage.populator.progress":{},"f:cdi.kubevirt.io/storage.preallocation.requested":{},"f:cdi.kubevirt.io/storage.usePopulator":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:kubevirt.io/created-by":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e75274d4-2258-465e-945c-bd9e328f6d18\"}":{}}},"f:spec":{"f:accessModes":{},"f:dataSourceRef":{".":{},"f:apiGroup":{},"f:kind":{},"f:name":{}},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-09-01 
07:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} }]} {[ReadWriteOnce] nil {map[] map[storage:{{5368709120 0} {} 5Gi BinarySI}]} pvc-2455787d-1f5d-4e1f-91d2-5fd558a27917 0xc0008110b0 0xc0008110c0 &TypedLocalObjectReference{APIGroup:*cdi.kubevirt.io,Kind:VolumeImportSource,Name:volume-import-source-e75274d4-2258-465e-945c-bd9e328f6d18,} &TypedObjectReference{APIGroup:*cdi.kubevirt.io,Kind:VolumeImportSource,Name:volume-import-source-e75274d4-2258-465e-945c-bd9e328f6d18,Namespace:nil,} } {Bound [ReadWriteOnce] map[storage:{{5368709120 0} {} 5Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7 @ 09/01/25 07:59:59.957 2025/09/01 07:59:59 Wait until backup ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7 is completed backup phase: Completed 2025/09/01 08:00:19 Verify the Backup has CSIVolumeSnapshotsAttempted and CSIVolumeSnapshotsCompleted field on status 2025/09/01 08:00:20 Run velero describe on the backup 2025/09/01 08:00:20 [./velero describe backup ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7 -n openshift-adp --details --insecure-skip-tls-verify] 2025/09/01 08:00:20 Exec stderr: "" 2025/09/01 08:00:20 Name: ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7 Namespace: openshift-adp Labels: velero.io/storage-location=ts-dpa-1 Annotations: velero.io/resource-timeout=10m0s velero.io/source-cluster-k8s-gitversion=v1.33.3 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=33 Phase: Completed Namespaces: Included: test-oadp-186 Excluded: Resources: Included: * Excluded: Cluster-scoped: auto Label selector: Or label selector: Storage Location: ts-dpa-1 Velero-Native Snapshot PVs: auto Snapshot Move Data: false Data Mover: velero TTL: 720h0m0s CSISnapshotTimeout: 10m0s ItemOperationTimeout: 4h0m0s Hooks: Backup Format Version: 1.1.0 Started: 2025-09-01 08:00:00 +0000 UTC Completed: 2025-09-01 08:00:08 +0000 UTC Expiration: 2025-10-01 07:59:59 +0000 UTC Total items to be backed up: 87 Items backed up: 87 Backup Item Operations: Operation for volumesnapshots.snapshot.storage.k8s.io test-oadp-186/velero-test-vm-dv-zxx8x: Backup Item Action Plugin: velero.io/csi-volumesnapshot-backupper Operation ID: test-oadp-186/velero-test-vm-dv-zxx8x/2025-09-01T08:00:06Z Items to Update: volumesnapshots.snapshot.storage.k8s.io test-oadp-186/velero-test-vm-dv-zxx8x volumesnapshotcontents.snapshot.storage.k8s.io /snapcontent-adb1a06c-db07-4a58-aac4-26ab73e999ff Phase: Completed Created: 2025-09-01 08:00:06 +0000 UTC Started: 2025-09-01 08:00:06 +0000 UTC Updated: 2025-09-01 08:00:07 +0000 UTC Resource List: apiextensions.k8s.io/v1/CustomResourceDefinition: - datavolumes.cdi.kubevirt.io - reclaimspacecronjobs.csiaddons.openshift.io - virtualmachines.kubevirt.io authorization.openshift.io/v1/RoleBinding: - test-oadp-186/system:deployers - test-oadp-186/system:image-builders - test-oadp-186/system:image-pullers cdi.kubevirt.io/v1beta1/DataVolume: - test-oadp-186/test-vm-dv csiaddons.openshift.io/v1alpha1/ReclaimSpaceCronJob: - test-oadp-186/test-vm-dv-1756713585 kubevirt.io/v1/VirtualMachine: - test-oadp-186/test-vm rbac.authorization.k8s.io/v1/RoleBinding: - test-oadp-186/system:deployers - test-oadp-186/system:image-builders - test-oadp-186/system:image-pullers snapshot.storage.k8s.io/v1/VolumeSnapshot: - test-oadp-186/velero-test-vm-dv-zxx8x snapshot.storage.k8s.io/v1/VolumeSnapshotClass: - 
example-snapclass snapshot.storage.k8s.io/v1/VolumeSnapshotContent: - snapcontent-adb1a06c-db07-4a58-aac4-26ab73e999ff v1/ConfigMap: - test-oadp-186/kube-root-ca.crt - test-oadp-186/openshift-service-ca.crt v1/Event: - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.1861199fb28e06b5 - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a252d4bb9f - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a262077d72 - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a288420a98 - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a4b27f77bb - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a4b27fc9bd - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a4db30c9da - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a4ddff4d76 - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a4e92ccb3a - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a4f57c76ed - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a517d9bbd1 - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a5228e23ad - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a5278139ab - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a5278f7af4 - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a531f3ecd7 - test-oadp-186/importer-prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a536f4aab9 - test-oadp-186/kubevirt-disruption-budget-sk6dl.186119b098b158f2 - test-oadp-186/prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.1861199fb1724f0c - test-oadp-186/prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a252cdc7b1 - test-oadp-186/prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a252d0eb82 - test-oadp-186/prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a25f7ce54d - test-oadp-186/prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119a99bef2252 - test-oadp-186/prime-6fd61f56-d299-43a1-adee-c5a37f3b295f.186119b3c9493168 - test-oadp-186/test-vm-dv.1861199fb060b2d4 - test-oadp-186/test-vm-dv.1861199fb0c254db - test-oadp-186/test-vm-dv.1861199fb0daad5a - test-oadp-186/test-vm-dv.1861199fb0dcf77f - test-oadp-186/test-vm-dv.1861199fb0dd120f - test-oadp-186/test-vm-dv.1861199fb1152ecc - test-oadp-186/test-vm-dv.186119a265c8639f - test-oadp-186/test-vm-dv.186119a559afd4d7 - test-oadp-186/test-vm-dv.186119a91757b489 - test-oadp-186/test-vm-dv.186119b094cd780c - test-oadp-186/test-vm-dv.186119b094f96cd3 - test-oadp-186/test-vm-dv.186119b097b80ef1 - test-oadp-186/test-vm.1861199fae00cb8d - test-oadp-186/test-vm.186119b0982b3462 - test-oadp-186/test-vm.186119b0988d80cc - test-oadp-186/test-vm.186119b09ce35f99 - test-oadp-186/test-vm.186119b415d1fa49 - test-oadp-186/test-vm.186119b41781d0e5 - test-oadp-186/test-vm.186119b41b5359ab - test-oadp-186/test-vm.186119b502d347dc - test-oadp-186/test-vm.186119b50322a39b - test-oadp-186/test-vm.186119b5036878e5 - test-oadp-186/test-vm.186119b5109b03b5 - test-oadp-186/test-vm.186119b512537484 - test-oadp-186/test-vm.186119b51515a723 - test-oadp-186/virt-launcher-test-vm-82fnp.186119b09e99193a - test-oadp-186/virt-launcher-test-vm-82fnp.186119b0bef71946 - test-oadp-186/virt-launcher-test-vm-82fnp.186119b2f8e9084b - test-oadp-186/virt-launcher-test-vm-82fnp.186119b2f8e9870d - test-oadp-186/virt-launcher-test-vm-82fnp.186119b3169cbbbd - test-oadp-186/virt-launcher-test-vm-82fnp.186119b318233be0 - 
test-oadp-186/virt-launcher-test-vm-82fnp.186119b3273d4156 - test-oadp-186/virt-launcher-test-vm-82fnp.186119b32c3bc3a6 - test-oadp-186/virt-launcher-test-vm-82fnp.186119b32c4dd5f9 - test-oadp-186/virt-launcher-test-vm-82fnp.186119b366225e5f - test-oadp-186/virt-launcher-test-vm-82fnp.186119b36b1476b2 - test-oadp-186/virt-launcher-test-vm-82fnp.186119b50389709b - test-oadp-186/virt-launcher-test-vm-82fnp.186119b5038ac40a v1/Namespace: - test-oadp-186 v1/PersistentVolume: - pvc-2455787d-1f5d-4e1f-91d2-5fd558a27917 v1/PersistentVolumeClaim: - test-oadp-186/test-vm-dv v1/Secret: - test-oadp-186/builder-dockercfg-95d9f - test-oadp-186/default-dockercfg-fk7d7 - test-oadp-186/deployer-dockercfg-td4lg v1/ServiceAccount: - test-oadp-186/builder - test-oadp-186/default - test-oadp-186/deployer Backup Volumes: Velero-Native Snapshots: CSI Snapshots: test-oadp-186/test-vm-dv: Snapshot: Operation ID: test-oadp-186/velero-test-vm-dv-zxx8x/2025-09-01T08:00:06Z Snapshot Content Name: snapcontent-adb1a06c-db07-4a58-aac4-26ab73e999ff Storage Snapshot ID: 0001-0011-openshift-storage-0000000000000002-758a7563-0bd8-43f7-b83d-e26e9eb3064d Snapshot Size (bytes): 5368709120 CSI Driver: openshift-storage.rbd.csi.ceph.com Result: succeeded Pod Volume Backups: HooksAttempted: 0 HooksFailed: 0 STEP: Verify backup ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7 has completed successfully @ 09/01/25 08:00:20.777 2025/09/01 08:00:20 Backup for case ocp-kubevirt succeeded STEP: Delete the application resources ocp-kubevirt @ 09/01/25 08:00:20.838 STEP: Cleanup Application for case ocp-kubevirt @ 09/01/25 08:00:20.838 2025/09/01 08:00:20 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-186] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/09/01 08:00:45 2025-09-01 08:00:22,375 p=22321 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:00:22,375 p=22321 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:00:22,638 p=22321 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:00:22,639 p=22321 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:00:22,899 p=22321 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:00:22,899 p=22321 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:00:23,162 p=22321 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:00:23,162 p=22321 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:00:23,177 p=22321 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:00:23,177 p=22321 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:00:23,196 p=22321 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:00:23,196 p=22321 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:00:23,210 p=22321 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:00:23,210 p=22321 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:00:23,524 p=22321 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:00:23,524 p=22321 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:00:23,554 p=22321 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:00:23,554 p=22321 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:00:23,573 p=22321 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:00:23,573 p=22321 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:00:23,575 p=22321 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:00:24,134 p=22321 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:00:24,134 p=22321 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:00:44,965 p=22321 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-186] *** 2025-09-01 08:00:44,965 p=22321 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:00:44,965 p=22321 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:00:45,139 p=22321 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:00:45,140 p=22321 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 2025/09/01 08:00:45 Creating restore ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7 for case ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7 STEP: Create restore ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7 from backup ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7 @ 09/01/25 08:00:45.189 2025/09/01 08:00:45 Wait until restore ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7 is complete restore phase: Finalizing restore phase: Completed STEP: Verify restore ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7 has completed successfully @ 09/01/25 08:01:05.217 STEP: Verify Application restore @ 09/01/25 08:01:05.221 STEP: Verify Application deployment for case ocp-kubevirt @ 09/01/25 08:01:05.221 2025/09/01 08:01:05 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
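Aside: the restore above is driven through a Velero Restore CR; the suite creates it from the completed backup and re-reads status.phase until it moves past Finalizing. A rough command-line equivalent using this run's backup/restore name (the earlier CSIVolumeSnapshots check reads the same kind of status fields on the Backup CR):

    # CSI snapshot counters the test verifies on the completed Backup.
    oc get backup ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7 -n openshift-adp \
      -o jsonpath='{.status.csiVolumeSnapshotsAttempted} {.status.csiVolumeSnapshotsCompleted}'

    # Create a restore from the backup, then poll its phase.
    velero restore create ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7 \
      --from-backup ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7 -n openshift-adp
    oc get restore ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7 -n openshift-adp \
      -o jsonpath='{.status.phase}'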
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Verify VM is not in running state] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=4  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/09/01 08:01:09 2025-09-01 08:01:06,715 p=22536 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:01:06,715 p=22536 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:06,964 p=22536 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:01:06,964 p=22536 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:07,222 p=22536 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:01:07,222 p=22536 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:07,471 p=22536 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:01:07,471 p=22536 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:07,485 p=22536 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:01:07,485 p=22536 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:07,504 p=22536 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:01:07,504 p=22536 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:07,515 p=22536 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:01:07,516 p=22536 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:01:07,819 p=22536 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:01:07,819 p=22536 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:07,846 p=22536 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:01:07,846 p=22536 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:07,865 p=22536 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:01:07,865 p=22536 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:07,867 p=22536 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:01:08,426 p=22536 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:01:08,426 p=22536 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:09,334 p=22536 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Verify VM is not in running state] *** 2025-09-01 08:01:09,334 p=22536 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:01:09,334 p=22536 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:09,378 p=22536 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:01:09,378 p=22536 u=1002790000 n=ansible INFO| localhost : ok=16 changed=4 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 < Exit [It] [tc-id:OADP-186] [kubevirt] Stopped VM should be restored @ 09/01/25 08:01:09.422 (2m56.389s) > Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 08:01:09.422 2025/09/01 08:01:09 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 08:01:09.423 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:01:09.423 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:01:09.427 (5ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:01:09.427 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:01:09.427 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:01:09.427 2025/09/01 08:01:09 Cleaning app 2025/09/01 08:01:09 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
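Aside: the JustAfterEach hook above only records the must-gather image for this run; on a failure, diagnostics would be collected with it. The standard manual invocation for that image is:

    # Collect OADP diagnostics using the image logged above.
    oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0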
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-186] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/09/01 08:01:28 2025-09-01 08:01:10,906 p=22752 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:01:10,906 p=22752 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:11,154 p=22752 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:01:11,154 p=22752 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:11,403 p=22752 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:01:11,403 p=22752 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:11,653 p=22752 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:01:11,653 p=22752 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:11,667 p=22752 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:01:11,668 p=22752 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:11,686 p=22752 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:01:11,686 p=22752 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:11,698 p=22752 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:01:11,698 p=22752 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:01:12,003 p=22752 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:01:12,003 p=22752 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:12,033 p=22752 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:01:12,033 p=22752 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:12,050 p=22752 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:01:12,050 p=22752 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:12,052 p=22752 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:01:12,616 p=22752 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:01:12,616 p=22752 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:28,433 p=22752 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-186] *** 2025-09-01 08:01:28,433 p=22752 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:01:28,433 p=22752 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:28,602 p=22752 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:01:28,603 p=22752 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:01:28.647 (19.22s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:01:28.647 2025/09/01 08:01:28 Cleaning setup resources for the backup 2025/09/01 08:01:28 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/09/01 08:01:28 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/09/01 08:01:28 Deleting VolumeSnapshotClass 'example-snapclass' < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:01:28.666 (18ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:01:28.666 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:01:28.674 (9ms) • [195.669 seconds] ------------------------------ CSI: Backup/Restore Openshift Virtualization Workloads  [tc-id:OADP-187] [kubevirt] Backup-restore data volume /alabama/cspi/e2e/kubevirt-plugin/backup_restore_csi.go:69 > Enter [BeforeEach] CSI: Backup/Restore Openshift Virtualization Workloads @ 09/01/25 08:01:28.674 < Exit [BeforeEach] CSI: Backup/Restore Openshift Virtualization Workloads @ 09/01/25 08:01:28.682 (8ms) > Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:01:28.682 < Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:01:28.682 (0s) > Enter [It] [tc-id:OADP-187] [kubevirt] Backup-restore data volume @ 09/01/25 08:01:28.682 2025/09/01 08:01:28 Delete all downloadrequest ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7-0393c6b2-f20c-44c7-bad1-83c0122c1d73 ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7-0c6b09c8-b110-4e59-b8ba-2ca3bd755255 ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7-12bb1123-e98e-466a-971a-3dd6cc53e2ee STEP: Create DPA CR @ 09/01/25 08:01:28.778 2025/09/01 08:01:28 csi 2025/09/01 08:01:28 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "82d105f7-bf3c-47b6-9517-64474235ce7a", "resourceVersion": "77481", "generation": 1, "creationTimestamp": "2025-09-01T08:01:28Z", "managedFields": [ { "manager": "kubevirt-plugin.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:01:28Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "kubevirt" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA 
CR setup @ 09/01/25 08:01:28.808 2025/09/01 08:01:28 Waiting for velero pod to be running 2025/09/01 08:01:28 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' 2025/09/01 08:01:28 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "82d105f7-bf3c-47b6-9517-64474235ce7a", "resourceVersion": "77481", "generation": 1, "creationTimestamp": "2025-09-01T08:01:28Z", "managedFields": [ { "manager": "kubevirt-plugin.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:01:28Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "kubevirt" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false } }, "features": null, "logFormat": "text" }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 08:01:33.823 Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false 2025/09/01 08:01:33 The 'openshift-storage' namespace exists 2025/09/01 08:01:33 Checking default storage class count 2025/09/01 08:01:33 Using the CSI driver: openshift-storage.rbd.csi.ceph.com 2025/09/01 08:01:33 Snapclass 'example-snapclass' doesn't exist, creating 2025/09/01 08:01:34 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/09/01 08:01:34 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd STEP: Installing application for case ocp-datavolume @ 09/01/25 08:01:34.048 2025/09/01 08:01:34 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
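Aside: for reference, the DPA CR this test case created (dumped twice as JSON above) corresponds to the manifest below. This is a sketch reconstructed from that dump; the kind name is not printed in the log and is assumed to be OADP's DataProtectionApplication, while all spec values are taken verbatim from the dump:

    # Recreate the same DPA declaratively.
    oc apply -f - <<'EOF'
    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: ts-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
        - velero:
            provider: aws
            default: true
            config:
              region: us-east-1
            credential:
              name: cloud-credentials
              key: cloud
            objectStorage:
              bucket: ci-op-cl9vhfrj-interopoadp
              prefix: kubevirt
      configuration:
        velero:
          defaultPlugins:
            - openshift
            - aws
            - kubevirt
            - csi
          disableFsBackup: false
      logFormat: text
    EOF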
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Deploy DataVolume test-dv] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=6  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/09/01 08:01:38 2025-09-01 08:01:35,522 p=22989 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:01:35,522 p=22989 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:35,777 p=22989 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:01:35,777 p=22989 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:36,026 p=22989 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:01:36,027 p=22989 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:36,278 p=22989 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:01:36,278 p=22989 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:36,292 p=22989 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:01:36,292 p=22989 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:36,310 p=22989 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:01:36,310 p=22989 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:36,322 p=22989 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:01:36,322 p=22989 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:01:36,630 p=22989 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:01:36,630 p=22989 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:36,657 p=22989 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:01:36,657 p=22989 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:36,675 p=22989 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:01:36,676 p=22989 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:36,677 p=22989 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:01:37,234 p=22989 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:01:37,234 p=22989 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:38,046 p=22989 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Create namespace] *** 2025-09-01 08:01:38,047 p=22989 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
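Aside: the "Snapclass 'example-snapclass' doesn't exist, creating" step above provisions a VolumeSnapshotClass for the detected CSI driver. A sketch of an equivalent manifest; the driver name comes from the log, but the Velero selection label and the deletionPolicy are assumptions, since the log does not print them:

    # Create the snapshot class Velero's CSI plugin will pick up.
    oc apply -f - <<'EOF'
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: example-snapclass
      labels:
        velero.io/csi-volumesnapshot-class: "true"
    driver: openshift-storage.rbd.csi.ceph.com
    deletionPolicy: Retain
    EOF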
2025-09-01 08:01:38,047 p=22989 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:38,781 p=22989 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Deploy DataVolume test-dv] *** 2025-09-01 08:01:38,781 p=22989 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:38,819 p=22989 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:01:38,819 p=22989 u=1002790000 n=ansible INFO| localhost : ok=17 changed=6 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 STEP: Verify Application deployment @ 09/01/25 08:01:38.865 2025/09/01 08:01:38 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. FAILED - RETRYING: [localhost]: Wait for DataVolume to be in Succeeded phase (30 retries left). FAILED - RETRYING: [localhost]: Wait for DataVolume to be in Succeeded phase (29 retries left). 
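Aside: the two FAILED - RETRYING records above are the role's wait loop, not a test failure; the DataVolume import had simply not yet reached the Succeeded phase, and the ok result just below shows it converged on a later attempt. A bash sketch of the same wait, assuming this run's names (test-dv in test-oadp-187):

    # Poll the DataVolume until the CDI import completes (30 tries, 10s apart).
    for i in $(seq 1 30); do
      phase=$(oc get datavolume test-dv -n test-oadp-187 -o jsonpath='{.status.phase}')
      [ "$phase" = "Succeeded" ] && break
      sleep 10
    done
    echo "DataVolume phase: $phase"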
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait for DataVolume to be in Succeeded phase] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait until there is only one pvc] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/09/01 08:02:04 2025-09-01 08:01:40,350 p=23214 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:01:40,350 p=23214 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:40,607 p=23214 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:01:40,607 p=23214 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:40,855 p=23214 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:01:40,856 p=23214 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:41,106 p=23214 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:01:41,107 p=23214 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:01:41,120 p=23214 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:01:41,120 p=23214 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:41,138 p=23214 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:01:41,139 p=23214 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:41,150 p=23214 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:01:41,150 p=23214 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:01:41,454 p=23214 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:01:41,454 p=23214 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:41,482 p=23214 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:01:41,482 p=23214 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:41,499 p=23214 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:01:41,499 p=23214 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:41,500 p=23214 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:01:42,057 p=23214 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:01:42,057 p=23214 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:01:42,945 p=23214 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for DataVolume to be in Succeeded phase (30 retries left). 2025-09-01 08:01:53,574 p=23214 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for DataVolume to be in Succeeded phase (29 retries left). 
2025-09-01 08:02:04,241 p=23214 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait for DataVolume to be in Succeeded phase] *** 2025-09-01 08:02:04,242 p=23214 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-09-01 08:02:04,242 p=23214 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:02:04,888 p=23214 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait until there is only one pvc] *** 2025-09-01 08:02:04,888 p=23214 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:02:04,892 p=23214 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:02:04,892 p=23214 u=1002790000 n=ansible INFO| localhost : ok=17 changed=4 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2025/09/01 08:02:04 {{ } { } [{{ } {test-dv test-oadp-187 a3c79755-9d7c-4310-97ef-3f3e87503e57 78198 0 2025-09-01 08:01:38 +0000 UTC map[alerts.k8s.io/KubePersistentVolumeFillingUp:disabled app:containerized-data-importer app.kubernetes.io/component:storage app.kubernetes.io/managed-by:cdi-controller app.kubernetes.io/part-of:hyperconverged-cluster app.kubernetes.io/version:4.19.0] map[cdi.kubevirt.io/createdForDataVolume:b0f48d2c-9334-4ced-8eba-9683eab1fa26 cdi.kubevirt.io/storage.bind.immediate.requested:true cdi.kubevirt.io/storage.condition.running:false cdi.kubevirt.io/storage.condition.running.message:Import Complete cdi.kubevirt.io/storage.condition.running.reason:Completed cdi.kubevirt.io/storage.contentType:kubevirt cdi.kubevirt.io/storage.deleteAfterCompletion:false cdi.kubevirt.io/storage.pod.phase:Succeeded cdi.kubevirt.io/storage.pod.restarts:0 cdi.kubevirt.io/storage.populator.progress:100.0% cdi.kubevirt.io/storage.preallocation.requested:false cdi.kubevirt.io/storage.usePopulator:true pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [{cdi.kubevirt.io/v1beta1 DataVolume test-dv b0f48d2c-9334-4ced-8eba-9683eab1fa26 0xc000be14f7 0xc000be14f8}] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2025-09-01 08:02:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-09-01 08:02:01 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status} {virt-cdi-controller Update v1 2025-09-01 08:02:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:cdi.kubevirt.io/createdForDataVolume":{},"f:cdi.kubevirt.io/storage.bind.immediate.requested":{},"f:cdi.kubevirt.io/storage.condition.running":{},"f:cdi.kubevirt.io/storage.condition.running.message":{},"f:cdi.kubevirt.io/storage.condition.running.reason":{},"f:cdi.kubevirt.io/storage.contentType":{},"f:cdi.kubevirt.io/storage.deleteAfterCompletion":{},"f:cdi.kubevirt.io/storage.pod.phase":{},"f:cdi.kubevirt.io/storage.pod.restarts":{},"f:cdi.kubevirt.io/storage.populator.progress":{},"f:cdi.kubevirt.io/storage.preallocation.requested":{},"f:cdi.kubevirt.io/storage.usePopulator":{}},"f:labels":{".":{},"f:alerts.k8s.io/KubePersistentVolumeFillingUp":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b0f48d2c-9334-4ced-8eba-9683eab1fa26\"}":{}}},"f:spec":{"f:accessModes":{},"f:dataSourceRef":{".":{},"f:apiGroup":{},"f:kind":{},"f:name":{}},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} }]} {[ReadWriteOnce] nil {map[] map[storage:{{104857600 0} {} 100Mi BinarySI}]} pvc-1b0ef4ab-ec10-48d0-beb9-286e993bd0ec 0xc00040b910 0xc00040b920 &TypedLocalObjectReference{APIGroup:*cdi.kubevirt.io,Kind:VolumeImportSource,Name:volume-import-source-b0f48d2c-9334-4ced-8eba-9683eab1fa26,} &TypedObjectReference{APIGroup:*cdi.kubevirt.io,Kind:VolumeImportSource,Name:volume-import-source-b0f48d2c-9334-4ced-8eba-9683eab1fa26,Namespace:nil,} } {Bound [ReadWriteOnce] map[storage:{{104857600 0} {} 100Mi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7 @ 09/01/25 08:02:04.975 2025/09/01 08:02:04 Wait until backup ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7 is completed backup phase: Completed 2025/09/01 08:02:24 Verify the Backup has CSIVolumeSnapshotsAttempted and CSIVolumeSnapshotsCompleted field on status 2025/09/01 08:02:25 Run velero describe on the backup 2025/09/01 08:02:25 [./velero describe backup ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7 -n openshift-adp --details --insecure-skip-tls-verify] 2025/09/01 08:02:25 Exec stderr: "" 2025/09/01 08:02:25 Name: ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7 Namespace: openshift-adp Labels: velero.io/storage-location=ts-dpa-1 Annotations: velero.io/resource-timeout=10m0s velero.io/source-cluster-k8s-gitversion=v1.33.3 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=33 Phase: Completed Namespaces: Included: test-oadp-187 Excluded: Resources: Included: * Excluded: Cluster-scoped: auto Label selector: Or label selector: Storage Location: ts-dpa-1 Velero-Native Snapshot PVs: auto Snapshot Move Data: false Data Mover: velero TTL: 720h0m0s CSISnapshotTimeout: 10m0s ItemOperationTimeout: 4h0m0s Hooks: Backup Format Version: 1.1.0 Started: 2025-09-01 08:02:05 +0000 UTC Completed: 2025-09-01 08:02:13 +0000 UTC Expiration: 2025-10-01 08:02:04 +0000 UTC Total items to be backed up: 49 Items backed up: 49 Backup Item Operations: Operation for volumesnapshots.snapshot.storage.k8s.io test-oadp-187/velero-test-dv-mzkz8: Backup Item Action Plugin: velero.io/csi-volumesnapshot-backupper Operation ID: test-oadp-187/velero-test-dv-mzkz8/2025-09-01T08:02:11Z Items to Update: volumesnapshots.snapshot.storage.k8s.io test-oadp-187/velero-test-dv-mzkz8 volumesnapshotcontents.snapshot.storage.k8s.io /snapcontent-4b6cc5ca-90c5-40ad-bffb-21a9d471a6ec Phase: Completed 
Created: 2025-09-01 08:02:11 +0000 UTC Started: 2025-09-01 08:02:11 +0000 UTC Updated: 2025-09-01 08:02:12 +0000 UTC Resource List: apiextensions.k8s.io/v1/CustomResourceDefinition: - datavolumes.cdi.kubevirt.io authorization.openshift.io/v1/RoleBinding: - test-oadp-187/system:deployers - test-oadp-187/system:image-builders - test-oadp-187/system:image-pullers cdi.kubevirt.io/v1beta1/DataVolume: - test-oadp-187/test-dv rbac.authorization.k8s.io/v1/RoleBinding: - test-oadp-187/system:deployers - test-oadp-187/system:image-builders - test-oadp-187/system:image-pullers snapshot.storage.k8s.io/v1/VolumeSnapshot: - test-oadp-187/velero-test-dv-mzkz8 snapshot.storage.k8s.io/v1/VolumeSnapshotClass: - example-snapclass snapshot.storage.k8s.io/v1/VolumeSnapshotContent: - snapcontent-4b6cc5ca-90c5-40ad-bffb-21a9d471a6ec v1/ConfigMap: - test-oadp-187/kube-root-ca.crt - test-oadp-187/openshift-service-ca.crt v1/Event: - test-oadp-187/importer-prime-a3c79755-9d7c-4310-97ef-3f3e87503e57.186119cd341c085e - test-oadp-187/importer-prime-a3c79755-9d7c-4310-97ef-3f3e87503e57.186119cfa7d0dff9 - test-oadp-187/importer-prime-a3c79755-9d7c-4310-97ef-3f3e87503e57.186119cfbf6fe54b - test-oadp-187/importer-prime-a3c79755-9d7c-4310-97ef-3f3e87503e57.186119cfe3b1eb19 - test-oadp-187/importer-prime-a3c79755-9d7c-4310-97ef-3f3e87503e57.186119d1cd7916d0 - test-oadp-187/importer-prime-a3c79755-9d7c-4310-97ef-3f3e87503e57.186119d1cee0594c - test-oadp-187/importer-prime-a3c79755-9d7c-4310-97ef-3f3e87503e57.186119d1da9f98a3 - test-oadp-187/importer-prime-a3c79755-9d7c-4310-97ef-3f3e87503e57.186119d1df9bc27f - test-oadp-187/prime-a3c79755-9d7c-4310-97ef-3f3e87503e57.186119cd31f51398 - test-oadp-187/prime-a3c79755-9d7c-4310-97ef-3f3e87503e57.186119cfa7c44867 - test-oadp-187/prime-a3c79755-9d7c-4310-97ef-3f3e87503e57.186119cfa7cfc7f9 - test-oadp-187/prime-a3c79755-9d7c-4310-97ef-3f3e87503e57.186119cfbb56cb2c - test-oadp-187/prime-a3c79755-9d7c-4310-97ef-3f3e87503e57.186119d27b105ec0 - test-oadp-187/prime-a3c79755-9d7c-4310-97ef-3f3e87503e57.186119d33868ea20 - test-oadp-187/test-dv.186119cd30c59642 - test-oadp-187/test-dv.186119cd3121b4fa - test-oadp-187/test-dv.186119cd31552921 - test-oadp-187/test-dv.186119cd3188a194 - test-oadp-187/test-dv.186119cd318b0105 - test-oadp-187/test-dv.186119cd318b1cff - test-oadp-187/test-dv.186119cd31b4e5a4 - test-oadp-187/test-dv.186119cfc2e413e6 - test-oadp-187/test-dv.186119d1f5b2e516 - test-oadp-187/test-dv.186119d231bd501f - test-oadp-187/test-dv.186119d27c858eb1 - test-oadp-187/test-dv.186119d27c873d49 - test-oadp-187/test-dv.186119d27e86ed74 v1/Namespace: - test-oadp-187 v1/PersistentVolume: - pvc-1b0ef4ab-ec10-48d0-beb9-286e993bd0ec v1/PersistentVolumeClaim: - test-oadp-187/test-dv v1/Secret: - test-oadp-187/builder-dockercfg-8dhv4 - test-oadp-187/default-dockercfg-zfb52 - test-oadp-187/deployer-dockercfg-g47zl v1/ServiceAccount: - test-oadp-187/builder - test-oadp-187/default - test-oadp-187/deployer Backup Volumes: Velero-Native Snapshots: CSI Snapshots: test-oadp-187/test-dv: Snapshot: Operation ID: test-oadp-187/velero-test-dv-mzkz8/2025-09-01T08:02:11Z Snapshot Content Name: snapcontent-4b6cc5ca-90c5-40ad-bffb-21a9d471a6ec Storage Snapshot ID: 0001-0011-openshift-storage-0000000000000002-c7862a09-ece7-4e5b-bbcf-e790a79d9d11 Snapshot Size (bytes): 104857600 CSI Driver: openshift-storage.rbd.csi.ceph.com Result: succeeded Pod Volume Backups: HooksAttempted: 0 HooksFailed: 0 STEP: Verify backup ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7 has completed successfully @ 09/01/25 
08:02:25.719 2025/09/01 08:02:25 Backup for case ocp-datavolume succeeded STEP: Delete the application resources ocp-datavolume @ 09/01/25 08:02:25.773 STEP: Cleanup Application for case ocp-datavolume @ 09/01/25 08:02:25.773 2025/09/01 08:02:25 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
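Aside: the cleanup play below removes the whole application namespace; the roughly 20-second gap between the Gathering Facts and Remove namespace timestamps in the replay that follows is the namespace's resources and finalizers draining. The plain oc equivalent:

    # Delete the test namespace and block until it is fully removed.
    oc delete namespace test-oadp-187 --wait=true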
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Remove namespace test-oadp-187] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025/09/01 08:02:49 2025-09-01 08:02:27,256 p=23468 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:02:27,256 p=23468 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:02:27,504 p=23468 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:02:27,504 p=23468 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:02:27,755 p=23468 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:02:27,755 p=23468 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:02:28,010 p=23468 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:02:28,010 p=23468 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:02:28,026 p=23468 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:02:28,026 p=23468 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:02:28,043 p=23468 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:02:28,044 p=23468 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:02:28,055 p=23468 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:02:28,055 p=23468 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:02:28,363 p=23468 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:02:28,364 p=23468 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:02:28,391 p=23468 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:02:28,391 p=23468 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:02:28,409 p=23468 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:02:28,409 p=23468 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:02:28,410 p=23468 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:02:28,966 p=23468 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:02:28,966 p=23468 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:02:49,795 p=23468 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Remove namespace test-oadp-187] *** 2025-09-01 08:02:49,795 p=23468 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
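NOTE: the only 'changed' role task in the cleanup play above is the namespace removal; a rough manual equivalent (a sketch, not the role's actual implementation; the timeout is illustrative):
  # Delete the test namespace and block until it is fully removed
  oc delete namespace test-oadp-187 --wait=true --timeout=5m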
2025-09-01 08:02:49,795 p=23468 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:02:49,883 p=23468 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:02:49,883 p=23468 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025/09/01 08:02:49 Creating restore ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7 for case ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7 STEP: Create restore ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7 from backup ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7 @ 09/01/25 08:02:49.93 2025/09/01 08:02:49 Wait until restore ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7 is complete restore phase: Finalizing restore phase: Finalizing restore phase: Completed STEP: Verify restore ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7 has completed successfully @ 09/01/25 08:03:19.96 STEP: Verify Application restore @ 09/01/25 08:03:19.964 STEP: Verify Application deployment for case ocp-datavolume @ 09/01/25 08:03:19.964 2025/09/01 08:03:19 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
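NOTE: the restore wait loop above polls the Restore CR until it leaves the Finalizing phase; a minimal manual sketch of the same check:
  # Expect the phase to move Finalizing -> Completed
  oc -n openshift-adp get restore ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7 -o jsonpath='{.status.phase}'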
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait for DataVolume to be in Succeeded phase] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait until there is only one pvc] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/09/01 08:03:24 2025-09-01 08:03:21,453 p=23677 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:03:21,453 p=23677 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:03:21,701 p=23677 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:03:21,701 p=23677 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:03:21,958 p=23677 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:03:21,958 p=23677 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:03:22,208 p=23677 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:03:22,208 p=23677 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:03:22,223 p=23677 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:03:22,223 p=23677 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:22,243 p=23677 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:03:22,244 p=23677 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:22,257 p=23677 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:03:22,257 p=23677 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:03:22,569 p=23677 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:03:22,569 p=23677 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:22,596 p=23677 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:03:22,597 p=23677 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:22,615 p=23677 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:03:22,615 p=23677 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:22,617 p=23677 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:03:23,176 p=23677 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:03:23,176 p=23677 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:24,049 p=23677 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait for DataVolume to be in Succeeded phase] *** 2025-09-01 08:03:24,050 p=23677 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
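NOTE: the two validation tasks above can be reproduced by hand; a sketch using the test-dv DataVolume named in the backup's resource list:
  # The restored DataVolume should report the Succeeded phase
  oc -n test-oadp-187 get datavolume test-dv -o jsonpath='{.status.phase}'
  # Exactly one PVC should remain once any importer/prime PVCs are gone
  oc -n test-oadp-187 get pvc --no-headers | wc -l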
2025-09-01 08:03:24,050 p=23677 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:24,702 p=23677 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Wait until there is only one pvc] *** 2025-09-01 08:03:24,702 p=23677 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:24,706 p=23677 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:03:24,706 p=23677 u=1002790000 n=ansible INFO| localhost : ok=17 changed=4 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 < Exit [It] [tc-id:OADP-187] [kubevirt] Backup-restore data volume @ 09/01/25 08:03:24.753 (1m56.07s) > Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 08:03:24.753 2025/09/01 08:03:24 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 08:03:24.753 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:03:24.753 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:03:24.756 (3ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:03:24.756 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:03:24.757 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:03:24.757 2025/09/01 08:03:24 Cleaning app 2025/09/01 08:03:24 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. 
Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Remove namespace test-oadp-187] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025/09/01 08:03:43 2025-09-01 08:03:26,227 p=23903 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:03:26,227 p=23903 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:03:26,483 p=23903 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:03:26,483 p=23903 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:03:26,732 p=23903 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:03:26,732 p=23903 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:03:26,982 p=23903 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:03:26,982 p=23903 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:03:26,997 p=23903 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:03:26,998 p=23903 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:27,014 p=23903 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:03:27,014 p=23903 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:27,027 p=23903 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:03:27,027 p=23903 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:03:27,330 p=23903 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:03:27,330 p=23903 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:27,359 p=23903 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:03:27,359 p=23903 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:27,376 p=23903 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:03:27,376 p=23903 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:27,378 p=23903 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:03:27,936 p=23903 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:03:27,936 p=23903 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:43,781 p=23903 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-datavolume : Remove namespace test-oadp-187] *** 2025-09-01 08:03:43,781 p=23903 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:03:43,782 p=23903 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:03:43,871 p=23903 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:03:43,871 p=23903 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:03:43.92 (19.164s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:03:43.92 2025/09/01 08:03:43 Cleaning setup resources for the backup 2025/09/01 08:03:43 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/09/01 08:03:43 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/09/01 08:03:43 Deleting VolumeSnapshotClass 'example-snapclass' < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:03:43.94 (20ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:03:43.94 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:03:43.948 (7ms) • [135.273 seconds] ------------------------------ S ------------------------------ Native CSI Data Mover: Backup/Restore Openshift Virtualization Workloads  [tc-id:OADP-401] [kubevirt] Started VM should work over ceph filesystem mode /alabama/cspi/e2e/kubevirt-plugin/backup_restore_datamover.go:129 > Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:03:43.948 < Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:03:43.948 (0s) > Enter [It] [tc-id:OADP-401] [kubevirt] Started VM should work over ceph filesystem mode @ 09/01/25 08:03:43.948 2025/09/01 08:03:43 Delete all downloadrequest ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-72e83b12-ae1c-4166-9610-fde233e507c6 ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-762c86bb-fc14-4248-bb97-d67554e5cf83 ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-aa7dea75-c003-41a9-8b28-fb2079453656 STEP: Create DPA CR @ 09/01/25 08:03:44.052 2025/09/01 08:03:44 native-datamover 2025/09/01 08:03:44 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "fbdffb7e-ded2-4d6b-a889-20c0b787a762", "resourceVersion": "79898", "generation": 1, "creationTimestamp": "2025-09-01T08:03:44Z", "managedFields": [ { "manager": "kubevirt-plugin.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:03:44Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "kubevirt" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that
remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 09/01/25 08:03:44.1 2025/09/01 08:03:44 Waiting for velero pod to be running 2025/09/01 08:03:44 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' 2025/09/01 08:03:44 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "fbdffb7e-ded2-4d6b-a889-20c0b787a762", "resourceVersion": "79898", "generation": 1, "creationTimestamp": "2025-09-01T08:03:44Z", "managedFields": [ { "manager": "kubevirt-plugin.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:03:44Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "kubevirt" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 08:03:49.122 Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false 2025/09/01 08:03:49 The 'openshift-storage' namespace exists 2025/09/01 08:03:49 Checking default storage class count 2025/09/01 08:03:49 Using the CSI driver: openshift-storage.rbd.csi.ceph.com 2025/09/01 08:03:49 Snapclass 'example-snapclass' doesn't exist, creating 2025/09/01 08:03:49 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/09/01 08:03:49 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/09/01 08:03:49 Checking for correct number of running NodeAgent pods... STEP: Installing application for case ocp-kubevirt @ 09/01/25 08:03:49.351 2025/09/01 08:03:49 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Deploy vm test-vm] *** changed: [localhost] FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (59 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (58 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (57 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (56 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (55 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (54 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (53 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (52 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (51 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (50 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (49 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (48 retries left). FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (47 retries left). 
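NOTE: the retry loop above waits on the VirtualMachine's Ready condition; a manual equivalent of the same wait (a sketch; the role itself retries up to 60 times):
  # Block until the KubeVirt VM test-vm reports Ready
  oc -n test-oadp-401 wait vm/test-vm --for=condition=Ready --timeout=10m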
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=18  changed=6  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025/09/01 08:05:14 2025-09-01 08:03:50,872 p=24136 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:03:50,872 p=24136 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:03:51,158 p=24136 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:03:51,158 p=24136 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:03:51,419 p=24136 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:03:51,420 p=24136 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:03:51,678 p=24136 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:03:51,678 p=24136 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:03:51,693 p=24136 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:03:51,693 p=24136 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:51,711 p=24136 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:03:51,711 p=24136 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:51,724 p=24136 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:03:51,724 p=24136 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:03:52,044 p=24136 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:03:52,044 p=24136 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:52,074 p=24136 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:03:52,075 p=24136 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:52,092 p=24136 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:03:52,093 p=24136 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:52,094 p=24136 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:03:52,672 p=24136 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:03:52,672 p=24136 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:03:53,529 p=24136 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Create namespace] *** 2025-09-01 08:03:53,529 p=24136 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:03:53,529 p=24136 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:03:54,277 p=24136 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Deploy vm test-vm] *** 2025-09-01 08:03:54,277 p=24136 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:03:55,098 p=24136 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left). 2025-09-01 08:04:00,743 p=24136 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (59 retries left). 2025-09-01 08:04:06,428 p=24136 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (58 retries left). 2025-09-01 08:04:12,092 p=24136 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (57 retries left). 2025-09-01 08:04:17,739 p=24136 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (56 retries left). 2025-09-01 08:04:23,367 p=24136 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (55 retries left). 2025-09-01 08:04:28,997 p=24136 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (54 retries left). 2025-09-01 08:04:34,615 p=24136 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (53 retries left). 2025-09-01 08:04:40,267 p=24136 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (52 retries left). 2025-09-01 08:04:45,905 p=24136 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (51 retries left). 2025-09-01 08:04:51,565 p=24136 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (50 retries left). 2025-09-01 08:04:57,184 p=24136 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (49 retries left). 2025-09-01 08:05:02,824 p=24136 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (48 retries left). 2025-09-01 08:05:08,445 p=24136 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (47 retries left). 2025-09-01 08:05:14,083 p=24136 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** 2025-09-01 08:05:14,084 p=24136 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:05:14,182 p=24136 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:05:14,182 p=24136 u=1002790000 n=ansible INFO| localhost : ok=18 changed=6 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0 STEP: Verify Application deployment @ 09/01/25 08:05:14.229 2025/09/01 08:05:14 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** ok: [localhost] FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left). FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left). FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (58 retries left). 
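NOTE: the AgentConnected wait above confirms the qemu guest agent inside the VM is up; a manual sketch of the same condition check:
  oc -n test-oadp-401 wait vm/test-vm --for=condition=AgentConnected --timeout=10m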
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=4  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025/09/01 08:05:36 2025-09-01 08:05:15,759 p=24557 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:05:15,759 p=24557 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:05:16,014 p=24557 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:05:16,014 p=24557 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:05:16,270 p=24557 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:05:16,270 p=24557 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:05:16,531 p=24557 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:05:16,531 p=24557 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:05:16,546 p=24557 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:05:16,546 p=24557 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:05:16,563 p=24557 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:05:16,563 p=24557 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:05:16,575 p=24557 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:05:16,576 p=24557 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:05:16,897 p=24557 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:05:16,898 p=24557 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:05:16,925 p=24557 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:05:16,926 p=24557 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:05:16,947 p=24557 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:05:16,948 p=24557 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:05:16,949 p=24557 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:05:17,511 p=24557 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:05:17,511 p=24557 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:05:18,449 p=24557 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** 2025-09-01 08:05:18,449 p=24557 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:05:18,450 p=24557 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:05:19,113 p=24557 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left). 2025-09-01 08:05:24,742 p=24557 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left). 2025-09-01 08:05:30,357 p=24557 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (58 retries left). 2025-09-01 08:05:36,000 p=24557 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] *** 2025-09-01 08:05:36,001 p=24557 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:05:36,007 p=24557 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:05:36,007 p=24557 u=1002790000 n=ansible INFO| localhost : ok=17 changed=4 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 STEP: Creating backup ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7 @ 09/01/25 08:05:36.074 2025/09/01 08:05:36 Wait until backup ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7 is completed backup phase: WaitingForPluginOperations DataUpload ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76 phase: Accepted DataUpload Name: ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76 and status: Accepted 2025/09/01 08:05:56 { "kind": "DataUpload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76", "generateName": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-", "namespace": "openshift-adp", "uid": "7c838643-209c-49ce-8e54-76b9730a45c0", "resourceVersion": "82125", "generation": 2, "creationTimestamp": "2025-09-01T08:05:43Z", "labels": { "velero.io/async-operation-id": "du-88525e13-71b6-4776-9bc2-40d32a0a9575.b0a915f3-f806-48fcdc955", "velero.io/backup-name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "velero.io/backup-uid": "88525e13-71b6-4776-9bc2-40d32a0a9575", "velero.io/pvc-uid": "b0a915f3-f806-48f7-b0b5-602cc097803c" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Backup", "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "uid": "88525e13-71b6-4776-9bc2-40d32a0a9575", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:05:43Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:phase": {} } } }, { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:05:43Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/backup-name": {}, "f:velero.io/backup-uid": {}, "f:velero.io/pvc-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"88525e13-71b6-4776-9bc2-40d32a0a9575\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:csiSnapshot": { ".": {}, 
"f:snapshotClass": {}, "f:storageClass": {}, "f:volumeSnapshot": {} }, "f:operationTimeout": {}, "f:snapshotType": {}, "f:sourceNamespace": {}, "f:sourcePVC": {} }, "f:status": { ".": {}, "f:progress": {} } } } ] }, "spec": { "snapshotType": "CSI", "csiSnapshot": { "volumeSnapshot": "velero-test-vm-dv-tpmfw", "storageClass": "odf-operator-cephfs", "snapshotClass": "odf-operator-cephfsplugin-snapclass" }, "sourcePVC": "test-vm-dv", "backupStorageLocation": "ts-dpa-1", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s" }, "status": { "phase": "Accepted", "progress": {}, "acceptedByNode": "ip-10-0-93-94.ec2.internal", "acceptedTimestamp": "2025-09-01T08:05:43Z" } } backup phase: WaitingForPluginOperations DataUpload ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76 phase: Accepted DataUpload Name: ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76 and status: Accepted 2025/09/01 08:06:16 { "kind": "DataUpload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76", "generateName": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-", "namespace": "openshift-adp", "uid": "7c838643-209c-49ce-8e54-76b9730a45c0", "resourceVersion": "82125", "generation": 2, "creationTimestamp": "2025-09-01T08:05:43Z", "labels": { "velero.io/async-operation-id": "du-88525e13-71b6-4776-9bc2-40d32a0a9575.b0a915f3-f806-48fcdc955", "velero.io/backup-name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "velero.io/backup-uid": "88525e13-71b6-4776-9bc2-40d32a0a9575", "velero.io/pvc-uid": "b0a915f3-f806-48f7-b0b5-602cc097803c" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Backup", "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "uid": "88525e13-71b6-4776-9bc2-40d32a0a9575", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:05:43Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:phase": {} } } }, { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:05:43Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/backup-name": {}, "f:velero.io/backup-uid": {}, "f:velero.io/pvc-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"88525e13-71b6-4776-9bc2-40d32a0a9575\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:csiSnapshot": { ".": {}, "f:snapshotClass": {}, "f:storageClass": {}, "f:volumeSnapshot": {} }, "f:operationTimeout": {}, "f:snapshotType": {}, "f:sourceNamespace": {}, "f:sourcePVC": {} }, "f:status": { ".": {}, "f:progress": {} } } } ] }, "spec": { "snapshotType": "CSI", "csiSnapshot": { "volumeSnapshot": "velero-test-vm-dv-tpmfw", "storageClass": "odf-operator-cephfs", "snapshotClass": "odf-operator-cephfsplugin-snapclass" }, "sourcePVC": "test-vm-dv", "backupStorageLocation": "ts-dpa-1", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s" }, "status": { "phase": "Accepted", "progress": {}, "acceptedByNode": "ip-10-0-93-94.ec2.internal", "acceptedTimestamp": "2025-09-01T08:05:43Z" } } backup phase: WaitingForPluginOperations DataUpload 
ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76 phase: Accepted DataUpload Name: ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76 and status: Accepted 2025/09/01 08:06:36 { "kind": "DataUpload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76", "generateName": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-", "namespace": "openshift-adp", "uid": "7c838643-209c-49ce-8e54-76b9730a45c0", "resourceVersion": "82125", "generation": 2, "creationTimestamp": "2025-09-01T08:05:43Z", "labels": { "velero.io/async-operation-id": "du-88525e13-71b6-4776-9bc2-40d32a0a9575.b0a915f3-f806-48fcdc955", "velero.io/backup-name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "velero.io/backup-uid": "88525e13-71b6-4776-9bc2-40d32a0a9575", "velero.io/pvc-uid": "b0a915f3-f806-48f7-b0b5-602cc097803c" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Backup", "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "uid": "88525e13-71b6-4776-9bc2-40d32a0a9575", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:05:43Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:phase": {} } } }, { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:05:43Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/backup-name": {}, "f:velero.io/backup-uid": {}, "f:velero.io/pvc-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"88525e13-71b6-4776-9bc2-40d32a0a9575\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:csiSnapshot": { ".": {}, "f:snapshotClass": {}, "f:storageClass": {}, "f:volumeSnapshot": {} }, "f:operationTimeout": {}, "f:snapshotType": {}, "f:sourceNamespace": {}, "f:sourcePVC": {} }, "f:status": { ".": {}, "f:progress": {} } } } ] }, "spec": { "snapshotType": "CSI", "csiSnapshot": { "volumeSnapshot": "velero-test-vm-dv-tpmfw", "storageClass": "odf-operator-cephfs", "snapshotClass": "odf-operator-cephfsplugin-snapclass" }, "sourcePVC": "test-vm-dv", "backupStorageLocation": "ts-dpa-1", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s" }, "status": { "phase": "Accepted", "progress": {}, "acceptedByNode": "ip-10-0-93-94.ec2.internal", "acceptedTimestamp": "2025-09-01T08:05:43Z" } } backup phase: WaitingForPluginOperations DataUpload ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76 phase: InProgress DataUpload Name: ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76 and status: InProgress 2025/09/01 08:06:56 { "kind": "DataUpload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76", "generateName": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-", "namespace": "openshift-adp", "uid": "7c838643-209c-49ce-8e54-76b9730a45c0", "resourceVersion": "83163", "generation": 6, "creationTimestamp": "2025-09-01T08:05:43Z", "labels": { "velero.io/async-operation-id": "du-88525e13-71b6-4776-9bc2-40d32a0a9575.b0a915f3-f806-48fcdc955", "velero.io/backup-name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "velero.io/backup-uid": 
"88525e13-71b6-4776-9bc2-40d32a0a9575", "velero.io/pvc-uid": "b0a915f3-f806-48f7-b0b5-602cc097803c" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Backup", "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "uid": "88525e13-71b6-4776-9bc2-40d32a0a9575", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:05:43Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/backup-name": {}, "f:velero.io/backup-uid": {}, "f:velero.io/pvc-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"88525e13-71b6-4776-9bc2-40d32a0a9575\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:csiSnapshot": { ".": {}, "f:snapshotClass": {}, "f:storageClass": {}, "f:volumeSnapshot": {} }, "f:operationTimeout": {}, "f:snapshotType": {}, "f:sourceNamespace": {}, "f:sourcePVC": {} }, "f:status": { ".": {}, "f:progress": {} } } }, { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:06:49Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:node": {}, "f:nodeOS": {}, "f:phase": {}, "f:progress": { "f:bytesDone": {}, "f:totalBytes": {} }, "f:startTimestamp": {} } } } ] }, "spec": { "snapshotType": "CSI", "csiSnapshot": { "volumeSnapshot": "velero-test-vm-dv-tpmfw", "storageClass": "odf-operator-cephfs", "snapshotClass": "odf-operator-cephfsplugin-snapclass" }, "sourcePVC": "test-vm-dv", "backupStorageLocation": "ts-dpa-1", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s" }, "status": { "phase": "InProgress", "startTimestamp": "2025-09-01T08:06:36Z", "progress": { "totalBytes": 5073010688, "bytesDone": 1717436416 }, "node": "ip-10-0-99-76.ec2.internal", "nodeOS": "linux", "acceptedByNode": "ip-10-0-93-94.ec2.internal", "acceptedTimestamp": "2025-09-01T08:05:43Z" } } backup phase: WaitingForPluginOperations DataUpload ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76 phase: InProgress DataUpload Name: ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76 and status: InProgress 2025/09/01 08:07:16 { "kind": "DataUpload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76", "generateName": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-", "namespace": "openshift-adp", "uid": "7c838643-209c-49ce-8e54-76b9730a45c0", "resourceVersion": "83557", "generation": 9, "creationTimestamp": "2025-09-01T08:05:43Z", "labels": { "velero.io/async-operation-id": "du-88525e13-71b6-4776-9bc2-40d32a0a9575.b0a915f3-f806-48fcdc955", "velero.io/backup-name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "velero.io/backup-uid": "88525e13-71b6-4776-9bc2-40d32a0a9575", "velero.io/pvc-uid": "b0a915f3-f806-48f7-b0b5-602cc097803c" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Backup", "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "uid": "88525e13-71b6-4776-9bc2-40d32a0a9575", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:05:43Z", "fieldsType": 
"FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/backup-name": {}, "f:velero.io/backup-uid": {}, "f:velero.io/pvc-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"88525e13-71b6-4776-9bc2-40d32a0a9575\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:csiSnapshot": { ".": {}, "f:snapshotClass": {}, "f:storageClass": {}, "f:volumeSnapshot": {} }, "f:operationTimeout": {}, "f:snapshotType": {}, "f:sourceNamespace": {}, "f:sourcePVC": {} }, "f:status": { ".": {}, "f:progress": {} } } }, { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:07:14Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:node": {}, "f:nodeOS": {}, "f:phase": {}, "f:progress": { "f:bytesDone": {}, "f:totalBytes": {} }, "f:startTimestamp": {} } } } ] }, "spec": { "snapshotType": "CSI", "csiSnapshot": { "volumeSnapshot": "velero-test-vm-dv-tpmfw", "storageClass": "odf-operator-cephfs", "snapshotClass": "odf-operator-cephfsplugin-snapclass" }, "sourcePVC": "test-vm-dv", "backupStorageLocation": "ts-dpa-1", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s" }, "status": { "phase": "InProgress", "startTimestamp": "2025-09-01T08:06:36Z", "progress": { "totalBytes": 5073010688, "bytesDone": 5073010688 }, "node": "ip-10-0-99-76.ec2.internal", "nodeOS": "linux", "acceptedByNode": "ip-10-0-93-94.ec2.internal", "acceptedTimestamp": "2025-09-01T08:05:43Z" } } backup phase: Completed STEP: Verify backup ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7 has completed successfully @ 09/01/25 08:07:36.241 2025/09/01 08:07:36 Backup for case ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7 succeeded STEP: Delete the appplication resources ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7 @ 09/01/25 08:07:36.246 STEP: Cleanup Application for case ocp-kubevirt @ 09/01/25 08:07:36.246 2025/09/01 08:07:36 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
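NOTE: the DataUpload CRs dumped above, and the DataDownload CRs created for the restore later in this run, can be watched directly while the data mover works; a sketch using the labels shown in the CR metadata:
  # DataUploads carry the velero.io/backup-name label
  oc -n openshift-adp get datauploads -l velero.io/backup-name=ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7
  # DataDownloads carry the velero.io/restore-name label
  oc -n openshift-adp get datadownloads -l velero.io/restore-name=ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7
  # Per-object progress (bytesDone/totalBytes) is under .status.progress
  oc -n openshift-adp get dataupload ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-5sk76 -o jsonpath='{.status.progress}'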
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-401] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/09/01 08:08:05 2025-09-01 08:07:37,740 p=24824 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:07:37,740 p=24824 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:07:37,989 p=24824 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:07:37,990 p=24824 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:07:38,243 p=24824 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:07:38,244 p=24824 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:07:38,494 p=24824 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:07:38,494 p=24824 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:07:38,508 p=24824 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:07:38,508 p=24824 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:07:38,526 p=24824 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:07:38,526 p=24824 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:07:38,538 p=24824 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:07:38,539 p=24824 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:07:38,849 p=24824 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:07:38,849 p=24824 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:07:38,876 p=24824 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:07:38,877 p=24824 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:07:38,895 p=24824 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:07:38,895 p=24824 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:07:38,897 p=24824 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:07:39,457 p=24824 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:07:39,457 p=24824 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:08:05,301 p=24824 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-401] *** 2025-09-01 08:08:05,301 p=24824 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
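Note: the restore created next is driven by the test harness, but the Velero CRs it polls can also be inspected by hand. A sketch with plain oc against the objects logged below (<restore-name> is a placeholder):

# Phase of the Restore CR (velero.io/v1)
oc -n openshift-adp get restore <restore-name> -o jsonpath='{.status.phase}'
# DataDownload CRs (velero.io/v2alpha1) created for the restore, with byte-level progress
oc -n openshift-adp get datadownloads -l velero.io/restore-name=<restore-name> \
  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,DONE:.status.progress.bytesDone,TOTAL:.status.progress.totalBytes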
2025-09-01 08:08:05,301 p=24824 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:08:05,470 p=24824 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:08:05,470 p=24824 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 STEP: Create restore ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7 from backup ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7 @ 09/01/25 08:08:05.521 2025/09/01 08:08:05 Wait until restore ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7 completes restore phase: WaitingForPluginOperations DataDownload ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-wcf7b phase: InProgress DataDownload Name: ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-wcf7b and status: InProgress 2025/09/01 08:08:25 { "kind": "DataDownload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-wcf7b", "generateName": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-", "namespace": "openshift-adp", "uid": "ac8b2c38-1cce-4d85-a18a-ede837451fb3", "resourceVersion": "85137", "generation": 4, "creationTimestamp": "2025-09-01T08:08:08Z", "labels": { "velero.io/async-operation-id": "dd-16d88664-2c52-49bc-aa6d-c530a9ffde02.b0a915f3-f806-48f016cd5", "velero.io/restore-name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "velero.io/restore-uid": "16d88664-2c52-49bc-aa6d-c530a9ffde02" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Restore", "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "uid": "16d88664-2c52-49bc-aa6d-c530a9ffde02", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:08:08Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/restore-name": {}, "f:velero.io/restore-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"16d88664-2c52-49bc-aa6d-c530a9ffde02\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:nodeOS": {}, "f:operationTimeout": {}, "f:snapshotID": {}, "f:sourceNamespace": {}, "f:targetVolume": { ".": {}, "f:namespace": {}, "f:pv": {}, "f:pvc": {} } }, "f:status": { ".": {}, "f:progress": {} } } }, { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:08:19Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:node": {}, "f:phase": {}, "f:startTimestamp": {} } } } ] }, "spec": { "targetVolume": { "pvc": "test-vm-dv", "pv": "", "namespace": "test-oadp-401" }, "backupStorageLocation": "ts-dpa-1", "snapshotID": "7be207138a1d4e2f52e27d3639a9704e", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s", "nodeOS": "linux" }, "status": { "phase": "InProgress", "startTimestamp": "2025-09-01T08:08:19Z", "progress": {}, "node": "ip-10-0-99-76.ec2.internal", "acceptedByNode": "ip-10-0-56-118.ec2.internal", "acceptedTimestamp": "2025-09-01T08:08:08Z" } } restore phase: WaitingForPluginOperations DataDownload ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-wcf7b phase: InProgress DataDownload Name: 
ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-wcf7b and status: InProgress 2025/09/01 08:08:45 { "kind": "DataDownload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-wcf7b", "generateName": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-", "namespace": "openshift-adp", "uid": "ac8b2c38-1cce-4d85-a18a-ede837451fb3", "resourceVersion": "85427", "generation": 6, "creationTimestamp": "2025-09-01T08:08:08Z", "labels": { "velero.io/async-operation-id": "dd-16d88664-2c52-49bc-aa6d-c530a9ffde02.b0a915f3-f806-48f016cd5", "velero.io/restore-name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "velero.io/restore-uid": "16d88664-2c52-49bc-aa6d-c530a9ffde02" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Restore", "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "uid": "16d88664-2c52-49bc-aa6d-c530a9ffde02", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:08:08Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/restore-name": {}, "f:velero.io/restore-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"16d88664-2c52-49bc-aa6d-c530a9ffde02\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:nodeOS": {}, "f:operationTimeout": {}, "f:snapshotID": {}, "f:sourceNamespace": {}, "f:targetVolume": { ".": {}, "f:namespace": {}, "f:pv": {}, "f:pvc": {} } }, "f:status": { ".": {}, "f:progress": {} } } }, { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:08:40Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:node": {}, "f:phase": {}, "f:progress": { "f:bytesDone": {}, "f:totalBytes": {} }, "f:startTimestamp": {} } } } ] }, "spec": { "targetVolume": { "pvc": "test-vm-dv", "pv": "", "namespace": "test-oadp-401" }, "backupStorageLocation": "ts-dpa-1", "snapshotID": "7be207138a1d4e2f52e27d3639a9704e", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s", "nodeOS": "linux" }, "status": { "phase": "InProgress", "startTimestamp": "2025-09-01T08:08:19Z", "progress": { "totalBytes": 5073010688, "bytesDone": 1109590016 }, "node": "ip-10-0-99-76.ec2.internal", "acceptedByNode": "ip-10-0-56-118.ec2.internal", "acceptedTimestamp": "2025-09-01T08:08:08Z" } } restore phase: WaitingForPluginOperations DataDownload ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-wcf7b phase: InProgress DataDownload Name: ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-wcf7b and status: InProgress 2025/09/01 08:09:05 { "kind": "DataDownload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-wcf7b", "generateName": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-", "namespace": "openshift-adp", "uid": "ac8b2c38-1cce-4d85-a18a-ede837451fb3", "resourceVersion": "85717", "generation": 8, "creationTimestamp": "2025-09-01T08:08:08Z", "labels": { "velero.io/async-operation-id": "dd-16d88664-2c52-49bc-aa6d-c530a9ffde02.b0a915f3-f806-48f016cd5", "velero.io/restore-name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "velero.io/restore-uid": 
"16d88664-2c52-49bc-aa6d-c530a9ffde02" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Restore", "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "uid": "16d88664-2c52-49bc-aa6d-c530a9ffde02", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:08:08Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/restore-name": {}, "f:velero.io/restore-uid": {} }, "f:ownerReferences": { ".": {}, "k:{\"uid\":\"16d88664-2c52-49bc-aa6d-c530a9ffde02\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:nodeOS": {}, "f:operationTimeout": {}, "f:snapshotID": {}, "f:sourceNamespace": {}, "f:targetVolume": { ".": {}, "f:namespace": {}, "f:pv": {}, "f:pvc": {} } }, "f:status": { ".": {}, "f:progress": {} } } }, { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:09:00Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:node": {}, "f:phase": {}, "f:progress": { "f:bytesDone": {}, "f:totalBytes": {} }, "f:startTimestamp": {} } } } ] }, "spec": { "targetVolume": { "pvc": "test-vm-dv", "pv": "", "namespace": "test-oadp-401" }, "backupStorageLocation": "ts-dpa-1", "snapshotID": "7be207138a1d4e2f52e27d3639a9704e", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s", "nodeOS": "linux" }, "status": { "phase": "InProgress", "startTimestamp": "2025-09-01T08:08:19Z", "progress": { "totalBytes": 5073010688, "bytesDone": 2727542784 }, "node": "ip-10-0-99-76.ec2.internal", "acceptedByNode": "ip-10-0-56-118.ec2.internal", "acceptedTimestamp": "2025-09-01T08:08:08Z" } } restore phase: WaitingForPluginOperations DataDownload ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-wcf7b phase: InProgress DataDownload Name: ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-wcf7b and status: InProgress 2025/09/01 08:09:25 { "kind": "DataDownload", "apiVersion": "velero.io/v2alpha1", "metadata": { "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-wcf7b", "generateName": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-", "namespace": "openshift-adp", "uid": "ac8b2c38-1cce-4d85-a18a-ede837451fb3", "resourceVersion": "86055", "generation": 11, "creationTimestamp": "2025-09-01T08:08:08Z", "labels": { "velero.io/async-operation-id": "dd-16d88664-2c52-49bc-aa6d-c530a9ffde02.b0a915f3-f806-48f016cd5", "velero.io/restore-name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "velero.io/restore-uid": "16d88664-2c52-49bc-aa6d-c530a9ffde02" }, "ownerReferences": [ { "apiVersion": "velero.io/v1", "kind": "Restore", "name": "ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7", "uid": "16d88664-2c52-49bc-aa6d-c530a9ffde02", "controller": true } ], "finalizers": [ "velero.io/data-upload-download-finalizer" ], "managedFields": [ { "manager": "velero", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:08:08Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:generateName": {}, "f:labels": { ".": {}, "f:velero.io/async-operation-id": {}, "f:velero.io/restore-name": {}, "f:velero.io/restore-uid": {} }, "f:ownerReferences": { ".": {}, 
"k:{\"uid\":\"16d88664-2c52-49bc-aa6d-c530a9ffde02\"}": {} } }, "f:spec": { ".": {}, "f:backupStorageLocation": {}, "f:nodeOS": {}, "f:operationTimeout": {}, "f:snapshotID": {}, "f:sourceNamespace": {}, "f:targetVolume": { ".": {}, "f:namespace": {}, "f:pv": {}, "f:pvc": {} } }, "f:status": { ".": {}, "f:progress": {} } } }, { "manager": "node-agent-server", "operation": "Update", "apiVersion": "velero.io/v2alpha1", "time": "2025-09-01T08:09:23Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { ".": {}, "v:\"velero.io/data-upload-download-finalizer\"": {} } }, "f:status": { "f:acceptedByNode": {}, "f:acceptedTimestamp": {}, "f:node": {}, "f:phase": {}, "f:progress": { "f:bytesDone": {}, "f:totalBytes": {} }, "f:startTimestamp": {} } } } ] }, "spec": { "targetVolume": { "pvc": "test-vm-dv", "pv": "", "namespace": "test-oadp-401" }, "backupStorageLocation": "ts-dpa-1", "snapshotID": "7be207138a1d4e2f52e27d3639a9704e", "sourceNamespace": "test-oadp-401", "operationTimeout": "10m0s", "nodeOS": "linux" }, "status": { "phase": "InProgress", "startTimestamp": "2025-09-01T08:08:19Z", "progress": { "totalBytes": 5073010688, "bytesDone": 5073010688 }, "node": "ip-10-0-99-76.ec2.internal", "acceptedByNode": "ip-10-0-56-118.ec2.internal", "acceptedTimestamp": "2025-09-01T08:08:08Z" } } restore phase: Completed STEP: Validate the application after restore @ 09/01/25 08:09:45.679 STEP: Verify Application deployment for case ocp-kubevirt @ 09/01/25 08:09:45.679 2025/09/01 08:09:45 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] 
************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** ok: [localhost] FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left). FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=4  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025/09/01 08:10:07 2025-09-01 08:09:47,202 p=25041 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:09:47,203 p=25041 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:09:47,463 p=25041 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:09:47,464 p=25041 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:09:47,719 p=25041 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:09:47,719 p=25041 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:09:47,977 p=25041 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:09:47,977 p=25041 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:09:47,992 p=25041 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:09:47,992 p=25041 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:09:48,012 p=25041 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:09:48,012 p=25041 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:09:48,024 p=25041 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:09:48,024 p=25041 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:09:48,335 p=25041 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:09:48,335 p=25041 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:09:48,367 p=25041 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:09:48,367 p=25041 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:09:48,387 p=25041 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:09:48,387 p=25041 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:09:48,388 p=25041 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 
08:09:48,952 p=25041 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:09:48,952 p=25041 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:09:49,937 p=25041 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to be Running & Ready (60 retries left). 2025-09-01 08:09:55,598 p=25041 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to be Running & Ready] *** 2025-09-01 08:09:55,599 p=25041 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-09-01 08:09:55,599 p=25041 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:09:56,277 p=25041 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (60 retries left). 2025-09-01 08:10:01,999 p=25041 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait for VM to have AgentConnected status True indicating the guest agent is running (59 retries left). 2025-09-01 08:10:07,788 p=25041 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Wait for VM to have AgentConnected status True indicating the guest agent is running] *** 2025-09-01 08:10:07,788 p=25041 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:10:07,792 p=25041 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:10:07,792 p=25041 u=1002790000 n=ansible INFO| localhost : ok=17 changed=4 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 < Exit [It] [tc-id:OADP-401] [kubevirt] Started VM should over ceph filesytem mode @ 09/01/25 08:10:07.853 (6m23.905s) > Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 08:10:07.853 2025/09/01 08:10:07 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 08:10:07.853 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:10:07.853 2025/09/01 08:10:07 Cleaning app 2025/09/01 08:10:07 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
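Note: the cleanup play that follows boils down to a single role task, deleting the test namespace and letting Kubernetes garbage-collect the VM, DataVolume, and PVCs inside it. The manual equivalent is roughly (flags illustrative):

# Manual equivalent of the 'Remove namespace' role task
oc delete namespace test-oadp-401 --wait=true --timeout=5m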
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-401] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/09/01 08:10:32 2025-09-01 08:10:09,641 p=25309 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:10:09,641 p=25309 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:10:09,915 p=25309 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:10:09,915 p=25309 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:10:10,210 p=25309 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:10:10,211 p=25309 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:10:10,491 p=25309 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:10:10,491 p=25309 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:10:10,506 p=25309 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:10:10,507 p=25309 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:10:10,526 p=25309 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:10:10,526 p=25309 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:10:10,541 p=25309 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:10:10,541 p=25309 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:10:10,883 p=25309 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:10:10,883 p=25309 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:10:10,918 p=25309 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:10:10,919 p=25309 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:10:10,939 p=25309 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:10:10,940 p=25309 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:10:10,941 p=25309 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:10:11,542 p=25309 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:10:11,542 p=25309 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:10:32,460 p=25309 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-kubevirt : Remove namespace test-oadp-401] *** 2025-09-01 08:10:32,461 p=25309 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:10:32,461 p=25309 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:10:32,630 p=25309 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:10:32,630 p=25309 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:10:32.681 (24.828s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:10:32.681 2025/09/01 08:10:32 Cleaning setup resources for the backup 2025/09/01 08:10:32 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/09/01 08:10:32 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/09/01 08:10:32 Deleting VolumeSnapshotClass 'example-snapclass' < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:10:32.804 (122ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:10:32.804 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:10:32.816 (13ms) • [408.868 seconds] ------------------------------ [AfterSuite]  /alabama/cspi/e2e/kubevirt-plugin/kubevirt_suite_test.go:105 > Enter [AfterSuite] TOP-LEVEL @ 09/01/25 08:10:32.816 < Exit [AfterSuite] TOP-LEVEL @ 09/01/25 08:10:32.828 (11ms) [AfterSuite] PASSED [0.011 seconds] ------------------------------ [ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report autogenerated by Ginkgo > Enter [ReportAfterSuite] TOP-LEVEL @ 09/01/25 08:10:32.828 < Exit [ReportAfterSuite] TOP-LEVEL @ 09/01/25 08:10:32.831 (4ms) [ReportAfterSuite] PASSED [0.004 seconds] ------------------------------ Ran 4 of 5 Specs in 949.961 seconds SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 1 Skipped PASS Ginkgo ran 1 suite in 16m51.474274917s Test Suite Passed + readonly 'RED=\e[31m' + RED='\e[31m' + readonly 'BLUE=\033[34m' + BLUE='\033[34m' + readonly 'CLEAR=\e[39m' + CLEAR='\e[39m' ++ oc get infrastructures cluster -o 'jsonpath={.status.platform}' ++ awk '{print tolower($0)}' + CLOUD_PROVIDER=aws + [[ '' == \t\r\u\e ]] + echo /home/jenkins/.kube/config /home/jenkins/.kube/config + [[ aws == *-arm* ]] + [[ aws == *-fips* ]] + E2E_TIMEOUT_MULTIPLIER=2 + export NAMESPACE=openshift-adp + NAMESPACE=openshift-adp + export PROVIDER=aws + PROVIDER=aws ++ echo aws ++ awk '{print tolower($0)}' + BACKUP_LOCATION=aws + export BACKUP_LOCATION=aws + BACKUP_LOCATION=aws + export BUCKET=ci-op-cl9vhfrj-interopoadp + BUCKET=ci-op-cl9vhfrj-interopoadp + OADP_CREDS_FILE=/tmp/test-settings/credentials + OADP_VSL_CREDS_FILE=/tmp/test-settings/aws_vsl_creds +++ readlink -f /alabama/cspi/test_settings/scripts/test_runner.sh ++ dirname /alabama/cspi/test_settings/scripts/test_runner.sh + readonly SCRIPT_DIR=/alabama/cspi/test_settings/scripts + SCRIPT_DIR=/alabama/cspi/test_settings/scripts ++ cd /alabama/cspi/test_settings/scripts ++ git rev-parse --show-toplevel + readonly TOP_DIR=/alabama/cspi + TOP_DIR=/alabama/cspi + echo /alabama/cspi /alabama/cspi + TESTS_FOLDER=/alabama/cspi/e2e ++ oc get nodes -o 'jsonpath={.items[*].metadata.labels.kubernetes\.io/arch}' ++ tr ' ' '\n' ++ sort -u ++ xargs + export NODES_ARCHITECTURE=amd64 + NODES_ARCHITECTURE=amd64 + export OADP_REPOSITORY=redhat + OADP_REPOSITORY=redhat + SKIP_DPA_CREATION=false ++ oc get ns openshift-storage ++ echo true + OPENSHIFT_STORAGE=true + '[' redhat == upstream-velero ']' + '[' true == true ']' ++ oc get sc ++ awk '$1 ~ /^.+ceph-rbd$/ {print $1}' ++ tail -1 + 
CEPH_RBD_STORAGE_CLASS=odf-operator-ceph-rbd + '[' -n odf-operator-ceph-rbd ']' + export CEPH_RBD_STORAGE_CLASS + echo 'ceph-rbd StorageClass found: odf-operator-ceph-rbd' ceph-rbd StorageClass found: odf-operator-ceph-rbd ++ oc get storageclass -o 'jsonpath={range .items[*]}{@.metadata.name}{" "}{@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}{"\n"}{end}' ++ awk '$2=="true"{print $1}' ++ wc -l + NUM_DEFAULT_STORAGE_CLASS=1 + '[' 1 -ne 1 ']' ++ oc get storageclass -o 'jsonpath={.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=='\''true'\'')].metadata.name}' + DEFAULT_SC=odf-operator-ceph-rbd + export STORAGE_CLASS=odf-operator-ceph-rbd + STORAGE_CLASS=odf-operator-ceph-rbd + '[' -n odf-operator-ceph-rbd ']' + '[' odf-operator-ceph-rbd '!=' odf-operator-ceph-rbd ']' + export STORAGE_CLASS_OPENSHIFT_STORAGE=odf-operator-ceph-rbd + STORAGE_CLASS_OPENSHIFT_STORAGE=odf-operator-ceph-rbd + echo 'Using the StorageClass for openshift-storage: odf-operator-ceph-rbd' Using the StorageClass for openshift-storage: odf-operator-ceph-rbd + [[ amd64 != \a\m\d\6\4 ]] + TEST_FILTER='!// || (// && !exclude_aws && (!/target/ || target_aws) ) ' + [[ aws =~ ^osp ]] + [[ aws =~ ^vsphere ]] + [[ aws =~ ^gcp-wif ]] + [[ aws =~ ^ibmcloud ]] ++ oc config current-context ++ awk -F / '{print $2}' + SETTINGS_TMP=/alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443 + mkdir -p /alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443 ++ oc get authentication cluster -o 'jsonpath={.spec.serviceAccountIssuer}' + IS_OIDC= + '[' '!' -z ']' + [[ aws == \a\w\s ]] + export PROVIDER=aws + PROVIDER=aws + export CREDS_SECRET_REF=cloud-credentials + CREDS_SECRET_REF=cloud-credentials ++ oc get infrastructures cluster -o 'jsonpath={.status.platformStatus.aws.region}' --allow-missing-template-keys=false + export REGION=us-east-1 + REGION=us-east-1 + settings_script=aws_settings.sh + '[' aws == aws-sts ']' + BUCKET=ci-op-cl9vhfrj-interopoadp + TMP_DIR=/alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443 + source /alabama/cspi/test_settings/scripts/aws_settings.sh ++ cat ++ [[ aws == *aws* ]] ++ cat ++ echo -e '\n }\n}' +++ cat /alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443/settings.json ++ x='{ "metadata": { "namespace": "openshift-adp" }, "spec": { "configuration":{ "velero":{ "defaultPlugins": [ "openshift", "aws" ] } }, "backupLocations": [ { "velero": { "provider": "aws", "default": true, "config": { "region": "us-east-1" }, "credential":{ "name": "cloud-credentials", "key": "cloud" }, "objectStorage":{ "bucket": "ci-op-cl9vhfrj-interopoadp" } } } ] , "snapshotLocations": [ { "velero": { "provider": "aws", "config": { "profile": "default", "region": "us-east-1" } } } ] } }' ++ echo '{ "metadata": { "namespace": "openshift-adp" }, "spec": { "configuration":{ "velero":{ "defaultPlugins": [ "openshift", "aws" ] } }, "backupLocations": [ { "velero": { "provider": "aws", "default": true, "config": { "region": "us-east-1" }, "credential":{ "name": "cloud-credentials", "key": "cloud" }, "objectStorage":{ "bucket": "ci-op-cl9vhfrj-interopoadp" } } } ] , "snapshotLocations": [ { "velero": { "provider": "aws", "config": { "profile": "default", "region": "us-east-1" } } } ] } }' ++ grep -o '^[^#]*' + FILE_SETTINGS_NAME=settings.json + printf '\033[34mGenerated settings file under 
/alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443/settings.json\e[39m\n' Generated settings file under /alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443/settings.json + cat /alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443/settings.json ++ oc get volumesnapshotclass -o name + for i in $(oc get volumesnapshotclass -o name) + oc annotate volumesnapshotclass.snapshot.storage.k8s.io/csi-aws-vsc snapshot.storage.kubernetes.io/is-default-class- volumesnapshotclass.snapshot.storage.k8s.io/csi-aws-vsc annotated + for i in $(oc get volumesnapshotclass -o name) + oc annotate volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-cephfsplugin-snapclass snapshot.storage.kubernetes.io/is-default-class- volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-cephfsplugin-snapclass annotated + for i in $(oc get volumesnapshotclass -o name) + oc annotate volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-rbdplugin-snapclass snapshot.storage.kubernetes.io/is-default-class- volumesnapshotclass.snapshot.storage.k8s.io/odf-operator-rbdplugin-snapclass annotated ++ ./e2e/must-gather/get-latest-build.sh + oc get configmaps -n default must-gather-image + UPSTREAM_VERSION=99.0.0 ++ oc get OperatorCondition -n openshift-adp -o 'jsonpath={.items[*].metadata.name}' ++ awk -F v '{print $2}' + OADP_VERSION=1.5.0 + '[' -z 1.5.0 ']' + '[' 1.5.0 == 99.0.0 ']' ++ oc get sub redhat-oadp-operator -n openshift-adp -o 'jsonpath={.spec.source}' + OADP_REPO=redhat-operators + '[' -z redhat-operators ']' + '[' redhat-operators == redhat-operators ']' + REGISTRY_PATH=registry.redhat.io/oadp/oadp-mustgather-rhel9: + TAG=1.5.0 + export MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 + MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 + echo registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 + export MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 + MUST_GATHER_BUILD=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 + '[' -z registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 ']' + export NUM_OF_OADP_INSTANCES=1 + NUM_OF_OADP_INSTANCES=1 ++ echo --focus=interop ++ tr ' ' '\n' ++ grep '^--' ++ tr '\n' ' ' + GINKO_PARAM='--focus=interop ' ++ echo --focus=interop ++ tr ' ' '\n' ++ grep '^-' ++ grep -v '^--' ++ tr '\n' ' ' + TEST_PARAM= + ginkgo run --nodes=1 -mod=mod --show-node-events --flake-attempts 3 --junit-report=/logs/artifacts/junit_oadp_interop_results.xml '--label-filter=!// || (// && !exclude_aws && (!/target/ || target_aws) ) ' --focus=interop -p /alabama/cspi/e2e/ -- -credentials_file=/tmp/test-settings/credentials -vsl_credentials_file=/tmp/test-settings/aws_vsl_creds -oadp_namespace=openshift-adp -settings=/alabama/cspi/output_files/api-ci-op-cl9vhfrj-b2a90-cspilp-interop-ccitredhat-com:6443/settings.json -must_gather_image=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 -timeout_multiplier=2 -skip_dpa_creation=false 2025/09/01 08:10:34 maxprocs: Leaving GOMAXPROCS=16: CPU quota undefined Ginkgo detected a version mismatch between the Ginkgo CLI and the version of Ginkgo imported by your packages: Ginkgo CLI Version: 2.25.2 Mismatched package versions found: 2.23.4 used by e2e Ginkgo will continue to attempt to run but you may see errors (including flag parsing errors) and should either update your go.mod or your version of the Ginkgo CLI to match. 
To install the matching version of the CLI run go install github.com/onsi/ginkgo/v2/ginkgo from a path that contains a go.mod file. Alternatively you can use go run github.com/onsi/ginkgo/v2/ginkgo from a path that contains a go.mod file to invoke the matching version of the Ginkgo CLI. If you are attempting to test multiple packages that each have a different version of the Ginkgo library with a single Ginkgo CLI that is currently unsupported.
2025/09/01 08:10:40 Setting up clients
2025/09/01 08:10:40 Getting default StorageClass...
2025/09/01 08:10:40 Checking default storage class count
Run the command: oc get sc
2025/09/01 08:10:40 Got default StorageClass odf-operator-ceph-rbd
2025/09/01 08:10:40 oc get sc
NAME                                   PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2-csi                                ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   85m
gp3-csi                                ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   85m
odf-operator-ceph-rbd (default)        openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   22m
odf-operator-ceph-rbd-virtualization   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   22m
odf-operator-cephfs                    openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   22m
openshift-storage.noobaa.io            openshift-storage.noobaa.io/obc         Delete          Immediate              false                  18m
2025/09/01 08:10:40 Using velero prefix: velero-e2e-261040c6-870b-11f0-8ef4-0a580a81b6e7
2025/09/01 08:10:40 Checking default storage class count
Running Suite: OADP E2E Suite - /alabama/cspi/e2e
=================================================
Random Seed: 1756714234
Will run 8 of 227 specs
------------------------------
[SynchronizedBeforeSuite]
/alabama/cspi/e2e/e2e_suite_test.go:84
> Enter [SynchronizedBeforeSuite] TOP-LEVEL @ 09/01/25 08:10:40.158
2025/09/01 08:10:40 Error getting credentials secret: secrets "cloud-credentials" not found
< Exit [SynchronizedBeforeSuite] TOP-LEVEL @ 09/01/25 08:10:40.162 (3ms)
> Enter [SynchronizedBeforeSuite] TOP-LEVEL @ 09/01/25 08:10:40.162
2025/09/01 08:10:40 The VSL credentials file: /tmp/test-settings/aws_vsl_creds doesn't exist
2025/09/01 08:10:40 The error message is: stat /tmp/test-settings/aws_vsl_creds: no such file or directory
< Exit [SynchronizedBeforeSuite] TOP-LEVEL @ 09/01/25 08:10:40.178 (16ms)
[SynchronizedBeforeSuite] PASSED [0.019 seconds]
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[datamover] DataMover: Backup/Restore stateful application with CSI
[tc-id:OADP-439][interop] MySQL application
/alabama/cspi/e2e/app_backup/backup_restore_datamover.go:34
> Enter [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 09/01/25 08:10:40.18
< Exit [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 09/01/25 08:10:40.185 (6ms)
> Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:10:40.186
< Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:10:40.186 (0s)
> Enter [It] [tc-id:OADP-439][interop] MySQL application @ 09/01/25 08:10:40.186
2025/09/01 08:10:40 Delete all downloadrequest
No download requests are found
STEP: Create DPA CR @ 09/01/25 08:10:40.191
2025/09/01 08:10:40 native-datamover
2025/09/01 08:10:40 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "3836a45e-f38c-448d-91c4-1e7d8c8d462b", "resourceVersion": "87369", "generation": 1, "creationTimestamp": "2025-09-01T08:10:40Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:10:40Z",
"fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 09/01/25 08:10:40.217 2025/09/01 08:10:40 Waiting for velero pod to be running 2025/09/01 08:10:45 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 08:10:45.242 Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false 2025/09/01 08:10:45 The 'openshift-storage' namespace exists 2025/09/01 08:10:45 Checking default storage class count 2025/09/01 08:10:45 Using the CSI driver: openshift-storage.rbd.csi.ceph.com 2025/09/01 08:10:45 Snapclass 'example-snapclass' doesn't exist, creating 2025/09/01 08:10:45 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/09/01 08:10:45 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/09/01 08:10:45 Checking for correct number of running NodeAgent pods... STEP: Installing application for case mysql @ 09/01/25 08:10:45.574 2025/09/01 08:10:45 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace test-oadp-439] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check pod status (30 retries left). FAILED - RETRYING: [localhost]: Check pod status (29 retries left). FAILED - RETRYING: [localhost]: Check pod status (28 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left). FAILED - RETRYING: [localhost]: Wait until service ready for connections (29 retries left). FAILED - RETRYING: [localhost]: Wait until service ready for connections (28 retries left). 
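Note: the FAILED - RETRYING lines above are the role's normal polling loops ('Check pod status', then 'Wait until service ready for connections'), not test failures. A one-shot alternative to the pod-status loop, assuming the mysql pod carries the app=mysql label that its PVCs are shown with later in this log:

# Block until the mysql pod reports Ready instead of polling from Ansible
oc -n test-oadp-439 wait pod -l app=mysql --for=condition=Ready --timeout=300s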
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Add dummy data into mysql-data1 pvc] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create md5 hashes for the files] *** changed: [localhost] Pausing for 30 seconds TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Pause After Create md5 hashes for the files] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/09/01 08:12:00 2025-09-01 08:10:47,065 p=26609 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:10:47,065 p=26609 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:10:47,315 p=26609 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:10:47,315 p=26609 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:10:47,572 p=26609 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:10:47,572 p=26609 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:10:47,825 p=26609 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:10:47,825 p=26609 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:10:47,839 p=26609 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:10:47,839 p=26609 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:10:47,857 p=26609 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:10:47,858 p=26609 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:10:47,871 p=26609 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:10:47,871 p=26609 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:10:48,184 p=26609 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:10:48,185 p=26609 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:10:48,213 p=26609 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:10:48,213 p=26609 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:10:48,230 p=26609 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:10:48,230 p=26609 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:10:48,232 p=26609 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:10:48,799 p=26609 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:10:48,799 p=26609 u=1002790000 n=ansible INFO| ok: [localhost] 
2025-09-01 08:10:49,608 p=26609 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace test-oadp-439] *** 2025-09-01 08:10:49,608 p=26609 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-09-01 08:10:49,608 p=26609 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:10:49,994 p=26609 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] *** 2025-09-01 08:10:49,995 p=26609 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:10:50,917 p=26609 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] *** 2025-09-01 08:10:50,917 p=26609 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:10:51,585 p=26609 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pod status (30 retries left). 2025-09-01 08:10:57,213 p=26609 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pod status (29 retries left). 2025-09-01 08:11:02,839 p=26609 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pod status (28 retries left). 2025-09-01 08:11:08,482 p=26609 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] *** 2025-09-01 08:11:08,482 p=26609 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:11:09,125 p=26609 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] *** 2025-09-01 08:11:09,126 p=26609 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:11:09,452 p=26609 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left). 2025-09-01 08:11:14,761 p=26609 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (29 retries left). 2025-09-01 08:11:20,040 p=26609 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (28 retries left). 
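Note: the 'Wait until service ready for connections' task that resolves below verifies that mysqld itself accepts connections, not merely that the pod is Running. A hand-rolled check along the same lines, assuming the workload is exposed as deploy/mysql (the role's actual probe command is not shown in this log):

# Ask mysqld directly whether it is accepting connections
oc -n test-oadp-439 exec deploy/mysql -- mysqladmin ping -h 127.0.0.1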
2025-09-01 08:11:25,320 p=26609 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-09-01 08:11:25,320 p=26609 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:11:27,149 p=26609 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] *** 2025-09-01 08:11:27,149 p=26609 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:11:29,937 p=26609 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Add dummy data into mysql-data1 pvc] *** 2025-09-01 08:11:29,937 p=26609 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:11:30,619 p=26609 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create md5 hashes for the files] *** 2025-09-01 08:11:30,619 p=26609 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:11:30,638 p=26609 u=1002790000 n=ansible INFO| Pausing for 30 seconds 2025-09-01 08:12:00,641 p=26609 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Pause After Create md5 hashes for the files] *** 2025-09-01 08:12:00,641 p=26609 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:12:00,752 p=26609 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:12:00,752 p=26609 u=1002790000 n=ansible INFO| localhost : ok=25 changed=11 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 STEP: Verify Application deployment @ 09/01/25 08:12:00.803 2025/09/01 08:12:00 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025/09/01 08:12:06 2025-09-01 08:12:02,286 p=27188 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:12:02,286 p=27188 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:12:02,537 p=27188 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:12:02,537 p=27188 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:12:02,793 p=27188 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:12:02,794 p=27188 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:12:03,051 p=27188 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:12:03,052 p=27188 u=1002790000 n=ansible 
INFO| changed: [localhost] 2025-09-01 08:12:03,066 p=27188 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:12:03,066 p=27188 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:12:03,085 p=27188 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:12:03,085 p=27188 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:12:03,097 p=27188 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:12:03,097 p=27188 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:12:03,413 p=27188 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:12:03,413 p=27188 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:12:03,439 p=27188 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:12:03,440 p=27188 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:12:03,457 p=27188 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:12:03,457 p=27188 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:12:03,459 p=27188 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:12:04,019 p=27188 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:12:04,019 p=27188 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:12:05,018 p=27188 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** 2025-09-01 08:12:05,018 p=27188 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:12:05,434 p=27188 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-09-01 08:12:05,434 p=27188 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:12:05,981 p=27188 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** 2025-09-01 08:12:05,982 p=27188 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:12:06,635 p=27188 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** 2025-09-01 08:12:06,635 p=27188 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:12:06,639 p=27188 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:12:06,639 p=27188 u=1002790000 n=ansible INFO| localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0 2025/09/01 08:12:06 {{ } { } [{{ } {mysql-data test-oadp-439 dc008a96-ed84-403f-8930-4cc0a76a7cd0 87715 0 2025-09-01 08:10:50 +0000 UTC map[app:mysql testlabel:selectors testlabel2:foo] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-data-1756714251 reclaimspace.csiaddons.openshift.io/schedule:@weekly 
volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-09-01 08:10:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:testlabel":{},"f:testlabel2":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-09-01 08:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-09-01 08:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-09-01 08:10:51 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status}]} {[ReadWriteOnce] nil {map[] map[storage:{{2147483648 0} {} 2Gi BinarySI}]} pvc-dc008a96-ed84-403f-8930-4cc0a76a7cd0 0xc000c595d0 0xc000c595e0 nil nil } {Bound [ReadWriteOnce] map[storage:{{2147483648 0} {} 2Gi BinarySI}] [] map[] map[] nil}} {{ } {mysql-data1 test-oadp-439 8a5cc7d9-9e69-4534-a9eb-26602d6e4797 87732 0 2025-09-01 08:10:50 +0000 UTC map[app:mysql testlabel:selectors testlabel2:foo] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-data1-1756714251 reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-09-01 08:10:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:testlabel":{},"f:testlabel2":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-09-01 08:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-09-01 08:10:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-09-01 08:10:51 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status}]} {[ReadWriteOnce] nil {map[] map[storage:{{2147483648 0} {} 2Gi BinarySI}]} pvc-8a5cc7d9-9e69-4534-a9eb-26602d6e4797 0xc000c59740 0xc000c59750 nil nil } {Bound [ReadWriteOnce] map[storage:{{2147483648 0} {} 2Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 @ 09/01/25 08:12:06.699 2025/09/01 08:12:06 Wait until backup mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 is completed backup phase: WaitingForPluginOperations backup phase: Completed 2025/09/01 08:12:46 Validating 2 DataUploads for backup mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 2025/09/01 08:12:46 DataUpload 
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-j4frv has phase: Completed 2025/09/01 08:12:46 apiVersion: velero.io/v2alpha1 kind: DataUpload metadata: creationTimestamp: "2025-09-01T08:12:13Z" generateName: mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7- generation: 7 labels: velero.io/async-operation-id: du-28c19e77-fddb-43d2-a6c5-e71c7f32226b.dc008a96-ed84-40359d17f velero.io/backup-name: mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 velero.io/backup-uid: 28c19e77-fddb-43d2-a6c5-e71c7f32226b velero.io/pvc-uid: dc008a96-ed84-403f-8930-4cc0a76a7cd0 managedFields: - apiVersion: velero.io/v2alpha1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:velero.io/async-operation-id: {} f:velero.io/backup-name: {} f:velero.io/backup-uid: {} f:velero.io/pvc-uid: {} f:ownerReferences: .: {} k:{"uid":"28c19e77-fddb-43d2-a6c5-e71c7f32226b"}: {} f:spec: .: {} f:backupStorageLocation: {} f:csiSnapshot: .: {} f:snapshotClass: {} f:storageClass: {} f:volumeSnapshot: {} f:operationTimeout: {} f:snapshotType: {} f:sourceNamespace: {} f:sourcePVC: {} f:status: .: {} f:progress: {} manager: velero operation: Update time: "2025-09-01T08:12:13Z" - apiVersion: velero.io/v2alpha1 fieldsType: FieldsV1 fieldsV1: f:status: f:acceptedByNode: {} f:acceptedTimestamp: {} f:completionTimestamp: {} f:node: {} f:nodeOS: {} f:path: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:snapshotID: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:12:32Z" name: mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-j4frv namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Backup name: mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 uid: 28c19e77-fddb-43d2-a6c5-e71c7f32226b resourceVersion: "89379" uid: 7cd4e7b2-eaec-4931-af63-709c895076c4 spec: backupStorageLocation: ts-dpa-1 csiSnapshot: snapshotClass: example-snapclass storageClass: odf-operator-ceph-rbd volumeSnapshot: velero-mysql-data-lrhcc operationTimeout: 10m0s snapshotType: CSI sourceNamespace: test-oadp-439 sourcePVC: mysql-data status: acceptedByNode: ip-10-0-99-76.ec2.internal acceptedTimestamp: "2025-09-01T08:12:13Z" completionTimestamp: "2025-09-01T08:12:32Z" node: ip-10-0-99-76.ec2.internal nodeOS: linux path: /7cd4e7b2-eaec-4931-af63-709c895076c4 phase: Completed progress: bytesDone: 107854713 totalBytes: 107854713 snapshotID: 1bc53a59c3fc26fc2b7796a37b63db43 startTimestamp: "2025-09-01T08:12:24Z" 2025/09/01 08:12:46 DataUpload mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-j4frv completed successfully 2025/09/01 08:12:46 DataUpload mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-tbjbq has phase: Completed 2025/09/01 08:12:46 apiVersion: velero.io/v2alpha1 kind: DataUpload metadata: creationTimestamp: "2025-09-01T08:12:18Z" generateName: mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7- generation: 7 labels: velero.io/async-operation-id: du-28c19e77-fddb-43d2-a6c5-e71c7f32226b.8a5cc7d9-9e69-453e74a60 velero.io/backup-name: mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 velero.io/backup-uid: 28c19e77-fddb-43d2-a6c5-e71c7f32226b velero.io/pvc-uid: 8a5cc7d9-9e69-4534-a9eb-26602d6e4797 managedFields: - apiVersion: velero.io/v2alpha1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:velero.io/async-operation-id: {} f:velero.io/backup-name: {} f:velero.io/backup-uid: {} f:velero.io/pvc-uid: {} f:ownerReferences: .: {} k:{"uid":"28c19e77-fddb-43d2-a6c5-e71c7f32226b"}: {} f:spec: .: {} f:backupStorageLocation: {} f:csiSnapshot: .: {} f:snapshotClass: {} f:storageClass: 
{} f:volumeSnapshot: {} f:operationTimeout: {} f:snapshotType: {} f:sourceNamespace: {} f:sourcePVC: {} f:status: .: {} f:progress: {} manager: velero operation: Update time: "2025-09-01T08:12:18Z" - apiVersion: velero.io/v2alpha1 fieldsType: FieldsV1 fieldsV1: f:status: f:acceptedByNode: {} f:acceptedTimestamp: {} f:completionTimestamp: {} f:node: {} f:nodeOS: {} f:path: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:snapshotID: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:12:40Z" name: mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-tbjbq namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Backup name: mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 uid: 28c19e77-fddb-43d2-a6c5-e71c7f32226b resourceVersion: "89548" uid: b01fb34d-4e69-44d2-91f4-6ffd45badc12 spec: backupStorageLocation: ts-dpa-1 csiSnapshot: snapshotClass: example-snapclass storageClass: odf-operator-ceph-rbd volumeSnapshot: velero-mysql-data1-kvcq9 operationTimeout: 10m0s snapshotType: CSI sourceNamespace: test-oadp-439 sourcePVC: mysql-data1 status: acceptedByNode: ip-10-0-93-94.ec2.internal acceptedTimestamp: "2025-09-01T08:12:18Z" completionTimestamp: "2025-09-01T08:12:40Z" node: ip-10-0-93-94.ec2.internal nodeOS: linux path: /b01fb34d-4e69-44d2-91f4-6ffd45badc12 phase: Completed progress: bytesDone: 104857640 totalBytes: 104857640 snapshotID: f1c92eb83852f77aeeaad451f9155650 startTimestamp: "2025-09-01T08:12:33Z" 2025/09/01 08:12:46 DataUpload mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-tbjbq completed successfully 2025/09/01 08:12:46 All 2 DataUploads completed successfully for backup mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 STEP: Verify backup mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 has completed successfully @ 09/01/25 08:12:46.738 2025/09/01 08:12:46 Backup for case mysql succeeded STEP: Delete the application resources mysql @ 09/01/25 08:12:46.791 STEP: Cleanup Application for case mysql @ 09/01/25 08:12:46.791 2025/09/01 08:12:46 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
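The backup verification above reduces to two checks: the Backup CR reaches phase Completed, and each DataUpload labeled with the backup name reaches phase Completed with bytesDone equal to totalBytes. A minimal sketch of the same checks done by hand, assuming an admin oc session against this cluster (resource names and labels are taken from the CRs printed above):

  # Phase of the Backup CR created by the test
  oc -n openshift-adp get backups.velero.io mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 -o jsonpath='{.status.phase}'

  # Name, phase and progress of the DataUploads that belong to that backup
  oc -n openshift-adp get datauploads.velero.io \
    -l velero.io/backup-name=mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 \
    -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,DONE:.status.progress.bytesDone,TOTAL:.status.progress.totalBytes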
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-439] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025/09/01 08:13:16 2025-09-01 08:12:48,321 p=27512 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:12:48,321 p=27512 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:12:48,586 p=27512 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:12:48,586 p=27512 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:12:48,856 p=27512 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:12:48,856 p=27512 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:12:49,143 p=27512 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:12:49,143 p=27512 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:12:49,157 p=27512 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:12:49,158 p=27512 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:12:49,177 p=27512 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:12:49,177 p=27512 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:12:49,190 p=27512 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:12:49,190 p=27512 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:12:49,500 p=27512 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:12:49,500 p=27512 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:12:49,531 p=27512 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:12:49,531 p=27512 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:12:49,551 p=27512 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:12:49,551 p=27512 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:12:49,553 p=27512 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:12:50,129 p=27512 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:12:50,129 p=27512 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:13:16,012 p=27512 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-439] *** 2025-09-01 08:13:16,013 p=27512 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:13:16,013 p=27512 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:13:16,315 p=27512 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:13:16,315 p=27512 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025/09/01 08:13:16 Creating restore mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 for case mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 STEP: Create restore mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 from backup mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 @ 09/01/25 08:13:16.371 2025/09/01 08:13:16 Wait until restore mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 is complete restore phase: WaitingForPluginOperations restore phase: WaitingForPluginOperations restore phase: Finalizing restore phase: Completed 2025/09/01 08:13:56 Validating 2 DataDownloads for restore mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 2025/09/01 08:13:56 DataDownload mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-9bf5r has phase: Completed 2025/09/01 08:13:56 apiVersion: velero.io/v2alpha1 kind: DataDownload metadata: creationTimestamp: "2025-09-01T08:13:18Z" generateName: mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7- generation: 6 labels: velero.io/async-operation-id: dd-1432fd40-8533-469a-b6b0-c5e8d170a23c.8a5cc7d9-9e69-4535ec35a velero.io/restore-name: mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 velero.io/restore-uid: 1432fd40-8533-469a-b6b0-c5e8d170a23c managedFields: - apiVersion: velero.io/v2alpha1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:velero.io/async-operation-id: {} f:velero.io/restore-name: {} f:velero.io/restore-uid: {} f:ownerReferences: .: {} k:{"uid":"1432fd40-8533-469a-b6b0-c5e8d170a23c"}: {} f:spec: .: {} f:backupStorageLocation: {} f:nodeOS: {} f:operationTimeout: {} f:snapshotID: {} f:sourceNamespace: {} f:targetVolume: .: {} f:namespace: {} f:pv: {} f:pvc: {} f:status: .: {} f:progress: {} manager: velero operation: Update time: "2025-09-01T08:13:18Z" - apiVersion: velero.io/v2alpha1 fieldsType: FieldsV1 fieldsV1: f:status: f:acceptedByNode: {} f:acceptedTimestamp: {} f:completionTimestamp: {} f:node: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:13:36Z" name: mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-9bf5r namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Restore name: mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 uid: 1432fd40-8533-469a-b6b0-c5e8d170a23c resourceVersion: "90641" uid: 06f01762-8618-42f0-9905-57ec6ba82610 spec: backupStorageLocation: ts-dpa-1 nodeOS: linux operationTimeout: 10m0s snapshotID: f1c92eb83852f77aeeaad451f9155650 sourceNamespace: test-oadp-439 targetVolume: namespace: test-oadp-439 pv: "" pvc: mysql-data1 status: acceptedByNode: ip-10-0-93-94.ec2.internal acceptedTimestamp: "2025-09-01T08:13:18Z" completionTimestamp: "2025-09-01T08:13:36Z" node: ip-10-0-93-94.ec2.internal phase: Completed progress: bytesDone: 104857640 totalBytes: 104857640 startTimestamp: "2025-09-01T08:13:24Z" 2025/09/01 08:13:56 DataDownload mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-9bf5r completed successfully 2025/09/01 08:13:56 DataDownload mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-sggwk has phase: Completed 2025/09/01 08:13:56 apiVersion: velero.io/v2alpha1 kind: DataDownload metadata: creationTimestamp: "2025-09-01T08:13:18Z" generateName: 
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7- generation: 6 labels: velero.io/async-operation-id: dd-1432fd40-8533-469a-b6b0-c5e8d170a23c.dc008a96-ed84-40376210c velero.io/restore-name: mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 velero.io/restore-uid: 1432fd40-8533-469a-b6b0-c5e8d170a23c managedFields: - apiVersion: velero.io/v2alpha1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:velero.io/async-operation-id: {} f:velero.io/restore-name: {} f:velero.io/restore-uid: {} f:ownerReferences: .: {} k:{"uid":"1432fd40-8533-469a-b6b0-c5e8d170a23c"}: {} f:spec: .: {} f:backupStorageLocation: {} f:nodeOS: {} f:operationTimeout: {} f:snapshotID: {} f:sourceNamespace: {} f:targetVolume: .: {} f:namespace: {} f:pv: {} f:pvc: {} f:status: .: {} f:progress: {} manager: velero operation: Update time: "2025-09-01T08:13:18Z" - apiVersion: velero.io/v2alpha1 fieldsType: FieldsV1 fieldsV1: f:status: f:acceptedByNode: {} f:acceptedTimestamp: {} f:completionTimestamp: {} f:node: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:13:35Z" name: mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-sggwk namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Restore name: mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 uid: 1432fd40-8533-469a-b6b0-c5e8d170a23c resourceVersion: "90628" uid: 37bb08ff-9a32-4786-beea-1c70293d957f spec: backupStorageLocation: ts-dpa-1 nodeOS: linux operationTimeout: 10m0s snapshotID: 1bc53a59c3fc26fc2b7796a37b63db43 sourceNamespace: test-oadp-439 targetVolume: namespace: test-oadp-439 pv: "" pvc: mysql-data status: acceptedByNode: ip-10-0-99-76.ec2.internal acceptedTimestamp: "2025-09-01T08:13:18Z" completionTimestamp: "2025-09-01T08:13:35Z" node: ip-10-0-99-76.ec2.internal phase: Completed progress: bytesDone: 107854713 totalBytes: 107854713 startTimestamp: "2025-09-01T08:13:25Z" 2025/09/01 08:13:56 DataDownload mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-sggwk completed successfully 2025/09/01 08:13:56 All 2 DataDownloads completed successfully for restore mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 STEP: Verify restore mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7 has completed successfully @ 09/01/25 08:13:56.43 STEP: Verify Application restore @ 09/01/25 08:13:56.433 STEP: Verify Application deployment for case mysql @ 09/01/25 08:13:56.433 2025/09/01 08:13:56 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025/09/01 08:14:02 2025-09-01 08:13:57,995 p=27736 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:13:57,995 p=27736 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:13:58,259 p=27736 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:13:58,259 p=27736 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:13:58,540 p=27736 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:13:58,540 p=27736 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:13:58,801 p=27736 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:13:58,801 p=27736 u=1002790000 n=ansible 
INFO| changed: [localhost] 2025-09-01 08:13:58,816 p=27736 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:13:58,816 p=27736 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:13:58,834 p=27736 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:13:58,834 p=27736 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:13:58,847 p=27736 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:13:58,848 p=27736 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:13:59,166 p=27736 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:13:59,166 p=27736 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:13:59,195 p=27736 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:13:59,195 p=27736 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:13:59,215 p=27736 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:13:59,216 p=27736 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:13:59,218 p=27736 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:13:59,810 p=27736 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:13:59,810 p=27736 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:14:00,868 p=27736 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** 2025-09-01 08:14:00,868 p=27736 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:14:01,320 p=27736 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-09-01 08:14:01,320 p=27736 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:01,877 p=27736 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** 2025-09-01 08:14:01,877 p=27736 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:02,624 p=27736 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** 2025-09-01 08:14:02,624 p=27736 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:02,629 p=27736 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:14:02,629 p=27736 u=1002790000 n=ansible INFO| localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0 < Exit [It] [tc-id:OADP-439][interop] MySQL application @ 09/01/25 08:14:02.696 (3m22.511s) > Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 08:14:02.696 2025/09/01 08:14:02 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 08:14:02.696 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:14:02.696 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:14:02.7 (4ms) > Enter [DeferCleanup 
(Each)] TOP-LEVEL @ 09/01/25 08:14:02.7 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:14:02.701 (0s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:14:02.701 2025/09/01 08:14:02 Cleaning app 2025/09/01 08:14:02 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
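The recurring "kubernetes<24.2.0 is not supported or tested" warning appears to come from the ansible kubernetes.core modules, which compare the installed Python kubernetes client against their tested minimum; the tasks above still report ok/changed, so it reads as noise rather than a failure here. A quick way to confirm the client version, as a sketch (assuming the test virtualenv's python is on PATH):

  # Print the version of the Python kubernetes client the modules load
  python -c 'import kubernetes; print(kubernetes.__version__)'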
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-439] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025/09/01 08:14:27 2025-09-01 08:14:04,249 p=28055 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:14:04,250 p=28055 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:04,513 p=28055 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:14:04,513 p=28055 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:04,771 p=28055 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:14:04,772 p=28055 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:05,035 p=28055 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:14:05,035 p=28055 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:05,051 p=28055 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:14:05,052 p=28055 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:14:05,070 p=28055 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:14:05,070 p=28055 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:14:05,082 p=28055 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:14:05,083 p=28055 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:14:05,402 p=28055 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:14:05,402 p=28055 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:14:05,430 p=28055 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:14:05,430 p=28055 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:14:05,448 p=28055 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:14:05,448 p=28055 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:14:05,449 p=28055 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:14:06,008 p=28055 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:14:06,008 p=28055 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:14:26,823 p=28055 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-439] *** 2025-09-01 08:14:26,824 p=28055 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:14:26,824 p=28055 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:27,105 p=28055 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:14:27,105 p=28055 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:14:27.155 (24.454s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:14:27.155 2025/09/01 08:14:27 Cleaning setup resources for the backup 2025/09/01 08:14:27 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/09/01 08:14:27 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/09/01 08:14:27 Deleting VolumeSnapshotClass 'example-snapclass' < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:14:27.2 (45ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:14:27.2 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:14:27.208 (8ms) • [227.029 seconds] ------------------------------ [datamover] DataMover: Backup/Restore stateful application with CSI  [tc-id:OADP-440][interop] Cassandra application /alabama/cspi/e2e/app_backup/backup_restore_datamover.go:50 > Enter [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 09/01/25 08:14:27.208 < Exit [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 09/01/25 08:14:27.22 (12ms) > Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:14:27.22 < Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:14:27.22 (0s) > Enter [It] [tc-id:OADP-440][interop] Cassandra application @ 09/01/25 08:14:27.22 2025/09/01 08:14:27 Delete all downloadrequest No download requests are found STEP: Create DPA CR @ 09/01/25 08:14:27.224 2025/09/01 08:14:27 native-datamover 2025/09/01 08:14:27 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "5ebdd3e0-96b7-466c-8958-c6d8fca9c6c8", "resourceVersion": "91548", "generation": 1, "creationTimestamp": "2025-09-01T08:14:27Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:14:27Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion 
of restore CRs in progress completed STEP: Verify DPA CR setup @ 09/01/25 08:14:27.282 2025/09/01 08:14:27 Waiting for velero pod to be running 2025/09/01 08:14:27 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' 2025/09/01 08:14:27 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "5ebdd3e0-96b7-466c-8958-c6d8fca9c6c8", "resourceVersion": "91548", "generation": 1, "creationTimestamp": "2025-09-01T08:14:27Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:14:27Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 08:14:32.311 Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false 2025/09/01 08:14:32 The 'openshift-storage' namespace exists 2025/09/01 08:14:32 Checking default storage class count 2025/09/01 08:14:32 Using the CSI driver: openshift-storage.rbd.csi.ceph.com 2025/09/01 08:14:32 Snapclass 'example-snapclass' doesn't exist, creating 2025/09/01 08:14:32 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/09/01 08:14:32 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/09/01 08:14:32 Checking for correct number of running NodeAgent pods... STEP: Installing application for case cassandra-e2e @ 09/01/25 08:14:32.553 2025/09/01 08:14:32 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] *** changed: [localhost] [WARNING]: unknown field "spec.volumeClaimTemplates[0].labels" TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check pods status (30 retries left). FAILED - RETRYING: [localhost]: Check pods status (29 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] *** ok: [localhost] FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left). 
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] *** fatal: [localhost]: FAILED! 
=> {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-cl9vhfrj-b2a90.cspilp.interop.ccitredhat.com:6443 --token sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E -n test-oadp-440 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.181780", "end": "2025-09-01 08:17:48.239758", "msg": "non-zero return code", "rc": 1, "start": "2025-09-01 08:17:48.057978", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []} PLAY RECAP ********************************************************************* localhost : ok=21  changed=8  unreachable=0 failed=1  skipped=2  rescued=0 ignored=0 2025/09/01 08:17:48 2025-09-01 08:14:34,063 p=28300 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:14:34,063 p=28300 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:34,319 p=28300 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:14:34,320 p=28300 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:34,578 p=28300 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:14:34,579 p=28300 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:34,846 p=28300 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:14:34,846 p=28300 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:34,861 p=28300 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:14:34,861 p=28300 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:14:34,879 p=28300 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:14:34,879 p=28300 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:14:34,890 p=28300 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:14:34,891 p=28300 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:14:35,226 p=28300 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:14:35,226 p=28300 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:14:35,257 p=28300 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:14:35,258 p=28300 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:14:35,275 p=28300 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:14:35,275 p=28300 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:14:35,277 p=28300 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:14:35,847 p=28300 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:14:35,847 p=28300 u=1002790000 
n=ansible INFO| ok: [localhost] 2025-09-01 08:14:36,687 p=28300 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] *** 2025-09-01 08:14:36,688 p=28300 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-09-01 08:14:36,688 p=28300 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:14:37,103 p=28300 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] *** 2025-09-01 08:14:37,103 p=28300 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:37,415 p=28300 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] *** 2025-09-01 08:14:37,415 p=28300 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:38,237 p=28300 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] *** 2025-09-01 08:14:38,237 p=28300 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:38,939 p=28300 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] *** 2025-09-01 08:14:38,939 p=28300 u=1002790000 n=ansible WARNING| [WARNING]: unknown field "spec.volumeClaimTemplates[0].labels" 2025-09-01 08:14:38,939 p=28300 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:14:39,616 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pods status (30 retries left). 2025-09-01 08:14:45,254 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pods status (29 retries left). 2025-09-01 08:14:50,908 p=28300 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] *** 2025-09-01 08:14:50,908 p=28300 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:14:53,646 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left). 2025-09-01 08:15:01,342 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left). 2025-09-01 08:15:06,739 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left). 2025-09-01 08:15:12,442 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left). 2025-09-01 08:15:20,138 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left). 2025-09-01 08:15:25,500 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left). 2025-09-01 08:15:30,858 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left). 
2025-09-01 08:15:36,704 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left). 2025-09-01 08:15:42,053 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left). 2025-09-01 08:15:51,240 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left). 2025-09-01 08:15:56,604 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left). 2025-09-01 08:16:01,964 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left). 2025-09-01 08:16:07,310 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left). 2025-09-01 08:16:12,699 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left). 2025-09-01 08:16:18,084 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left). 2025-09-01 08:16:23,439 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left). 2025-09-01 08:16:28,798 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left). 2025-09-01 08:16:34,218 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left). 2025-09-01 08:16:39,586 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left). 2025-09-01 08:16:48,737 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left). 2025-09-01 08:16:54,089 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left). 2025-09-01 08:16:59,451 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left). 2025-09-01 08:17:04,810 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left). 2025-09-01 08:17:10,172 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left). 2025-09-01 08:17:15,541 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left). 2025-09-01 08:17:20,904 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left). 
2025-09-01 08:17:26,264 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left). 2025-09-01 08:17:31,601 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left). 2025-09-01 08:17:36,972 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left). 2025-09-01 08:17:42,911 p=28300 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left). 2025-09-01 08:17:48,260 p=28300 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] *** 2025-09-01 08:17:48,260 p=28300 u=1002790000 n=ansible INFO| fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-cl9vhfrj-b2a90.cspilp.interop.ccitredhat.com:6443 --token sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E -n test-oadp-440 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.181780", "end": "2025-09-01 08:17:48.239758", "msg": "non-zero return code", "rc": 1, "start": "2025-09-01 08:17:48.057978", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []} 2025-09-01 08:17:48,261 p=28300 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:17:48,261 p=28300 u=1002790000 n=ansible INFO| localhost : ok=21 changed=8 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0 Run the command: oc get event -n test-oadp-440 2025/09/01 08:17:48 LAST SEEN TYPE REASON OBJECT MESSAGE 3m9s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 3m9s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 3m9s Normal Scheduled pod/cassandra-0 Successfully assigned test-oadp-440/cassandra-0 to ip-10-0-56-118.ec2.internal 3m9s Normal SuccessfulAttachVolume pod/cassandra-0 AttachVolume.Attach succeeded for volume "pvc-4d3c0c9c-afbb-4909-b337-ee433b17beab" 3m5s Normal AddedInterface pod/cassandra-0 Add eth0 [10.131.0.63/23] from ovn-kubernetes 65s Normal Pulling pod/cassandra-0 Pulling image "quay.io/migqe/cassandra:multiarch" 3m1s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 3.75s (3.75s including waiting). Image size: 307783610 bytes. 64s Normal Created pod/cassandra-0 Created container: cassandra 64s Normal Started pod/cassandra-0 Started container cassandra 2m54s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 582ms (582ms including waiting). Image size: 307783610 bytes. 
6s Warning BackOff pod/cassandra-0 Back-off restarting failed container cassandra in pod cassandra-0_test-oadp-440(67db41f3-ff5f-4a9e-b010-d7e39251f001) 2m35s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 452ms (452ms including waiting). Image size: 307783610 bytes. 2m3s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 800ms (800ms including waiting). Image size: 307783610 bytes. 65s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 468ms (468ms including waiting). Image size: 307783610 bytes. 3m Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m59s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m59s Normal Scheduled pod/cassandra-1 Successfully assigned test-oadp-440/cassandra-1 to ip-10-0-93-94.ec2.internal 3m Normal SuccessfulAttachVolume pod/cassandra-1 AttachVolume.Attach succeeded for volume "pvc-5cab4426-c19a-45f4-bfd0-e31c200118ac" 2m53s Normal AddedInterface pod/cassandra-1 Add eth0 [10.129.2.68/23] from ovn-kubernetes 53s Normal Pulling pod/cassandra-1 Pulling image "quay.io/migqe/cassandra:multiarch" 2m49s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 3.652s (3.652s including waiting). Image size: 307783610 bytes. 52s Normal Created pod/cassandra-1 Created container: cassandra 52s Normal Started pod/cassandra-1 Started container cassandra 2m43s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 789ms (789ms including waiting). Image size: 307783610 bytes. 9s Warning BackOff pod/cassandra-1 Back-off restarting failed container cassandra in pod cassandra-1_test-oadp-440(d277db9b-5e59-4f3d-b8c3-649d42f8323f) 2m24s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 463ms (463ms including waiting). Image size: 307783610 bytes. 110s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 487ms (487ms including waiting). Image size: 307783610 bytes. 52s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 569ms (569ms including waiting). Image size: 307783610 bytes. 2m48s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m47s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m47s Normal Scheduled pod/cassandra-2 Successfully assigned test-oadp-440/cassandra-2 to ip-10-0-99-76.ec2.internal 2m47s Normal SuccessfulAttachVolume pod/cassandra-2 AttachVolume.Attach succeeded for volume "pvc-cce022d1-7434-4e28-b2ef-d710559657a8" 2m42s Normal AddedInterface pod/cassandra-2 Add eth0 [10.128.2.109/23] from ovn-kubernetes 46s Normal Pulling pod/cassandra-2 Pulling image "quay.io/migqe/cassandra:multiarch" 2m38s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 3.62s (3.62s including waiting). 
Image size: 307783610 bytes. 45s Normal Created pod/cassandra-2 Created container: cassandra 45s Normal Started pod/cassandra-2 Started container cassandra 2m30s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 387ms (387ms including waiting). Image size: 307783610 bytes. 1s Warning BackOff pod/cassandra-2 Back-off restarting failed container cassandra in pod cassandra-2_test-oadp-440(84cd0a4e-bc87-4d1d-8ae9-d75f24d01fd5) 2m12s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 511ms (511ms including waiting). Image size: 307783610 bytes. 101s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 414ms (414ms including waiting). Image size: 307783610 bytes. 46s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 419ms (419ms including waiting). Image size: 307783610 bytes. 3m10s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-0 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. 3m10s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-0 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-0" 3m9s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-0 Successfully provisioned volume pvc-4d3c0c9c-afbb-4909-b337-ee433b17beab 3m Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-1 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. 3m Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-1 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-1" 3m Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-1 Successfully provisioned volume pvc-5cab4426-c19a-45f4-bfd0-e31c200118ac 2m48s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-2 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. 
2m48s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-2 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-2" 2m48s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-2 Successfully provisioned volume pvc-cce022d1-7434-4e28-b2ef-d710559657a8 3m10s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success 3m10s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-0 in StatefulSet cassandra successful 3m Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-1 Pod cassandra-1 in StatefulSet cassandra success 3m Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-1 in StatefulSet cassandra successful 2m48s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-2 Pod cassandra-2 in StatefulSet cassandra success 2m48s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-2 in StatefulSet cassandra successful [FAILED] in [It] - /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 09/01/25 08:17:48.419 < Exit [It] [tc-id:OADP-440][interop] Cassandra application @ 09/01/25 08:17:48.419 (3m21.198s) > Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 08:17:48.419 2025/09/01 08:17:48 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 STEP: Get the failed spec name @ 09/01/25 08:17:48.419 2025/09/01 08:17:48 The failed spec name is: [datamover] DataMover: Backup/Restore stateful application with CSI [tc-id:OADP-440][interop] Cassandra application STEP: Create a folder for all must-gather files if it doesn't exist already @ 09/01/25 08:17:48.419 2025/09/01 08:17:48 The folder logs does not exist, creating new folder with the name: logs STEP: Create a folder for the failed spec if it doesn't exist already @ 09/01/25 08:17:48.419 2025/09/01 08:17:48 The folder logs/It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application does not exist, creating new folder with the name: logs/It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application STEP: Run must-gather because the spec failed @ 09/01/25 08:17:48.419 2025/09/01 08:17:48 Log the present working directory path: /alabama/cspi/e2e 2025/09/01 08:17:48 [adm must-gather --dest-dir /alabama/cspi/e2e/logs/It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application --image registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0] 2025/09/01 08:18:45 Log all the files present in /alabama/cspi/e2e/logs directory 2025/09/01 08:18:45 It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application STEP: Find must-gather folder and rename it to a shorter, more readable name @ 09/01/25 08:18:45.265 < Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 08:18:45.265 (56.846s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:18:45.265 2025/09/01 08:18:45 Cleaning app 2025/09/01 08:18:45 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
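The cleanup play below removes the application namespace created for this spec. For manual reproduction, the role's "Remove namespace test-oadp-440" task is roughly equivalent to deleting the namespace directly with oc; a minimal sketch, assuming the same admin kubeconfig the play prints above:

    # Manual equivalent of the ocp-cassandra cleanup task (namespace taken from this run)
    export KUBECONFIG=/home/jenkins/.kube/config
    oc delete namespace test-oadp-440 --wait=true --timeout=5m
    # A NotFound error here confirms the namespace is fully gone
    oc get namespace test-oadp-440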
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-440] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025/09/01 08:19:09 2025-09-01 08:18:46,757 p=29704 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:18:46,758 p=29704 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:18:47,015 p=29704 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:18:47,016 p=29704 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:18:47,266 p=29704 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:18:47,266 p=29704 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:18:47,516 p=29704 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:18:47,516 p=29704 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:18:47,530 p=29704 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:18:47,530 p=29704 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:18:47,547 p=29704 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:18:47,547 p=29704 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:18:47,558 p=29704 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:18:47,559 p=29704 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:18:47,869 p=29704 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:18:47,869 p=29704 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:18:47,898 p=29704 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:18:47,898 p=29704 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:18:47,915 p=29704 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:18:47,915 p=29704 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:18:47,917 p=29704 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:18:48,485 p=29704 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:18:48,485 p=29704 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:19:09,296 p=29704 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-440] *** 2025-09-01 08:19:09,297 p=29704 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:19:09,297 p=29704 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:19:09,633 p=29704 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:19:09,634 p=29704 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:19:09.682 (24.418s) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:19:09.682 2025/09/01 08:19:09 Cleaning setup resources for the backup 2025/09/01 08:19:09 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/09/01 08:19:09 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/09/01 08:19:09 Deleting VolumeSnapshotClass 'example-snapclass' < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:19:09.713 (30ms) > Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:19:09.713 < Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:19:09.727 (15ms) Attempt #1 Failed. Retrying ↺ @ 09/01/25 08:19:09.728 > Enter [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 09/01/25 08:19:09.728 < Exit [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 09/01/25 08:19:09.752 (24ms) > Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:19:09.752 < Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:19:09.752 (0s) > Enter [It] [tc-id:OADP-440][interop] Cassandra application @ 09/01/25 08:19:09.752 2025/09/01 08:19:09 Delete all downloadrequest mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-090c1bce-a76c-4fb7-a556-b671d89bc921 mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-0da21ebe-f4aa-4110-afcc-935c39aa077f mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-3d2ace0f-a333-4bfd-a4b5-1308a8f6ccd9 mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-4d1adc46-1542-405f-aa0e-91253773590a mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-6468f9b9-5ffe-4afd-b4ae-45652ed0ef6a mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-6be575bf-13c2-450d-b003-ca5aaa86824d mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-7932fd54-5a74-4c25-9e26-61f3df11ab6e mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-7e80faf8-953e-4d8a-9077-3648c9bd4769 mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-7ee7bc48-95d8-42bf-9046-b8020fa8173b ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-56d60352-ade9-481e-b6fc-9bda009a98b1 ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-708e5bbd-60dd-48ca-98c3-3a7261f26145 ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-c77351c8-2aed-43fc-ba31-b5cef8cab732 ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-e69c0e0b-d824-4c4b-8f76-19f83c966472 ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7-f1769d86-2e77-4ae6-9488-bd90ae976b4a ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7-fdcfce5b-7fca-4be2-945b-b816376152c3 ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7-a5d4c820-1cd2-45c6-a4ba-c4ad11c075be ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7-bfaeb391-7cbd-4119-9732-dbc6c8833076 STEP: Create DPA CR @ 09/01/25 08:19:11.183 2025/09/01 08:19:11 native-datamover 2025/09/01 08:19:11 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "28fa5da2-70ec-49dc-adb6-9ac27d415d60", "resourceVersion": "96602", "generation": 1, "creationTimestamp": "2025-09-01T08:19:11Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:19:11Z", "fieldsType": "FieldsV1", "fieldsV1": { 
"f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 09/01/25 08:19:11.202 2025/09/01 08:19:11 Waiting for velero pod to be running 2025/09/01 08:19:11 pod: velero-d48b7f4b-2zp49 is not yet running with status: {Succeeded [{PodReadyToStartContainers False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:19:10 +0000 UTC } {Initialized True 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:14:31 +0000 UTC PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:19:10 +0000 UTC PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:19:10 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:14:27 +0000 UTC }] 10.0.99.76 [{10.0.99.76}] 10.128.2.105 [{10.128.2.105}] 2025-09-01 08:14:27 +0000 UTC [{openshift-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 08:14:28 +0000 UTC,FinishedAt:2025-09-01 08:14:28 +0000 UTC,ContainerID:cri-o://758ba1b856bdfc16561445d1413071484a9c3907211dddc6e1dcb8d21184263c,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd cri-o://758ba1b856bdfc16561445d1413071484a9c3907211dddc6e1dcb8d21184263c 0xc00102baf9 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-bzx8p /var/run/secrets/kubernetes.io/serviceaccount true 0xc000926850}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []} {velero-plugin-for-aws {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 08:14:29 +0000 UTC,FinishedAt:2025-09-01 08:14:29 +0000 UTC,ContainerID:cri-o://5d53cd659c5373063bb42c3fb71db808f2e4f6612c5860ba068194f862e5a6b2,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 
registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 cri-o://5d53cd659c5373063bb42c3fb71db808f2e4f6612c5860ba068194f862e5a6b2 0xc00102bcf8 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-bzx8p /var/run/secrets/kubernetes.io/serviceaccount true 0xc000926960}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []} {kubevirt-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 08:14:30 +0000 UTC,FinishedAt:2025-09-01 08:14:30 +0000 UTC,ContainerID:cri-o://1478c78a505b03edee927ca8ec06378b4c815351c513091c97cd1ca0a932362e,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel9@sha256:2b63055e8e681f8d20194d9aa00f667ac4e38cb1247442287b8cc273f05b587d registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel9@sha256:2b63055e8e681f8d20194d9aa00f667ac4e38cb1247442287b8cc273f05b587d cri-o://1478c78a505b03edee927ca8ec06378b4c815351c513091c97cd1ca0a932362e 0xc001160129 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-bzx8p /var/run/secrets/kubernetes.io/serviceaccount true 0xc0009269d0}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []}] [{velero {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 08:14:31 +0000 UTC,FinishedAt:2025-09-01 08:19:09 +0000 UTC,ContainerID:cri-o://f6300e79edf9529684546ca1229a51c945a55718b3b5fcb7e4d34776a1eb2fba,}} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-rhel9@sha256:e22092c4769ece2dd36b99cb84fcbe6da99d6c0e175fca38f00f436de0ba7a62 registry.redhat.io/oadp/oadp-velero-rhel9@sha256:9e58447eb6706ee5335fd643bbb3795d92e1fc441a8ae7bf73aabc112e09fc17 cri-o://f6300e79edf9529684546ca1229a51c945a55718b3b5fcb7e4d34776a1eb2fba 0xc0011601b9 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /plugins false } {scratch /scratch false } {certs /etc/ssl/certs false } {bound-sa-token /var/run/secrets/openshift/serviceaccount true 0xc000926a40} {kube-api-access-bzx8p /var/run/secrets/kubernetes.io/serviceaccount true 0xc000926a50}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []}] Burstable [] []} 2025/09/01 08:19:16 pod: velero-d48b7f4b-slckp is not yet running with status: {Pending [{PodReadyToStartContainers True 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:19:13 +0000 UTC } {Initialized True 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:19:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:19:11 +0000 UTC ContainersNotReady containers with unready status: [velero]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:19:11 +0000 UTC ContainersNotReady containers with unready status: [velero]} {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2025-09-01 08:19:11 +0000 UTC }] 10.0.99.76 [{10.0.99.76}] 10.128.2.113 [{10.128.2.113}] 2025-09-01 08:19:11 +0000 UTC [{openshift-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 08:19:12 +0000 UTC,FinishedAt:2025-09-01 08:19:12 +0000 UTC,ContainerID:cri-o://e7d7e2878cccbcda4a98ff3a0690619bdfbe6f0c91066787663d4885e9f27859,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd cri-o://e7d7e2878cccbcda4a98ff3a0690619bdfbe6f0c91066787663d4885e9f27859 0xc000d9e809 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-9djvk /var/run/secrets/kubernetes.io/serviceaccount true 0xc000c1cb20}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []} {velero-plugin-for-aws {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 08:19:13 +0000 UTC,FinishedAt:2025-09-01 08:19:13 +0000 UTC,ContainerID:cri-o://c6c450a0ecd092a3705714044004511e3e7aeb084eb73dc005aaa389e594128f,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 cri-o://c6c450a0ecd092a3705714044004511e3e7aeb084eb73dc005aaa389e594128f 0xc000d9e868 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-9djvk /var/run/secrets/kubernetes.io/serviceaccount true 0xc000c1cb90}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []} {kubevirt-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 08:19:14 +0000 UTC,FinishedAt:2025-09-01 08:19:14 +0000 UTC,ContainerID:cri-o://f1158bc8da0527cc2c1f5fa34cb5e5cbb582b2e92e4238ace595ee1850c2e72c,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel9@sha256:2b63055e8e681f8d20194d9aa00f667ac4e38cb1247442287b8cc273f05b587d registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel9@sha256:2b63055e8e681f8d20194d9aa00f667ac4e38cb1247442287b8cc273f05b587d cri-o://f1158bc8da0527cc2c1f5fa34cb5e5cbb582b2e92e4238ace595ee1850c2e72c 0xc000d9e929 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-9djvk /var/run/secrets/kubernetes.io/serviceaccount true 0xc000c1cc00}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []}] [{velero {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 
registry.redhat.io/oadp/oadp-velero-rhel9@sha256:e22092c4769ece2dd36b99cb84fcbe6da99d6c0e175fca38f00f436de0ba7a62 0xc000d9e98e map[] nil [{plugins /plugins false } {scratch /scratch false } {certs /etc/ssl/certs false } {bound-sa-token /var/run/secrets/openshift/serviceaccount true 0xc000c1cc10} {kube-api-access-9djvk /var/run/secrets/kubernetes.io/serviceaccount true 0xc000c1cc20}] nil []}] Burstable [] []} 2025/09/01 08:19:21 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 08:19:21.237 2025/09/01 08:19:21 Snapclass 'example-snapclass' doesn't exist, creating 2025/09/01 08:19:21 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/09/01 08:19:21 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/09/01 08:19:21 Checking for correct number of running NodeAgent pods... STEP: Installing application for case cassandra-e2e @ 09/01/25 08:19:21.392 2025/09/01 08:19:21 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
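The deploy play below re-creates the namespace, service, and StatefulSet, then polls readiness by running nodetool status inside cassandra-0 via oc exec. As both attempts show, while the cassandra container is crash-looping the exec fails with container not found ("cassandra") until the 30 retries are exhausted, which is why the recap reports exactly one failed task. A manual triage sketch with standard oc commands (namespace and pod names taken from this log; omitting -it avoids the TTY warning seen in stderr):

    # Inspect why the cassandra container keeps restarting
    oc -n test-oadp-440 get pods -o wide
    oc -n test-oadp-440 describe pod cassandra-0
    # Logs of the previous (crashed) container instance usually hold the root cause
    oc -n test-oadp-440 logs cassandra-0 -c cassandra --previous
    # Re-run the readiness check once the container is actually Running
    oc -n test-oadp-440 exec cassandra-0 -- nodetool status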
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] *** changed: [localhost] [WARNING]: unknown field "spec.volumeClaimTemplates[0].labels" TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check pods status (30 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] *** ok: [localhost] FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left). 
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] *** fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-cl9vhfrj-b2a90.cspilp.interop.ccitredhat.com:6443 --token sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E -n test-oadp-440 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.192409", "end": "2025-09-01 08:22:28.419436", "msg": "non-zero return code", "rc": 1, "start": "2025-09-01 08:22:28.227027", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []} PLAY RECAP ********************************************************************* localhost : ok=21  changed=8  unreachable=0 failed=1  skipped=2  rescued=0 ignored=0 2025/09/01 08:22:28 2025-09-01 08:19:22,899 p=29935 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:19:22,900 p=29935 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:19:23,148 p=29935 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:19:23,148 p=29935 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:19:23,399 p=29935 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:19:23,399 p=29935 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:19:23,663 p=29935 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:19:23,664 p=29935 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:19:23,678 p=29935 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:19:23,678 p=29935 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:19:23,699 p=29935 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:19:23,699 p=29935 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:19:23,712 p=29935 
u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:19:23,713 p=29935 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:19:24,032 p=29935 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:19:24,032 p=29935 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:19:24,061 p=29935 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:19:24,062 p=29935 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:19:24,080 p=29935 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:19:24,080 p=29935 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:19:24,082 p=29935 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:19:24,646 p=29935 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:19:24,646 p=29935 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:19:25,456 p=29935 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] *** 2025-09-01 08:19:25,457 p=29935 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-09-01 08:19:25,457 p=29935 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:19:25,858 p=29935 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] *** 2025-09-01 08:19:25,859 p=29935 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:19:26,148 p=29935 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] *** 2025-09-01 08:19:26,148 p=29935 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:19:26,949 p=29935 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] *** 2025-09-01 08:19:26,950 p=29935 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:19:27,653 p=29935 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] *** 2025-09-01 08:19:27,653 p=29935 u=1002790000 n=ansible WARNING| [WARNING]: unknown field "spec.volumeClaimTemplates[0].labels" 2025-09-01 08:19:27,653 p=29935 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:19:28,312 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pods status (30 retries left). 2025-09-01 08:19:33,975 p=29935 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] *** 2025-09-01 08:19:33,975 p=29935 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:19:36,843 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left). 
2025-09-01 08:19:44,436 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left). 2025-09-01 08:19:49,789 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left). 2025-09-01 08:19:55,227 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left). 2025-09-01 08:20:04,641 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left). 2025-09-01 08:20:10,003 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left). 2025-09-01 08:20:15,365 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left). 2025-09-01 08:20:20,728 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left). 2025-09-01 08:20:26,115 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left). 2025-09-01 08:20:31,454 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left). 2025-09-01 08:20:39,238 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left). 2025-09-01 08:20:44,600 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left). 2025-09-01 08:20:49,986 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left). 2025-09-01 08:20:55,346 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left). 2025-09-01 08:21:00,728 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left). 2025-09-01 08:21:06,077 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left). 2025-09-01 08:21:11,429 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left). 2025-09-01 08:21:16,780 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left). 2025-09-01 08:21:22,170 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left). 2025-09-01 08:21:27,525 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left). 
2025-09-01 08:21:32,901 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left). 2025-09-01 08:21:40,238 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left). 2025-09-01 08:21:45,593 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left). 2025-09-01 08:21:50,950 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left). 2025-09-01 08:21:56,298 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left). 2025-09-01 08:22:01,665 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left). 2025-09-01 08:22:07,009 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left). 2025-09-01 08:22:12,374 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left). 2025-09-01 08:22:17,724 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left). 2025-09-01 08:22:23,077 p=29935 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left). 2025-09-01 08:22:28,439 p=29935 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] *** 2025-09-01 08:22:28,440 p=29935 u=1002790000 n=ansible INFO| fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-cl9vhfrj-b2a90.cspilp.interop.ccitredhat.com:6443 --token sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E -n test-oadp-440 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.192409", "end": "2025-09-01 08:22:28.419436", "msg": "non-zero return code", "rc": 1, "start": "2025-09-01 08:22:28.227027", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []} 2025-09-01 08:22:28,440 p=29935 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:22:28,441 p=29935 u=1002790000 n=ansible INFO| localhost : ok=21 changed=8 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0 Run the command: oc get event -n test-oadp-440 2025/09/01 08:22:28 LAST SEEN TYPE REASON OBJECT MESSAGE 3m Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
3m Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 3m Normal Scheduled pod/cassandra-0 Successfully assigned test-oadp-440/cassandra-0 to ip-10-0-56-118.ec2.internal 3m Normal SuccessfulAttachVolume pod/cassandra-0 AttachVolume.Attach succeeded for volume "pvc-5dc189fa-d1de-489c-802e-790b577a7d4b" 2m59s Normal AddedInterface pod/cassandra-0 Add eth0 [10.131.0.65/23] from ovn-kubernetes 58s Normal Pulling pod/cassandra-0 Pulling image "quay.io/migqe/cassandra:multiarch" 2m58s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 571ms (571ms including waiting). Image size: 307783610 bytes. 55s Normal Created pod/cassandra-0 Created container: cassandra 55s Normal Started pod/cassandra-0 Started container cassandra 2m51s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 388ms (388ms including waiting). Image size: 307783610 bytes. 9s Warning BackOff pod/cassandra-0 Back-off restarting failed container cassandra in pod cassandra-0_test-oadp-440(419d2ac2-c3f8-4f9a-825a-4d6aef67533f) 2m30s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 389ms (389ms including waiting). Image size: 307783610 bytes. 116s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 552ms (552ms including waiting). Image size: 307783610 bytes. 55s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 2.599s (2.599s including waiting). Image size: 307783610 bytes. 2m57s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m57s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m57s Normal Scheduled pod/cassandra-1 Successfully assigned test-oadp-440/cassandra-1 to ip-10-0-93-94.ec2.internal 2m57s Normal SuccessfulAttachVolume pod/cassandra-1 AttachVolume.Attach succeeded for volume "pvc-accb09fc-f2cf-4725-92b1-5123c249b356" 2m46s Normal AddedInterface pod/cassandra-1 Add eth0 [10.129.2.71/23] from ovn-kubernetes 53s Normal Pulling pod/cassandra-1 Pulling image "quay.io/migqe/cassandra:multiarch" 2m46s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 430ms (430ms including waiting). Image size: 307783610 bytes. 52s Normal Created pod/cassandra-1 Created container: cassandra 52s Normal Started pod/cassandra-1 Started container cassandra 2m40s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 408ms (408ms including waiting). Image size: 307783610 bytes. 4s Warning BackOff pod/cassandra-1 Back-off restarting failed container cassandra in pod cassandra-1_test-oadp-440(37e16ee5-6402-4bfa-9245-5b59bcdfbf72) 2m23s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 370ms (370ms including waiting). Image size: 307783610 bytes. 107s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 451ms (451ms including waiting). Image size: 307783610 bytes. 
52s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 563ms (563ms including waiting). Image size: 307783610 bytes.
2m44s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m44s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m44s Normal Scheduled pod/cassandra-2 Successfully assigned test-oadp-440/cassandra-2 to ip-10-0-99-76.ec2.internal
2m44s Normal SuccessfulAttachVolume pod/cassandra-2 AttachVolume.Attach succeeded for volume "pvc-a91e8774-f257-42f3-95e8-7974c6ffcb1b"
2m35s Normal AddedInterface pod/cassandra-2 Add eth0 [10.128.2.115/23] from ovn-kubernetes
39s Normal Pulling pod/cassandra-2 Pulling image "quay.io/migqe/cassandra:multiarch"
2m34s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 456ms (456ms including waiting). Image size: 307783610 bytes.
38s Normal Created pod/cassandra-2 Created container: cassandra
38s Normal Started pod/cassandra-2 Started container cassandra
2m28s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 472ms (472ms including waiting). Image size: 307783610 bytes.
6s Warning BackOff pod/cassandra-2 Back-off restarting failed container cassandra in pod cassandra-2_test-oadp-440(f0a13db4-ef06-4e42-b9c7-970f85820a71)
2m6s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 482ms (482ms including waiting). Image size: 307783610 bytes.
92s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 461ms (461ms including waiting). Image size: 307783610 bytes.
39s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 438ms (438ms including waiting). Image size: 307783610 bytes.
3m1s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-0 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
3m1s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-0 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-0"
3m1s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-0 Successfully provisioned volume pvc-5dc189fa-d1de-489c-802e-790b577a7d4b
2m57s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-1 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m57s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-1 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-1"
2m57s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-1 Successfully provisioned volume pvc-accb09fc-f2cf-4725-92b1-5123c249b356
2m45s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-2 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m45s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-2 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-2"
2m44s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-2 Successfully provisioned volume pvc-a91e8774-f257-42f3-95e8-7974c6ffcb1b
3m1s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success
3m1s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-0 in StatefulSet cassandra successful
2m57s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-1 Pod cassandra-1 in StatefulSet cassandra success
2m57s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-1 in StatefulSet cassandra successful
2m45s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-2 Pod cassandra-2 in StatefulSet cassandra success
2m45s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-2 in StatefulSet cassandra successful
[FAILED] in [It] - /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 09/01/25 08:22:28.6
< Exit [It] [tc-id:OADP-440][interop] Cassandra application @ 09/01/25 08:22:28.6 (3m18.848s)
> Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 08:22:28.6
2025/09/01 08:22:28 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
STEP: Get the failed spec name @ 09/01/25 08:22:28.6
2025/09/01 08:22:28 The failed spec name is: [datamover] DataMover: Backup/Restore stateful application with CSI [tc-id:OADP-440][interop] Cassandra application
STEP: Create a folder for all must-gather files if it doesn't exist already @ 09/01/25 08:22:28.6
STEP: Create a folder for the failed spec if it doesn't exist already @ 09/01/25 08:22:28.6
STEP: Run must-gather because the spec failed @ 09/01/25 08:22:28.6
2025/09/01 08:22:28 Log the present working directory path:- /alabama/cspi/e2e
2025/09/01 08:22:28 [adm must-gather --dest-dir /alabama/cspi/e2e/logs/It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application --image registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0]
2025/09/01 08:23:17 Log all the files present in /alabama/cspi/e2e/logs directory
2025/09/01 08:23:17 It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application
STEP: Find must-gather folder and rename it to a shorter more readable name @ 09/01/25 08:23:17.179
The folder logs/It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application/must-gather already exists, skipping renaming the folder
< Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 08:23:17.179 (48.579s)
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:23:17.179
2025/09/01 08:23:17 Cleaning app
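[Editor's note: the BackOff events above show the cassandra container starting and then exiting, so the nodetool readiness check can never pass; the suite proceeds straight to must-gather and cleanup. When reproducing this by hand, the previous container run's logs are the first place to look. A minimal triage sketch, run manually against the namespace from the log (not part of the suite's output):

  # Why is cassandra-0 crash-looping? describe shows container state and exit codes:
  oc -n test-oadp-440 describe pod cassandra-0
  # Logs from the last failed run of the 'cassandra' container:
  oc -n test-oadp-440 logs cassandra-0 -c cassandra --previous
  # Same event data as the dump above, in chronological order:
  oc -n test-oadp-440 get events --sort-by=.lastTimestamp
]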
2025/09/01 08:23:17 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note
that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
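[Editor's note: the recurring "kubernetes<24.2.0" warning appears to come from Ansible's kubernetes.core modules when the Python kubernetes client in the virtual environment is older than the tested range. It is harmless here, but upgrading the client in the venv would silence it, assuming the suite has not pinned it deliberately:

  pip install --upgrade 'kubernetes>=24.2.0'
]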
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-440] ***
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0
2025/09/01 08:23:41 2025-09-01 08:23:18,707 p=31331 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] ***********************************
2025-09-01 08:23:18,707 p=31331 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:23:18,975 p=31331 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
2025-09-01 08:23:18,975 p=31331 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:23:19,230 p=31331 u=1002790000 n=ansible INFO| TASK [Get admin token] *********************************************************
2025-09-01 08:23:19,231 p=31331 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:23:19,496 p=31331 u=1002790000 n=ansible INFO| TASK [Get user token] **********************************************************
2025-09-01 08:23:19,496 p=31331 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:23:19,512 p=31331 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] *************************************
2025-09-01 08:23:19,512 p=31331 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:23:19,529 p=31331 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************
2025-09-01 08:23:19,529 p=31331 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:23:19,541 p=31331 u=1002790000 n=ansible INFO| TASK [Print token] *************************************************************
2025-09-01 08:23:19,541 p=31331 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" }
2025-09-01 08:23:19,858 p=31331 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] ***************************
2025-09-01 08:23:19,859 p=31331 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:23:19,887 p=31331 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] *************************************
2025-09-01 08:23:19,887 p=31331 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:23:19,907 p=31331 u=1002790000 n=ansible INFO| TASK [set_fact] ****************************************************************
2025-09-01 08:23:19,907 p=31331 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:23:19,909 p=31331 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************
2025-09-01 08:23:20,489 p=31331 u=1002790000 n=ansible INFO| TASK [Gathering Facts] *********************************************************
2025-09-01 08:23:20,489 p=31331 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:23:41,379 p=31331 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-440] ***
2025-09-01 08:23:41,379 p=31331 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
2025-09-01 08:23:41,379 p=31331 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:23:41,756 p=31331 u=1002790000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-09-01 08:23:41,756 p=31331 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0
< Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:23:41.809 (24.63s)
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:23:41.809
2025/09/01 08:23:41 Cleaning setup resources for the backup
2025/09/01 08:23:41 Setting new default StorageClass 'odf-operator-ceph-rbd'
2025/09/01 08:23:41 Checking default storage class count
Skipping creation of StorageClass
The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd
2025/09/01 08:23:41 Deleting VolumeSnapshotClass 'example-snapclass'
< Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:23:41.862 (53ms)
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:23:41.862
< Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:23:41.87 (8ms)
Attempt #2 Failed. Retrying ↺ @ 09/01/25 08:23:41.87
> Enter [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 09/01/25 08:23:41.87
< Exit [BeforeEach] [datamover] DataMover: Backup/Restore stateful application with CSI @ 09/01/25 08:23:41.882 (12ms)
> Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:23:41.883
< Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:23:41.883 (0s)
> Enter [It] [tc-id:OADP-440][interop] Cassandra application @ 09/01/25 08:23:41.883
2025/09/01 08:23:41 Delete all downloadrequest
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-07a2b3f4-3d06-4257-ab61-1e5884082ee4
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-1d9d6f8f-13f0-463e-8e85-5d1f4a359d57
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-2e2dabc9-6f25-4cee-9851-da9776e2bf1e
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-553e5564-f389-4594-aeb4-6d5ac8e043c3
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-785d8519-c851-48bc-8ca1-0d80ed4f863b
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-ace3121a-aeb5-46a8-a97f-558a7f1879af
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-da956331-d57c-4c37-a905-d09471c24e0f
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-dd96c949-afa0-4555-9c73-b4974693494c
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-fb129fa5-2526-4aae-9203-46d06ed2d229
ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-96c502a2-c0b0-44c2-96f3-e5fc13bd7e7f
ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-dcba12ed-c3eb-41f7-8f46-530f8925caa6
ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-49d8e047-8919-4270-b808-056fd180f9b2
ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-e483f099-e3c6-46c3-837e-8ea83a40f9ba
ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7-4fb3a554-09f8-4c26-816c-e41562dff530
ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7-6b49743a-9186-4e36-83cc-b32363ec29aa
ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7-135e7356-c8fe-4f05-b3a9-24334255ec84
ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7-36cee8f0-02eb-42e7-9ae1-8453304874fe
STEP: Create DPA CR @ 09/01/25 08:23:43.304
2025/09/01 08:23:43 native-datamover
2025/09/01 08:23:43 {
  "metadata": {
    "name": "ts-dpa",
    "namespace": "openshift-adp",
    "uid": "751bd6f7-a4b5-483c-8584-fd70ffc4dae5",
    "resourceVersion": "101412",
    "generation": 1,
    "creationTimestamp": "2025-09-01T08:23:43Z",
    "managedFields": [
      {
        "manager": "e2e.test",
        "operation": "Update",
        "apiVersion": "oadp.openshift.io/v1alpha1",
        "time": "2025-09-01T08:23:43Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:spec": {
            ".": {},
            "f:backupLocations": {},
            "f:configuration": {
              ".": {},
              "f:nodeAgent": {
                ".": {},
                "f:enable": {},
                "f:podConfig": {
                  ".": {},
                  "f:resourceAllocations": {}
                },
                "f:uploaderType": {}
              },
              "f:velero": {
                ".": {},
                "f:defaultPlugins": {},
                "f:disableFsBackup": {}
              }
            },
            "f:logFormat": {},
            "f:podDnsConfig": {},
            "f:snapshotLocations": {}
          }
        }
      }
    ]
  },
  "spec": {
    "backupLocations": [
      {
        "velero": {
          "provider": "aws",
          "config": {
            "region": "us-east-1"
          },
          "credential": {
            "name": "cloud-credentials",
            "key": "cloud"
          },
          "objectStorage": {
            "bucket": "ci-op-cl9vhfrj-interopoadp",
            "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7"
          },
          "default": true
        }
      }
    ],
    "snapshotLocations": [],
    "podDnsConfig": {},
    "configuration": {
      "velero": {
        "defaultPlugins": [
          "openshift",
          "aws",
          "kubevirt",
          "csi"
        ],
        "disableFsBackup": false
      },
      "nodeAgent": {
        "enable": true,
        "podConfig": {
          "resourceAllocations": {}
        },
        "uploaderType": "kopia"
      }
    },
    "features": null,
    "logFormat": "text"
  },
  "status": {}
}
Delete all the backups that remained in the phase InProgress
Deleting backup CRs in progress
Deletion of backup CRs in progress completed
Delete all the restores that remained in the phase InProgress
Deleting restore CRs in progress
Deletion of restore CRs in progress completed
STEP: Verify DPA CR setup @ 09/01/25 08:23:43.326
2025/09/01 08:23:43 Waiting for velero pod to be running
2025/09/01 08:23:48 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete'
STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 08:23:48.341
2025/09/01 08:23:48 Snapclass 'example-snapclass' doesn't exist, creating
2025/09/01 08:23:48 Setting new default StorageClass 'odf-operator-ceph-rbd'
2025/09/01 08:23:48 Checking default storage class count
Skipping creation of StorageClass
The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd
2025/09/01 08:23:48 Checking for correct number of running NodeAgent pods...
STEP: Installing application for case cassandra-e2e @ 09/01/25 08:23:48.475
2025/09/01 08:23:48 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] ***
changed: [localhost]
[WARNING]: unknown field "spec.volumeClaimTemplates[0].labels"
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] ***
changed: [localhost]
FAILED - RETRYING: [localhost]: Check pods status (30 retries left).
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] ***
ok: [localhost]
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left).
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] ***
fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-cl9vhfrj-b2a90.cspilp.interop.ccitredhat.com:6443 --token sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E -n test-oadp-440 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.195683", "end": "2025-09-01 08:26:54.306434", "msg": "non-zero return code", "rc": 1, "start": "2025-09-01 08:26:54.110751", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []}
PLAY RECAP *********************************************************************
localhost : ok=21 changed=8 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0
2025/09/01 08:26:54 2025-09-01 08:23:49,982 p=31560 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] ***********************************
2025-09-01 08:23:49,983 p=31560 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:23:50,234 p=31560 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
2025-09-01 08:23:50,235 p=31560 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:23:50,486 p=31560 u=1002790000 n=ansible INFO| TASK [Get admin token] *********************************************************
2025-09-01 08:23:50,486 p=31560 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:23:50,744 p=31560 u=1002790000 n=ansible INFO| TASK [Get user token] **********************************************************
2025-09-01 08:23:50,745 p=31560 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:23:50,761 p=31560 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] *************************************
2025-09-01 08:23:50,761 p=31560 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:23:50,779 p=31560 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************
2025-09-01 08:23:50,779 p=31560 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:23:50,790 p=31560 u=1002790000 n=ansible INFO| TASK [Print token] *************************************************************
2025-09-01 08:23:50,791 p=31560 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" }
2025-09-01 08:23:51,101 p=31560 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] ***************************
2025-09-01 08:23:51,101 p=31560 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:23:51,129 p=31560 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] *************************************
2025-09-01 08:23:51,129 p=31560 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:23:51,146 p=31560 u=1002790000 n=ansible INFO| TASK [set_fact] ****************************************************************
2025-09-01 08:23:51,146 p=31560 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:23:51,148 p=31560 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************
2025-09-01 08:23:51,707 p=31560 u=1002790000 n=ansible INFO| TASK [Gathering Facts] *********************************************************
2025-09-01 08:23:51,708 p=31560 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:23:52,524 p=31560 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] ***
2025-09-01 08:23:52,524 p=31560 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
2025-09-01 08:23:52,524 p=31560 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:23:52,899 p=31560 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] ***
2025-09-01 08:23:52,899 p=31560 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:23:53,193 p=31560 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] ***
2025-09-01 08:23:53,194 p=31560 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:23:53,994 p=31560 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] ***
2025-09-01 08:23:53,994 p=31560 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:23:54,666 p=31560 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] ***
2025-09-01 08:23:54,666 p=31560 u=1002790000 n=ansible WARNING| [WARNING]: unknown field "spec.volumeClaimTemplates[0].labels"
2025-09-01 08:23:54,666 p=31560 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:23:55,328 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pods status (30 retries left).
2025-09-01 08:24:00,961 p=31560 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] ***
2025-09-01 08:24:00,962 p=31560 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:24:03,521 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left).
2025-09-01 08:24:10,714 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left).
2025-09-01 08:24:16,065 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left).
2025-09-01 08:24:21,440 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left).
2025-09-01 08:24:30,115 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left).
2025-09-01 08:24:35,472 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left).
2025-09-01 08:24:40,821 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left).
2025-09-01 08:24:46,176 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left).
2025-09-01 08:24:51,536 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left).
2025-09-01 08:24:56,891 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left).
2025-09-01 08:25:04,816 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left).
2025-09-01 08:25:10,168 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left).
2025-09-01 08:25:15,534 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left).
2025-09-01 08:25:20,916 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left).
2025-09-01 08:25:26,271 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left).
2025-09-01 08:25:31,646 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left).
2025-09-01 08:25:37,005 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left).
2025-09-01 08:25:42,369 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left).
2025-09-01 08:25:47,756 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left).
2025-09-01 08:25:53,107 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left).
2025-09-01 08:25:58,462 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left).
2025-09-01 08:26:06,115 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left).
2025-09-01 08:26:11,471 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left).
2025-09-01 08:26:16,836 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left).
2025-09-01 08:26:22,178 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left).
2025-09-01 08:26:27,546 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left).
2025-09-01 08:26:32,903 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left).
2025-09-01 08:26:38,247 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left).
2025-09-01 08:26:43,620 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left).
2025-09-01 08:26:48,964 p=31560 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left).
2025-09-01 08:26:54,327 p=31560 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] ***
2025-09-01 08:26:54,327 p=31560 u=1002790000 n=ansible INFO| fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-cl9vhfrj-b2a90.cspilp.interop.ccitredhat.com:6443 --token sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E -n test-oadp-440 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.195683", "end": "2025-09-01 08:26:54.306434", "msg": "non-zero return code", "rc": 1, "start": "2025-09-01 08:26:54.110751", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []}
2025-09-01 08:26:54,328 p=31560 u=1002790000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-09-01 08:26:54,328 p=31560 u=1002790000 n=ansible INFO| localhost : ok=21 changed=8 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0
Run the command: oc get event -n test-oadp-440
2025/09/01 08:26:54 LAST SEEN TYPE REASON OBJECT MESSAGE
2m59s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m59s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m59s Normal Scheduled pod/cassandra-0 Successfully assigned test-oadp-440/cassandra-0 to ip-10-0-93-94.ec2.internal
2m59s Normal SuccessfulAttachVolume pod/cassandra-0 AttachVolume.Attach succeeded for volume "pvc-a9c9ac81-2514-4979-9421-606036dc88a2"
2m57s Normal AddedInterface pod/cassandra-0 Add eth0 [10.129.2.75/23] from ovn-kubernetes
55s Normal Pulling pod/cassandra-0 Pulling image "quay.io/migqe/cassandra:multiarch"
2m57s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 482ms (482ms including waiting). Image size: 307783610 bytes.
54s Normal Created pod/cassandra-0 Created container: cassandra
54s Normal Started pod/cassandra-0 Started container cassandra
2m50s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 437ms (437ms including waiting). Image size: 307783610 bytes.
8s Warning BackOff pod/cassandra-0 Back-off restarting failed container cassandra in pod cassandra-0_test-oadp-440(36fdcce4-3f7c-42f3-96ec-c392debae429)
2m31s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 312ms (312ms including waiting). Image size: 307783610 bytes.
116s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 413ms (413ms including waiting). Image size: 307783610 bytes.
55s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 347ms (347ms including waiting). Image size: 307783610 bytes.
2m56s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m55s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m55s Normal Scheduled pod/cassandra-1 Successfully assigned test-oadp-440/cassandra-1 to ip-10-0-56-118.ec2.internal
2m55s Normal SuccessfulAttachVolume pod/cassandra-1 AttachVolume.Attach succeeded for volume "pvc-5df360eb-cf44-464b-bf0b-8677fea1cdca"
2m49s Normal AddedInterface pod/cassandra-1 Add eth0 [10.131.0.67/23] from ovn-kubernetes
66s Normal Pulling pod/cassandra-1 Pulling image "quay.io/migqe/cassandra:multiarch"
2m49s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 504ms (504ms including waiting). Image size: 307783610 bytes.
65s Normal Created pod/cassandra-1 Created container: cassandra
65s Normal Started pod/cassandra-1 Started container cassandra
2m41s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 551ms (552ms including waiting). Image size: 307783610 bytes.
9s Warning BackOff pod/cassandra-1 Back-off restarting failed container cassandra in pod cassandra-1_test-oadp-440(3b9cd32c-5c8c-4ca5-be85-84f420b319ee)
2m24s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 429ms (429ms including waiting). Image size: 307783610 bytes.
115s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 383ms (383ms including waiting). Image size: 307783610 bytes.
66s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 435ms (435ms including waiting). Image size: 307783610 bytes.
2m48s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m47s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m47s Normal Scheduled pod/cassandra-2 Successfully assigned test-oadp-440/cassandra-2 to ip-10-0-99-76.ec2.internal
2m47s Normal SuccessfulAttachVolume pod/cassandra-2 AttachVolume.Attach succeeded for volume "pvc-03c9a412-3a34-43b2-8002-f421f98d3a64"
2m46s Normal AddedInterface pod/cassandra-2 Add eth0 [10.128.2.121/23] from ovn-kubernetes
51s Normal Pulling pod/cassandra-2 Pulling image "quay.io/migqe/cassandra:multiarch"
2m45s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 653ms (653ms including waiting). Image size: 307783610 bytes.
50s Normal Created pod/cassandra-2 Created container: cassandra
50s Normal Started pod/cassandra-2 Started container cassandra
2m39s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 353ms (353ms including waiting). Image size: 307783610 bytes.
4s Warning BackOff pod/cassandra-2 Back-off restarting failed container cassandra in pod cassandra-2_test-oadp-440(6bf6087f-bd32-4e57-9799-f913500bf15f)
2m20s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 375ms (375ms including waiting). Image size: 307783610 bytes.
107s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 639ms (639ms including waiting). Image size: 307783610 bytes.
51s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 506ms (506ms including waiting). Image size: 307783610 bytes.
3m Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-0 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
3m Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-0 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-0"
3m Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-0 Successfully provisioned volume pvc-a9c9ac81-2514-4979-9421-606036dc88a2
2m56s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-1 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m56s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-1 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-1"
2m56s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-1 Successfully provisioned volume pvc-5df360eb-cf44-464b-bf0b-8677fea1cdca
2m48s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-2 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m48s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-2 External provisioner is provisioning volume for claim "test-oadp-440/cassandra-data-cassandra-2"
2m48s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-2 Successfully provisioned volume pvc-03c9a412-3a34-43b2-8002-f421f98d3a64
3m Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success
3m Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-0 in StatefulSet cassandra successful
2m56s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-1 Pod cassandra-1 in StatefulSet cassandra success
2m56s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-1 in StatefulSet cassandra successful
2m48s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-2 Pod cassandra-2 in StatefulSet cassandra success
2m48s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-2 in StatefulSet cassandra successful
[FAILED] in [It] - /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 09/01/25 08:26:54.491
< Exit [It] [tc-id:OADP-440][interop] Cassandra application @ 09/01/25 08:26:54.491 (3m12.608s)
> Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 08:26:54.491
2025/09/01 08:26:54 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
STEP: Get the failed spec name @ 09/01/25 08:26:54.491
2025/09/01 08:26:54 The failed spec name is: [datamover] DataMover: Backup/Restore stateful application with CSI [tc-id:OADP-440][interop] Cassandra application
STEP: Create a folder for all must-gather files if it doesn't exist already @ 09/01/25 08:26:54.491
STEP: Create a folder for the failed spec if it doesn't exist already @ 09/01/25 08:26:54.491
STEP: Run must-gather because the spec failed @ 09/01/25 08:26:54.491
2025/09/01 08:26:54 Log the present working directory path:- /alabama/cspi/e2e
2025/09/01 08:26:54 [adm must-gather --dest-dir /alabama/cspi/e2e/logs/It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application --image registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0]
2025/09/01 08:27:43 Log all the files present in /alabama/cspi/e2e/logs directory
2025/09/01 08:27:43 It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application
STEP: Find must-gather folder and rename it to a shorter more readable name @ 09/01/25 08:27:43.065
The folder logs/It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application/must-gather already exists, skipping renaming the folder
< Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 08:27:43.065 (48.574s)
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:27:43.065
2025/09/01 08:27:43 Cleaning app
2025/09/01 08:27:43 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note
that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-440] ***
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0
2025/09/01 08:28:12 2025-09-01 08:27:44,595 p=32956 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] ***********************************
2025-09-01 08:27:44,595 p=32956 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:27:44,853 p=32956 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
2025-09-01 08:27:44,854 p=32956 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:27:45,126 p=32956 u=1002790000 n=ansible INFO| TASK [Get admin token] *********************************************************
2025-09-01 08:27:45,126 p=32956 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:27:45,379 p=32956 u=1002790000 n=ansible INFO| TASK [Get user token] **********************************************************
2025-09-01 08:27:45,379 p=32956 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:27:45,394 p=32956 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] *************************************
2025-09-01 08:27:45,395 p=32956 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:27:45,413 p=32956 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************
2025-09-01 08:27:45,413 p=32956 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:27:45,424 p=32956 u=1002790000 n=ansible INFO| TASK [Print token] *************************************************************
2025-09-01 08:27:45,424 p=32956 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" }
2025-09-01 08:27:45,746 p=32956 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] ***************************
2025-09-01 08:27:45,746 p=32956 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:27:45,772 p=32956 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] *************************************
2025-09-01 08:27:45,772 p=32956 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:27:45,791 p=32956 u=1002790000 n=ansible INFO| TASK [set_fact] ****************************************************************
2025-09-01 08:27:45,791 p=32956 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:27:45,793 p=32956 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************
2025-09-01 08:27:46,364 p=32956 u=1002790000 n=ansible INFO| TASK [Gathering Facts] *********************************************************
2025-09-01 08:27:46,364 p=32956 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:28:12,219 p=32956 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-440] ***
2025-09-01 08:28:12,220 p=32956 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
2025-09-01 08:28:12,220 p=32956 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:28:12,560 p=32956 u=1002790000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-09-01 08:28:12,560 p=32956 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0
< Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:28:12.61 (29.544s)
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:28:12.61
2025/09/01 08:28:12 Cleaning setup resources for the backup
2025/09/01 08:28:12 Setting new default StorageClass 'odf-operator-ceph-rbd'
2025/09/01 08:28:12 Checking default storage class count
Skipping creation of StorageClass
The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd
2025/09/01 08:28:12 Deleting VolumeSnapshotClass 'example-snapclass'
< Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:28:12.641 (32ms)
> Enter [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:28:12.641
< Exit [DeferCleanup (Each)] TOP-LEVEL @ 09/01/25 08:28:12.649 (8ms)
• [FAILED] [825.441 seconds]
[datamover] DataMover: Backup/Restore stateful application with CSI
  [It] [tc-id:OADP-440][interop] Cassandra application
  /alabama/cspi/e2e/app_backup/backup_restore_datamover.go:50
  [FAILED] Unexpected error:
      <*errors.Error | 0xc000cfa040>:
      Error during command execution: ansible-playbook error: one or more host failed
      Command executed: /usr/local/bin/ansible-playbook --extra-vars {"admin_kubeconfig":"/home/jenkins/.kube/config","namespace":"test-oadp-440","non_admin_user":false,"use_role":"/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra","user_kubeconfig":"/home/jenkins/.kube/config","with_deploy":true} --connection local /alabama/cspi/sample-applications/ansible/main.yml
      exit status 2
      {
          context: "(DefaultExecute::Execute)",
          message: "Error during command execution: ansible-playbook error: one or more host failed\n\nCommand executed: /usr/local/bin/ansible-playbook --extra-vars {\"admin_kubeconfig\":\"/home/jenkins/.kube/config\",\"namespace\":\"test-oadp-440\",\"non_admin_user\":false,\"use_role\":\"/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra\",\"user_kubeconfig\":\"/home/jenkins/.kube/config\",\"with_deploy\":true} --connection local /alabama/cspi/sample-applications/ansible/main.yml\n\nexit status 2",
          wrappedErrors: nil,
      }
  occurred
  In [It] at: /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 09/01/25 08:26:54.491
  There were additional failures detected.
To view them in detail run ginkgo -vv ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS > Enter [ReportAfterEach] [upstream-velero] Credentials suite @ 09/01/25 08:28:12.65 < Exit [ReportAfterEach] [upstream-velero] Credentials suite @ 09/01/25 08:28:12.65 (0s) SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Backup hooks tests Pre exec hook [tc-id:OADP-92][interop][smoke] Cassandra app with Restic /alabama/cspi/e2e/hooks/backup_hooks.go:113 > Enter [BeforeEach] Backup hooks tests @ 09/01/25 08:28:12.65 < Exit [BeforeEach] Backup hooks tests @ 09/01/25 08:28:12.659 (9ms) > Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:28:12.659 < Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:28:12.659 (0s) > Enter [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic @ 09/01/25 08:28:12.659 2025/09/01 08:28:12 Delete all downloadrequest mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-1003f225-951b-460c-bcf6-b84177c321ff mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-1094f87b-30fd-435a-ada0-6063bd1ad57f mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-4022bbb7-347c-45f6-89d7-380aeb7287a9 mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-54eb44a6-f5b2-4e26-8cfd-ce2f7debd988 mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-6f7532f5-6b19-4fa9-85ae-3c27ebd9ce27 mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-aa930134-93ce-4d7e-b2e4-c244732fd221 mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-ccf36616-385c-42a3-ad23-e856d0ece77d mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-d5efff87-3be5-4a7d-8fcb-528b1be99abc mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-fbc0e1ba-ca6e-41e2-8a0d-571bdb2b4c16 ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-ebdae601-b6e9-44b2-837c-ec664c1f47f4 ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-eff77cbc-7d98-426d-89fb-57758ce10c88 ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-16e0a9cc-6c5a-493d-9114-b1d0fc84035b ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-ab9fd19f-160e-4a20-a35a-d7357c9654e1 ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7-1f6c198a-5dde-476a-ad5e-f6587ea35b78 ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7-ef4769ae-c61d-44f8-a2ef-96c2b8ad7f65 ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7-05b21c9f-7c21-4090-9e9e-f0da00462c64 ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7-1a5e823a-f798-4f3c-aa14-a0218444aec7 STEP: Create DPA CR @ 09/01/25 08:28:14.074 2025/09/01 08:28:14 restic 2025/09/01 08:28:14 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "af0d3233-9c75-496b-ad92-41d284549727", "resourceVersion": "106125", "generation": 1, "creationTimestamp": "2025-09-01T08:28:14Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:28:14Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", 
"kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "restic" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 09/01/25 08:28:14.095 2025/09/01 08:28:14 Waiting for velero pod to be running 2025/09/01 08:28:19 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 08:28:19.113 2025/09/01 08:28:19 Checking for correct number of running NodeAgent pods... STEP: Installing application for case cassandra-hooks-e2e @ 09/01/25 08:28:19.21 2025/09/01 08:28:19 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] ***
ok: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] ***
changed: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] ***
changed: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] ***
changed: [localhost]
[WARNING]: unknown field "spec.volumeClaimTemplates[0].labels"

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] ***
changed: [localhost]
FAILED - RETRYING: [localhost]: Check pods status (30 retries left).
FAILED - RETRYING: [localhost]: Check pods status (29 retries left).

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] ***
ok: [localhost]
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left).

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] ***
fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-cl9vhfrj-b2a90.cspilp.interop.ccitredhat.com:6443 --token sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E -n test-oadp-92 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.203533", "end": "2025-09-01 08:31:29.844616", "msg": "non-zero return code", "rc": 1, "start": "2025-09-01 08:31:29.641083", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []}

PLAY RECAP *********************************************************************
localhost : ok=21 changed=8 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0

2025/09/01 08:31:29
2025-09-01 08:28:20,692 p=33186 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] ***********************************
2025-09-01 08:28:20,692 p=33186 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:28:20,950 p=33186 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
2025-09-01 08:28:20,950 p=33186 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:28:21,201 p=33186 u=1002790000 n=ansible INFO| TASK [Get admin token] *********************************************************
2025-09-01 08:28:21,201 p=33186 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:28:21,454 p=33186 u=1002790000 n=ansible INFO| TASK [Get user token] **********************************************************
2025-09-01 08:28:21,454 p=33186 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:28:21,468 p=33186 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] *************************************
2025-09-01 08:28:21,469 p=33186 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:28:21,486 p=33186 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************
2025-09-01 08:28:21,486 p=33186 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:28:21,499 p=33186
u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:28:21,499 p=33186 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:28:21,801 p=33186 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:28:21,802 p=33186 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:28:21,829 p=33186 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:28:21,829 p=33186 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:28:21,846 p=33186 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:28:21,846 p=33186 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:28:21,847 p=33186 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:28:22,409 p=33186 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:28:22,409 p=33186 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:28:23,228 p=33186 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] *** 2025-09-01 08:28:23,229 p=33186 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-09-01 08:28:23,229 p=33186 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:28:23,579 p=33186 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] *** 2025-09-01 08:28:23,579 p=33186 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:28:23,890 p=33186 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] *** 2025-09-01 08:28:23,890 p=33186 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:28:24,704 p=33186 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] *** 2025-09-01 08:28:24,705 p=33186 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:28:25,408 p=33186 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] *** 2025-09-01 08:28:25,409 p=33186 u=1002790000 n=ansible WARNING| [WARNING]: unknown field "spec.volumeClaimTemplates[0].labels" 2025-09-01 08:28:25,409 p=33186 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:28:26,092 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pods status (30 retries left). 2025-09-01 08:28:31,717 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pods status (29 retries left). 
2025-09-01 08:28:37,367 p=33186 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] *** 2025-09-01 08:28:37,367 p=33186 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:28:38,841 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left). 2025-09-01 08:28:46,537 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left). 2025-09-01 08:28:51,890 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left). 2025-09-01 08:28:57,244 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left). 2025-09-01 08:29:02,594 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left). 2025-09-01 08:29:09,235 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left). 2025-09-01 08:29:14,590 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left). 2025-09-01 08:29:19,946 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left). 2025-09-01 08:29:25,301 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left). 2025-09-01 08:29:30,708 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left). 2025-09-01 08:29:36,772 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left). 2025-09-01 08:29:43,537 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left). 2025-09-01 08:29:48,894 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left). 2025-09-01 08:29:54,314 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left). 2025-09-01 08:29:59,685 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left). 2025-09-01 08:30:05,068 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left). 2025-09-01 08:30:10,427 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left). 2025-09-01 08:30:15,776 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left). 
2025-09-01 08:30:21,187 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left). 2025-09-01 08:30:26,563 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left). 2025-09-01 08:30:36,040 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left). 2025-09-01 08:30:41,420 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left). 2025-09-01 08:30:46,784 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left). 2025-09-01 08:30:52,136 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left). 2025-09-01 08:30:57,492 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left). 2025-09-01 08:31:02,847 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left). 2025-09-01 08:31:08,245 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left). 2025-09-01 08:31:13,637 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left). 2025-09-01 08:31:19,057 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left). 2025-09-01 08:31:24,461 p=33186 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left). 2025-09-01 08:31:29,869 p=33186 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] *** 2025-09-01 08:31:29,870 p=33186 u=1002790000 n=ansible INFO| fatal: [localhost]: FAILED! 
=> {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-cl9vhfrj-b2a90.cspilp.interop.ccitredhat.com:6443 --token sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E -n test-oadp-92 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.203533", "end": "2025-09-01 08:31:29.844616", "msg": "non-zero return code", "rc": 1, "start": "2025-09-01 08:31:29.641083", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []} 2025-09-01 08:31:29,871 p=33186 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:31:29,871 p=33186 u=1002790000 n=ansible INFO| localhost : ok=21 changed=8 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0 Run the command: oc get event -n test-oadp-92 2025/09/01 08:31:30 LAST SEEN TYPE REASON OBJECT MESSAGE 3m4s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 3m4s Warning FailedScheduling pod/cassandra-0 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 3m4s Normal Scheduled pod/cassandra-0 Successfully assigned test-oadp-92/cassandra-0 to ip-10-0-56-118.ec2.internal 3m4s Normal SuccessfulAttachVolume pod/cassandra-0 AttachVolume.Attach succeeded for volume "pvc-98e15b9e-265f-43d0-b42c-3f394761b71c" 2m59s Normal AddedInterface pod/cassandra-0 Add eth0 [10.131.0.69/23] from ovn-kubernetes 61s Normal Pulling pod/cassandra-0 Pulling image "quay.io/migqe/cassandra:multiarch" 2m59s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 568ms (568ms including waiting). Image size: 307783610 bytes. 60s Normal Created pod/cassandra-0 Created container: cassandra 60s Normal Started pod/cassandra-0 Started container cassandra 2m50s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 348ms (348ms including waiting). Image size: 307783610 bytes. 2s Warning BackOff pod/cassandra-0 Back-off restarting failed container cassandra in pod cassandra-0_test-oadp-92(44d8f329-4bab-4cec-9ae0-79e85bf3b6a3) 2m31s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 463ms (463ms including waiting). Image size: 307783610 bytes. 114s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 530ms (530ms including waiting). Image size: 307783610 bytes. 61s Normal Pulled pod/cassandra-0 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 429ms (429ms including waiting). Image size: 307783610 bytes. 2m57s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m56s Warning FailedScheduling pod/cassandra-1 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
2m56s Normal Scheduled pod/cassandra-1 Successfully assigned test-oadp-92/cassandra-1 to ip-10-0-99-76.ec2.internal 2m57s Normal SuccessfulAttachVolume pod/cassandra-1 AttachVolume.Attach succeeded for volume "pvc-472b8b84-1279-4d52-aeee-68ab5dd5916c" 2m56s Normal AddedInterface pod/cassandra-1 Add eth0 [10.128.2.125/23] from ovn-kubernetes 54s Normal Pulling pod/cassandra-1 Pulling image "quay.io/migqe/cassandra:multiarch" 2m55s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 480ms (480ms including waiting). Image size: 307783610 bytes. 53s Normal Created pod/cassandra-1 Created container: cassandra 53s Normal Started pod/cassandra-1 Started container cassandra 2m49s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 382ms (382ms including waiting). Image size: 307783610 bytes. 1s Warning BackOff pod/cassandra-1 Back-off restarting failed container cassandra in pod cassandra-1_test-oadp-92(ef664372-6dd5-4aa2-9759-7089b7c7aac4) 2m28s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 394ms (394ms including waiting). Image size: 307783610 bytes. 112s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 363ms (363ms including waiting). Image size: 307783610 bytes. 54s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 544ms (544ms including waiting). Image size: 307783610 bytes. 2m54s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m53s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m53s Normal Scheduled pod/cassandra-2 Successfully assigned test-oadp-92/cassandra-2 to ip-10-0-93-94.ec2.internal 2m54s Normal SuccessfulAttachVolume pod/cassandra-2 AttachVolume.Attach succeeded for volume "pvc-7c580fa3-5ee2-4e84-bd6e-9810cfd3f7f2" 2m47s Normal AddedInterface pod/cassandra-2 Add eth0 [10.129.2.78/23] from ovn-kubernetes 51s Normal Pulling pod/cassandra-2 Pulling image "quay.io/migqe/cassandra:multiarch" 2m47s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 366ms (366ms including waiting). Image size: 307783610 bytes. 50s Normal Created pod/cassandra-2 Created container: cassandra 50s Normal Started pod/cassandra-2 Started container cassandra 2m24s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 372ms (372ms including waiting). Image size: 307783610 bytes. 9s Warning BackOff pod/cassandra-2 Back-off restarting failed container cassandra in pod cassandra-2_test-oadp-92(dcaee27e-d972-4424-b2f0-3e3c12bef8fe) 113s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 442ms (442ms including waiting). Image size: 307783610 bytes. 51s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 428ms (428ms including waiting). Image size: 307783610 bytes. 3m5s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-0 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. 
If volume creation is delayed, please verify that the provisioner is running and correctly registered.
3m5s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-0 External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-0"
3m5s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-0 Successfully provisioned volume pvc-98e15b9e-265f-43d0-b42c-3f394761b71c
2m58s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-1 External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-1"
2m58s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-1 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m57s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-1 Successfully provisioned volume pvc-472b8b84-1279-4d52-aeee-68ab5dd5916c
2m54s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-2 External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-2"
2m54s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-2 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m54s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-2 Successfully provisioned volume pvc-7c580fa3-5ee2-4e84-bd6e-9810cfd3f7f2
3m5s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success
3m5s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-0 in StatefulSet cassandra successful
2m58s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-1 Pod cassandra-1 in StatefulSet cassandra success
2m58s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-1 in StatefulSet cassandra successful
2m54s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-2 Pod cassandra-2 in StatefulSet cassandra success
2m54s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-2 in StatefulSet cassandra successful
[FAILED] in [It] - /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 09/01/25 08:31:30.054
< Exit [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic @ 09/01/25 08:31:30.054 (3m17.395s)
> Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 08:31:30.054
2025/09/01 08:31:30 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
STEP: Get the failed spec name @ 09/01/25 08:31:30.054
2025/09/01 08:31:30 The failed spec name is: Backup hooks tests Pre exec hook [tc-id:OADP-92][interop][smoke] Cassandra app with Restic
STEP: Create a folder for all must-gather files if it doesn't exist already @ 09/01/25 08:31:30.054
STEP: Create a folder for the failed spec if it doesn't exist already @ 09/01/25 08:31:30.054
2025/09/01 08:31:30 The folder logs/It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic does not exist; creating new folder with the name: logs/It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic
STEP: Run must-gather
because the spec failed @ 09/01/25 08:31:30.054 2025/09/01 08:31:30 Log the present working directory path:- /alabama/cspi/e2e 2025/09/01 08:31:30 [adm must-gather --dest-dir /alabama/cspi/e2e/logs/It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic --image registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0] 2025/09/01 08:32:19 Log all the files present in /alabama/cspi/e2e/logs directory 2025/09/01 08:32:19 It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic 2025/09/01 08:32:19 It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application STEP: Find must-gather folder and rename it to a shorter more readable name @ 09/01/25 08:32:19.128 < Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 08:32:19.128 (49.074s) > Enter [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:32:19.128 2025/09/01 08:32:19 Cleaning app 2025/09/01 08:32:19 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
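Aside: the must-gather step above can be reproduced by hand when triaging outside CI. A sketch of the equivalent invocation follows; the --dest-dir here is a hypothetical local path, while the image is the one the suite logged.

  # Collect the same OADP diagnostics the suite gathers on failure.
  oc adm must-gather \
    --dest-dir ./must-gather-oadp-92 \
    --image registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0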
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-92] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025/09/01 08:32:49 2025-09-01 08:32:21,048 p=34583 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:32:21,048 p=34583 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:32:21,414 p=34583 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:32:21,415 p=34583 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:32:21,766 p=34583 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:32:21,766 p=34583 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:32:22,122 p=34583 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:32:22,122 p=34583 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:32:22,141 p=34583 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:32:22,141 p=34583 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:32:22,166 p=34583 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:32:22,167 p=34583 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:32:22,183 p=34583 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:32:22,184 p=34583 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:32:22,565 p=34583 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:32:22,566 p=34583 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:32:22,602 p=34583 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:32:22,603 p=34583 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:32:22,627 p=34583 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:32:22,627 p=34583 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:32:22,630 p=34583 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:32:23,312 p=34583 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:32:23,312 p=34583 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:32:49,373 p=34583 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-92] *** 2025-09-01 08:32:49,374 p=34583 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:32:49,374 p=34583 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:32:49,710 p=34583 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:32:49,710 p=34583 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:32:49.757 (30.629s) > Enter [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:32:49.758 2025/09/01 08:32:49 Cleaning setup resources for the backup < Exit [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:32:49.758 (0s) > Enter [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:32:49.758 < Exit [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:32:49.781 (23ms) Attempt #1 Failed. Retrying ↺ @ 09/01/25 08:32:49.781 > Enter [BeforeEach] Backup hooks tests @ 09/01/25 08:32:49.781 < Exit [BeforeEach] Backup hooks tests @ 09/01/25 08:32:49.789 (8ms) > Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:32:49.789 < Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:32:49.789 (0s) > Enter [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic @ 09/01/25 08:32:49.789 2025/09/01 08:32:49 Delete all downloadrequest mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-03c0eecd-3b01-42eb-a5ac-fa698468c0cf mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-18d9e616-10c7-4ad9-afd0-ab5f8ab78067 mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-1c47d052-fbdb-44f5-8292-29af0f69b863 mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-79df7681-3d8a-41bf-b261-a4049b4b8cb0 mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-8b6ed80e-e079-4ebf-ab0f-c5805bf8845c mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-aab3e73e-f86a-4705-adac-def17cb1aa26 mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-c0748f60-d3ff-44b3-aeec-28f91ad62954 mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-e08fafe5-6eb2-4aa0-9d0c-893b3649256a mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-eb22aa33-ab7e-43d1-a4e0-3c7bc63c11d9 ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-51052ace-53aa-4530-a83c-3eb1eb8b0ec5 ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-a8798387-456d-49d5-bf20-a2d7c2ece456 ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-226c0805-df53-4157-ad00-a9c7bcef1636 ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-cddde73a-a967-4236-a4fa-03cbf8010547 ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7-ab797560-cb3f-498d-8831-f4aaa0653183 ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7-e333826a-1c62-48f0-a6f7-b817ff99d21b ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7-09edf003-ccf7-44b2-b813-cc6f9a10f2dc ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7-bf4579a3-b365-4308-9aab-f0c8f068b0d7 STEP: Create DPA CR @ 09/01/25 08:32:51.206 2025/09/01 08:32:51 restic 2025/09/01 08:32:51 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "bf541549-6738-46ce-84e2-9538ed859a36", "resourceVersion": "111011", "generation": 1, "creationTimestamp": "2025-09-01T08:32:51Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:32:51Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", 
"config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "restic" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 09/01/25 08:32:51.232 2025/09/01 08:32:51 Waiting for velero pod to be running 2025/09/01 08:32:51 pod: velero-5d49bc6f8d-r6hgf is not yet running with status: {Succeeded [{PodReadyToStartContainers False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:32:50 +0000 UTC } {Initialized True 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:28:17 +0000 UTC PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:32:50 +0000 UTC PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:32:50 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:28:14 +0000 UTC }] 10.0.93.94 [{10.0.93.94}] 10.129.2.76 [{10.129.2.76}] 2025-09-01 08:28:14 +0000 UTC [{openshift-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 08:28:15 +0000 UTC,FinishedAt:2025-09-01 08:28:15 +0000 UTC,ContainerID:cri-o://164ebe1bf7e7fbdea3307e6bc4c28013123284d32fdc2e0f7cd2fa47f330669f,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd cri-o://164ebe1bf7e7fbdea3307e6bc4c28013123284d32fdc2e0f7cd2fa47f330669f 0xc00102b549 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-4qvmz /var/run/secrets/kubernetes.io/serviceaccount true 0xc000dcac30}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []} {velero-plugin-for-aws {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 08:28:15 +0000 UTC,FinishedAt:2025-09-01 08:28:15 +0000 UTC,ContainerID:cri-o://065e8e2edf3cba5c9b7f9ed2219f6b04540cf86c1b2c151de55c33bae6d40520,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 cri-o://065e8e2edf3cba5c9b7f9ed2219f6b04540cf86c1b2c151de55c33bae6d40520 0xc00102b6c8 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins 
/target false } {kube-api-access-4qvmz /var/run/secrets/kubernetes.io/serviceaccount true 0xc000dcaca0}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []} {kubevirt-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 08:28:16 +0000 UTC,FinishedAt:2025-09-01 08:28:16 +0000 UTC,ContainerID:cri-o://18a3fb69d11d5d0a79fc748bc6b331ccd23763f74fbb8e93ebd7dff642567305,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel9@sha256:2b63055e8e681f8d20194d9aa00f667ac4e38cb1247442287b8cc273f05b587d registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel9@sha256:2b63055e8e681f8d20194d9aa00f667ac4e38cb1247442287b8cc273f05b587d cri-o://18a3fb69d11d5d0a79fc748bc6b331ccd23763f74fbb8e93ebd7dff642567305 0xc00102bcc9 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-4qvmz /var/run/secrets/kubernetes.io/serviceaccount true 0xc000dcad10}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []}] [{velero {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 08:28:17 +0000 UTC,FinishedAt:2025-09-01 08:32:49 +0000 UTC,ContainerID:cri-o://65ebdc3e27e14a8faa990cccffaba646f5c3c2acf232490b3e8fc02d72dfbd12,}} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-rhel9@sha256:e22092c4769ece2dd36b99cb84fcbe6da99d6c0e175fca38f00f436de0ba7a62 registry.redhat.io/oadp/oadp-velero-rhel9@sha256:9e58447eb6706ee5335fd643bbb3795d92e1fc441a8ae7bf73aabc112e09fc17 cri-o://65ebdc3e27e14a8faa990cccffaba646f5c3c2acf232490b3e8fc02d72dfbd12 0xc00102bfa9 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /plugins false } {scratch /scratch false } {certs /etc/ssl/certs false } {bound-sa-token /var/run/secrets/openshift/serviceaccount true 0xc000dcad80} {kube-api-access-4qvmz /var/run/secrets/kubernetes.io/serviceaccount true 0xc000dcad90}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []}] Burstable [] []} 2025/09/01 08:32:56 pod: velero-5d49bc6f8d-fsv66 is not yet running with status: {Pending [{PodReadyToStartContainers True 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:32:52 +0000 UTC } {Initialized True 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:32:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:32:51 +0000 UTC ContainersNotReady containers with unready status: [velero]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:32:51 +0000 UTC ContainersNotReady containers with unready status: [velero]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:32:51 +0000 UTC }] 10.0.93.94 [{10.0.93.94}] 10.129.2.79 [{10.129.2.79}] 2025-09-01 08:32:51 +0000 UTC [{openshift-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 08:32:52 +0000 UTC,FinishedAt:2025-09-01 08:32:52 +0000 UTC,ContainerID:cri-o://d03809bdbac3e8d426f2b7caea498d7b3643dc281b5f91c9a1b4a8e2a4ccf7ff,}} {nil nil nil} true 0 
registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd registry.redhat.io/oadp/oadp-velero-plugin-rhel9@sha256:fa2ff74cca6b6028b0232cd3d70f0de45da37283dc8048a01c4da8061585a5bd cri-o://d03809bdbac3e8d426f2b7caea498d7b3643dc281b5f91c9a1b4a8e2a4ccf7ff 0xc0008c7d39 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-lh8qr /var/run/secrets/kubernetes.io/serviceaccount true 0xc000cabc30}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []} {velero-plugin-for-aws {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 08:32:53 +0000 UTC,FinishedAt:2025-09-01 08:32:53 +0000 UTC,ContainerID:cri-o://17ac57fcf6ba124a14d7b407cb3ac84712eb0f8b9489171b9cf01e17b7ac207a,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 registry.redhat.io/oadp/oadp-velero-plugin-for-aws-rhel9@sha256:288a948e4725241af822abc4a0bb112670548c8a4e60c95a1f4f33aa46d552e9 cri-o://17ac57fcf6ba124a14d7b407cb3ac84712eb0f8b9489171b9cf01e17b7ac207a 0xc0008c7d98 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-lh8qr /var/run/secrets/kubernetes.io/serviceaccount true 0xc000cabca0}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []} {kubevirt-velero-plugin {nil nil &ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-09-01 08:32:54 +0000 UTC,FinishedAt:2025-09-01 08:32:54 +0000 UTC,ContainerID:cri-o://da3ea12552eec19df9d04211db7be188a95aea6c22ba23afd02017b474dd8e56,}} {nil nil nil} true 0 registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel9@sha256:2b63055e8e681f8d20194d9aa00f667ac4e38cb1247442287b8cc273f05b587d registry.redhat.io/oadp/oadp-kubevirt-velero-plugin-rhel9@sha256:2b63055e8e681f8d20194d9aa00f667ac4e38cb1247442287b8cc273f05b587d cri-o://da3ea12552eec19df9d04211db7be188a95aea6c22ba23afd02017b474dd8e56 0xc0008c7e49 map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{134217728 0} {} BinarySI}] &ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},} [{plugins /target false } {kube-api-access-lh8qr /var/run/secrets/kubernetes.io/serviceaccount true 0xc000cabd10}] &ContainerUser{Linux:&LinuxContainerUser{UID:1000740000,GID:0,SupplementalGroups:[0 1000740000],},} []}] [{velero {&ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 registry.redhat.io/oadp/oadp-velero-rhel9@sha256:e22092c4769ece2dd36b99cb84fcbe6da99d6c0e175fca38f00f436de0ba7a62 0xc0008c7eae map[] nil [{plugins /plugins false } {scratch /scratch false } {certs /etc/ssl/certs false } {bound-sa-token /var/run/secrets/openshift/serviceaccount true 0xc000cabd20} {kube-api-access-lh8qr /var/run/secrets/kubernetes.io/serviceaccount true 0xc000cabd30}] nil []}] Burstable [] []} 2025/09/01 08:33:01 Wait for DPA status.condition.reason to be 
'Completed' and message to be 'Reconcile complete'
STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 08:33:01.264
2025/09/01 08:33:01 Checking for correct number of running NodeAgent pods...
STEP: Installing application for case cassandra-hooks-e2e @ 09/01/25 08:33:01.361
2025/09/01 08:33:01 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [include_vars] ************************************************************
ok: [localhost]

TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Remove all the contents from the file] ***********************************
changed: [localhost]

TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]

TASK [Get admin token] *********************************************************
changed: [localhost]

TASK [Get user token] **********************************************************
changed: [localhost]

TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]

TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]

TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E"
}

TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]

TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Execute Task] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] ***
ok: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] ***
changed: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] ***
changed: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] ***
changed: [localhost]
[WARNING]: unknown field "spec.volumeClaimTemplates[0].labels"

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] ***
changed: [localhost]
FAILED - RETRYING: [localhost]: Check pods status (30 retries left).
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] *** ok: [localhost] FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left). FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left). 
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left).

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] ***
fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-cl9vhfrj-b2a90.cspilp.interop.ccitredhat.com:6443 --token sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E -n test-oadp-92 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.176690", "end": "2025-09-01 08:36:07.819263", "msg": "non-zero return code", "rc": 1, "start": "2025-09-01 08:36:07.642573", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []}

PLAY RECAP *********************************************************************
localhost                  : ok=21   changed=8    unreachable=0    failed=1    skipped=2    rescued=0    ignored=0
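The fatal task above is the wait loop exhausting its 30 attempts. The command it retries, oc exec -it ... nodetool status, needs both a running "cassandra" container and a TTY, and both stderr lines point at the container restarting between attempts rather than at nodetool itself. A hedged way to triage this by hand (a sketch; names and namespace are taken from the log above):

# Inspect the container state; "container not found" during exec usually means a restart loop
oc -n test-oadp-92 get pod cassandra-0 -o jsonpath='{.status.containerStatuses[0].state}'

# Retry nodetool without allocating a TTY; the -t flag is what triggers the "Unable to use a TTY" warning
oc -n test-oadp-92 exec cassandra-0 -c cassandra -- nodetool status

# If the container keeps restarting, read the logs of the previous run
oc -n test-oadp-92 logs cassandra-0 -c cassandra --previous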
Run the command: oc get event -n test-oadp-92
2025/09/01 08:36:07
LAST SEEN  TYPE     REASON                OBJECT           MESSAGE
7m42s      Warning  FailedScheduling      pod/cassandra-0  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m         Warning  FailedScheduling      pod/cassandra-0  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m         Warning  FailedScheduling      pod/cassandra-0  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m         Normal   Scheduled             pod/cassandra-0  Successfully assigned test-oadp-92/cassandra-0 to ip-10-0-99-76.ec2.internal
2m59s      Normal   SuccessfulAttachVolume  pod/cassandra-0  AttachVolume.Attach succeeded for volume "pvc-84257cf2-62c3-400b-8b94-866612886ed8"
2m58s      Normal   AddedInterface        pod/cassandra-0  Add eth0 [10.128.2.135/23] from ovn-kubernetes
60s        Normal   Pulling               pod/cassandra-0  Pulling image "quay.io/migqe/cassandra:multiarch"
2m58s      Normal   Pulled                pod/cassandra-0  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 495ms (495ms including waiting). Image size: 307783610 bytes.
60s        Normal   Created               pod/cassandra-0  Created container: cassandra
59s        Normal   Started               pod/cassandra-0  Started container cassandra
2m50s      Normal   Pulled                pod/cassandra-0  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 477ms (477ms including waiting). Image size: 307783610 bytes.
2s         Warning  BackOff               pod/cassandra-0  Back-off restarting failed container cassandra in pod cassandra-0_test-oadp-92(34f56b74-0a0c-4a84-b3e8-a1a3884f2f3b)
2m29s      Normal   Pulled                pod/cassandra-0  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 355ms (355ms including waiting). Image size: 307783610 bytes.
118s       Normal   Pulled                pod/cassandra-0  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 494ms (494ms including waiting). Image size: 307783610 bytes.
60s        Normal   Pulled                pod/cassandra-0  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 418ms (418ms including waiting). Image size: 307783610 bytes.
7m34s      Warning  FailedScheduling      pod/cassandra-1  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m57s      Warning  FailedScheduling      pod/cassandra-1  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m57s      Warning  FailedScheduling      pod/cassandra-1  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m57s      Normal   Scheduled             pod/cassandra-1  Successfully assigned test-oadp-92/cassandra-1 to ip-10-0-56-118.ec2.internal
2m56s      Normal   SuccessfulAttachVolume  pod/cassandra-1  AttachVolume.Attach succeeded for volume "pvc-6e56c22b-6680-4629-85fd-510152149f80"
2m50s      Normal   AddedInterface        pod/cassandra-1  Add eth0 [10.131.0.71/23] from ovn-kubernetes
63s        Normal   Pulling               pod/cassandra-1  Pulling image "quay.io/migqe/cassandra:multiarch"
2m50s      Normal   Pulled                pod/cassandra-1  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 482ms (482ms including waiting). Image size: 307783610 bytes.
62s        Normal   Created               pod/cassandra-1  Created container: cassandra
62s        Normal   Started               pod/cassandra-1  Started container cassandra
2m41s      Normal   Pulled                pod/cassandra-1  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 397ms (397ms including waiting). Image size: 307783610 bytes.
0s         Warning  BackOff               pod/cassandra-1  Back-off restarting failed container cassandra in pod cassandra-1_test-oadp-92(777aed86-0e65-4440-8e9a-4e3db5195bab)
2m23s      Normal   Pulled                pod/cassandra-1  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 413ms (413ms including waiting). Image size: 307783610 bytes.
109s       Normal   Pulled                pod/cassandra-1  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 595ms (595ms including waiting). Image size: 307783610 bytes.
63s        Normal   Pulled                pod/cassandra-1  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 409ms (409ms including waiting). Image size: 307783610 bytes.
7m31s      Warning  FailedScheduling      pod/cassandra-2  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m49s      Warning  FailedScheduling      pod/cassandra-2  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m49s      Warning  FailedScheduling      pod/cassandra-2  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
2m49s      Normal   Scheduled             pod/cassandra-2  Successfully assigned test-oadp-92/cassandra-2 to ip-10-0-93-94.ec2.internal
2m48s      Normal   SuccessfulAttachVolume  pod/cassandra-2  AttachVolume.Attach succeeded for volume "pvc-06bde18b-5e91-4218-9f86-ea5b650ca377"
2m39s      Normal   AddedInterface        pod/cassandra-2  Add eth0 [10.129.2.81/23] from ovn-kubernetes
46s        Normal   Pulling               pod/cassandra-2  Pulling image "quay.io/migqe/cassandra:multiarch"
2m38s      Normal   Pulled                pod/cassandra-2  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 398ms (398ms including waiting). Image size: 307783610 bytes.
45s        Normal   Created               pod/cassandra-2  Created container: cassandra
45s        Normal   Started               pod/cassandra-2  Started container cassandra
2m33s      Normal   Pulled                pod/cassandra-2  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 456ms (456ms including waiting). Image size: 307783610 bytes.
12s        Warning  BackOff               pod/cassandra-2  Back-off restarting failed container cassandra in pod cassandra-2_test-oadp-92(a733eed6-2bda-4606-8dae-3b5d51899720)
2m13s      Normal   Pulled                pod/cassandra-2  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 626ms (626ms including waiting). Image size: 307783610 bytes.
100s       Normal   Pulled                pod/cassandra-2  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 465ms (465ms including waiting). Image size: 307783610 bytes.
45s        Normal   Pulled                pod/cassandra-2  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 501ms (501ms including waiting). Image size: 307783610 bytes.
3m         Normal   ExternalProvisioning  persistentvolumeclaim/cassandra-data-cassandra-0  Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
3m         Normal   Provisioning          persistentvolumeclaim/cassandra-data-cassandra-0  External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-0"
3m         Normal   ProvisioningSucceeded  persistentvolumeclaim/cassandra-data-cassandra-0  Successfully provisioned volume pvc-84257cf2-62c3-400b-8b94-866612886ed8
2m57s      Normal   ExternalProvisioning  persistentvolumeclaim/cassandra-data-cassandra-1  Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m57s      Normal   Provisioning          persistentvolumeclaim/cassandra-data-cassandra-1  External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-1"
2m57s      Normal   ProvisioningSucceeded  persistentvolumeclaim/cassandra-data-cassandra-1  Successfully provisioned volume pvc-6e56c22b-6680-4629-85fd-510152149f80
2m49s      Normal   ExternalProvisioning  persistentvolumeclaim/cassandra-data-cassandra-2  Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
2m49s      Normal   Provisioning          persistentvolumeclaim/cassandra-data-cassandra-2  External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-2"
2m49s      Normal   ProvisioningSucceeded  persistentvolumeclaim/cassandra-data-cassandra-2  Successfully provisioned volume pvc-06bde18b-5e91-4218-9f86-ea5b650ca377
3m         Normal   SuccessfulCreate      statefulset/cassandra  create Claim cassandra-data-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success
3m         Normal   SuccessfulCreate      statefulset/cassandra  create Pod cassandra-0 in StatefulSet cassandra successful
2m57s      Normal   SuccessfulCreate      statefulset/cassandra  create Claim cassandra-data-cassandra-1 Pod cassandra-1 in StatefulSet cassandra success
2m57s      Normal   SuccessfulCreate      statefulset/cassandra  create Pod cassandra-1 in StatefulSet cassandra successful
2m49s      Normal   SuccessfulCreate      statefulset/cassandra  create Claim cassandra-data-cassandra-2 Pod cassandra-2 in StatefulSet cassandra success
2m49s      Normal   SuccessfulCreate      statefulset/cassandra  create Pod cassandra-2 in StatefulSet cassandra successful
[FAILED] in [It] - /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 09/01/25 08:36:07.993
< Exit [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic @ 09/01/25 08:36:07.993 (3m18.204s)
> Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 08:36:07.993
2025/09/01 08:36:07 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
STEP: Get the failed spec name @ 09/01/25 08:36:07.993
2025/09/01 08:36:07 The failed spec name is: Backup hooks tests Pre exec hook [tc-id:OADP-92][interop][smoke] Cassandra app with Restic
STEP: Create a folder for all must-gather files if it doesn't exist already @ 09/01/25 08:36:07.993
STEP: Create a folder for the failed spec if it doesn't exist already @ 09/01/25 08:36:07.993
STEP: Run must-gather because the spec failed @ 09/01/25 08:36:07.993
2025/09/01 08:36:07 Log the present working directory path: /alabama/cspi/e2e
2025/09/01 08:36:07 [adm must-gather --dest-dir /alabama/cspi/e2e/logs/It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic --image registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0]
2025/09/01 08:36:56 Log all the files present in /alabama/cspi/e2e/logs directory
2025/09/01 08:36:56 It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic
2025/09/01 08:36:56 It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application
STEP: Find must-gather folder and rename it to a shorter, more readable name @ 09/01/25 08:36:56.757
The folder logs/It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic/must-gather already exists, skipping renaming the folder
< Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 08:36:56.757 (48.764s)
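Reading the events above: the FailedScheduling warnings are transient, since the pods were unschedulable only until the RBD provisioner bound the PVCs, and every claim reports ProvisioningSucceeded. The real failure is the BackOff on all three pods: the cassandra container starts and then exits repeatedly. A short triage sequence (a sketch using only names visible in this log):

# Restart counts and per-container detail for the StatefulSet pods
oc -n test-oadp-92 get pods
oc -n test-oadp-92 describe pod cassandra-0

# Exit code of the last crashed run, then its logs
oc -n test-oadp-92 get pod cassandra-0 -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'
oc -n test-oadp-92 logs cassandra-0 --previous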
> Enter [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:36:56.757
2025/09/01 08:36:56 Cleaning app
2025/09/01 08:36:56 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [include_vars] ************************************************************
ok: [localhost]

TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}

TASK [Remove all the contents from the file] ***********************************
changed: [localhost]

TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]

TASK [Get admin token] *********************************************************
changed: [localhost]

TASK [Get user token] **********************************************************
changed: [localhost]

TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]

TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]

TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E"
}

TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]

TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Execute Task] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-92] ***
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=16   changed=5    unreachable=0    failed=0    skipped=22   rescued=0    ignored=0
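Before the retry re-installs the application into the same namespace, it can be worth confirming the deletion actually finished (a sketch):

# Prints nothing once the namespace is fully gone
oc get namespace test-oadp-92 --ignore-not-found

# If it lingers in Terminating, the conditions usually name the blocking finalizers
oc get namespace test-oadp-92 -o jsonpath='{.status.conditions}' 2>/dev/null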
< Exit [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:37:27.732 (30.975s)
> Enter [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:37:27.733
2025/09/01 08:37:27 Cleaning setup resources for the backup
< Exit [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:37:27.733 (0s)
> Enter [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:37:27.733
< Exit [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:37:27.753 (20ms)
Attempt #2 Failed. Retrying ↺ @ 09/01/25 08:37:27.753
> Enter [BeforeEach] Backup hooks tests @ 09/01/25 08:37:27.753
< Exit [BeforeEach] Backup hooks tests @ 09/01/25 08:37:27.764 (11ms)
> Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:37:27.764
< Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:37:27.764 (0s)
> Enter [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic @ 09/01/25 08:37:27.764
2025/09/01 08:37:27 Delete all downloadrequest
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-1cb29ea9-c8a7-4d48-ab15-33b89d954080
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-46f74402-2b3e-4816-83f7-9aa2f05e865e
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-4a7afffe-db62-4b42-93b8-f7be524d1b50
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-58f77b6f-a6e1-490a-8ab3-6e24f51895b2
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-692a0b02-cefc-404b-bf37-4e132ea39dae
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-7692f7d2-ed98-4ea3-8ad4-9d15256de067
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-d23dd12a-de76-40c9-9498-79a00979a958
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-ea088c7a-85e0-49c9-9084-72fcff540fd6
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-f77d89d7-d337-4d4e-97be-2c5f19f848d5
ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-242127e3-e8a7-4374-9599-80c22d20b232
ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-61cfc817-2b63-4396-894e-8620235946c0
ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-70a5e8da-2e99-4ab5-82bf-08167e37b100
ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-7a589a54-19c1-4c23-941c-de4d0e4b6066
ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7-7dab868b-8337-45e1-a4c5-2320fe317378
ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7-fcb0edb7-2e07-474c-8d08-d6c148ddafb2
ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7-1f82ec3a-1523-4c40-bf8d-fde9b3f49449
ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7-df760e0b-e673-4c7c-90d8-98b40b2a51f0
STEP: Create DPA CR @ 09/01/25 08:37:29.202
2025/09/01 08:37:29 restic
2025/09/01 08:37:29 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "dcad45fb-0666-4f87-977c-7d24e8502681", "resourceVersion": "115857", "generation": 1, "creationTimestamp": "2025-09-01T08:37:29Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:37:29Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "restic" } }, "features": null, "logFormat": "text" }, "status": {} }
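For readability, the DPA the test posts above can be written as YAML. This is a sketch rebuilt only from the fields visible in that JSON (the kind and apiVersion follow the standard OADP DataProtectionApplication CRD):

cat <<'EOF' | oc apply -f -
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: ts-dpa
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        config:
          region: us-east-1
        credential:
          name: cloud-credentials
          key: cloud
        objectStorage:
          bucket: ci-op-cl9vhfrj-interopoadp
          prefix: velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
        - kubevirt
      disableFsBackup: false
    nodeAgent:
      enable: true
      uploaderType: restic
  logFormat: text
EOF

With nodeAgent.enable set to true and uploaderType restic, the "Checking for correct number of running NodeAgent pods" step that follows expects the node-agent DaemonSet in openshift-adp to have one pod per schedulable node.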
"config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "restic" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 09/01/25 08:37:29.222 2025/09/01 08:37:29 Waiting for velero pod to be running 2025/09/01 08:37:34 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 08:37:34.242 2025/09/01 08:37:34 Checking for correct number of running NodeAgent pods... STEP: Installing application for case cassandra-hooks-e2e @ 09/01/25 08:37:34.339 2025/09/01 08:37:34 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not 
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check namespace] ***
ok: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create namespace] ***
changed: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Add scc privileged to service account] ***
changed: [localhost]

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a service object required to provide network identity] ***
changed: [localhost]
[WARNING]: unknown field "spec.volumeClaimTemplates[0].labels"

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Create a statefulset with the existing yaml] ***
changed: [localhost]
FAILED - RETRYING: [localhost]: Check pods status (30 retries left).

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Check pods status] ***
ok: [localhost]
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (30 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (29 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (28 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (27 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (26 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (25 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (24 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (23 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (22 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (21 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (20 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (19 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (18 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (17 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (16 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (15 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (14 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (13 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (12 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (11 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (10 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (9 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (8 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (7 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (6 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (5 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (4 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (3 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (2 retries left).
FAILED - RETRYING: [localhost]: Wait until all cassandra node are ready (Status=Up and State=Normal) (1 retries left).

TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Wait until all cassandra node are ready (Status=Up and State=Normal)] ***
fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "oc --server https://api.ci-op-cl9vhfrj-b2a90.cspilp.interop.ccitredhat.com:6443 --token sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E -n test-oadp-92 exec -it cassandra-0 -- nodetool status", "delta": "0:00:00.173165", "end": "2025-09-01 08:40:46.113616", "msg": "non-zero return code", "rc": 1, "start": "2025-09-01 08:40:45.940451", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\nerror: unable to upgrade connection: container not found (\"cassandra\")", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "error: unable to upgrade connection: container not found (\"cassandra\")"], "stdout": "", "stdout_lines": []}

PLAY RECAP *********************************************************************
localhost                  : ok=21   changed=8    unreachable=0    failed=1    skipped=2    rescued=0    ignored=0
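The role source for the wait task is not shown in this log; judging from the 30 attempts spread over roughly three minutes, a shell equivalent of what it runs would look something like this sketch:

# Retry nodetool until all three nodes report Up/Normal ("UN" rows), at most 30 attempts
for attempt in $(seq 1 30); do
  out=$(oc -n test-oadp-92 exec cassandra-0 -c cassandra -- nodetool status 2>/dev/null) &&
    [ "$(printf '%s\n' "$out" | grep -c '^UN')" -eq 3 ] && break
  sleep 5
done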
Run the command: oc get event -n test-oadp-92
2025/09/01 08:40:46
LAST SEEN  TYPE     REASON                OBJECT           MESSAGE
7m38s      Warning  FailedScheduling      pod/cassandra-0  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m4s       Warning  FailedScheduling      pod/cassandra-0  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m4s       Warning  FailedScheduling      pod/cassandra-0  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m4s       Normal   Scheduled             pod/cassandra-0  Successfully assigned test-oadp-92/cassandra-0 to ip-10-0-99-76.ec2.internal
3m4s       Normal   SuccessfulAttachVolume  pod/cassandra-0  AttachVolume.Attach succeeded for volume "pvc-924269c0-a2ab-4fc9-aa47-609f98be410f"
3m2s       Normal   AddedInterface        pod/cassandra-0  Add eth0 [10.128.2.141/23] from ovn-kubernetes
59s        Normal   Pulling               pod/cassandra-0  Pulling image "quay.io/migqe/cassandra:multiarch"
3m2s       Normal   Pulled                pod/cassandra-0  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 475ms (475ms including waiting). Image size: 307783610 bytes.
59s        Normal   Created               pod/cassandra-0  Created container: cassandra
58s        Normal   Started               pod/cassandra-0  Started container cassandra
2m55s      Normal   Pulled                pod/cassandra-0  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 577ms (577ms including waiting). Image size: 307783610 bytes.
2s         Warning  BackOff               pod/cassandra-0  Back-off restarting failed container cassandra in pod cassandra-0_test-oadp-92(3599ec06-5637-48e2-8e0a-53c1773c0d5b)
2m33s      Normal   Pulled                pod/cassandra-0  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 427ms (427ms including waiting). Image size: 307783610 bytes.
118s       Normal   Pulled                pod/cassandra-0  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 531ms (531ms including waiting). Image size: 307783610 bytes.
59s        Normal   Pulled                pod/cassandra-0  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 387ms (387ms including waiting). Image size: 307783610 bytes.
7m35s      Warning  FailedScheduling      pod/cassandra-1  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m         Warning  FailedScheduling      pod/cassandra-1  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m         Warning  FailedScheduling      pod/cassandra-1  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
3m         Normal   Scheduled             pod/cassandra-1  Successfully assigned test-oadp-92/cassandra-1 to ip-10-0-56-118.ec2.internal
3m         Normal   SuccessfulAttachVolume  pod/cassandra-1  AttachVolume.Attach succeeded for volume "pvc-0cd5bb26-add6-4e29-a8dc-e738e2fde900"
2m53s      Normal   AddedInterface        pod/cassandra-1  Add eth0 [10.131.0.73/23] from ovn-kubernetes
66s        Normal   Pulling               pod/cassandra-1  Pulling image "quay.io/migqe/cassandra:multiarch"
2m53s      Normal   Pulled                pod/cassandra-1  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 399ms (399ms including waiting). Image size: 307783610 bytes.
65s        Normal   Created               pod/cassandra-1  Created container: cassandra
65s        Normal   Started               pod/cassandra-1  Started container cassandra
2m44s      Normal   Pulled                pod/cassandra-1  Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 540ms (541ms including waiting). Image size: 307783610 bytes.
7s Warning BackOff pod/cassandra-1 Back-off restarting failed container cassandra in pod cassandra-1_test-oadp-92(89095108-9c8c-43c6-986b-d3aad7625e71) 2m24s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 569ms (569ms including waiting). Image size: 307783610 bytes. 115s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 638ms (638ms including waiting). Image size: 307783610 bytes. 66s Normal Pulled pod/cassandra-1 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 509ms (509ms including waiting). Image size: 307783610 bytes. 7m27s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m51s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m51s Warning FailedScheduling pod/cassandra-2 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 2m51s Normal Scheduled pod/cassandra-2 Successfully assigned test-oadp-92/cassandra-2 to ip-10-0-93-94.ec2.internal 2m51s Normal SuccessfulAttachVolume pod/cassandra-2 AttachVolume.Attach succeeded for volume "pvc-1828dea1-c20f-4187-9d19-1dd0820896c8" 2m43s Normal AddedInterface pod/cassandra-2 Add eth0 [10.129.2.84/23] from ovn-kubernetes 58s Normal Pulling pod/cassandra-2 Pulling image "quay.io/migqe/cassandra:multiarch" 2m43s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 411ms (411ms including waiting). Image size: 307783610 bytes. 57s Normal Created pod/cassandra-2 Created container: cassandra 57s Normal Started pod/cassandra-2 Started container cassandra 2m37s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 429ms (429ms including waiting). Image size: 307783610 bytes. 12s Warning BackOff pod/cassandra-2 Back-off restarting failed container cassandra in pod cassandra-2_test-oadp-92(78063e09-ad8d-49a8-970a-d25a3a265ac9) 2m17s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 431ms (431ms including waiting). Image size: 307783610 bytes. 107s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 412ms (412ms including waiting). Image size: 307783610 bytes. 57s Normal Pulled pod/cassandra-2 Successfully pulled image "quay.io/migqe/cassandra:multiarch" in 467ms (467ms including waiting). Image size: 307783610 bytes. 3m5s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-0 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. 
3m5s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-0 External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-0" 3m4s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-0 Successfully provisioned volume pvc-924269c0-a2ab-4fc9-aa47-609f98be410f 3m1s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-1 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. 3m1s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-1 External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-1" 3m1s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-1 Successfully provisioned volume pvc-0cd5bb26-add6-4e29-a8dc-e738e2fde900 2m52s Normal Provisioning persistentvolumeclaim/cassandra-data-cassandra-2 External provisioner is provisioning volume for claim "test-oadp-92/cassandra-data-cassandra-2" 2m52s Normal ExternalProvisioning persistentvolumeclaim/cassandra-data-cassandra-2 Waiting for a volume to be created either by the external provisioner 'openshift-storage.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. 2m51s Normal ProvisioningSucceeded persistentvolumeclaim/cassandra-data-cassandra-2 Successfully provisioned volume pvc-1828dea1-c20f-4187-9d19-1dd0820896c8 3m5s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-0 Pod cassandra-0 in StatefulSet cassandra success 3m5s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-0 in StatefulSet cassandra successful 3m1s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-1 Pod cassandra-1 in StatefulSet cassandra success 3m1s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-1 in StatefulSet cassandra successful 2m52s Normal SuccessfulCreate statefulset/cassandra create Claim cassandra-data-cassandra-2 Pod cassandra-2 in StatefulSet cassandra success 2m52s Normal SuccessfulCreate statefulset/cassandra create Pod cassandra-2 in StatefulSet cassandra successful [FAILED] in [It] - /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 09/01/25 08:40:46.295 < Exit [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic @ 09/01/25 08:40:46.295 (3m18.531s) > Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 08:40:46.295 2025/09/01 08:40:46 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 STEP: Get the failed spec name @ 09/01/25 08:40:46.295 2025/09/01 08:40:46 The failed spec name is: Backup hooks tests Pre exec hook [tc-id:OADP-92][interop][smoke] Cassandra app with Restic STEP: Create a folder for all must-gather files if it doesn't exists already @ 09/01/25 08:40:46.295 STEP: Create a folder for the failed spec if it doesn't exists already @ 09/01/25 08:40:46.295 STEP: Run must-gather because the spec failed @ 09/01/25 08:40:46.295 2025/09/01 08:40:46 Log the present working directory path:- /alabama/cspi/e2e 2025/09/01 08:40:46 [adm must-gather --dest-dir /alabama/cspi/e2e/logs/It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic --image registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0] 2025/09/01 08:41:36 Log all the files 
present in /alabama/cspi/e2e/logs directory 2025/09/01 08:41:36 It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic 2025/09/01 08:41:36 It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application STEP: Find must-gather folder and rename it to a shorter more readable name @ 09/01/25 08:41:36.067 The folder logs/It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic/must-gather already exists, skipping renaming the folder < Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 08:41:36.067 (49.772s) > Enter [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:41:36.067 2025/09/01 08:41:36 Cleaning app 2025/09/01 08:41:36 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
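The event sequence above is a classic StatefulSet failure signature: the FailedScheduling warnings clear as soon as the RBD provisioner binds the PVCs, the pods start, and then the cassandra container crash-loops, which is why the `oc exec ... nodetool status` readiness check dies with `container not found`. A minimal triage sketch for this pattern, using the namespace and pod names from the log (it has to run before the cleanup play below removes the namespace):

```bash
# Confirm the PVCs are Bound and see the current container state.
oc get pvc,pods -n test-oadp-92 -o wide

# The scheduler warnings were transient; the real signal is the crash loop,
# so pull the logs of the previous (crashed) container instance.
oc logs cassandra-0 -n test-oadp-92 -c cassandra --previous

# Show the last state and exit code of the crashed container.
oc describe pod cassandra-0 -n test-oadp-92

# Re-run the readiness probe without -t: the "Unable to use a TTY" line
# appears because the playbook passed -it from a non-interactive shell.
oc exec cassandra-0 -n test-oadp-92 -- nodetool status
```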
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra : Remove namespace test-oadp-92] ***
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0
< Exit [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:42:05.599 (29.532s)
> Enter [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:42:05.599
2025/09/01 08:42:05 Cleaning setup resources for the backup
< Exit [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:42:05.6 (0s)
> Enter [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:42:05.6
< Exit [DeferCleanup (Each)] Pre exec hook @ 09/01/25 08:42:05.627 (28ms)
• [FAILED] [832.977 seconds]
Backup hooks tests Pre exec hook [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic
/alabama/cspi/e2e/hooks/backup_hooks.go:113
  [FAILED] Unexpected error:
      <*errors.Error | 0xc00088e000>:
      Error during command execution: ansible-playbook error: one or more host failed
      Command executed: /usr/local/bin/ansible-playbook --extra-vars {"admin_kubeconfig":"/home/jenkins/.kube/config","namespace":"test-oadp-92","non_admin_user":false,"use_role":"/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra","user_kubeconfig":"/home/jenkins/.kube/config","with_deploy":true} --connection local /alabama/cspi/sample-applications/ansible/main.yml
      exit status 2
      {
          context: "(DefaultExecute::Execute)",
          message: "Error during command execution: ansible-playbook error: one or more host failed\n\nCommand executed: /usr/local/bin/ansible-playbook --extra-vars {\"admin_kubeconfig\":\"/home/jenkins/.kube/config\",\"namespace\":\"test-oadp-92\",\"non_admin_user\":false,\"use_role\":\"/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-cassandra\",\"user_kubeconfig\":\"/home/jenkins/.kube/config\",\"with_deploy\":true} --connection local /alabama/cspi/sample-applications/ansible/main.yml\n\nexit status 2",
          wrappedErrors: nil,
      }
  occurred
  In [It] at: /alabama/cspi/test_common/backup_restore_app_case.go:46 @ 09/01/25 08:40:46.295
  There were additional failures detected. To view them in detail run ginkgo -vv
------------------------------
SSSSSSSSSSSSSSS
------------------------------
Incremental backup restore tests Incremental restore pod count [tc-id:OADP-165][interop] Todolist app with CSI - policy: update
/alabama/cspi/e2e/incremental_restore/backup_restore_incremental.go:94
> Enter [BeforeEach] Incremental backup restore tests @ 09/01/25 08:42:05.627
< Exit [BeforeEach] Incremental backup restore tests @ 09/01/25 08:42:05.637 (9ms)
> Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:42:05.637
< Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:42:05.637 (0s)
> Enter [It] [tc-id:OADP-165][interop] Todolist app with CSI - policy: update @ 09/01/25 08:42:05.637
2025/09/01 08:42:05 Delete all downloadrequest
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-2b82ad65-9543-4b6a-89e6-b9edc6a664d1
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-2e7655df-8f54-4d35-8a87-a091cb00f2d2
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-3a69b85c-fb0a-4f32-8091-661c027ef9cb
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-9c3e1539-0120-45b1-85ef-aa4070aa4c20
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-b0c435f3-32bc-4b05-a816-8293b96818d2
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-b15a3628-14a9-4327-bd7a-d389411c47f2
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-bcd962e8-ee8a-451c-a693-469a9fa08b7b
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-c7b34144-0ca1-4845-835a-4146a412addf
mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7-fd672e60-cad8-41c7-ae49-80b9d5d893e2
ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-0ee5f240-ae2e-4d0c-8938-8d826af29da6
ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-442737e9-098b-4f5f-928d-de573141deee
ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-8546fd5d-a8bf-4d7d-b9a3-f7d64769d31d
ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7-f288394f-6095-43ab-9606-b0b95252212c
ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7-3d4861c8-b048-456e-b18d-2e1762ce8e01
ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7-acffb97f-5b4b-4aef-905c-139f5a4bb0a5
ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7-14b52caf-7d6f-4b0a-b49b-de5edb1a3067
ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7-99a18d07-6241-4e71-a593-af6df01034d4
STEP: Create DPA CR @ 09/01/25 08:42:07.062
2025/09/01 08:42:07 csi
2025/09/01 08:42:07 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "683032b8-ea5f-44a2-908b-544fce19d73f", "resourceVersion": "120731", "generation": 1, "creationTimestamp": "2025-09-01T08:42:07Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:42:07Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt", "csi" ], "disableFsBackup": false } }, "features": null, "logFormat": "text" }, "status": {} }
Delete all the backups that remained in the phase InProgress
Deleting backup CRs in progress
Deletion of backup CRs in progress completed
Delete all the restores that remained in the phase InProgress
Deleting restore CRs in progress
Deletion of restore CRs in progress completed
STEP: Verify DPA CR setup @ 09/01/25 08:42:07.087
2025/09/01 08:42:07 Waiting for velero pod to be running
2025/09/01 08:42:12 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete'
STEP: Installing application for case todolist-backup @ 09/01/25 08:42:12.136
2025/09/01 08:42:12 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check namespace todolist-mariadb-csi-policy-update] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Create namespace todolist-mariadb-csi-policy-update] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Ensure namespace todolist-mariadb-csi-policy-update is present] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Deploy todolist-mysql application] ***
changed: [localhost]
FAILED - RETRYING: [localhost]: Check mysql pod status (30 retries left).
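For reference, the "Verify DPA CR setup" step above amounts to waiting on the operator's status conditions and on the Velero pod. A manual equivalent, as a sketch (the DPA name and namespace come from the log; the condition type and the deployment name "velero" are the usual OADP conventions and are assumptions here, not taken from this run):

```bash
# Check that the DPA reconciled successfully.
oc get dpa ts-dpa -n openshift-adp \
  -o jsonpath='{.status.conditions[?(@.type=="Reconciled")].message}{"\n"}'

# Wait for the operator-managed Velero deployment to become available.
oc rollout status deployment/velero -n openshift-adp --timeout=120s
```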
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod status] ***
ok: [localhost]
FAILED - RETRYING: [localhost]: Check todolist pod status (30 retries left).
FAILED - RETRYING: [localhost]: Check todolist pod status (29 retries left).
FAILED - RETRYING: [localhost]: Check todolist pod status (28 retries left).
FAILED - RETRYING: [localhost]: Check todolist pod status (27 retries left).
FAILED - RETRYING: [localhost]: Check todolist pod status (26 retries left).
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod status] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until service is ready for connections] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until todolist API server starts] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Add additional items todo list] ***
changed: [localhost]
Pausing for 30 seconds
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait for 30 seconds] ***
ok: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=25 changed=9 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
STEP: Verify Application deployment @ 09/01/25 08:43:12.686
2025/09/01 08:43:12 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
    "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
    "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
    "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Validating todolist] ***
included: /alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb/tasks/validation_task.yml for localhost
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
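The validation role that runs next checks the application end to end: pods Running, the database answering, and the seeded todo item visible through the route. A rough manual equivalent, as a sketch (the route name matches the `todolist-route` object in the backup's resource list further down; the probed path is illustrative, not taken from this log):

```bash
# Resolve the exposed host of the todolist route and probe the API.
HOST=$(oc get route todolist-route -n todolist-mariadb-csi-policy-update \
  -o jsonpath='{.spec.host}')
curl -fsS "http://${HOST}/" >/dev/null && echo "todolist API reachable"
```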
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod is running] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until mysql service ready for connections] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod is running] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until todolist API server starts] ***
changed: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Obtain todolist route] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find 1st database item] ***
ok: [localhost]
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find the string in incomplete items] ***
ok: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=23 changed=6 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 08:43:21.757
Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false
2025/09/01 08:43:21 The 'openshift-storage' namespace exists
2025/09/01 08:43:21 Checking default storage class count
2025/09/01 08:43:21 Using the CSI driver: openshift-storage.rbd.csi.ceph.com
2025/09/01 08:43:21 Snapclass 'example-snapclass' doesn't exist, creating
2025/09/01 08:43:21 Setting new default StorageClass 'odf-operator-ceph-rbd'
2025/09/01 08:43:21 Checking default storage class count
Skipping creation of StorageClass
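The snapclass and default-StorageClass bookkeeping above prepares CSI snapshots for the backup. A sketch of the same preparation done by hand (the driver and class names come from the log; the Velero selection label and the Retain policy follow common OADP guidance and are assumptions here):

```bash
# Create a VolumeSnapshotClass that Velero can select for CSI snapshots.
cat <<'EOF' | oc apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: openshift-storage.rbd.csi.ceph.com
deletionPolicy: Retain
EOF

# Make the ODF RBD class the default StorageClass.
oc annotate storageclass odf-operator-ceph-rbd \
  storageclass.kubernetes.io/is-default-class=true --overwrite
```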
The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/09/01 08:43:21 {{ } { } [{{ } {mysql todolist-mariadb-csi-policy-update d4dbbdbd-bf9f-4232-8280-4054e10cf6bf 121036 0 2025-09-01 08:42:17 +0000 UTC map[app:mysql] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-1756716138 reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-09-01 08:42:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-09-01 08:42:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-09-01 08:42:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-09-01 08:42:18 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status}]} {[ReadWriteOnce] nil {map[] map[storage:{{1073741824 0} {} 1Gi BinarySI}]} pvc-d4dbbdbd-bf9f-4232-8280-4054e10cf6bf 0xc0005da200 0xc0005da270 nil nil } {Bound [ReadWriteOnce] map[storage:{{1073741824 0} {} 1Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7 @ 09/01/25 08:43:21.995 2025/09/01 08:43:22 Wait until backup todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7 is completed backup phase: Completed 2025/09/01 08:43:42 Verify the Backup has CSIVolumeSnapshotsAttempted and CSIVolumeSnapshotsCompleted field on status 2025/09/01 08:43:42 Run velero describe on the backup 2025/09/01 08:43:42 [./velero describe backup todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7 -n openshift-adp --details --insecure-skip-tls-verify] 2025/09/01 08:43:42 Exec stderr: "" 2025/09/01 08:43:42 Name: todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7 Namespace: openshift-adp Labels: velero.io/storage-location=ts-dpa-1 Annotations: velero.io/resource-timeout=10m0s velero.io/source-cluster-k8s-gitversion=v1.33.3 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=33 Phase: Completed Namespaces: Included: todolist-mariadb-csi-policy-update Excluded: Resources: Included: * Excluded: Cluster-scoped: auto Label selector: Or label selector: Storage Location: ts-dpa-1 Velero-Native Snapshot PVs: auto Snapshot Move Data: false Data Mover: velero TTL: 720h0m0s CSISnapshotTimeout: 10m0s ItemOperationTimeout: 4h0m0s Hooks: Backup Format Version: 1.1.0 Started: 2025-09-01 08:43:22 +0000 UTC Completed: 2025-09-01 08:43:30 +0000 UTC Expiration: 2025-10-01 08:43:22 +0000 UTC Total items to be backed up: 65 Items backed up: 65 Backup Item Operations: Operation for volumesnapshots.snapshot.storage.k8s.io todolist-mariadb-csi-policy-update/velero-mysql-fdrg8: Backup Item Action Plugin: velero.io/csi-volumesnapshot-backupper Operation ID: 
todolist-mariadb-csi-policy-update/velero-mysql-fdrg8/2025-09-01T08:43:28Z Items to Update: volumesnapshots.snapshot.storage.k8s.io todolist-mariadb-csi-policy-update/velero-mysql-fdrg8 volumesnapshotcontents.snapshot.storage.k8s.io /snapcontent-b326c174-5dbe-4cfe-9f76-f31cd88bfb5f Phase: Completed Created: 2025-09-01 08:43:28 +0000 UTC Started: 2025-09-01 08:43:28 +0000 UTC Updated: 2025-09-01 08:43:29 +0000 UTC Resource List: apiextensions.k8s.io/v1/CustomResourceDefinition: - reclaimspacecronjobs.csiaddons.openshift.io - securitycontextconstraints.security.openshift.io apps/v1/Deployment: - todolist-mariadb-csi-policy-update/mysql - todolist-mariadb-csi-policy-update/todolist apps/v1/ReplicaSet: - todolist-mariadb-csi-policy-update/mysql-86bc866cfb - todolist-mariadb-csi-policy-update/todolist-6d856b79d authorization.openshift.io/v1/RoleBinding: - todolist-mariadb-csi-policy-update/admin - todolist-mariadb-csi-policy-update/system:deployers - todolist-mariadb-csi-policy-update/system:image-builders - todolist-mariadb-csi-policy-update/system:image-pullers csiaddons.openshift.io/v1alpha1/ReclaimSpaceCronJob: - todolist-mariadb-csi-policy-update/mysql-1756716138 discovery.k8s.io/v1/EndpointSlice: - todolist-mariadb-csi-policy-update/mysql-94c5c - todolist-mariadb-csi-policy-update/todolist-h2rhk rbac.authorization.k8s.io/v1/RoleBinding: - todolist-mariadb-csi-policy-update/admin - todolist-mariadb-csi-policy-update/system:deployers - todolist-mariadb-csi-policy-update/system:image-builders - todolist-mariadb-csi-policy-update/system:image-pullers route.openshift.io/v1/Route: - todolist-mariadb-csi-policy-update/todolist-route security.openshift.io/v1/SecurityContextConstraints: - todolist-mariadb-csi-policy-update-scc snapshot.storage.k8s.io/v1/VolumeSnapshot: - todolist-mariadb-csi-policy-update/velero-mysql-fdrg8 snapshot.storage.k8s.io/v1/VolumeSnapshotClass: - example-snapclass snapshot.storage.k8s.io/v1/VolumeSnapshotContent: - snapcontent-b326c174-5dbe-4cfe-9f76-f31cd88bfb5f v1/ConfigMap: - todolist-mariadb-csi-policy-update/kube-root-ca.crt - todolist-mariadb-csi-policy-update/openshift-service-ca.crt v1/Endpoints: - todolist-mariadb-csi-policy-update/mysql - todolist-mariadb-csi-policy-update/todolist v1/Event: - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c0526b91440 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c05308de9d6 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c0533c2d556 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c0558893623 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c05a64e0729 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c05a7cfcd85 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c05b2a994ff - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c05b78098d5 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb.18611c0526a43d42 - todolist-mariadb-csi-policy-update/mysql.18611c052279d2dd - todolist-mariadb-csi-policy-update/mysql.18611c0522abd75d - todolist-mariadb-csi-policy-update/mysql.18611c05259a3ed3 - todolist-mariadb-csi-policy-update/mysql.18611c053075768c - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c052f903c94 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c054df69d27 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c054fff9237 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c055b5dea4d - 
todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c05607417ca - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c09d65898f9 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c0a254eaa4d - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c0a30828cf3 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c0a3579552a - todolist-mariadb-csi-policy-update/todolist-6d856b79d.18611c052ddd5f6f - todolist-mariadb-csi-policy-update/todolist.18611c052c386cec v1/Namespace: - todolist-mariadb-csi-policy-update v1/PersistentVolume: - pvc-d4dbbdbd-bf9f-4232-8280-4054e10cf6bf v1/PersistentVolumeClaim: - todolist-mariadb-csi-policy-update/mysql v1/Pod: - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4 v1/Secret: - todolist-mariadb-csi-policy-update/builder-dockercfg-sh7ct - todolist-mariadb-csi-policy-update/default-dockercfg-kcktt - todolist-mariadb-csi-policy-update/deployer-dockercfg-v2cw6 - todolist-mariadb-csi-policy-update/todolist-mariadb-csi-policy-update-sa-dockercfg-2hf5w v1/Service: - todolist-mariadb-csi-policy-update/mysql - todolist-mariadb-csi-policy-update/todolist v1/ServiceAccount: - todolist-mariadb-csi-policy-update/builder - todolist-mariadb-csi-policy-update/default - todolist-mariadb-csi-policy-update/deployer - todolist-mariadb-csi-policy-update/todolist-mariadb-csi-policy-update-sa Backup Volumes: Velero-Native Snapshots: CSI Snapshots: todolist-mariadb-csi-policy-update/mysql: Snapshot: Operation ID: todolist-mariadb-csi-policy-update/velero-mysql-fdrg8/2025-09-01T08:43:28Z Snapshot Content Name: snapcontent-b326c174-5dbe-4cfe-9f76-f31cd88bfb5f Storage Snapshot ID: 0001-0011-openshift-storage-0000000000000002-8f74df89-bf26-4d9f-b5c9-199d372cdbfc Snapshot Size (bytes): 1073741824 CSI Driver: openshift-storage.rbd.csi.ceph.com Result: succeeded Pod Volume Backups: HooksAttempted: 0 HooksFailed: 0 STEP: Verify backup todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7 has completed successfully @ 09/01/25 08:43:42.826 2025/09/01 08:43:42 Backup for case todolist-backup succeeded STEP: Scale application @ 09/01/25 08:43:42.884 2025/09/01 08:43:42 Scaling deployment 'todolist' to 2 replicas 2025/09/01 08:43:42 Deployment updated successfully 2025/09/01 08:43:42 number of running pods: 1 2025/09/01 08:43:47 number of running pods: 1 2025/09/01 08:43:52 Application reached target number of replicas: 2 STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 08:43:52.951 Run the command: oc get ns openshift-storage &> /dev/null && echo true || echo false 2025/09/01 08:43:53 The 'openshift-storage' namespace exists 2025/09/01 08:43:53 Checking default storage class count 2025/09/01 08:43:53 Using the CSI driver: openshift-storage.rbd.csi.ceph.com 2025/09/01 08:43:53 Snapclass 'example-snapclass' already exists, skip creating 2025/09/01 08:43:53 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/09/01 08:43:53 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/09/01 08:43:53 {{ } { } [{{ } {mysql todolist-mariadb-csi-policy-update d4dbbdbd-bf9f-4232-8280-4054e10cf6bf 122457 0 2025-09-01 08:42:17 +0000 UTC map[app:mysql] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-1756716138 
reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-09-01 08:42:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-09-01 08:42:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-09-01 08:42:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-09-01 08:42:18 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status}]} {[ReadWriteOnce] nil {map[] map[storage:{{1073741824 0} {} 1Gi BinarySI}]} pvc-d4dbbdbd-bf9f-4232-8280-4054e10cf6bf 0xc000c1c550 0xc000c1c560 nil nil } {Bound [ReadWriteOnce] map[storage:{{1073741824 0} {} 1Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7 @ 09/01/25 08:43:53.277 2025/09/01 08:43:53 Wait until backup todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7 is completed backup phase: Completed 2025/09/01 08:44:13 Verify the Backup has CSIVolumeSnapshotsAttempted and CSIVolumeSnapshotsCompleted field on status 2025/09/01 08:44:13 Run velero describe on the backup 2025/09/01 08:44:13 [./velero describe backup todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7 -n openshift-adp --details --insecure-skip-tls-verify] 2025/09/01 08:44:14 Exec stderr: "" 2025/09/01 08:44:14 Name: todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7 Namespace: openshift-adp Labels: velero.io/storage-location=ts-dpa-1 Annotations: velero.io/resource-timeout=10m0s velero.io/source-cluster-k8s-gitversion=v1.33.3 velero.io/source-cluster-k8s-major-version=1 velero.io/source-cluster-k8s-minor-version=33 Phase: Completed Namespaces: Included: todolist-mariadb-csi-policy-update Excluded: Resources: Included: * Excluded: Cluster-scoped: auto Label selector: Or label selector: Storage Location: ts-dpa-1 Velero-Native Snapshot PVs: auto Snapshot Move Data: false Data Mover: velero TTL: 720h0m0s CSISnapshotTimeout: 10m0s ItemOperationTimeout: 4h0m0s Hooks: Backup Format Version: 1.1.0 Started: 2025-09-01 08:43:53 +0000 UTC Completed: 2025-09-01 08:44:01 +0000 UTC Expiration: 2025-10-01 08:43:53 +0000 UTC Total items to be backed up: 81 Items backed up: 81 Backup Item Operations: Operation for volumesnapshots.snapshot.storage.k8s.io todolist-mariadb-csi-policy-update/velero-mysql-jpcb4: Backup Item Action Plugin: velero.io/csi-volumesnapshot-backupper Operation ID: todolist-mariadb-csi-policy-update/velero-mysql-jpcb4/2025-09-01T08:43:59Z Items to Update: volumesnapshots.snapshot.storage.k8s.io todolist-mariadb-csi-policy-update/velero-mysql-jpcb4 volumesnapshotcontents.snapshot.storage.k8s.io /snapcontent-fda56b09-2774-40ea-9377-9446dbb43f9e Phase: Completed Created: 2025-09-01 08:43:59 +0000 UTC Started: 2025-09-01 08:43:59 +0000 UTC Updated: 2025-09-01 08:44:00 +0000 UTC Resource List: 
apiextensions.k8s.io/v1/CustomResourceDefinition: - reclaimspacecronjobs.csiaddons.openshift.io - securitycontextconstraints.security.openshift.io apps/v1/Deployment: - todolist-mariadb-csi-policy-update/mysql - todolist-mariadb-csi-policy-update/todolist apps/v1/ReplicaSet: - todolist-mariadb-csi-policy-update/mysql-86bc866cfb - todolist-mariadb-csi-policy-update/todolist-6d856b79d authorization.openshift.io/v1/RoleBinding: - todolist-mariadb-csi-policy-update/admin - todolist-mariadb-csi-policy-update/system:deployers - todolist-mariadb-csi-policy-update/system:image-builders - todolist-mariadb-csi-policy-update/system:image-pullers csiaddons.openshift.io/v1alpha1/ReclaimSpaceCronJob: - todolist-mariadb-csi-policy-update/mysql-1756716138 discovery.k8s.io/v1/EndpointSlice: - todolist-mariadb-csi-policy-update/mysql-94c5c - todolist-mariadb-csi-policy-update/todolist-h2rhk rbac.authorization.k8s.io/v1/RoleBinding: - todolist-mariadb-csi-policy-update/admin - todolist-mariadb-csi-policy-update/system:deployers - todolist-mariadb-csi-policy-update/system:image-builders - todolist-mariadb-csi-policy-update/system:image-pullers route.openshift.io/v1/Route: - todolist-mariadb-csi-policy-update/todolist-route security.openshift.io/v1/SecurityContextConstraints: - todolist-mariadb-csi-policy-update-scc snapshot.storage.k8s.io/v1/VolumeSnapshot: - todolist-mariadb-csi-policy-update/velero-mysql-jpcb4 snapshot.storage.k8s.io/v1/VolumeSnapshotClass: - example-snapclass snapshot.storage.k8s.io/v1/VolumeSnapshotContent: - snapcontent-fda56b09-2774-40ea-9377-9446dbb43f9e v1/ConfigMap: - todolist-mariadb-csi-policy-update/kube-root-ca.crt - todolist-mariadb-csi-policy-update/openshift-service-ca.crt v1/Endpoints: - todolist-mariadb-csi-policy-update/mysql - todolist-mariadb-csi-policy-update/todolist v1/Event: - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c0526b91440 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c05308de9d6 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c0533c2d556 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c0558893623 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c05a64e0729 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c05a7cfcd85 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c05b2a994ff - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf.18611c05b78098d5 - todolist-mariadb-csi-policy-update/mysql-86bc866cfb.18611c0526a43d42 - todolist-mariadb-csi-policy-update/mysql.18611c052279d2dd - todolist-mariadb-csi-policy-update/mysql.18611c0522abd75d - todolist-mariadb-csi-policy-update/mysql.18611c05259a3ed3 - todolist-mariadb-csi-policy-update/mysql.18611c053075768c - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c052f903c94 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c054df69d27 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c054fff9237 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c055b5dea4d - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c05607417ca - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c09d65898f9 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c0a254eaa4d - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c0a30828cf3 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4.18611c0a3579552a - todolist-mariadb-csi-policy-update/todolist-6d856b79d-xgfn5.18611c18e7e47094 - 
todolist-mariadb-csi-policy-update/todolist-6d856b79d-xgfn5.18611c19133c7392 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-xgfn5.18611c1914a14b12 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-xgfn5.18611c1a4d49bb3f - todolist-mariadb-csi-policy-update/todolist-6d856b79d-xgfn5.18611c1a57aa70b5 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-xgfn5.18611c1a5caa927a - todolist-mariadb-csi-policy-update/todolist-6d856b79d-xgfn5.18611c1a7526f7f4 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-xgfn5.18611c1ab731429b - todolist-mariadb-csi-policy-update/todolist-6d856b79d-xgfn5.18611c1ac1c7d6d1 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-xgfn5.18611c1ac6d41684 - todolist-mariadb-csi-policy-update/todolist-6d856b79d.18611c052ddd5f6f - todolist-mariadb-csi-policy-update/todolist-6d856b79d.18611c18e79fa0c1 - todolist-mariadb-csi-policy-update/todolist.18611c052c386cec - todolist-mariadb-csi-policy-update/todolist.18611c18e5747b58 - todolist-mariadb-csi-policy-update/velero-mysql-fdrg8.18611c147aa5b2ff - todolist-mariadb-csi-policy-update/velero-mysql-fdrg8.18611c1504507de4 - todolist-mariadb-csi-policy-update/velero-mysql-fdrg8.18611c1504513005 v1/Namespace: - todolist-mariadb-csi-policy-update v1/PersistentVolume: - pvc-d4dbbdbd-bf9f-4232-8280-4054e10cf6bf v1/PersistentVolumeClaim: - todolist-mariadb-csi-policy-update/mysql v1/Pod: - todolist-mariadb-csi-policy-update/mysql-86bc866cfb-c9lcf - todolist-mariadb-csi-policy-update/todolist-6d856b79d-tsbb4 - todolist-mariadb-csi-policy-update/todolist-6d856b79d-xgfn5 v1/Secret: - todolist-mariadb-csi-policy-update/builder-dockercfg-sh7ct - todolist-mariadb-csi-policy-update/default-dockercfg-kcktt - todolist-mariadb-csi-policy-update/deployer-dockercfg-v2cw6 - todolist-mariadb-csi-policy-update/todolist-mariadb-csi-policy-update-sa-dockercfg-2hf5w v1/Service: - todolist-mariadb-csi-policy-update/mysql - todolist-mariadb-csi-policy-update/todolist v1/ServiceAccount: - todolist-mariadb-csi-policy-update/builder - todolist-mariadb-csi-policy-update/default - todolist-mariadb-csi-policy-update/deployer - todolist-mariadb-csi-policy-update/todolist-mariadb-csi-policy-update-sa Backup Volumes: Velero-Native Snapshots: CSI Snapshots: todolist-mariadb-csi-policy-update/mysql: Snapshot: Operation ID: todolist-mariadb-csi-policy-update/velero-mysql-jpcb4/2025-09-01T08:43:59Z Snapshot Content Name: snapcontent-fda56b09-2774-40ea-9377-9446dbb43f9e Storage Snapshot ID: 0001-0011-openshift-storage-0000000000000002-bea81380-365a-4289-8670-290edfa3ada2 Snapshot Size (bytes): 1073741824 CSI Driver: openshift-storage.rbd.csi.ceph.com Result: succeeded Pod Volume Backups: HooksAttempted: 0 HooksFailed: 0 STEP: Verify backup todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7 has completed successfully @ 09/01/25 08:44:14.074 2025/09/01 08:44:14 Backup for case todolist-backup succeeded STEP: Cleanup application and restore 1st backup @ 09/01/25 08:44:14.169 STEP: Delete the application resources todolist-backup @ 09/01/25 08:44:14.169 STEP: Cleanup Application for case todolist-backup @ 09/01/25 08:44:14.169 2025/09/01 08:44:14 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
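For reference, the Backup CR driving the step above can be reconstructed from the velero describe output. This is a minimal sketch, assuming the standard velero.io/v1 schema and using only the field values the describe output confirms (storage location, included namespace, timeouts, TTL):

    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7
      namespace: openshift-adp
    spec:
      includedNamespaces:
        - todolist-mariadb-csi-policy-update
      storageLocation: ts-dpa-1
      snapshotMoveData: false
      csiSnapshotTimeout: 10m0s
      itemOperationTimeout: 4h0m0s
      ttl: 720h0m0s

The Backup Item Operations section above shows the CSI flow in action: the velero.io/csi-volumesnapshot-backupper plugin snapshots the mysql PVC and records the resulting VolumeSnapshot/VolumeSnapshotContent pair so a later restore can re-bind the data.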
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove namespace todolist-mariadb-csi-policy-update] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove todolist-mariadb-csi-policy-update SCC] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=6  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025/09/01 08:44:39 2025-09-01 08:44:15,800 p=38788 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:44:15,800 p=38788 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:44:16,053 p=38788 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:44:16,053 p=38788 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:44:16,311 p=38788 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:44:16,311 p=38788 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:44:16,579 p=38788 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:44:16,580 p=38788 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:44:16,596 p=38788 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:44:16,596 p=38788 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:44:16,617 p=38788 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:44:16,618 p=38788 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:44:16,629 p=38788 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:44:16,629 p=38788 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:44:16,955 p=38788 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:44:16,955 p=38788 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:44:16,983 p=38788 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:44:16,983 p=38788 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:44:17,001 p=38788 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:44:17,001 p=38788 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:44:17,003 p=38788 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:44:17,566 p=38788 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:44:17,566 p=38788 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:44:38,403 p=38788 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove namespace todolist-mariadb-csi-policy-update] *** 2025-09-01 08:44:38,403 p=38788 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:44:38,404 p=38788 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:44:39,261 p=38788 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove todolist-mariadb-csi-policy-update SCC] *** 2025-09-01 08:44:39,262 p=38788 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:44:39,439 p=38788 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:44:39,439 p=38788 u=1002790000 n=ansible INFO| localhost : ok=17 changed=6 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025/09/01 08:44:39 Creating restore todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7 for case todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7 STEP: Create restore todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7 from backup todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7 @ 09/01/25 08:44:39.482 2025/09/01 08:44:39 Wait until restore todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7 is complete restore phase: Finalizing restore phase: Completed STEP: Verify restore todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7 has completed successfully @ 09/01/25 08:44:59.515 STEP: Verify Application restore @ 09/01/25 08:44:59.52 STEP: Verify Application deployment for case todolist-backup @ 09/01/25 08:44:59.52 2025/09/01 08:44:59 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts]
********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Validating todolist] *** included: /alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb/tasks/validation_task.yml for localhost [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod is running] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until mysql service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod is running] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until todolist API server starts] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Obtain todolist route] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find 1st database item] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find the string in incomplete items] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=23  changed=6  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025/09/01 08:45:06 2025-09-01 08:45:01,007 p=39021 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:45:01,007 p=39021 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:01,261 p=39021 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:45:01,261 p=39021 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:01,510 p=39021 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:45:01,510 p=39021 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:01,760 p=39021 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:45:01,760 p=39021 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:01,775 p=39021 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:45:01,775 p=39021 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:01,793 p=39021 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:45:01,793 p=39021 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:01,805 p=39021 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:45:01,805 p=39021 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:45:02,123 p=39021 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:45:02,124 p=39021 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:02,152 p=39021 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] 
************************************* 2025-09-01 08:45:02,152 p=39021 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:02,169 p=39021 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:45:02,169 p=39021 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:02,171 p=39021 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:45:02,735 p=39021 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:45:02,735 p=39021 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:02,956 p=39021 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Validating todolist] *** 2025-09-01 08:45:02,964 p=39021 u=1002790000 n=ansible INFO| included: /alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb/tasks/validation_task.yml for localhost 2025-09-01 08:45:03,772 p=39021 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod is running] *** 2025-09-01 08:45:03,772 p=39021 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-09-01 08:45:03,772 p=39021 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:04,078 p=39021 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until mysql service ready for connections] *** 2025-09-01 08:45:04,078 p=39021 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:04,796 p=39021 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod is running] *** 2025-09-01 08:45:04,796 p=39021 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:05,127 p=39021 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until todolist API server starts] *** 2025-09-01 08:45:05,128 p=39021 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:06,016 p=39021 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Obtain todolist route] *** 2025-09-01 08:45:06,017 p=39021 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:06,425 p=39021 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find 1st database item] *** 2025-09-01 08:45:06,426 p=39021 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:06,733 p=39021 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find the string in incomplete items] *** 2025-09-01 08:45:06,734 p=39021 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:06,739 p=39021 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:45:06,739 p=39021 u=1002790000 n=ansible INFO| localhost : ok=23 changed=6 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025/09/01 08:45:06 Application reached target number of replicas: 1 STEP: Restore 2nd backup with existingResourcePolicy: update @ 09/01/25 08:45:06.791 2025/09/01 08:45:06 Creating restore
todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7 for case todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7 STEP: Create restore todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7 from backup todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7 @ 09/01/25 08:45:06.791 2025/09/01 08:45:06 Wait until restore todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7 is complete restore phase: Completed STEP: Verify restore todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7 has completed successfully @ 09/01/25 08:45:16.813 STEP: Verify Application restore @ 09/01/25 08:45:16.817 STEP: Verify Application deployment for case todolist-backup @ 09/01/25 08:45:16.817 2025/09/01 08:45:16 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Validating todolist] *** included: /alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb/tasks/validation_task.yml for localhost [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
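This second restore is the point of the test case: the application already exists from the first restore, and existingResourcePolicy: update tells Velero to patch those pre-existing resources rather than skip them. A sketch of the Restore CR, assuming the velero.io/v1 schema (only the backup name and the policy value are confirmed by the log):

    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7
      namespace: openshift-adp
    spec:
      backupName: todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7
      existingResourcePolicy: update

Velero accepts none (the default, which leaves existing resources untouched) or update for this field; with update, resources already in the cluster are patched toward the backed-up state, which is what the replica-count verification below exercises.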
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod is running] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until mysql service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod is running] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until todolist API server starts] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Obtain todolist route] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find 1st database item] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find the string in incomplete items] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=23  changed=6  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025/09/01 08:45:24 2025-09-01 08:45:18,316 p=39367 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:45:18,316 p=39367 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:18,564 p=39367 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:45:18,564 p=39367 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:18,813 p=39367 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:45:18,813 p=39367 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:19,062 p=39367 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:45:19,062 p=39367 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:19,077 p=39367 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:45:19,077 p=39367 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:19,095 p=39367 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:45:19,095 p=39367 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:19,107 p=39367 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:45:19,107 p=39367 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:45:19,417 p=39367 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:45:19,417 p=39367 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:19,445 p=39367 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:45:19,446 p=39367 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:19,464 p=39367 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:45:19,464 p=39367 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:19,466 p=39367 u=1002790000 n=ansible INFO| PLAY [Execute Task] 
************************************************************ 2025-09-01 08:45:20,027 p=39367 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:45:20,027 p=39367 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:20,250 p=39367 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Validating todolist] *** 2025-09-01 08:45:20,260 p=39367 u=1002790000 n=ansible INFO| included: /alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb/tasks/validation_task.yml for localhost 2025-09-01 08:45:21,069 p=39367 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check mysql pod is running] *** 2025-09-01 08:45:21,070 p=39367 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-09-01 08:45:21,070 p=39367 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:21,424 p=39367 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until mysql service ready for connections] *** 2025-09-01 08:45:21,425 p=39367 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:22,126 p=39367 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Check todolist pod is running] *** 2025-09-01 08:45:22,126 p=39367 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:22,447 p=39367 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Wait until todolist API server starts] *** 2025-09-01 08:45:22,448 p=39367 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:23,356 p=39367 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Obtain todolist route] *** 2025-09-01 08:45:23,356 p=39367 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:23,758 p=39367 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find 1st database item] *** 2025-09-01 08:45:23,758 p=39367 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:24,063 p=39367 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Find the string in incomplete items] *** 2025-09-01 08:45:24,063 p=39367 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:24,068 p=39367 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:45:24,068 p=39367 u=1002790000 n=ansible INFO| localhost : ok=23 changed=6 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025/09/01 08:45:24 Application reached target number of replicas: 2 < Exit [It] [tc-id:OADP-165][interop] Todolist app with CSI - policy: update @ 09/01/25 08:45:24.122 (3m18.486s) > Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 08:45:24.122 2025/09/01 08:45:24 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 08:45:24.122 (0s) > Enter [DeferCleanup (Each)] Incremental restore pod count @ 09/01/25 08:45:24.122 < Exit [DeferCleanup (Each)] Incremental restore pod count @ 09/01/25 08:45:24.132 (10ms) > Enter [DeferCleanup (Each)] Incremental 
restore pod count @ 09/01/25 08:45:24.132 < Exit [DeferCleanup (Each)] Incremental restore pod count @ 09/01/25 08:45:24.136 (4ms) > Enter [DeferCleanup (Each)] Incremental restore pod count @ 09/01/25 08:45:24.136 < Exit [DeferCleanup (Each)] Incremental restore pod count @ 09/01/25 08:45:24.136 (0s) > Enter [DeferCleanup (Each)] Incremental restore pod count @ 09/01/25 08:45:24.136 2025/09/01 08:45:24 Cleaning setup resources for the backup 2025/09/01 08:45:24 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/09/01 08:45:24 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd 2025/09/01 08:45:24 Deleting VolumeSnapshotClass 'example-snapclass' < Exit [DeferCleanup (Each)] Incremental restore pod count @ 09/01/25 08:45:24.159 (23ms) > Enter [DeferCleanup (Each)] Incremental restore pod count @ 09/01/25 08:45:24.159 < Exit [DeferCleanup (Each)] Incremental restore pod count @ 09/01/25 08:45:24.159 (0s) > Enter [DeferCleanup (Each)] Incremental restore pod count @ 09/01/25 08:45:24.159 2025/09/01 08:45:24 Cleaning setup resources for the backup 2025/09/01 08:45:24 Setting new default StorageClass 'odf-operator-ceph-rbd' 2025/09/01 08:45:24 Checking default storage class count Skipping creation of StorageClass The current StorageClass: odf-operator-ceph-rbd matches the new StorageClass: odf-operator-ceph-rbd < Exit [DeferCleanup (Each)] Incremental restore pod count @ 09/01/25 08:45:24.251 (92ms) > Enter [DeferCleanup (Each)] Incremental restore pod count @ 09/01/25 08:45:24.251 2025/09/01 08:45:24 Cleaning app 2025/09/01 08:45:24 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
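The example-snapclass deleted in the cleanup above is the VolumeSnapshotClass the test created for the CSI backup path. A plausible shape for it, where the driver comes from the backup output earlier but the deletionPolicy and the Velero selection label are assumptions about how such a class is typically wired up:

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: example-snapclass
      labels:
        velero.io/csi-volumesnapshot-class: "true"  # assumed: the label Velero uses to select a snapshot class
    driver: openshift-storage.rbd.csi.ceph.com
    deletionPolicy: Retain  # assumed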
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove namespace todolist-mariadb-csi-policy-update] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove todolist-mariadb-csi-policy-update SCC] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=17  changed=6  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025/09/01 08:45:49 2025-09-01 08:45:25,724 p=39714 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:45:25,725 p=39714 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:25,974 p=39714 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:45:25,974 p=39714 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:26,223 p=39714 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:45:26,223 p=39714 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:26,474 p=39714 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:45:26,474 p=39714 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:26,487 p=39714 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:45:26,487 p=39714 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:26,505 p=39714 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:45:26,505 p=39714 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:26,516 p=39714 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:45:26,517 p=39714 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:45:26,818 p=39714 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:45:26,818 p=39714 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:26,846 p=39714 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:45:26,846 p=39714 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:26,866 p=39714 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:45:26,866 p=39714 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:26,867 p=39714 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:45:27,425 p=39714 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:45:27,426 p=39714 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:48,241 p=39714 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove namespace todolist-mariadb-csi-policy-update] *** 2025-09-01 08:45:48,241 p=39714 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:45:48,241 p=39714 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:49,201 p=39714 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-todolist-mariadb : Remove todolist-mariadb-csi-policy-update SCC] *** 2025-09-01 08:45:49,201 p=39714 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:49,378 p=39714 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:45:49,378 p=39714 u=1002790000 n=ansible INFO| localhost : ok=17 changed=6 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] Incremental restore pod count @ 09/01/25 08:45:49.422 (25.171s) > Enter [DeferCleanup (Each)] Incremental restore pod count @ 09/01/25 08:45:49.422 < Exit [DeferCleanup (Each)] Incremental restore pod count @ 09/01/25 08:45:49.462 (39ms) • [223.834 seconds] ------------------------------ SSSSSSSS ------------------------------ [skip-disconnected] Restore hooks tests Successful Init hook [tc-id:OADP-164][interop][smoke] MySQL app with Restic /alabama/cspi/e2e/hooks/restore_hooks.go:134 > Enter [BeforeEach] [skip-disconnected] Restore hooks tests @ 09/01/25 08:45:49.462 < Exit [BeforeEach] [skip-disconnected] Restore hooks tests @ 09/01/25 08:45:49.47 (8ms) > Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:45:49.47 < Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:45:49.47 (0s) > Enter [It] [tc-id:OADP-164][interop][smoke] MySQL app with Restic @ 09/01/25 08:45:49.47 2025/09/01 08:45:49 Delete all downloadrequest todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7-a2ce6440-e9fe-4963-9c80-5fd82d0ba8de todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7-ae65da9c-d294-4777-b96b-8024eeb537a3 todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7-fe38a384-8a2d-4c03-8913-f1a73b219fde todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7-252d5c98-03f0-44fc-ac28-8ecdb4aa52b0 todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7-4cff5bc7-3cc9-451e-8e61-a3ef5c2b2215 todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7-6977ecb4-a9c3-46bc-bbea-46fce372d154 STEP: Create DPA CR @ 09/01/25 08:45:49.582 2025/09/01 08:45:49 restic 2025/09/01 08:45:49 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "63fa07dd-e9f8-4fbe-b8a4-3be3884434f3", "resourceVersion": "125249", "generation": 1, "creationTimestamp": "2025-09-01T08:45:49Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:45:49Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "restic" } }, "features": 
null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 09/01/25 08:45:49.608 2025/09/01 08:45:49 Waiting for velero pod to be running 2025/09/01 08:45:49 Wait for DPA status.condition.reason to be 'Completed' and and message to be 'Reconcile complete' 2025/09/01 08:45:49 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "63fa07dd-e9f8-4fbe-b8a4-3be3884434f3", "resourceVersion": "125249", "generation": 1, "creationTimestamp": "2025-09-01T08:45:49Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:45:49Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "restic" } }, "features": null, "logFormat": "text" }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 08:45:54.634 2025/09/01 08:45:54 Checking for correct number of running NodeAgent pods... STEP: Installing application for case mysql-hooks-e2e @ 09/01/25 08:45:54.649 2025/09/01 08:45:54 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace test-oadp-164] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check pod status (30 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left). FAILED - RETRYING: [localhost]: Wait until service ready for connections (29 retries left). FAILED - RETRYING: [localhost]: Wait until service ready for connections (28 retries left). 
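The DPA CR dumped as JSON above maps one-to-one onto a manifest. Rendered as YAML, with exactly the spec fields the dump shows and nothing added:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: ts-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
        - velero:
            provider: aws
            default: true
            config:
              region: us-east-1
            credential:
              name: cloud-credentials
              key: cloud
            objectStorage:
              bucket: ci-op-cl9vhfrj-interopoadp
              prefix: velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7
      configuration:
        velero:
          defaultPlugins:
            - openshift
            - aws
            - kubevirt
          disableFsBackup: false
        nodeAgent:
          enable: true
          uploaderType: restic

nodeAgent.enable: true combined with uploaderType: restic is why this test's volume data goes through the node agent: the final step of this section verifies PodVolumeBackup objects rather than CSI snapshots.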
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Add dummy data into mysql-data1 pvc] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create md5 hashes for the files] *** changed: [localhost] Pausing for 30 seconds TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Pause After Create md5 hashes for the files] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/09/01 08:46:58 2025-09-01 08:45:56,126 p=39946 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:45:56,127 p=39946 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:56,383 p=39946 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:45:56,383 p=39946 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:56,640 p=39946 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:45:56,640 p=39946 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:56,896 p=39946 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:45:56,896 p=39946 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:56,910 p=39946 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:45:56,910 p=39946 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:56,929 p=39946 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:45:56,929 p=39946 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:56,940 p=39946 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:45:56,941 p=39946 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:45:57,246 p=39946 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:45:57,246 p=39946 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:57,273 p=39946 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:45:57,274 p=39946 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:57,290 p=39946 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:45:57,291 p=39946 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:57,293 p=39946 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:45:57,864 p=39946 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:45:57,864 p=39946 u=1002790000 n=ansible INFO| ok: [localhost] 
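The test case title refers to Velero's restore hooks with a successful init hook. Velero can inject an init container into restored pods, declared either on the Restore CR or, as sketched here, via pod annotations. The annotation keys are Velero's documented init-hook keys; the container name, image, and command are illustrative assumptions, not values from this run:

    apiVersion: v1
    kind: Pod
    metadata:
      name: mysql  # illustrative
      annotations:
        init.hook.restore.velero.io/container-name: restore-init
        init.hook.restore.velero.io/container-image: registry.access.redhat.com/ubi9/ubi-minimal  # assumed image
        init.hook.restore.velero.io/command: '["/bin/sh", "-c", "echo restore hook ran"]'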
2025-09-01 08:45:58,685 p=39946 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace test-oadp-164] *** 2025-09-01 08:45:58,686 p=39946 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-09-01 08:45:58,686 p=39946 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:45:59,056 p=39946 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] *** 2025-09-01 08:45:59,056 p=39946 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:45:59,995 p=39946 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] *** 2025-09-01 08:45:59,995 p=39946 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:46:00,682 p=39946 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pod status (30 retries left). 2025-09-01 08:46:06,318 p=39946 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] *** 2025-09-01 08:46:06,318 p=39946 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:46:06,979 p=39946 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] *** 2025-09-01 08:46:06,979 p=39946 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:46:07,283 p=39946 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left). 2025-09-01 08:46:12,574 p=39946 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (29 retries left). 2025-09-01 08:46:17,859 p=39946 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (28 retries left). 
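The deploy role above seeds dummy data into the mysql-data1 PVC and records md5 hashes so the post-restore validation can prove the volume contents survived intact. A hypothetical sketch of such a task; the module invocation, variable name, and paths are illustrative, not the repo's actual role code:

    - name: Create md5 hashes for the files  # sketch, not the actual role task
      kubernetes.core.k8s_exec:
        namespace: test-oadp-164
        pod: "{{ mysql_pod_name }}"  # hypothetical variable
        command: sh -c 'md5sum /var/lib/mysql/mysql-data1/* > /tmp/md5sum.txt'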
2025-09-01 08:46:23,154 p=39946 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-09-01 08:46:23,154 p=39946 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:46:24,971 p=39946 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] *** 2025-09-01 08:46:24,971 p=39946 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:46:27,757 p=39946 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Add dummy data into mysql-data1 pvc] *** 2025-09-01 08:46:27,757 p=39946 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:46:28,418 p=39946 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create md5 hashes for the files] *** 2025-09-01 08:46:28,418 p=39946 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:46:28,435 p=39946 u=1002790000 n=ansible INFO| Pausing for 30 seconds 2025-09-01 08:46:58,438 p=39946 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Pause After Create md5 hashes for the files] *** 2025-09-01 08:46:58,438 p=39946 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:46:58,548 p=39946 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:46:58,549 p=39946 u=1002790000 n=ansible INFO| localhost : ok=25 changed=11 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 STEP: Verify Application deployment @ 09/01/25 08:46:58.593 2025/09/01 08:46:58 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025/09/01 08:47:04 2025-09-01 08:47:00,079 p=40502 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:47:00,080 p=40502 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:47:00,328 p=40502 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:47:00,328 p=40502 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:47:00,584 p=40502 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:47:00,585 p=40502 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:47:00,841 p=40502 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:47:00,841 p=40502 u=1002790000 n=ansible 
INFO| changed: [localhost] 2025-09-01 08:47:00,855 p=40502 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:47:00,855 p=40502 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:47:00,876 p=40502 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:47:00,877 p=40502 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:47:00,889 p=40502 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:47:00,889 p=40502 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:47:01,223 p=40502 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:47:01,223 p=40502 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:47:01,254 p=40502 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:47:01,254 p=40502 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:47:01,272 p=40502 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:47:01,272 p=40502 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:47:01,274 p=40502 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:47:01,828 p=40502 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:47:01,828 p=40502 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:47:02,838 p=40502 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** 2025-09-01 08:47:02,838 p=40502 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:47:03,249 p=40502 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-09-01 08:47:03,249 p=40502 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:47:03,742 p=40502 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** 2025-09-01 08:47:03,742 p=40502 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:47:04,393 p=40502 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** 2025-09-01 08:47:04,393 p=40502 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:47:04,397 p=40502 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:47:04,397 p=40502 u=1002790000 n=ansible INFO| localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0 2025/09/01 08:47:04 ExtractTarGz: Create file /tmp/tempDir1738581567/world-db/world.sql 2025/09/01 08:47:05 2025/09/01 08:47:05 {{ } { } [{{ } {mysql-data test-oadp-164 9d85b2c5-51de-42f7-bf71-dd647e11682a 125593 0 2025-09-01 08:45:59 +0000 UTC map[app:mysql testlabel:selectors testlabel2:foo] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-data-1756716360 
reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-09-01 08:45:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:testlabel":{},"f:testlabel2":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-09-01 08:46:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-09-01 08:46:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-09-01 08:46:00 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status}]} {[ReadWriteOnce] nil {map[] map[storage:{{2147483648 0} {} 2Gi BinarySI}]} pvc-9d85b2c5-51de-42f7-bf71-dd647e11682a 0xc001690e00 0xc001690e10 nil nil } {Bound [ReadWriteOnce] map[storage:{{2147483648 0} {} 2Gi BinarySI}] [] map[] map[] nil}} {{ } {mysql-data1 test-oadp-164 67ce3e62-b7c1-4130-aab1-5d5042b35356 125594 0 2025-09-01 08:45:59 +0000 UTC map[app:mysql testlabel:selectors testlabel2:foo] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-data1-1756716360 reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-09-01 08:45:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:testlabel":{},"f:testlabel2":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-09-01 08:46:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-09-01 08:46:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-09-01 08:46:00 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status}]} {[ReadWriteOnce] nil {map[] map[storage:{{2147483648 0} {} 2Gi BinarySI}]} pvc-67ce3e62-b7c1-4130-aab1-5d5042b35356 0xc001690f70 0xc001690f80 nil nil } {Bound [ReadWriteOnce] map[storage:{{2147483648 0} {} 2Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 @ 09/01/25 08:47:05.035 2025/09/01 08:47:05 Wait until backup mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 is completed backup phase: Completed 2025/09/01 08:47:25 Verify the PodVolumeBackup is completed successfully and BackupRepository type is matching with 
DPA.nodeAgent.uploaderType 2025/09/01 08:47:25 apiVersion: velero.io/v1 kind: PodVolumeBackup metadata: annotations: velero.io/pvc-name: mysql-data1 creationTimestamp: "2025-09-01T08:47:09Z" generateName: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7- generation: 4 labels: velero.io/backup-name: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 velero.io/backup-uid: 84a7437d-19d3-462e-b3fc-82b507ebe2ca velero.io/pvc-uid: 67ce3e62-b7c1-4130-aab1-5d5042b35356 managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:velero.io/pvc-name: {} f:generateName: {} f:labels: .: {} f:velero.io/backup-name: {} f:velero.io/backup-uid: {} f:velero.io/pvc-uid: {} f:ownerReferences: .: {} k:{"uid":"84a7437d-19d3-462e-b3fc-82b507ebe2ca"}: {} f:spec: .: {} f:backupStorageLocation: {} f:node: {} f:pod: {} f:repoIdentifier: {} f:tags: .: {} f:backup: {} f:backup-uid: {} f:ns: {} f:pod: {} f:pod-uid: {} f:pvc-uid: {} f:volume: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-09-01T08:47:09Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:path: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:snapshotID: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:47:17Z" name: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7-nnnrf namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Backup name: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 uid: 84a7437d-19d3-462e-b3fc-82b507ebe2ca resourceVersion: "126755" uid: cedd2198-a8bb-4054-8371-9d2c45226e3d spec: backupStorageLocation: ts-dpa-1 node: ip-10-0-99-76.ec2.internal pod: kind: Pod name: mysql-64c9d6466-m4rm8 namespace: test-oadp-164 uid: 31141762-211d-48ef-9fc1-b4348b4837cc repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-cl9vhfrj-interopoadp/velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7/restic/test-oadp-164 tags: backup: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 backup-uid: 84a7437d-19d3-462e-b3fc-82b507ebe2ca ns: test-oadp-164 pod: mysql-64c9d6466-m4rm8 pod-uid: 31141762-211d-48ef-9fc1-b4348b4837cc pvc-uid: 67ce3e62-b7c1-4130-aab1-5d5042b35356 volume: mysql-data1 uploaderType: restic volume: mysql-data1 status: completionTimestamp: "2025-09-01T08:47:17Z" path: /host_pods/31141762-211d-48ef-9fc1-b4348b4837cc/volumes/kubernetes.io~csi/pvc-67ce3e62-b7c1-4130-aab1-5d5042b35356/mount phase: Completed progress: bytesDone: 105256269 totalBytes: 105256269 snapshotID: dd3e5f95 startTimestamp: "2025-09-01T08:47:15Z" 2025/09/01 08:47:25 apiVersion: velero.io/v1 kind: PodVolumeBackup metadata: annotations: velero.io/pvc-name: mysql-data creationTimestamp: "2025-09-01T08:47:09Z" generateName: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7- generation: 4 labels: velero.io/backup-name: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 velero.io/backup-uid: 84a7437d-19d3-462e-b3fc-82b507ebe2ca velero.io/pvc-uid: 9d85b2c5-51de-42f7-bf71-dd647e11682a managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:velero.io/pvc-name: {} f:generateName: {} f:labels: .: {} f:velero.io/backup-name: {} f:velero.io/backup-uid: {} f:velero.io/pvc-uid: {} f:ownerReferences: .: {} k:{"uid":"84a7437d-19d3-462e-b3fc-82b507ebe2ca"}: {} f:spec: .: {} f:backupStorageLocation: {} f:node: {} f:pod: {} f:repoIdentifier: {} f:tags: .: {} f:backup: {} 
f:backup-uid: {} f:ns: {} f:pod: {} f:pod-uid: {} f:pvc-uid: {} f:volume: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-09-01T08:47:09Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:path: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:snapshotID: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:47:12Z" name: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7-wjpbl namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Backup name: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 uid: 84a7437d-19d3-462e-b3fc-82b507ebe2ca resourceVersion: "126694" uid: eadff392-d923-43a5-8323-5c3ad75c1a83 spec: backupStorageLocation: ts-dpa-1 node: ip-10-0-99-76.ec2.internal pod: kind: Pod name: mysql-64c9d6466-m4rm8 namespace: test-oadp-164 uid: 31141762-211d-48ef-9fc1-b4348b4837cc repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-cl9vhfrj-interopoadp/velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7/restic/test-oadp-164 tags: backup: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 backup-uid: 84a7437d-19d3-462e-b3fc-82b507ebe2ca ns: test-oadp-164 pod: mysql-64c9d6466-m4rm8 pod-uid: 31141762-211d-48ef-9fc1-b4348b4837cc pvc-uid: 9d85b2c5-51de-42f7-bf71-dd647e11682a volume: mysql-data uploaderType: restic volume: mysql-data status: completionTimestamp: "2025-09-01T08:47:12Z" path: /host_pods/31141762-211d-48ef-9fc1-b4348b4837cc/volumes/kubernetes.io~csi/pvc-9d85b2c5-51de-42f7-bf71-dd647e11682a/mount phase: Completed progress: bytesDone: 107854713 totalBytes: 107854713 snapshotID: 07ee6d59 startTimestamp: "2025-09-01T08:47:09Z" STEP: Verify backup mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 has completed successfully @ 09/01/25 08:47:25.059 2025/09/01 08:47:25 Backup for case mysql-hooks-e2e succeeded STEP: Delete the application resources mysql-hooks-e2e @ 09/01/25 08:47:25.125 STEP: Cleanup Application for case mysql-hooks-e2e @ 09/01/25 08:47:25.125 2025/09/01 08:47:25 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
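The next task in this play removes the whole test namespace and blocks until it terminates (note the roughly 20-second gap in the timestamped replay further down). The role's own implementation is not printed in the log; a minimal sketch of such a task, assuming the kubernetes.core collection is in use (module choice and wait settings here are illustrative, not taken from the role):

- name: Remove namespace test-oadp-164
  kubernetes.core.k8s:
    api_version: v1
    kind: Namespace
    name: test-oadp-164
    state: absent
    wait: true           # block until finalizers run and the namespace is fully gone
    wait_timeout: 300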
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-164] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025/09/01 08:47:49 2025-09-01 08:47:26,615 p=40827 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:47:26,615 p=40827 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:47:26,867 p=40827 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:47:26,867 p=40827 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:47:27,113 p=40827 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:47:27,113 p=40827 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:47:27,362 p=40827 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:47:27,362 p=40827 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:47:27,376 p=40827 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:47:27,376 p=40827 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:47:27,394 p=40827 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:47:27,394 p=40827 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:47:27,405 p=40827 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:47:27,405 p=40827 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:47:27,717 p=40827 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:47:27,717 p=40827 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:47:27,745 p=40827 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:47:27,745 p=40827 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:47:27,763 p=40827 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:47:27,763 p=40827 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:47:27,765 p=40827 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:47:28,329 p=40827 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:47:28,329 p=40827 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:47:49,171 p=40827 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-164] *** 2025-09-01 08:47:49,171 p=40827 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
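For reference, the backup exercised in this case was taken with the restic uploader (uploaderType: restic in the PodVolumeBackups above) against the ts-dpa-1 backup storage location. The exact Backup spec the suite submits is not printed; a minimal sketch of an equivalent velero.io/v1 Backup CR, assuming file-system backup is requested via defaultVolumesToFsBackup (the suite could equally use per-pod annotations):

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7
  namespace: openshift-adp
spec:
  includedNamespaces:
    - test-oadp-164
  storageLocation: ts-dpa-1
  defaultVolumesToFsBackup: true   # route PVC data through the node agent (restic in this run)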
2025-09-01 08:47:49,171 p=40827 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:47:49,445 p=40827 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:47:49,445 p=40827 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025/09/01 08:47:49 Creating restore mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 for case mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 STEP: Create restore mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 from backup mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 @ 09/01/25 08:47:49.495 2025/09/01 08:47:49 Wait until restore mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 is complete restore phase: InProgress restore phase: InProgress restore phase: Finalizing restore phase: Completed 2025/09/01 08:48:29 Verify the PodVolumeBackup and PodVolumeRestore count is equal 2025/09/01 08:48:29 Verify the PodVolumeRestore is completed successfully and uploaderType is matching 2025/09/01 08:48:29 apiVersion: velero.io/v1 kind: PodVolumeRestore metadata: creationTimestamp: "2025-09-01T08:47:52Z" generateName: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7- generation: 5 labels: velero.io/pod-uid: 7b22d44d-cca8-4ea5-817f-8760631ed7ad velero.io/pvc-uid: d34da1ad-0f22-4d0a-9080-4acd6975554e velero.io/restore-name: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 velero.io/restore-uid: 903b1acf-6b59-4156-87f3-7593d46a1a6f managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:velero.io/pod-uid: {} f:velero.io/pvc-uid: {} f:velero.io/restore-name: {} f:velero.io/restore-uid: {} f:ownerReferences: .: {} k:{"uid":"903b1acf-6b59-4156-87f3-7593d46a1a6f"}: {} f:spec: .: {} f:backupStorageLocation: {} f:pod: {} f:repoIdentifier: {} f:snapshotID: {} f:sourceNamespace: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-09-01T08:47:52Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:48:10Z" name: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7-kh7r7 namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Restore name: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 uid: 903b1acf-6b59-4156-87f3-7593d46a1a6f resourceVersion: "127710" uid: 9fb226b8-54d2-4714-a121-fffce8150982 spec: backupStorageLocation: ts-dpa-1 pod: kind: Pod name: mysql-64c9d6466-m4rm8 namespace: test-oadp-164 uid: 7b22d44d-cca8-4ea5-817f-8760631ed7ad repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-cl9vhfrj-interopoadp/velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7/restic/test-oadp-164 snapshotID: dd3e5f95 sourceNamespace: test-oadp-164 uploaderType: restic volume: mysql-data1 status: completionTimestamp: "2025-09-01T08:48:10Z" phase: Completed progress: bytesDone: 105256269 totalBytes: 105256269 startTimestamp: "2025-09-01T08:48:08Z" 2025/09/01 08:48:29 apiVersion: velero.io/v1 kind: PodVolumeRestore metadata: creationTimestamp: "2025-09-01T08:47:52Z" generateName: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7- generation: 5 labels: velero.io/pod-uid: 7b22d44d-cca8-4ea5-817f-8760631ed7ad velero.io/pvc-uid: f5f0ce80-6fcf-438b-a358-fb096b49e6a8 velero.io/restore-name:
mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 velero.io/restore-uid: 903b1acf-6b59-4156-87f3-7593d46a1a6f managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:velero.io/pod-uid: {} f:velero.io/pvc-uid: {} f:velero.io/restore-name: {} f:velero.io/restore-uid: {} f:ownerReferences: .: {} k:{"uid":"903b1acf-6b59-4156-87f3-7593d46a1a6f"}: {} f:spec: .: {} f:backupStorageLocation: {} f:pod: {} f:repoIdentifier: {} f:snapshotID: {} f:sourceNamespace: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-09-01T08:47:52Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:48:15Z" name: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7-pwtw5 namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Restore name: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 uid: 903b1acf-6b59-4156-87f3-7593d46a1a6f resourceVersion: "127811" uid: 2e577f38-50b4-41ff-af99-7aab7c1df668 spec: backupStorageLocation: ts-dpa-1 pod: kind: Pod name: mysql-64c9d6466-m4rm8 namespace: test-oadp-164 uid: 7b22d44d-cca8-4ea5-817f-8760631ed7ad repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-cl9vhfrj-interopoadp/velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7/restic/test-oadp-164 snapshotID: 07ee6d59 sourceNamespace: test-oadp-164 uploaderType: restic volume: mysql-data status: completionTimestamp: "2025-09-01T08:48:15Z" phase: Completed progress: bytesDone: 107854713 totalBytes: 107854713 startTimestamp: "2025-09-01T08:48:13Z" STEP: Verify restore mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7 has completed successfully @ 09/01/25 08:48:29.59 STEP: Verify Application restore @ 09/01/25 08:48:29.595 STEP: Verify Application deployment for case mysql-hooks-e2e @ 09/01/25 08:48:29.595 2025/09/01 08:48:29 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025/09/01 08:48:35 2025-09-01 08:48:30,970 p=41054 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:48:30,970 p=41054 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:48:31,207 p=41054 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:48:31,207 p=41054 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:48:31,444 p=41054 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:48:31,444 p=41054 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:48:31,682 p=41054 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:48:31,682 p=41054 u=1002790000 n=ansible 
INFO| changed: [localhost] 2025-09-01 08:48:31,695 p=41054 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:48:31,695 p=41054 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:48:31,711 p=41054 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:48:31,711 p=41054 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:48:31,722 p=41054 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:48:31,722 p=41054 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:48:32,011 p=41054 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:48:32,012 p=41054 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:48:32,037 p=41054 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:48:32,037 p=41054 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:48:32,053 p=41054 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:48:32,053 p=41054 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:48:32,055 p=41054 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:48:32,591 p=41054 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:48:32,592 p=41054 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:48:33,503 p=41054 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** 2025-09-01 08:48:33,503 p=41054 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:48:33,904 p=41054 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-09-01 08:48:33,904 p=41054 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:48:34,410 p=41054 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** 2025-09-01 08:48:34,410 p=41054 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:48:35,044 p=41054 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** 2025-09-01 08:48:35,044 p=41054 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:48:35,048 p=41054 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:48:35,048 p=41054 u=1002790000 n=ansible INFO| localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0 2025/09/01 08:48:35 stderr: ERROR 1049 (42000): Unknown database 'world' < Exit [It] [tc-id:OADP-164][interop][smoke] MySQL app with Restic @ 09/01/25 08:48:35.259 (2m45.789s) > Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 08:48:35.259 2025/09/01 08:48:35 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 08:48:35.259 (0s) > Enter [DeferCleanup (Each)] Successful Init hook @ 09/01/25 
08:48:35.259 < Exit [DeferCleanup (Each)] Successful Init hook @ 09/01/25 08:48:35.263 (4ms) > Enter [DeferCleanup (Each)] Successful Init hook @ 09/01/25 08:48:35.263 < Exit [DeferCleanup (Each)] Successful Init hook @ 09/01/25 08:48:35.263 (0s) > Enter [DeferCleanup (Each)] Successful Init hook @ 09/01/25 08:48:35.263 2025/09/01 08:48:35 Cleaning app 2025/09/01 08:48:35 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
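Every one of these plays re-derives the cluster endpoint and admin/user tokens before doing any work (the "Get admin token", "Set core facts", and "Choose token based on non_admin flag" tasks above). How the token is actually obtained is not shown in the log; one plausible sketch, assuming the tasks shell out to oc (the admin_kubeconfig variable and fact names are invented for illustration):

- name: Get admin token
  ansible.builtin.command: oc --kubeconfig {{ admin_kubeconfig }} whoami -t
  register: admin_token_result

- name: Set core facts (admin + user token)
  ansible.builtin.set_fact:
    admin_token: "{{ admin_token_result.stdout }}"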
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-164] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025/09/01 08:48:59 2025-09-01 08:48:36,643 p=41379 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:48:36,644 p=41379 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:48:36,881 p=41379 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:48:36,881 p=41379 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:48:37,126 p=41379 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:48:37,127 p=41379 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:48:37,366 p=41379 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:48:37,367 p=41379 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:48:37,381 p=41379 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:48:37,381 p=41379 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:48:37,398 p=41379 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:48:37,398 p=41379 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:48:37,410 p=41379 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:48:37,410 p=41379 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:48:37,701 p=41379 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:48:37,701 p=41379 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:48:37,727 p=41379 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:48:37,727 p=41379 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:48:37,743 p=41379 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:48:37,743 p=41379 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:48:37,745 p=41379 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:48:38,323 p=41379 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:48:38,323 p=41379 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:48:59,083 p=41379 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-164] *** 2025-09-01 08:48:59,083 p=41379 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
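The restore driven above was created directly from the completed backup and moved through InProgress and Finalizing to Completed. A minimal sketch of the equivalent velero.io/v1 Restore CR (only backupName is strictly required; restorePVs is shown to make explicit that the volumes are recreated for the PodVolumeRestores to target):

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7
  namespace: openshift-adp
spec:
  backupName: mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7
  restorePVs: true   # recreate PVs/PVCs so the node agent can restore restic data into them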
2025-09-01 08:48:59,083 p=41379 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:48:59,354 p=41379 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:48:59,354 p=41379 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] Successful Init hook @ 09/01/25 08:48:59.404 (24.141s) > Enter [DeferCleanup (Each)] Successful Init hook @ 09/01/25 08:48:59.404 2025/09/01 08:48:59 Cleaning setup resources for the backup < Exit [DeferCleanup (Each)] Successful Init hook @ 09/01/25 08:48:59.404 (0s) > Enter [DeferCleanup (Each)] Successful Init hook @ 09/01/25 08:48:59.404 < Exit [DeferCleanup (Each)] Successful Init hook @ 09/01/25 08:48:59.413 (8ms) • [189.951 seconds] ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Backup restore tests Application backup [tc-id:OADP-371] [interop] [smoke] MySQL application with Restic [mr-check] /alabama/cspi/e2e/app_backup/backup_restore.go:48 > Enter [BeforeEach] Backup restore tests @ 09/01/25 08:48:59.413 < Exit [BeforeEach] Backup restore tests @ 09/01/25 08:48:59.421 (8ms) > Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:48:59.421 < Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:48:59.421 (0s) > Enter [It] [tc-id:OADP-371] [interop] [smoke] MySQL application with Restic @ 09/01/25 08:48:59.421 2025/09/01 08:48:59 Delete all downloadrequest No download requests are found STEP: Create DPA CR @ 09/01/25 08:48:59.426 2025/09/01 08:48:59 restic 2025/09/01 08:48:59 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "62feb177-7e33-484a-b1fc-2a56c386f3c4", "resourceVersion": "128553", "generation": 1, "creationTimestamp": "2025-09-01T08:48:59Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:48:59Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "restic" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 09/01/25 08:48:59.606 2025/09/01 08:48:59 Waiting for velero pod to be running 2025/09/01 08:48:59 pod: velero-5d49bc6f8d-csdbd is not yet running with status: {Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2025-09-01 08:48:59 +0000 UTC 
}] [] [] [] [] Burstable [] []} 2025/09/01 08:49:04 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 08:49:04.651 2025/09/01 08:49:04 Checking for correct number of running NodeAgent pods... STEP: Installing application for case mysql @ 09/01/25 08:49:04.663 2025/09/01 08:49:04 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace test-oadp-1077] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check pod status (30 retries left). FAILED - RETRYING: [localhost]: Check pod status (29 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left).
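The FAILED - RETRYING records above come from Ansible's until/retries loop on the "Check pod status" task; the mysql pod needed two extra polls before it reached Running. A sketch of that polling pattern, assuming kubernetes.core.k8s_info and an app=mysql selector (the selector matches the PVC labels dumped elsewhere in this log, but the role's actual query is not shown):

- name: Check pod status
  kubernetes.core.k8s_info:
    kind: Pod
    namespace: test-oadp-1077
    label_selectors:
      - app=mysql
  register: mysql_pods
  until: mysql_pods.resources | length > 0 and mysql_pods.resources[0].status.phase == 'Running'
  retries: 30
  delay: 5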
FAILED - RETRYING: [localhost]: Wait until service ready for connections (29 retries left). FAILED - RETRYING: [localhost]: Wait until service ready for connections (28 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Add dummy data into mysql-data1 pvc] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create md5 hashes for the files] *** changed: [localhost] Pausing for 30 seconds TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Pause After Create md5 hashes for the files] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/09/01 08:50:15 2025-09-01 08:49:06,044 p=41604 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:49:06,044 p=41604 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:49:06,274 p=41604 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:49:06,274 p=41604 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:49:06,508 p=41604 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:49:06,508 p=41604 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:49:06,743 p=41604 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:49:06,744 p=41604 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:49:06,757 p=41604 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:49:06,757 p=41604 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:49:06,773 p=41604 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:49:06,773 p=41604 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:49:06,784 p=41604 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:49:06,784 p=41604 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:49:07,073 p=41604 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:49:07,073 p=41604 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:49:07,098 p=41604 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:49:07,098 p=41604 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:49:07,114 p=41604 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:49:07,114 p=41604 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:49:07,116 p=41604 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:49:07,651 p=41604 
u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:49:07,651 p=41604 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:49:08,395 p=41604 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace test-oadp-1077] *** 2025-09-01 08:49:08,395 p=41604 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-09-01 08:49:08,395 p=41604 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:49:08,759 p=41604 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] *** 2025-09-01 08:49:08,759 p=41604 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:49:09,570 p=41604 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] *** 2025-09-01 08:49:09,570 p=41604 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:49:10,174 p=41604 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pod status (30 retries left). 2025-09-01 08:49:15,747 p=41604 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pod status (29 retries left). 2025-09-01 08:49:21,347 p=41604 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] *** 2025-09-01 08:49:21,347 p=41604 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:49:22,077 p=41604 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] *** 2025-09-01 08:49:22,077 p=41604 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:49:22,394 p=41604 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left). 2025-09-01 08:49:27,682 p=41604 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (29 retries left). 2025-09-01 08:49:33,269 p=41604 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (28 retries left). 
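"Wait until service ready for connections" retried three times before mysqld began accepting connections. One plausible shape for that readiness probe, assuming kubernetes.core.k8s_exec and mysqladmin (the mysql_pod_name variable is a hypothetical fact holding the deployed pod's name, and the rc check assumes the module's return-code field):

- name: Wait until service ready for connections
  kubernetes.core.k8s_exec:
    namespace: test-oadp-1077
    pod: "{{ mysql_pod_name }}"
    command: mysqladmin ping -h 127.0.0.1
  register: mysql_ping
  until: mysql_ping.rc == 0   # rc is the exec return code reported by k8s_exec
  retries: 30
  delay: 5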
2025-09-01 08:49:38,567 p=41604 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-09-01 08:49:38,567 p=41604 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:49:40,479 p=41604 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] *** 2025-09-01 08:49:40,480 p=41604 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:49:43,272 p=41604 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Add dummy data into mysql-data1 pvc] *** 2025-09-01 08:49:43,272 p=41604 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:49:43,975 p=41604 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create md5 hashes for the files] *** 2025-09-01 08:49:44,945 p=41604 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:49:44,962 p=41604 u=1002790000 n=ansible INFO| Pausing for 30 seconds 2025-09-01 08:50:14,965 p=41604 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Pause After Create md5 hashes for the files] *** 2025-09-01 08:50:14,965 p=41604 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:50:15,072 p=41604 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:50:15,072 p=41604 u=1002790000 n=ansible INFO| localhost : ok=25 changed=11 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 STEP: Verify Application deployment @ 09/01/25 08:50:15.118 2025/09/01 08:50:15 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025/09/01 08:50:21 2025-09-01 08:50:16,569 p=42174 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:50:16,570 p=42174 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:50:16,831 p=42174 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:50:16,831 p=42174 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:50:17,087 p=42174 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:50:17,087 p=42174 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:50:17,328 p=42174 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:50:17,328 p=42174 u=1002790000 n=ansible 
INFO| changed: [localhost] 2025-09-01 08:50:17,345 p=42174 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:50:17,345 p=42174 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:50:17,365 p=42174 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:50:17,365 p=42174 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:50:17,378 p=42174 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:50:17,378 p=42174 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:50:17,682 p=42174 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:50:17,682 p=42174 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:50:17,707 p=42174 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:50:17,707 p=42174 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:50:17,724 p=42174 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:50:17,724 p=42174 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:50:17,725 p=42174 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:50:18,349 p=42174 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:50:18,349 p=42174 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:50:19,551 p=42174 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** 2025-09-01 08:50:19,552 p=42174 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:50:20,072 p=42174 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-09-01 08:50:20,072 p=42174 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:50:20,707 p=42174 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** 2025-09-01 08:50:20,707 p=42174 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:50:21,477 p=42174 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** 2025-09-01 08:50:21,477 p=42174 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:50:21,483 p=42174 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:50:21,483 p=42174 u=1002790000 n=ansible INFO| localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0 2025/09/01 08:50:21 {{ } { } [{{ } {mysql-data test-oadp-1077 69ebb2d4-6290-4339-b9a7-cf15394c55e7 128944 0 2025-09-01 08:49:09 +0000 UTC map[app:mysql testlabel:selectors testlabel2:foo] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-data-1756716550 reclaimspace.csiaddons.openshift.io/schedule:@weekly 
volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-09-01 08:49:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:testlabel":{},"f:testlabel2":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2025-09-01 08:49:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-09-01 08:49:09 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status} {csi-addons-manager Update v1 2025-09-01 08:49:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} }]} {[ReadWriteOnce] nil {map[] map[storage:{{2147483648 0} {} 2Gi BinarySI}]} pvc-69ebb2d4-6290-4339-b9a7-cf15394c55e7 0xc000a1a7d0 0xc000a1a7e0 nil nil } {Bound [ReadWriteOnce] map[storage:{{2147483648 0} {} 2Gi BinarySI}] [] map[] map[] nil}} {{ } {mysql-data1 test-oadp-1077 03c573b0-212d-4970-927a-58d1167e0290 128928 0 2025-09-01 08:49:09 +0000 UTC map[app:mysql testlabel:selectors testlabel2:foo] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-data1-1756716549 reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-09-01 08:49:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:testlabel":{},"f:testlabel2":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-09-01 08:49:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-09-01 08:49:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-09-01 08:49:09 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status}]} {[ReadWriteOnce] nil {map[] map[storage:{{2147483648 0} {} 2Gi BinarySI}]} pvc-03c573b0-212d-4970-927a-58d1167e0290 0xc000a1a950 0xc000a1a960 nil nil } {Bound [ReadWriteOnce] map[storage:{{2147483648 0} {} 2Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 @ 09/01/25 08:50:21.577 2025/09/01 08:50:21 Wait until backup mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 is completed backup phase: Completed 2025/09/01 08:50:41 Verify the PodVolumeBackup is completed successfully and BackupRepository type is matching with DPA.nodeAgent.uploaderType 2025/09/01 08:50:41 apiVersion: velero.io/v1 
kind: PodVolumeBackup metadata: annotations: velero.io/pvc-name: mysql-data creationTimestamp: "2025-09-01T08:50:25Z" generateName: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7- generation: 4 labels: velero.io/backup-name: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 velero.io/backup-uid: ad51b53b-1906-4407-a5df-c082e77002f2 velero.io/pvc-uid: 69ebb2d4-6290-4339-b9a7-cf15394c55e7 managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:velero.io/pvc-name: {} f:generateName: {} f:labels: .: {} f:velero.io/backup-name: {} f:velero.io/backup-uid: {} f:velero.io/pvc-uid: {} f:ownerReferences: .: {} k:{"uid":"ad51b53b-1906-4407-a5df-c082e77002f2"}: {} f:spec: .: {} f:backupStorageLocation: {} f:node: {} f:pod: {} f:repoIdentifier: {} f:tags: .: {} f:backup: {} f:backup-uid: {} f:ns: {} f:pod: {} f:pod-uid: {} f:pvc-uid: {} f:volume: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-09-01T08:50:25Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:path: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:snapshotID: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:50:28Z" name: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7-hhlll namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Backup name: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 uid: ad51b53b-1906-4407-a5df-c082e77002f2 resourceVersion: "130072" uid: d0822b73-0bc4-4fcb-be0e-d31e5a0aa793 spec: backupStorageLocation: ts-dpa-1 node: ip-10-0-99-76.ec2.internal pod: kind: Pod name: mysql-64c9d6466-w78lx namespace: test-oadp-1077 uid: 4c4c5f29-f7d2-4f89-83ff-c4400413f90d repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-cl9vhfrj-interopoadp/velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7/restic/test-oadp-1077 tags: backup: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 backup-uid: ad51b53b-1906-4407-a5df-c082e77002f2 ns: test-oadp-1077 pod: mysql-64c9d6466-w78lx pod-uid: 4c4c5f29-f7d2-4f89-83ff-c4400413f90d pvc-uid: 69ebb2d4-6290-4339-b9a7-cf15394c55e7 volume: mysql-data uploaderType: restic volume: mysql-data status: completionTimestamp: "2025-09-01T08:50:28Z" path: /host_pods/4c4c5f29-f7d2-4f89-83ff-c4400413f90d/volumes/kubernetes.io~csi/pvc-69ebb2d4-6290-4339-b9a7-cf15394c55e7/mount phase: Completed progress: bytesDone: 107854713 totalBytes: 107854713 snapshotID: 623d3baa startTimestamp: "2025-09-01T08:50:25Z" 2025/09/01 08:50:41 apiVersion: velero.io/v1 kind: PodVolumeBackup metadata: annotations: velero.io/pvc-name: mysql-data1 creationTimestamp: "2025-09-01T08:50:25Z" generateName: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7- generation: 4 labels: velero.io/backup-name: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 velero.io/backup-uid: ad51b53b-1906-4407-a5df-c082e77002f2 velero.io/pvc-uid: 03c573b0-212d-4970-927a-58d1167e0290 managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:velero.io/pvc-name: {} f:generateName: {} f:labels: .: {} f:velero.io/backup-name: {} f:velero.io/backup-uid: {} f:velero.io/pvc-uid: {} f:ownerReferences: .: {} k:{"uid":"ad51b53b-1906-4407-a5df-c082e77002f2"}: {} f:spec: .: {} f:backupStorageLocation: {} f:node: {} f:pod: {} f:repoIdentifier: {} f:tags: .: {} f:backup: {} f:backup-uid: {} f:ns: {} f:pod: {} f:pod-uid: {} f:pvc-uid: {} f:volume: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: 
velero-server operation: Update time: "2025-09-01T08:50:25Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:path: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:snapshotID: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:50:34Z" name: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7-xl4vc namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Backup name: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 uid: ad51b53b-1906-4407-a5df-c082e77002f2 resourceVersion: "130155" uid: 75463d52-4301-4625-a9a3-4e5c2e8500b5 spec: backupStorageLocation: ts-dpa-1 node: ip-10-0-99-76.ec2.internal pod: kind: Pod name: mysql-64c9d6466-w78lx namespace: test-oadp-1077 uid: 4c4c5f29-f7d2-4f89-83ff-c4400413f90d repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-cl9vhfrj-interopoadp/velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7/restic/test-oadp-1077 tags: backup: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 backup-uid: ad51b53b-1906-4407-a5df-c082e77002f2 ns: test-oadp-1077 pod: mysql-64c9d6466-w78lx pod-uid: 4c4c5f29-f7d2-4f89-83ff-c4400413f90d pvc-uid: 03c573b0-212d-4970-927a-58d1167e0290 volume: mysql-data1 uploaderType: restic volume: mysql-data1 status: completionTimestamp: "2025-09-01T08:50:34Z" path: /host_pods/4c4c5f29-f7d2-4f89-83ff-c4400413f90d/volumes/kubernetes.io~csi/pvc-03c573b0-212d-4970-927a-58d1167e0290/mount phase: Completed progress: bytesDone: 104857640 totalBytes: 104857640 snapshotID: 4da39d87 startTimestamp: "2025-09-01T08:50:31Z" STEP: Verify backup mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 has completed successfully @ 09/01/25 08:50:41.611 2025/09/01 08:50:41 Backup for case mysql succeeded STEP: Delete the application resources mysql @ 09/01/25 08:50:41.646 STEP: Cleanup Application for case mysql @ 09/01/25 08:50:41.646 2025/09/01 08:50:41 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
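For context, the Backup CR driving this OADP-371 flow (mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7) corresponds roughly to the manifest below. This is a sketch reconstructed from the log: the names, namespace and storage location are taken from the output above, while defaultVolumesToFsBackup is an assumption inferred from the restic PodVolumeBackups the backup produced.

    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7
      namespace: openshift-adp
    spec:
      includedNamespaces:
        - test-oadp-1077
      storageLocation: ts-dpa-1
      defaultVolumesToFsBackup: true  # assumed: opts all pod volumes into file-system backup (restic in this case)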
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-1077] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025/09/01 08:51:06 2025-09-01 08:50:43,506 p=42478 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:50:43,507 p=42478 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:50:43,848 p=42478 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:50:43,848 p=42478 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:50:44,184 p=42478 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:50:44,184 p=42478 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:50:44,504 p=42478 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:50:44,505 p=42478 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:50:44,523 p=42478 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:50:44,523 p=42478 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:50:44,546 p=42478 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:50:44,546 p=42478 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:50:44,564 p=42478 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:50:44,564 p=42478 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:50:44,948 p=42478 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:50:44,948 p=42478 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:50:44,984 p=42478 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:50:44,984 p=42478 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:50:45,009 p=42478 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:50:45,010 p=42478 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:50:45,012 p=42478 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:50:45,644 p=42478 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:50:45,645 p=42478 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:51:06,627 p=42478 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-1077] *** 2025-09-01 08:51:06,627 p=42478 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
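The restore created next simply points a Velero Restore CR back at the completed backup; a minimal equivalent manifest would be roughly the following (names are from the log; restorePVs is an assumption):

    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7
      namespace: openshift-adp
    spec:
      backupName: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7
      restorePVs: true  # assumed: recreate the PVCs so the node agent can run PodVolumeRestores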
2025-09-01 08:51:06,627 p=42478 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:51:06,875 p=42478 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:51:06,875 p=42478 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025/09/01 08:51:06 Creating restore mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 for case mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 STEP: Create restore mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 from backup mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 @ 09/01/25 08:51:06.916 2025/09/01 08:51:06 Wait until restore mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 is complete restore phase: InProgress restore phase: InProgress restore phase: Completed 2025/09/01 08:51:36 Verify the PodVolumeBackup and PodVolumeRestore count is equal 2025/09/01 08:51:36 Verify the PodVolumeRestore is completed successfully and uploaderType is matching 2025/09/01 08:51:36 apiVersion: velero.io/v1 kind: PodVolumeRestore metadata: creationTimestamp: "2025-09-01T08:51:09Z" generateName: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7- generation: 5 labels: velero.io/pod-uid: 8ae742f1-9d5d-4838-a75d-e309d6a2b1bd velero.io/pvc-uid: 6cdc1fc9-9437-4341-9838-445999d012c5 velero.io/restore-name: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 velero.io/restore-uid: 2780187c-db0c-4bc0-b530-bdf30bdc742c managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:velero.io/pod-uid: {} f:velero.io/pvc-uid: {} f:velero.io/restore-name: {} f:velero.io/restore-uid: {} f:ownerReferences: .: {} k:{"uid":"2780187c-db0c-4bc0-b530-bdf30bdc742c"}: {} f:spec: .: {} f:backupStorageLocation: {} f:pod: {} f:repoIdentifier: {} f:snapshotID: {} f:sourceNamespace: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-09-01T08:51:09Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:51:24Z" name: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7-9jg4x namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Restore name: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 uid: 2780187c-db0c-4bc0-b530-bdf30bdc742c resourceVersion: "131069" uid: 1b443989-fe07-4433-896c-84d514df1a8d spec: backupStorageLocation: ts-dpa-1 pod: kind: Pod name: mysql-64c9d6466-w78lx namespace: test-oadp-1077 uid: 8ae742f1-9d5d-4838-a75d-e309d6a2b1bd repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-cl9vhfrj-interopoadp/velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7/restic/test-oadp-1077 snapshotID: 623d3baa sourceNamespace: test-oadp-1077 uploaderType: restic volume: mysql-data status: completionTimestamp: "2025-09-01T08:51:24Z" phase: Completed progress: bytesDone: 107854713 totalBytes: 107854713 startTimestamp: "2025-09-01T08:51:22Z" 2025/09/01 08:51:36 apiVersion: velero.io/v1 kind: PodVolumeRestore metadata: creationTimestamp: "2025-09-01T08:51:09Z" generateName: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7- generation: 5 labels: velero.io/pod-uid: 8ae742f1-9d5d-4838-a75d-e309d6a2b1bd velero.io/pvc-uid: 9fdfba13-0c2b-465d-9afa-dc2d2ac09380 velero.io/restore-name: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 velero.io/restore-uid: 2780187c-db0c-4bc0-b530-bdf30bdc742c managedFields: -
apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:velero.io/pod-uid: {} f:velero.io/pvc-uid: {} f:velero.io/restore-name: {} f:velero.io/restore-uid: {} f:ownerReferences: .: {} k:{"uid":"2780187c-db0c-4bc0-b530-bdf30bdc742c"}: {} f:spec: .: {} f:backupStorageLocation: {} f:pod: {} f:repoIdentifier: {} f:snapshotID: {} f:sourceNamespace: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-09-01T08:51:09Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:51:29Z" name: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7-qbnk4 namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Restore name: mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 uid: 2780187c-db0c-4bc0-b530-bdf30bdc742c resourceVersion: "131142" uid: 81b8a55f-33ce-42bf-b235-75483cb13f20 spec: backupStorageLocation: ts-dpa-1 pod: kind: Pod name: mysql-64c9d6466-w78lx namespace: test-oadp-1077 uid: 8ae742f1-9d5d-4838-a75d-e309d6a2b1bd repoIdentifier: s3:s3-us-east-1.amazonaws.com/ci-op-cl9vhfrj-interopoadp/velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7/restic/test-oadp-1077 snapshotID: 4da39d87 sourceNamespace: test-oadp-1077 uploaderType: restic volume: mysql-data1 status: completionTimestamp: "2025-09-01T08:51:29Z" phase: Completed progress: bytesDone: 104857640 totalBytes: 104857640 startTimestamp: "2025-09-01T08:51:27Z" STEP: Verify restore mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7 has completed successfully @ 09/01/25 08:51:36.969 STEP: Verify Application restore @ 09/01/25 08:51:36.973 STEP: Verify Application deployment for case mysql @ 09/01/25 08:51:36.973 2025/09/01 08:51:36 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025/09/01 08:51:42 2025-09-01 08:51:38,353 p=42682 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:51:38,353 p=42682 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:51:38,586 p=42682 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:51:38,586 p=42682 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:51:38,819 p=42682 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:51:38,820 p=42682 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:51:39,058 p=42682 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:51:39,058 p=42682 u=1002790000 n=ansible 
INFO| changed: [localhost] 2025-09-01 08:51:39,071 p=42682 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:51:39,071 p=42682 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:51:39,087 p=42682 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:51:39,088 p=42682 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:51:39,098 p=42682 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:51:39,099 p=42682 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:51:39,386 p=42682 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:51:39,386 p=42682 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:51:39,413 p=42682 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:51:39,413 p=42682 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:51:39,429 p=42682 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:51:39,429 p=42682 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:51:39,430 p=42682 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:51:39,964 p=42682 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:51:39,964 p=42682 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:51:40,866 p=42682 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** 2025-09-01 08:51:40,866 p=42682 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:51:41,263 p=42682 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-09-01 08:51:41,263 p=42682 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:51:41,787 p=42682 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** 2025-09-01 08:51:41,787 p=42682 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:51:42,431 p=42682 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** 2025-09-01 08:51:42,431 p=42682 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:51:42,435 p=42682 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:51:42,435 p=42682 u=1002790000 n=ansible INFO| localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0 < Exit [It] [tc-id:OADP-371] [interop] [smoke] MySQL application with Restic @ 09/01/25 08:51:42.475 (2m43.054s) > Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 08:51:42.475 2025/09/01 08:51:42 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 08:51:42.475 (0s) > Enter [DeferCleanup (Each)] Application backup @ 09/01/25 08:51:42.475 < Exit [DeferCleanup (Each)] Application backup @ 09/01/25 
08:51:42.48 (5ms) > Enter [DeferCleanup (Each)] Application backup @ 09/01/25 08:51:42.48 < Exit [DeferCleanup (Each)] Application backup @ 09/01/25 08:51:42.48 (0s) > Enter [DeferCleanup (Each)] Application backup @ 09/01/25 08:51:42.48 2025/09/01 08:51:42 Cleaning app 2025/09/01 08:51:42 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
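The "Set core facts (admin + user token)" / "Choose token based on non_admin flag" pair that repeats in every play above amounts to a single fact selection; a hypothetical sketch of that task (variable names assumed, not taken from the playbook source):

    - name: Choose token based on non_admin flag
      ansible.builtin.set_fact:
        oc_token: "{{ user_token if (non_admin | default(false)) else admin_token }}"

Since the admin and user KUBECONFIG paths are identical in this run (/home/jenkins/.kube/config), both tokens resolve to the same value, which is why the printed token never changes between plays.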
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-1077] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025/09/01 08:52:06 2025-09-01 08:51:43,871 p=43008 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:51:43,871 p=43008 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:51:44,104 p=43008 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:51:44,104 p=43008 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:51:44,336 p=43008 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:51:44,337 p=43008 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:51:44,571 p=43008 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:51:44,571 p=43008 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:51:44,584 p=43008 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:51:44,584 p=43008 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:51:44,601 p=43008 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:51:44,601 p=43008 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:51:44,612 p=43008 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:51:44,612 p=43008 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:51:44,904 p=43008 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:51:44,904 p=43008 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:51:44,929 p=43008 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:51:44,929 p=43008 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:51:44,945 p=43008 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:51:44,945 p=43008 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:51:44,947 p=43008 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:51:45,484 p=43008 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:51:45,484 p=43008 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:52:06,240 p=43008 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-1077] *** 2025-09-01 08:52:06,240 p=43008 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
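The next case, OADP-437, re-creates the DPA with the node agent switched from restic to kopia. Stripped of managedFields and status, the DPA JSON dumped below is equivalent to roughly this manifest:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: ts-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
        - velero:
            provider: aws
            default: true
            config:
              region: us-east-1
            credential:
              name: cloud-credentials
              key: cloud
            objectStorage:
              bucket: ci-op-cl9vhfrj-interopoadp
              prefix: velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7
      configuration:
        velero:
          defaultPlugins:
            - openshift
            - aws
            - kubevirt
          disableFsBackup: false
        nodeAgent:
          enable: true
          uploaderType: kopia  # OADP-371 above used restic; this case exercises kopia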
2025-09-01 08:52:06,240 p=43008 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:52:06,489 p=43008 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:52:06,490 p=43008 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] Application backup @ 09/01/25 08:52:06.531 (24.051s) > Enter [DeferCleanup (Each)] Application backup @ 09/01/25 08:52:06.531 2025/09/01 08:52:06 Cleaning setup resources for the backup < Exit [DeferCleanup (Each)] Application backup @ 09/01/25 08:52:06.531 (0s) > Enter [DeferCleanup (Each)] Application backup @ 09/01/25 08:52:06.531 < Exit [DeferCleanup (Each)] Application backup @ 09/01/25 08:52:06.542 (11ms) • [187.129 seconds] ------------------------------ Backup restore tests Application backup [tc-id:OADP-437][interop][smoke] MySQL application with filesystem, Kopia [mr-check] /alabama/cspi/e2e/app_backup/backup_restore.go:62 > Enter [BeforeEach] Backup restore tests @ 09/01/25 08:52:06.542 < Exit [BeforeEach] Backup restore tests @ 09/01/25 08:52:06.55 (8ms) > Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:52:06.55 < Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:52:06.55 (0s) > Enter [It] [tc-id:OADP-437][interop][smoke] MySQL application with filesystem, Kopia @ 09/01/25 08:52:06.55 2025/09/01 08:52:06 Delete all downloadrequest No download requests are found STEP: Create DPA CR @ 09/01/25 08:52:06.556 2025/09/01 08:52:06 kopia 2025/09/01 08:52:06 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "f8b37a9b-d44f-4f8f-83ed-2ae9d999792c", "resourceVersion": "131776", "generation": 1, "creationTimestamp": "2025-09-01T08:52:06Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:52:06Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 09/01/25 08:52:06.634 2025/09/01 08:52:06 Waiting for velero pod to be running 2025/09/01 08:52:06 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' 2025/09/01 08:52:06 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid":
"f8b37a9b-d44f-4f8f-83ed-2ae9d999792c", "resourceVersion": "131776", "generation": 1, "creationTimestamp": "2025-09-01T08:52:06Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:52:06Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "kopia" } }, "features": null, "logFormat": "text" }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 08:52:11.65 2025/09/01 08:52:11 Checking for correct number of running NodeAgent pods... STEP: Installing application for case mysql @ 09/01/25 08:52:11.664 2025/09/01 08:52:11 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY 
[Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace test-oadp-437-kopia] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Check pod status (30 retries left). FAILED - RETRYING: [localhost]: Check pod status (29 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] *** changed: [localhost] FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left). FAILED - RETRYING: [localhost]: Wait until service ready for connections (29 retries left). TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Add dummy data into mysql-data1 pvc] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create md5 hashes for the files] *** changed: [localhost] Pausing for 30 seconds TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Pause After Create md5 hashes for the files] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025/09/01 08:53:15 2025-09-01 08:52:13,041 p=43234 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:52:13,042 p=43234 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:52:13,274 p=43234 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:52:13,274 p=43234 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:52:13,507 p=43234 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:52:13,507 p=43234 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:52:13,742 p=43234 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:52:13,742 p=43234 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:52:13,755 p=43234 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:52:13,755 p=43234 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:52:13,771 p=43234 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:52:13,771 p=43234 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:52:13,782 p=43234 u=1002790000 n=ansible INFO| TASK [Print token] 
************************************************************* 2025-09-01 08:52:13,782 p=43234 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:52:14,069 p=43234 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:52:14,070 p=43234 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:52:14,095 p=43234 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:52:14,095 p=43234 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:52:14,111 p=43234 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:52:14,112 p=43234 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:52:14,113 p=43234 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:52:14,650 p=43234 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:52:14,650 p=43234 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:52:15,396 p=43234 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check namespace test-oadp-437-kopia] *** 2025-09-01 08:52:15,397 p=43234 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 2025-09-01 08:52:15,397 p=43234 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:52:15,744 p=43234 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create namespace] *** 2025-09-01 08:52:15,744 p=43234 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:52:16,592 p=43234 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Deploy a mysql pod] *** 2025-09-01 08:52:16,593 p=43234 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:52:17,202 p=43234 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pod status (30 retries left). 2025-09-01 08:52:22,784 p=43234 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Check pod status (29 retries left). 2025-09-01 08:52:28,394 p=43234 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check pod status] *** 2025-09-01 08:52:28,394 p=43234 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:52:29,022 p=43234 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Copy mysql provision script to pod] *** 2025-09-01 08:52:29,023 p=43234 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:52:29,339 p=43234 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (30 retries left). 2025-09-01 08:52:34,661 p=43234 u=1002790000 n=ansible INFO| FAILED - RETRYING: [localhost]: Wait until service ready for connections (29 retries left). 
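The FAILED - RETRYING lines in this deploy are Ansible's normal retry loop while the mysql pod starts up, not test failures. The role's check is presumably shaped like the following (module choice, delay, and the until condition are assumptions; the actual task source is not shown in the log):

    - name: Check pod status
      kubernetes.core.k8s_info:
        kind: Pod
        namespace: "{{ namespace }}"
        label_selectors:
          - app=mysql
      register: mysql_pods
      retries: 30
      delay: 5
      until: >-
        mysql_pods.resources | length > 0 and
        mysql_pods.resources[0].status.phase == 'Running'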
2025-09-01 08:52:39,954 p=43234 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-09-01 08:52:39,954 p=43234 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:52:41,827 p=43234 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Provision the mysql database] *** 2025-09-01 08:52:41,827 p=43234 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:52:44,607 p=43234 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Add dummy data into mysql-data1 pvc] *** 2025-09-01 08:52:44,608 p=43234 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:52:45,271 p=43234 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Create md5 hashes for the files] *** 2025-09-01 08:52:45,271 p=43234 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:52:45,289 p=43234 u=1002790000 n=ansible INFO| Pausing for 30 seconds 2025-09-01 08:53:15,292 p=43234 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Pause After Create md5 hashes for the files] *** 2025-09-01 08:53:15,292 p=43234 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:53:15,401 p=43234 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:53:15,401 p=43234 u=1002790000 n=ansible INFO| localhost : ok=25 changed=11 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 STEP: Verify Application deployment @ 09/01/25 08:53:15.481 2025/09/01 08:53:15 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025/09/01 08:53:22 2025-09-01 08:53:17,050 p=43772 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:53:17,050 p=43772 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:53:17,302 p=43772 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:53:17,302 p=43772 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:53:17,543 p=43772 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:53:17,543 p=43772 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:53:17,821 p=43772 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:53:17,822 p=43772 u=1002790000 n=ansible 
INFO| changed: [localhost] 2025-09-01 08:53:17,842 p=43772 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:53:17,842 p=43772 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:53:17,870 p=43772 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:53:17,870 p=43772 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:53:17,891 p=43772 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:53:17,891 p=43772 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:53:18,332 p=43772 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:53:18,332 p=43772 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:53:18,367 p=43772 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:53:18,367 p=43772 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:53:18,394 p=43772 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:53:18,394 p=43772 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:53:18,397 p=43772 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:53:19,081 p=43772 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:53:19,081 p=43772 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:53:20,592 p=43772 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** 2025-09-01 08:53:20,592 p=43772 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:53:21,296 p=43772 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-09-01 08:53:21,296 p=43772 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:53:21,933 p=43772 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** 2025-09-01 08:53:21,934 p=43772 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:53:22,820 p=43772 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** 2025-09-01 08:53:22,821 p=43772 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:53:22,826 p=43772 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:53:22,827 p=43772 u=1002790000 n=ansible INFO| localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0 2025/09/01 08:53:22 {{ } { } [{{ } {mysql-data test-oadp-437-kopia 3e1b76be-66f5-4ed0-9286-f090cd89f5b3 132095 0 2025-09-01 08:52:16 +0000 UTC map[app:mysql testlabel:selectors testlabel2:foo] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-data-1756716736 reclaimspace.csiaddons.openshift.io/schedule:@weekly 
volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-09-01 08:52:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:testlabel":{},"f:testlabel2":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-09-01 08:52:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-09-01 08:52:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-09-01 08:52:16 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status}]} {[ReadWriteOnce] nil {map[] map[storage:{{2147483648 0} {} 2Gi BinarySI}]} pvc-3e1b76be-66f5-4ed0-9286-f090cd89f5b3 0xc00108aa30 0xc00108aa40 nil nil } {Bound [ReadWriteOnce] map[storage:{{2147483648 0} {} 2Gi BinarySI}] [] map[] map[] nil}} {{ } {mysql-data1 test-oadp-437-kopia ed7d8fa2-b258-4b7e-8434-5a4ff96d3424 132098 0 2025-09-01 08:52:16 +0000 UTC map[app:mysql testlabel:selectors testlabel2:foo] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes reclaimspace.csiaddons.openshift.io/cronjob:mysql-data1-1756716736 reclaimspace.csiaddons.openshift.io/schedule:@weekly volume.beta.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com volume.kubernetes.io/storage-provisioner:openshift-storage.rbd.csi.ceph.com] [] [kubernetes.io/pvc-protection] [{OpenAPI-Generator Update v1 2025-09-01 08:52:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:testlabel":{},"f:testlabel2":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}} } {csi-addons-manager Update v1 2025-09-01 08:52:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:reclaimspace.csiaddons.openshift.io/cronjob":{},"f:reclaimspace.csiaddons.openshift.io/schedule":{}}}} } {kube-controller-manager Update v1 2025-09-01 08:52:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/bind-completed":{},"f:pv.kubernetes.io/bound-by-controller":{},"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}},"f:spec":{"f:volumeName":{}}} } {kube-controller-manager Update v1 2025-09-01 08:52:16 +0000 UTC FieldsV1 {"f:status":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:phase":{}}} status}]} {[ReadWriteOnce] nil {map[] map[storage:{{2147483648 0} {} 2Gi BinarySI}]} pvc-ed7d8fa2-b258-4b7e-8434-5a4ff96d3424 0xc00108abb0 0xc00108abc0 nil nil } {Bound [ReadWriteOnce] map[storage:{{2147483648 0} {} 2Gi BinarySI}] [] map[] map[] nil}}]} STEP: Creating backup mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 @ 09/01/25 08:53:22.951 2025/09/01 08:53:22 Wait until backup mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 is completed backup phase: Completed 2025/09/01 08:53:42 Verify the PodVolumeBackup is completed successfully and BackupRepository type is matching with DPA.nodeAgent.uploaderType 2025/09/01 08:53:42 apiVersion: 
velero.io/v1 kind: PodVolumeBackup metadata: annotations: velero.io/pvc-name: mysql-data1 creationTimestamp: "2025-09-01T08:53:26Z" generateName: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7- generation: 5 labels: velero.io/backup-name: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 velero.io/backup-uid: 6863554a-899b-4a14-b8d3-95e6b341f6c9 velero.io/pvc-uid: ed7d8fa2-b258-4b7e-8434-5a4ff96d3424 managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:velero.io/pvc-name: {} f:generateName: {} f:labels: .: {} f:velero.io/backup-name: {} f:velero.io/backup-uid: {} f:velero.io/pvc-uid: {} f:ownerReferences: .: {} k:{"uid":"6863554a-899b-4a14-b8d3-95e6b341f6c9"}: {} f:spec: .: {} f:backupStorageLocation: {} f:node: {} f:pod: {} f:repoIdentifier: {} f:tags: .: {} f:backup: {} f:backup-uid: {} f:ns: {} f:pod: {} f:pod-uid: {} f:pvc-uid: {} f:volume: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-09-01T08:53:26Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:path: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:snapshotID: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:53:35Z" name: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7-jx449 namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Backup name: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 uid: 6863554a-899b-4a14-b8d3-95e6b341f6c9 resourceVersion: "133244" uid: b50e023f-bc8f-4daa-b3fe-6db1f90d1ccc spec: backupStorageLocation: ts-dpa-1 node: ip-10-0-99-76.ec2.internal pod: kind: Pod name: mysql-64c9d6466-sbdz2 namespace: test-oadp-437-kopia uid: 08a5649a-a03b-43f7-baf3-2007a674bea4 repoIdentifier: "" tags: backup: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 backup-uid: 6863554a-899b-4a14-b8d3-95e6b341f6c9 ns: test-oadp-437-kopia pod: mysql-64c9d6466-sbdz2 pod-uid: 08a5649a-a03b-43f7-baf3-2007a674bea4 pvc-uid: ed7d8fa2-b258-4b7e-8434-5a4ff96d3424 volume: mysql-data1 uploaderType: kopia volume: mysql-data1 status: completionTimestamp: "2025-09-01T08:53:35Z" path: /host_pods/08a5649a-a03b-43f7-baf3-2007a674bea4/volumes/kubernetes.io~csi/pvc-ed7d8fa2-b258-4b7e-8434-5a4ff96d3424/mount phase: Completed progress: bytesDone: 104857640 totalBytes: 104857640 snapshotID: e2d2ca81f594e2c594f5f84c14e2bc74 startTimestamp: "2025-09-01T08:53:32Z" 2025/09/01 08:53:42 apiVersion: velero.io/v1 kind: PodVolumeBackup metadata: annotations: velero.io/pvc-name: mysql-data creationTimestamp: "2025-09-01T08:53:26Z" generateName: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7- generation: 5 labels: velero.io/backup-name: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 velero.io/backup-uid: 6863554a-899b-4a14-b8d3-95e6b341f6c9 velero.io/pvc-uid: 3e1b76be-66f5-4ed0-9286-f090cd89f5b3 managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:velero.io/pvc-name: {} f:generateName: {} f:labels: .: {} f:velero.io/backup-name: {} f:velero.io/backup-uid: {} f:velero.io/pvc-uid: {} f:ownerReferences: .: {} k:{"uid":"6863554a-899b-4a14-b8d3-95e6b341f6c9"}: {} f:spec: .: {} f:backupStorageLocation: {} f:node: {} f:pod: {} f:repoIdentifier: {} f:tags: .: {} f:backup: {} f:backup-uid: {} f:ns: {} f:pod: {} f:pod-uid: {} f:pvc-uid: {} f:volume: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-09-01T08:53:26Z" - apiVersion: 
velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:path: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:snapshotID: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:53:28Z" name: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7-tmdpl namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Backup name: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 uid: 6863554a-899b-4a14-b8d3-95e6b341f6c9 resourceVersion: "133152" uid: 3588a4de-03bb-4fa3-a5db-459ab75dbbc7 spec: backupStorageLocation: ts-dpa-1 node: ip-10-0-99-76.ec2.internal pod: kind: Pod name: mysql-64c9d6466-sbdz2 namespace: test-oadp-437-kopia uid: 08a5649a-a03b-43f7-baf3-2007a674bea4 repoIdentifier: "" tags: backup: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 backup-uid: 6863554a-899b-4a14-b8d3-95e6b341f6c9 ns: test-oadp-437-kopia pod: mysql-64c9d6466-sbdz2 pod-uid: 08a5649a-a03b-43f7-baf3-2007a674bea4 pvc-uid: 3e1b76be-66f5-4ed0-9286-f090cd89f5b3 volume: mysql-data uploaderType: kopia volume: mysql-data status: completionTimestamp: "2025-09-01T08:53:28Z" path: /host_pods/08a5649a-a03b-43f7-baf3-2007a674bea4/volumes/kubernetes.io~csi/pvc-3e1b76be-66f5-4ed0-9286-f090cd89f5b3/mount phase: Completed progress: bytesDone: 107854713 totalBytes: 107854713 snapshotID: d9401d1784999b0b872cbf12b3dbb233 startTimestamp: "2025-09-01T08:53:26Z" STEP: Verify backup mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 has completed successfully @ 09/01/25 08:53:42.978 2025/09/01 08:53:43 Backup for case mysql succeeded STEP: Delete the application resources mysql @ 09/01/25 08:53:43.032 STEP: Cleanup Application for case mysql @ 09/01/25 08:53:43.032 2025/09/01 08:53:43 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
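The check logged above for the mysql case (each PodVolumeBackup must reach phase Completed with spec.uploaderType equal to DPA.nodeAgent.uploaderType) is implemented in the Go harness; a rough Ansible equivalent, assuming the kubernetes.core collection and the label values shown in the PodVolumeBackup dumps, would be:

- name: Fetch the PodVolumeBackups created for the backup
  kubernetes.core.k8s_info:
    api_version: velero.io/v1
    kind: PodVolumeBackup
    namespace: openshift-adp
    label_selectors:
      - velero.io/backup-name=mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7
  register: pvbs

- name: Assert every PodVolumeBackup completed with the expected uploader
  ansible.builtin.assert:
    that:
      - item.status.phase == 'Completed'
      - item.spec.uploaderType == 'kopia'  # must match DPA.nodeAgent.uploaderType
  loop: "{{ pvbs.resources }}"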
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-437-kopia] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025/09/01 08:54:13 2025-09-01 08:53:45,031 p=44074 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:53:45,032 p=44074 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:53:45,416 p=44074 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:53:45,416 p=44074 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:53:45,778 p=44074 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:53:45,778 p=44074 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:53:46,133 p=44074 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:53:46,134 p=44074 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:53:46,154 p=44074 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:53:46,154 p=44074 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:53:46,176 p=44074 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:53:46,176 p=44074 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:53:46,192 p=44074 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:53:46,193 p=44074 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:53:46,604 p=44074 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:53:46,604 p=44074 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:53:46,634 p=44074 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:53:46,635 p=44074 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:53:46,656 p=44074 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:53:46,657 p=44074 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:53:46,658 p=44074 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:53:47,358 p=44074 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:53:47,358 p=44074 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:54:13,416 p=44074 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-437-kopia] *** 2025-09-01 08:54:13,417 p=44074 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
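Namespace deletion dominates the cleanup time (the task runs from 08:53:47 to 08:54:13 in the ansible log above), which suggests the role waits for finalization. A minimal sketch of such a task; the module and parameters here are assumptions, not the role source:

- name: Remove namespace test-oadp-437-kopia
  kubernetes.core.k8s:
    api_version: v1
    kind: Namespace
    name: test-oadp-437-kopia
    state: absent
    wait: true           # block until the namespace and its finalizers are gone
    wait_timeout: 300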
2025-09-01 08:54:13,417 p=44074 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:54:13,673 p=44074 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:54:13,673 p=44074 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025/09/01 08:54:13 Creating restore mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 for case mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 STEP: Create restore mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 from backup mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 @ 09/01/25 08:54:13.714 2025/09/01 08:54:13 Wait until restore mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 is complete restore phase: InProgress restore phase: InProgress restore phase: Completed 2025/09/01 08:54:43 Verify the PodVolumeBackup and PodVolumeRestore count is equal 2025/09/01 08:54:43 Verify the PodVolumeRestore is completed successfully and uploaderType is matching 2025/09/01 08:54:43 apiVersion: velero.io/v1 kind: PodVolumeRestore metadata: creationTimestamp: "2025-09-01T08:54:16Z" generateName: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7- generation: 4 labels: velero.io/pod-uid: d2454708-be1c-40a2-9703-704ba2ff6a96 velero.io/pvc-uid: b2628173-a069-49c0-9ea4-6eff3afa0072 velero.io/restore-name: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 velero.io/restore-uid: 46b48004-0b4d-47bb-9160-8f45002080a4 managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:velero.io/pod-uid: {} f:velero.io/pvc-uid: {} f:velero.io/restore-name: {} f:velero.io/restore-uid: {} f:ownerReferences: .: {} k:{"uid":"46b48004-0b4d-47bb-9160-8f45002080a4"}: {} f:spec: .: {} f:backupStorageLocation: {} f:pod: {} f:repoIdentifier: {} f:snapshotID: {} f:sourceNamespace: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-09-01T08:54:16Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:54:30Z" name: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7-j7ggr namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Restore name: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 uid: 46b48004-0b4d-47bb-9160-8f45002080a4 resourceVersion: "134238" uid: ff95951f-8c2e-48de-9f2a-0d912e5e9faa spec: backupStorageLocation: ts-dpa-1 pod: kind: Pod name: mysql-64c9d6466-sbdz2 namespace: test-oadp-437-kopia uid: d2454708-be1c-40a2-9703-704ba2ff6a96 repoIdentifier: "" snapshotID: e2d2ca81f594e2c594f5f84c14e2bc74 sourceNamespace: test-oadp-437-kopia uploaderType: kopia volume: mysql-data1 status: completionTimestamp: "2025-09-01T08:54:30Z" phase: Completed progress: bytesDone: 104857640 totalBytes: 104857640 startTimestamp: "2025-09-01T08:54:27Z" 2025/09/01 08:54:43 apiVersion: velero.io/v1 kind: PodVolumeRestore metadata: creationTimestamp: "2025-09-01T08:54:16Z" generateName: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7- generation: 4 labels: velero.io/pod-uid: d2454708-be1c-40a2-9703-704ba2ff6a96 velero.io/pvc-uid: 26fe3e95-7081-48bb-a6b3-b832506e0ed6 velero.io/restore-name: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 velero.io/restore-uid: 46b48004-0b4d-47bb-9160-8f45002080a4 managedFields: - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels:
.: {} f:velero.io/pod-uid: {} f:velero.io/pvc-uid: {} f:velero.io/restore-name: {} f:velero.io/restore-uid: {} f:ownerReferences: .: {} k:{"uid":"46b48004-0b4d-47bb-9160-8f45002080a4"}: {} f:spec: .: {} f:backupStorageLocation: {} f:pod: {} f:repoIdentifier: {} f:snapshotID: {} f:sourceNamespace: {} f:uploaderType: {} f:volume: {} f:status: .: {} f:progress: {} manager: velero-server operation: Update time: "2025-09-01T08:54:16Z" - apiVersion: velero.io/v1 fieldsType: FieldsV1 fieldsV1: f:status: f:completionTimestamp: {} f:phase: {} f:progress: f:bytesDone: {} f:totalBytes: {} f:startTimestamp: {} manager: node-agent-server operation: Update time: "2025-09-01T08:54:34Z" name: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7-tn4pr namespace: openshift-adp ownerReferences: - apiVersion: velero.io/v1 controller: true kind: Restore name: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 uid: 46b48004-0b4d-47bb-9160-8f45002080a4 resourceVersion: "134303" uid: 302aa10e-ad55-4c3c-a55d-3c160c3776e4 spec: backupStorageLocation: ts-dpa-1 pod: kind: Pod name: mysql-64c9d6466-sbdz2 namespace: test-oadp-437-kopia uid: d2454708-be1c-40a2-9703-704ba2ff6a96 repoIdentifier: "" snapshotID: d9401d1784999b0b872cbf12b3dbb233 sourceNamespace: test-oadp-437-kopia uploaderType: kopia volume: mysql-data status: completionTimestamp: "2025-09-01T08:54:34Z" phase: Completed progress: bytesDone: 107854713 totalBytes: 107854713 startTimestamp: "2025-09-01T08:54:33Z" STEP: Verify restore mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7 has completed successfully @ 09/01/25 08:54:43.798 STEP: Verify Application restore @ 09/01/25 08:54:43.802 STEP: Verify Application deployment for case mysql @ 09/01/25 08:54:43.802 2025/09/01 08:54:43 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available.
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** ok: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** changed: [localhost] TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=19  changed=7  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025/09/01 08:54:49 2025-09-01 08:54:45,205 p=44279 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:54:45,205 p=44279 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:54:45,436 p=44279 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:54:45,436 p=44279 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:54:45,669 p=44279 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:54:45,669 p=44279 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:54:45,904 p=44279 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:54:45,904 p=44279 u=1002790000 n=ansible 
INFO| changed: [localhost] 2025-09-01 08:54:45,917 p=44279 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:54:45,917 p=44279 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:54:45,933 p=44279 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:54:45,933 p=44279 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:54:45,944 p=44279 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:54:45,945 p=44279 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:54:46,229 p=44279 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:54:46,229 p=44279 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:54:46,254 p=44279 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:54:46,255 p=44279 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:54:46,270 p=44279 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:54:46,271 p=44279 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:54:46,272 p=44279 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:54:46,801 p=44279 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:54:46,801 p=44279 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:54:47,707 p=44279 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Check mysql pod status] *** 2025-09-01 08:54:47,707 p=44279 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:54:48,098 p=44279 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Wait until service ready for connections] *** 2025-09-01 08:54:48,098 p=44279 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:54:48,614 p=44279 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Query the inserted data] *** 2025-09-01 08:54:48,614 p=44279 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:54:49,253 p=44279 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Validate test1 file has correct md5 hash] *** 2025-09-01 08:54:49,253 p=44279 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:54:49,257 p=44279 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:54:49,257 p=44279 u=1002790000 n=ansible INFO| localhost : ok=19 changed=7 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0 < Exit [It] [tc-id:OADP-437][interop][smoke] MySQL application with filesystem, Kopia @ 09/01/25 08:54:49.299 (2m42.748s) > Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 08:54:49.299 2025/09/01 08:54:49 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0 < Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 08:54:49.299 (0s) > Enter [DeferCleanup (Each)] Application backup @ 09/01/25 08:54:49.299 < Exit [DeferCleanup (Each)] Application backup @ 
09/01/25 08:54:49.304 (5ms) > Enter [DeferCleanup (Each)] Application backup @ 09/01/25 08:54:49.304 < Exit [DeferCleanup (Each)] Application backup @ 09/01/25 08:54:49.304 (0s) > Enter [DeferCleanup (Each)] Application backup @ 09/01/25 08:54:49.304 2025/09/01 08:54:49 Cleaning app 2025/09/01 08:54:49 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
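The restore sequence recorded earlier (create, then poll until the phase moves from InProgress to Completed) maps directly onto the Velero Restore API. A minimal sketch using the names from this log and assuming kubernetes.core; the real harness drives this through the Go client:

- name: Create a restore from the backup
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: velero.io/v1
      kind: Restore
      metadata:
        name: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7
        namespace: openshift-adp
      spec:
        backupName: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7

- name: Wait until the restore completes
  kubernetes.core.k8s_info:
    api_version: velero.io/v1
    kind: Restore
    name: mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7
    namespace: openshift-adp
  register: restore
  until: restore.resources | length > 0 and restore.resources[0].get('status', {}).get('phase') == 'Completed'
  retries: 30
  delay: 10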
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-437-kopia] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025/09/01 08:55:18 2025-09-01 08:54:50,682 p=44602 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:54:50,682 p=44602 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:54:50,917 p=44602 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:54:50,917 p=44602 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:54:51,158 p=44602 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:54:51,158 p=44602 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:54:51,391 p=44602 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:54:51,391 p=44602 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:54:51,404 p=44602 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:54:51,404 p=44602 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:54:51,420 p=44602 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:54:51,420 p=44602 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:54:51,431 p=44602 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:54:51,431 p=44602 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:54:51,717 p=44602 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:54:51,718 p=44602 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:54:51,743 p=44602 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:54:51,744 p=44602 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:54:51,760 p=44602 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:54:51,760 p=44602 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:54:51,761 p=44602 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:54:52,295 p=44602 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:54:52,295 p=44602 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:18,048 p=44602 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-mysql : Remove namespace test-oadp-437-kopia] *** 2025-09-01 08:55:18,048 p=44602 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
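The JustAfterEach hook above only records the must-gather image; on a failing spec, diagnostics are collected with it. Run by hand, the equivalent would be the following task (the oc invocation is standard; wrapping it in an Ansible task is an assumption for illustration):

- name: Collect OADP diagnostics with must-gather
  ansible.builtin.command: >
    oc adm must-gather
    --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
    --dest-dir=/tmp/oadp-must-gather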
2025-09-01 08:55:18,048 p=44602 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:55:18,298 p=44602 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:55:18,299 p=44602 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 < Exit [DeferCleanup (Each)] Application backup @ 09/01/25 08:55:18.34 (29.036s) > Enter [DeferCleanup (Each)] Application backup @ 09/01/25 08:55:18.34 2025/09/01 08:55:18 Cleaning setup resources for the backup < Exit [DeferCleanup (Each)] Application backup @ 09/01/25 08:55:18.34 (0s) > Enter [DeferCleanup (Each)] Application backup @ 09/01/25 08:55:18.34 < Exit [DeferCleanup (Each)] Application backup @ 09/01/25 08:55:18.349 (9ms) • [191.807 seconds] ------------------------------ SSSSSSS ------------------------------ Backup restore tests Application backup [tc-id:OADP-97][interop] Empty-project application with Restic /alabama/cspi/e2e/app_backup/backup_restore.go:191 > Enter [BeforeEach] Backup restore tests @ 09/01/25 08:55:18.349 < Exit [BeforeEach] Backup restore tests @ 09/01/25 08:55:18.359 (10ms) > Enter [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:55:18.359 < Exit [JustBeforeEach] TOP-LEVEL @ 09/01/25 08:55:18.359 (0s) > Enter [It] [tc-id:OADP-97][interop] Empty-project application with Restic @ 09/01/25 08:55:18.359 2025/09/01 08:55:18 Delete all downloadrequest No download requests are found STEP: Create DPA CR @ 09/01/25 08:55:18.368 2025/09/01 08:55:18 restic 2025/09/01 08:55:18 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid": "d7ac9d80-586d-4fcd-a9ef-bc12d6a050c2", "resourceVersion": "135059", "generation": 1, "creationTimestamp": "2025-09-01T08:55:18Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:55:18Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "restic" } }, "features": null, "logFormat": "text" }, "status": {} } Delete all the backups that remained in the phase InProgress Deleting backup CRs in progress Deletion of backup CRs in progress completed Delete all the restores that remained in the phase InProgress Deleting restore CRs in progress Deletion of restore CRs in progress completed STEP: Verify DPA CR setup @ 09/01/25 08:55:18.434 2025/09/01 08:55:18 Waiting for velero pod to be running 2025/09/01 08:55:18 Wait for DPA status.condition.reason to be 'Completed' and message to be 'Reconcile complete' 2025/09/01 08:55:18 { "metadata": { "name": "ts-dpa", "namespace": "openshift-adp", "uid":
"d7ac9d80-586d-4fcd-a9ef-bc12d6a050c2", "resourceVersion": "135059", "generation": 1, "creationTimestamp": "2025-09-01T08:55:18Z", "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "oadp.openshift.io/v1alpha1", "time": "2025-09-01T08:55:18Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { ".": {}, "f:backupLocations": {}, "f:configuration": { ".": {}, "f:nodeAgent": { ".": {}, "f:enable": {}, "f:podConfig": { ".": {}, "f:resourceAllocations": {} }, "f:uploaderType": {} }, "f:velero": { ".": {}, "f:defaultPlugins": {}, "f:disableFsBackup": {} } }, "f:logFormat": {}, "f:podDnsConfig": {}, "f:snapshotLocations": {} } } } ] }, "spec": { "backupLocations": [ { "velero": { "provider": "aws", "config": { "region": "us-east-1" }, "credential": { "name": "cloud-credentials", "key": "cloud" }, "objectStorage": { "bucket": "ci-op-cl9vhfrj-interopoadp", "prefix": "velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7" }, "default": true } } ], "snapshotLocations": [], "podDnsConfig": {}, "configuration": { "velero": { "defaultPlugins": [ "openshift", "aws", "kubevirt" ], "disableFsBackup": false }, "nodeAgent": { "enable": true, "podConfig": { "resourceAllocations": {} }, "uploaderType": "restic" } }, "features": null, "logFormat": "text" }, "status": {} } STEP: Prepare backup resources, depending on the volumes backup type @ 09/01/25 08:55:23.465 2025/09/01 08:55:23 Checking for correct number of running NodeAgent pods... STEP: Installing application for case empty-project-e2e @ 09/01/25 08:55:23.487 2025/09/01 08:55:23 Using admin kubeconfig for with_deploy operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
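The DPA JSON dumped above for the Restic case corresponds to roughly this manifest (field values are taken verbatim from the log; the DataProtectionApplication kind is the standard one for the oadp.openshift.io/v1alpha1 API):

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: ts-dpa
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        config:
          region: us-east-1
        credential:
          name: cloud-credentials
          key: cloud
        objectStorage:
          bucket: ci-op-cl9vhfrj-interopoadp
          prefix: velero-e2e-26134bd4-870b-11f0-8ef4-0a580a81b6e7
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
        - kubevirt
      disableFsBackup: false
    nodeAgent:
      enable: true
      uploaderType: restic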
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Deploy project with labels and selectors] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/09/01 08:55:27 2025-09-01 08:55:24,869 p=44824 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:55:24,869 p=44824 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:55:25,115 p=44824 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:55:25,115 p=44824 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:55:25,357 p=44824 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:55:25,357 p=44824 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:55:25,590 p=44824 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:55:25,590 p=44824 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:55:25,603 p=44824 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:55:25,603 p=44824 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:25,619 p=44824 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:55:25,619 p=44824 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:25,630 p=44824 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:55:25,630 p=44824 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:55:25,914 p=44824 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:55:25,914 p=44824 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:25,938 p=44824 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:55:25,939 p=44824 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:25,954 p=44824 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:55:25,954 p=44824 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:25,956 p=44824 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:55:26,486 p=44824 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:55:26,486 p=44824 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:27,258 p=44824 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Deploy project with labels and selectors] *** 2025-09-01 08:55:27,258 p=44824 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
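The "correct number of running NodeAgent pods" check noted before the app install compares the node-agent DaemonSet pods against the eligible nodes. A sketch of the query half, assuming the name=node-agent pod label that Velero applies to its node-agent DaemonSet:

- name: Count running node-agent pods
  kubernetes.core.k8s_info:
    api_version: v1
    kind: Pod
    namespace: openshift-adp
    label_selectors:
      - name=node-agent
    field_selectors:
      - status.phase=Running
  register: node_agent_pods

- name: Report the count (the harness asserts it equals the eligible node count)
  ansible.builtin.debug:
    msg: "Running node-agent pods: {{ node_agent_pods.resources | length }}"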
2025-09-01 08:55:27,258 p=44824 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:55:27,290 p=44824 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:55:27,290 p=44824 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 STEP: Verify Application deployment @ 09/01/25 08:55:27.33 2025/09/01 08:55:27 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
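The recurring Extract/Map task pair in the preamble leans on the fixed offset between Kubernetes and OpenShift minor versions (OCP 4.y ships Kubernetes 1.(y+13), so 1.32 maps to 4.19). A sketch of the idea; the module choices and variable names here are assumptions, not the role source:

- name: Extract Kubernetes minor version from cluster
  ansible.builtin.shell: oc version -o json | jq -r '.serverVersion.minor'
  register: k8s_minor
  changed_when: false

- name: Map Kubernetes minor to OCP release
  ansible.builtin.set_fact:
    # strip a possible trailing '+' before doing the arithmetic
    ocp_release: "4.{{ (k8s_minor.stdout | regex_replace('[^0-9]', '') | int) - 13 }}"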
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Check project status] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/09/01 08:55:31 2025-09-01 08:55:28,704 p=45036 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:55:28,704 p=45036 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:55:28,950 p=45036 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:55:28,951 p=45036 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:55:29,195 p=45036 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:55:29,196 p=45036 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:55:29,433 p=45036 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:55:29,433 p=45036 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:55:29,446 p=45036 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:55:29,446 p=45036 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:29,462 p=45036 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:55:29,462 p=45036 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:29,473 p=45036 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:55:29,473 p=45036 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:55:29,756 p=45036 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:55:29,756 p=45036 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:29,782 p=45036 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:55:29,782 p=45036 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:29,798 p=45036 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:55:29,798 p=45036 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:29,800 p=45036 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:55:30,332 p=45036 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:55:30,332 p=45036 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:31,087 p=45036 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Check project status] *** 2025-09-01 08:55:31,087 p=45036 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:55:31,088 p=45036 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:31,109 p=45036 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:55:31,110 p=45036 u=1002790000 n=ansible INFO| localhost : ok=16 changed=4 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2025/09/01 08:55:31 {{ } { } []} STEP: Creating backup empty-project-e2e-62670aa0-8711-11f0-8ef4-0a580a81b6e7 @ 09/01/25 08:55:31.157 2025/09/01 08:55:31 Wait until backup empty-project-e2e-62670aa0-8711-11f0-8ef4-0a580a81b6e7 is completed backup phase: Completed STEP: Verify backup empty-project-e2e-62670aa0-8711-11f0-8ef4-0a580a81b6e7 has completed successfully @ 09/01/25 08:55:51.175 2025/09/01 08:55:51 Backup for case empty-project-e2e succeeded STEP: Delete the application resources empty-project-e2e @ 09/01/25 08:55:51.214 STEP: Cleanup Application for case empty-project-e2e @ 09/01/25 08:55:51.214 2025/09/01 08:55:51 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
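The backup created above for the empty project is a namespace-scoped Velero Backup CR pointed at the BSL defined by the DPA. Its rough shape, with includedNamespaces inferred from the test namespace and defaultVolumesToFsBackup assumed for this Restic case:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: empty-project-e2e-62670aa0-8711-11f0-8ef4-0a580a81b6e7
  namespace: openshift-adp
spec:
  includedNamespaces:
    - test-oadp-97
  storageLocation: ts-dpa-1
  defaultVolumesToFsBackup: true  # assumed; drives the Restic/Kopia file-system path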
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Remove namespace test-oadp-97] *** changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/09/01 08:56:05 2025-09-01 08:55:52,586 p=45249 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:55:52,587 p=45249 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:55:52,832 p=45249 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:55:52,832 p=45249 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:55:53,075 p=45249 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:55:53,076 p=45249 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:55:53,305 p=45249 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:55:53,305 p=45249 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:55:53,319 p=45249 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:55:53,319 p=45249 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:53,335 p=45249 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:55:53,335 p=45249 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:53,345 p=45249 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:55:53,346 p=45249 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:55:53,637 p=45249 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:55:53,638 p=45249 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:53,662 p=45249 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:55:53,663 p=45249 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:53,679 p=45249 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:55:53,679 p=45249 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:55:53,681 p=45249 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:55:54,210 p=45249 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:55:54,210 p=45249 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:56:04,966 p=45249 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Remove namespace test-oadp-97] *** 2025-09-01 08:56:04,966 p=45249 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:56:04,966 p=45249 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:56:05,040 p=45249 u=1002790000 n=ansible INFO| PLAY RECAP ********************************************************************* 2025-09-01 08:56:05,040 p=45249 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2025/09/01 08:56:05 Creating restore empty-project-e2e-62670aa0-8711-11f0-8ef4-0a580a81b6e7 for case empty-project-e2e-62670aa0-8711-11f0-8ef4-0a580a81b6e7 STEP: Create restore empty-project-e2e-62670aa0-8711-11f0-8ef4-0a580a81b6e7 from backup empty-project-e2e-62670aa0-8711-11f0-8ef4-0a580a81b6e7 @ 09/01/25 08:56:05.082 2025/09/01 08:56:05 Wait until restore empty-project-e2e-62670aa0-8711-11f0-8ef4-0a580a81b6e7 is complete restore phase: Completed 2025/09/01 08:56:15 No PodVolumeBackup CR found for the Restore STEP: Verify restore empty-project-e2e-62670aa0-8711-11f0-8ef4-0a580a81b6e7 has completed successfully @ 09/01/25 08:56:15.128 STEP: Verify Application restore @ 09/01/25 08:56:15.144 STEP: Verify Application deployment for case empty-project-e2e @ 09/01/25 08:56:15.144 2025/09/01 08:56:15 Using admin kubeconfig for with_validate operation: /home/jenkins/.kube/config [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' [WARNING]: Found variable using reserved name: namespace PLAY [localhost] *************************************************************** TASK [Gathering Facts] ********************************************************* ok: [localhost] TASK [include_vars] ************************************************************ ok: [localhost] TASK [Print admin kubeconfig path] ********************************************* ok: [localhost] => {  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Print user kubeconfig path] ********************************************** ok: [localhost] => {  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config" } TASK [Remove all the contents from the file] *********************************** changed: [localhost] TASK [Get cluster endpoint (from admin kubeconfig)] **************************** changed: [localhost] TASK [Get admin token] ********************************************************* changed: [localhost] TASK [Get user token] ********************************************************** changed: [localhost] TASK [Set core facts (admin + user token)] ************************************* ok: [localhost] TASK [Choose token based on non_admin flag] ************************************ ok: [localhost] TASK [Print token] ************************************************************* ok: [localhost] => {  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } TASK [Extract Kubernetes minor version from cluster] *************************** ok: [localhost] TASK [Map Kubernetes minor to OCP release] ************************************* ok: [localhost] TASK [set_fact] **************************************************************** ok: [localhost] PLAY [Execute Task] ************************************************************ TASK [Gathering Facts] ********************************************************* ok: [localhost] [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
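Because the project is empty, the harness logs "No PodVolumeBackup CR found for the Restore" above instead of the count comparison it performed for mysql. That comparison can be sketched as two list calls plus an assert, assuming kubernetes.core and the velero.io/backup-name and velero.io/restore-name labels shown in the earlier CR dumps:

- name: List PodVolumeBackups for the backup
  kubernetes.core.k8s_info:
    api_version: velero.io/v1
    kind: PodVolumeBackup
    namespace: openshift-adp
    label_selectors:
      - velero.io/backup-name=mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7
  register: pvbs

- name: List PodVolumeRestores for the restore
  kubernetes.core.k8s_info:
    api_version: velero.io/v1
    kind: PodVolumeRestore
    namespace: openshift-adp
    label_selectors:
      - velero.io/restore-name=mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7
  register: pvrs

- name: Assert the PodVolumeBackup and PodVolumeRestore counts match
  ansible.builtin.assert:
    that:
      - pvbs.resources | length == pvrs.resources | length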
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Check project status] *** ok: [localhost] PLAY RECAP ********************************************************************* localhost : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025/09/01 08:56:19 2025-09-01 08:56:16,608 p=45462 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] *********************************** 2025-09-01 08:56:16,608 p=45462 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:56:16,896 p=45462 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] **************************** 2025-09-01 08:56:16,896 p=45462 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:56:17,150 p=45462 u=1002790000 n=ansible INFO| TASK [Get admin token] ********************************************************* 2025-09-01 08:56:17,150 p=45462 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:56:17,386 p=45462 u=1002790000 n=ansible INFO| TASK [Get user token] ********************************************************** 2025-09-01 08:56:17,386 p=45462 u=1002790000 n=ansible INFO| changed: [localhost] 2025-09-01 08:56:17,400 p=45462 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] ************************************* 2025-09-01 08:56:17,400 p=45462 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:56:17,416 p=45462 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************ 2025-09-01 08:56:17,417 p=45462 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:56:17,427 p=45462 u=1002790000 n=ansible INFO| TASK [Print token] ************************************************************* 2025-09-01 08:56:17,428 p=45462 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" } 2025-09-01 08:56:17,721 p=45462 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] *************************** 2025-09-01 08:56:17,721 p=45462 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:56:17,747 p=45462 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] ************************************* 2025-09-01 08:56:17,747 p=45462 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:56:17,763 p=45462 u=1002790000 n=ansible INFO| TASK [set_fact] **************************************************************** 2025-09-01 08:56:17,763 p=45462 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:56:17,764 p=45462 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************ 2025-09-01 08:56:18,310 p=45462 u=1002790000 n=ansible INFO| TASK [Gathering Facts] ********************************************************* 2025-09-01 08:56:18,311 p=45462 u=1002790000 n=ansible INFO| ok: [localhost] 2025-09-01 08:56:19,124 p=45462 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Check project status] *** 2025-09-01 08:56:19,124 p=45462 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work. 
2025-09-01 08:56:19,124 p=45462 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:56:19,145 p=45462 u=1002790000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-09-01 08:56:19,146 p=45462 u=1002790000 n=ansible INFO| localhost : ok=16 changed=4 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
< Exit [It] [tc-id:OADP-97][interop] Empty-project application with Restic @ 09/01/25 08:56:19.186 (1m0.826s)
> Enter [JustAfterEach] TOP-LEVEL @ 09/01/25 08:56:19.186
2025/09/01 08:56:19 Using Must-gather image: registry.redhat.io/oadp/oadp-mustgather-rhel9:1.5.0
< Exit [JustAfterEach] TOP-LEVEL @ 09/01/25 08:56:19.186 (0s)
> Enter [DeferCleanup (Each)] Application backup @ 09/01/25 08:56:19.186
< Exit [DeferCleanup (Each)] Application backup @ 09/01/25 08:56:19.191 (5ms)
> Enter [DeferCleanup (Each)] Application backup @ 09/01/25 08:56:19.191
< Exit [DeferCleanup (Each)] Application backup @ 09/01/25 08:56:19.191 (0s)
> Enter [DeferCleanup (Each)] Application backup @ 09/01/25 08:56:19.191
2025/09/01 08:56:19 Cleaning app
2025/09/01 08:56:19 Using admin kubeconfig for with_cleanup operation: /home/jenkins/.kube/config
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Found variable using reserved name: namespace
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [include_vars] ************************************************************
ok: [localhost]
TASK [Print admin kubeconfig path] *********************************************
ok: [localhost] => {
  "msg": "Admin KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Print user kubeconfig path] **********************************************
ok: [localhost] => {
  "msg": "User KUBECONFIG path: /home/jenkins/.kube/config"
}
TASK [Remove all the contents from the file] ***********************************
changed: [localhost]
TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
changed: [localhost]
TASK [Get admin token] *********************************************************
changed: [localhost]
TASK [Get user token] **********************************************************
changed: [localhost]
TASK [Set core facts (admin + user token)] *************************************
ok: [localhost]
TASK [Choose token based on non_admin flag] ************************************
ok: [localhost]
TASK [Print token] *************************************************************
ok: [localhost] => {
  "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E"
}
TASK [Extract Kubernetes minor version from cluster] ***************************
ok: [localhost]
TASK [Map Kubernetes minor to OCP release] *************************************
ok: [localhost]
TASK [set_fact] ****************************************************************
ok: [localhost]
PLAY [Execute Task] ************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
[WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
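(Note: the cleanup play below tears down the per-case namespace via the ocp-project role. A hand-run approximation, a sketch assuming the same admin kubeconfig, not a command taken from this run:

    # Equivalent of the "Remove namespace test-oadp-97" task below;
    # --wait blocks until the namespace and its contents are gone
    oc delete namespace test-oadp-97 --wait=true
)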
TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Remove namespace test-oadp-97] ***
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
2025/09/01 08:56:33
2025-09-01 08:56:20,620 p=45673 u=1002790000 n=ansible INFO| TASK [Remove all the contents from the file] ***********************************
2025-09-01 08:56:20,620 p=45673 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:56:20,928 p=45673 u=1002790000 n=ansible INFO| TASK [Get cluster endpoint (from admin kubeconfig)] ****************************
2025-09-01 08:56:20,928 p=45673 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:56:21,174 p=45673 u=1002790000 n=ansible INFO| TASK [Get admin token] *********************************************************
2025-09-01 08:56:21,174 p=45673 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:56:21,435 p=45673 u=1002790000 n=ansible INFO| TASK [Get user token] **********************************************************
2025-09-01 08:56:21,435 p=45673 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:56:21,454 p=45673 u=1002790000 n=ansible INFO| TASK [Set core facts (admin + user token)] *************************************
2025-09-01 08:56:21,454 p=45673 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:56:21,472 p=45673 u=1002790000 n=ansible INFO| TASK [Choose token based on non_admin flag] ************************************
2025-09-01 08:56:21,472 p=45673 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:56:21,484 p=45673 u=1002790000 n=ansible INFO| TASK [Print token] *************************************************************
2025-09-01 08:56:21,484 p=45673 u=1002790000 n=ansible INFO| ok: [localhost] => { "msg": "Token: sha256~49A80XC5yxe4iL0ZC-tjA6XKods1KogF6KBMn6voZ8E" }
2025-09-01 08:56:21,887 p=45673 u=1002790000 n=ansible INFO| TASK [Extract Kubernetes minor version from cluster] ***************************
2025-09-01 08:56:21,888 p=45673 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:56:21,922 p=45673 u=1002790000 n=ansible INFO| TASK [Map Kubernetes minor to OCP release] *************************************
2025-09-01 08:56:21,922 p=45673 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:56:21,941 p=45673 u=1002790000 n=ansible INFO| TASK [set_fact] ****************************************************************
2025-09-01 08:56:21,942 p=45673 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:56:21,943 p=45673 u=1002790000 n=ansible INFO| PLAY [Execute Task] ************************************************************
2025-09-01 08:56:22,581 p=45673 u=1002790000 n=ansible INFO| TASK [Gathering Facts] *********************************************************
2025-09-01 08:56:22,581 p=45673 u=1002790000 n=ansible INFO| ok: [localhost]
2025-09-01 08:56:33,470 p=45673 u=1002790000 n=ansible INFO| TASK [/alabama/cspi/sample-applications/ocpdeployer/ansible/roles/ocp-project : Remove namespace test-oadp-97] ***
2025-09-01 08:56:33,471 p=45673 u=1002790000 n=ansible WARNING| [WARNING]: kubernetes<24.2.0 is not supported or tested. Some features may not work.
2025-09-01 08:56:33,471 p=45673 u=1002790000 n=ansible INFO| changed: [localhost]
2025-09-01 08:56:33,530 p=45673 u=1002790000 n=ansible INFO| PLAY RECAP *********************************************************************
2025-09-01 08:56:33,530 p=45673 u=1002790000 n=ansible INFO| localhost : ok=16 changed=5 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
< Exit [DeferCleanup (Each)] Application backup @ 09/01/25 08:56:33.581 (14.39s)
> Enter [DeferCleanup (Each)] Application backup @ 09/01/25 08:56:33.581
2025/09/01 08:56:33 Cleaning setup resources for the backup
< Exit [DeferCleanup (Each)] Application backup @ 09/01/25 08:56:33.581 (0s)
> Enter [DeferCleanup (Each)] Application backup @ 09/01/25 08:56:33.581
< Exit [DeferCleanup (Each)] Application backup @ 09/01/25 08:56:33.59 (9ms)
• [75.241 seconds]
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[SynchronizedAfterSuite]
/alabama/cspi/e2e/e2e_suite_test.go:218
> Enter [SynchronizedAfterSuite] TOP-LEVEL @ 09/01/25 08:56:33.59
2025/09/01 08:56:33 Deleting Velero CR
< Exit [SynchronizedAfterSuite] TOP-LEVEL @ 09/01/25 08:56:33.597 (7ms)
> Enter [SynchronizedAfterSuite] TOP-LEVEL @ 09/01/25 08:56:33.597
< Exit [SynchronizedAfterSuite] TOP-LEVEL @ 09/01/25 08:56:33.598 (0s)
[SynchronizedAfterSuite] PASSED [0.007 seconds]
------------------------------
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
> Enter [ReportAfterSuite] TOP-LEVEL @ 09/01/25 08:56:33.598
< Exit [ReportAfterSuite] TOP-LEVEL @ 09/01/25 08:56:33.61 (12ms)
[ReportAfterSuite] PASSED [0.012 seconds]
------------------------------
Summarizing 2 Failures:
[FAIL] [datamover] DataMover: Backup/Restore stateful application with CSI [It] [tc-id:OADP-440][interop] Cassandra application
/alabama/cspi/test_common/backup_restore_app_case.go:46
[FAIL] Backup hooks tests Pre exec hook [It] [tc-id:OADP-92][interop][smoke] Cassandra app with Restic
/alabama/cspi/test_common/backup_restore_app_case.go:46
Ran 8 of 227 Specs in 2753.439 seconds
FAIL! -- 6 Passed | 2 Failed | 0 Pending | 219 Skipped
--- FAIL: TestOADPE2E (2753.46s)
FAIL
Ginkgo ran 1 suite in 45m58.940604926s
Test Suite Failed
[must-gather ] OUT 2025-09-01T08:57:03.700703355Z Using must-gather plug-in image: registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
ClusterID: e17d3b79-79c5-4c1c-969c-508913f341b5
ClientVersion: 4.17.10
ClusterVersion: Stable at "4.20.0-0.nightly-2025-08-31-160814"
ClusterOperators:
clusteroperator/operator-lifecycle-manager is not upgradeable because ClusterServiceVersions blocking minor version upgrades to 4.21.0 or higher:
- maximum supported OCP version for openshift-storage/odf-dependencies.v4.19.4-rhodf is 4.20
- maximum supported OCP version for openshift-storage/odf-operator.v4.19.4-rhodf is 4.20
[must-gather ] OUT 2025-09-01T08:57:03.78564656Z namespace/openshift-must-gather-lwgjp created
[must-gather ] OUT 2025-09-01T08:57:03.796444885Z clusterrolebinding.rbac.authorization.k8s.io/must-gather-kxq65 created
Warning: spec.nodeSelector[node-role.kubernetes.io/master]: use "node-role.kubernetes.io/control-plane" instead
[must-gather ] OUT 2025-09-01T08:57:03.838573796Z pod for plug-in image registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 created
[must-gather-92r5q] POD 2025-09-01T08:57:13.259773075Z volume percentage checker started.....
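(Note: the gather phase above can be reproduced outside the suite with the same plug-in image. A minimal sketch; --dest-dir is an arbitrary choice for illustration, not taken from this run:

    oc adm must-gather \
      --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 \
      --dest-dir=/tmp/oadp-must-gather
)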
[must-gather-92r5q] POD 2025-09-01T08:57:13.266947742Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:57:14.199510851Z W0901 08:57:14.199463 3 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
[must-gather-92r5q] POD 2025-09-01T08:57:14.254463629Z W0901 08:57:14.254419 3 warnings.go:70] kubevirt.io/v1 VirtualMachineInstancePresets is now deprecated and will be removed in v2.
[must-gather-92r5q] POD 2025-09-01T08:57:14.638096211Z W0901 08:57:14.638026 3 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
[must-gather-92r5q] POD 2025-09-01T08:57:17.160289641Z W0901 08:57:17.160245 3 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
[must-gather-92r5q] POD 2025-09-01T08:57:17.223549892Z W0901 08:57:17.223513 3 warnings.go:70] kubevirt.io/v1 VirtualMachineInstancePresets is now deprecated and will be removed in v2.
[must-gather-92r5q] POD 2025-09-01T08:57:17.775037388Z W0901 08:57:17.774992 3 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
[must-gather-92r5q] POD 2025-09-01T08:57:18.283679268Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:57:23.294390773Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:57:28.282762708Z download request download url timeout, check velero server logs for errors.
backup storage location may not be available
[must-gather-92r5q] POD 2025-09-01T08:57:28.303891950Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:57:33.482046570Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:57:38.290742123Z download request download url timeout, check velero server logs for errors.
backup storage location may not be available
[must-gather-92r5q] POD 2025-09-01T08:57:38.493959600Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:57:43.503126036Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:57:48.292573740Z Get "https://172.30.0.1:443/apis/velero.io/v1/namespaces/openshift-adp/downloadrequests/mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7-352a789e-8477-4bd2-b78b-a14648753c1a": context deadline exceeded
[must-gather-92r5q] POD 2025-09-01T08:57:48.512481635Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:57:53.522118665Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:57:58.294485639Z download request download url timeout, check velero server logs for errors.
backup storage location may not be available
[must-gather-92r5q] POD 2025-09-01T08:57:58.531877678Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:58:03.541267222Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:58:08.296112138Z download request download url timeout, check velero server logs for errors.
backup storage location may not be available
[must-gather-92r5q] POD 2025-09-01T08:58:08.568948235Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:58:13.583267767Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:58:18.298361818Z Get "https://172.30.0.1:443/apis/velero.io/v1/namespaces/openshift-adp/downloadrequests/todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7-a1494238-afbe-4e34-8b73-88f590d9b2dd": context deadline exceeded
[must-gather-92r5q] POD 2025-09-01T08:58:18.592828657Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:58:23.603087274Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:58:28.300805653Z download request download url timeout, check velero server logs for errors.
backup storage location may not be available
[must-gather-92r5q] POD 2025-09-01T08:58:28.612321590Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:58:33.626106773Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:58:38.305894897Z client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
[must-gather-92r5q] POD 2025-09-01T08:58:38.635970278Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:58:43.645530287Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:58:48.307523978Z download request download url timeout, check velero server logs for errors.
backup storage location may not be available
[must-gather-92r5q] POD 2025-09-01T08:58:48.656462347Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:58:53.666714861Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:58:58.309381363Z download request download url timeout, check velero server logs for errors.
backup storage location may not be available
[must-gather-92r5q] POD 2025-09-01T08:58:58.676636441Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:59:03.686542811Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:59:08.311106963Z download request download url timeout, check velero server logs for errors.
backup storage location may not be available
[must-gather-92r5q] POD 2025-09-01T08:59:08.696976513Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:59:13.707294345Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:59:18.312253372Z download request download url timeout, check velero server logs for errors.
backup storage location may not be available
[must-gather-92r5q] POD 2025-09-01T08:59:18.717223335Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:59:23.727501834Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:59:28.314195251Z Get "https://172.30.0.1:443/apis/velero.io/v1/namespaces/openshift-adp/downloadrequests/ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7-3aff30a8-35e1-4532-b3b7-9b95cc2d7647": context deadline exceeded
[must-gather-92r5q] POD 2025-09-01T08:59:28.736640276Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:59:33.745753059Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:59:38.316067120Z download request download url timeout, check velero server logs for errors.
backup storage location may not be available
[must-gather-92r5q] POD 2025-09-01T08:59:38.755812073Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:59:43.765117081Z volume usage percentage 0
[must-gather-92r5q] POD 2025-09-01T08:59:48.333212960Z download request download url timeout, check velero server logs for errors.
backup storage location may not be available [must-gather-92r5q] POD 2025-09-01T08:59:48.774657184Z volume usage percentage 0 [must-gather-92r5q] POD 2025-09-01T08:59:53.785478048Z volume usage percentage 0 [must-gather-92r5q] POD 2025-09-01T08:59:58.333675526Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-92r5q] POD 2025-09-01T08:59:58.794946601Z volume usage percentage 0 [must-gather-92r5q] POD 2025-09-01T09:00:03.805093648Z volume usage percentage 0 [must-gather-92r5q] POD 2025-09-01T09:00:08.335857472Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-92r5q] POD 2025-09-01T09:00:08.815222076Z volume usage percentage 0 [must-gather-92r5q] POD 2025-09-01T09:00:13.829089590Z volume usage percentage 0 [must-gather-92r5q] POD 2025-09-01T09:00:18.338076278Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-92r5q] POD 2025-09-01T09:00:18.338076278Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-92r5q] POD 2025-09-01T09:00:18.338076278Z download request download url timeout, check velero server logs for errors. backup storage location may not be available [must-gather-92r5q] OUT 2025-09-01T09:00:19.400801671Z waiting for gather to complete [must-gather-92r5q] OUT 2025-09-01T09:00:19.405666538Z downloading gather output [must-gather-92r5q] OUT 2025-09-01T09:00:19.713330371Z receiving incremental file list [must-gather-92r5q] OUT 2025-09-01T09:00:19.725346291Z ./ [must-gather-92r5q] OUT 2025-09-01T09:00:19.725443323Z version [must-gather-92r5q] OUT 2025-09-01T09:00:19.743605056Z clusters/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.743620306Z clusters/e17d3b79/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.743655407Z clusters/e17d3b79/event-filter.html [must-gather-92r5q] OUT 2025-09-01T09:00:19.745724638Z clusters/e17d3b79/oadp-must-gather-summary.md [must-gather-92r5q] OUT 2025-09-01T09:00:19.745905802Z clusters/e17d3b79/timestamp [must-gather-92r5q] OUT 2025-09-01T09:00:19.745962633Z clusters/e17d3b79/cluster-scoped-resources/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.745977293Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.745983863Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.746018854Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/backuprepositories.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.746146366Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/backups.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.746389221Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/backupstoragelocations.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.746555865Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/cloudstorages.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.746677227Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/clusterserviceversions.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.749016714Z 
clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/datadownloads.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.749217698Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/dataprotectionapplications.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.74983082Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/datauploads.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.749983453Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/deletebackuprequests.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.750097555Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/downloadrequests.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.750225488Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/podvolumebackups.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.750375341Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/podvolumerestores.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.750529524Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/restores.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.750771089Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/schedules.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.750978793Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/serverstatusrequests.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.751109076Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/subscriptions.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.7518555Z clusters/e17d3b79/cluster-scoped-resources/apiextensions.k8s.io/customresourcedefinitions/volumesnapshotlocations.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.751943372Z clusters/e17d3b79/cluster-scoped-resources/config.openshift.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.751995743Z clusters/e17d3b79/cluster-scoped-resources/config.openshift.io/clusterversions.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.752064144Z clusters/e17d3b79/cluster-scoped-resources/migrations.kubevirt.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.752119706Z clusters/e17d3b79/cluster-scoped-resources/migrations.kubevirt.io/migrationpolicies.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.752181777Z clusters/e17d3b79/cluster-scoped-resources/snapshot.storage.k8s.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.752191227Z clusters/e17d3b79/cluster-scoped-resources/snapshot.storage.k8s.io/volumesnapshotclasses/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.752233048Z clusters/e17d3b79/cluster-scoped-resources/snapshot.storage.k8s.io/volumesnapshotclasses/volumesnapshotclasses.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.75232183Z clusters/e17d3b79/cluster-scoped-resources/storage.k8s.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.75232932Z clusters/e17d3b79/cluster-scoped-resources/storage.k8s.io/csidrivers/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.752393051Z clusters/e17d3b79/cluster-scoped-resources/storage.k8s.io/csidrivers/csidrivers.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.752477453Z clusters/e17d3b79/cluster-scoped-resources/storage.k8s.io/storageclasses/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.752580565Z 
clusters/e17d3b79/cluster-scoped-resources/storage.k8s.io/storageclasses/storageclasses.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.752680257Z clusters/e17d3b79/namespaces/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.752688077Z clusters/e17d3b79/namespaces/openshift-adp/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.752750148Z clusters/e17d3b79/namespaces/openshift-adp/openshift-adp.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.75283347Z clusters/e17d3b79/namespaces/openshift-adp/apps.openshift.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.752903531Z clusters/e17d3b79/namespaces/openshift-adp/apps.openshift.io/deploymentconfigs.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.752970093Z clusters/e17d3b79/namespaces/openshift-adp/apps/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.753053494Z clusters/e17d3b79/namespaces/openshift-adp/apps/daemonsets.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.753175517Z clusters/e17d3b79/namespaces/openshift-adp/apps/deployments.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.753390811Z clusters/e17d3b79/namespaces/openshift-adp/apps/replicasets.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.753602375Z clusters/e17d3b79/namespaces/openshift-adp/apps/statefulsets.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.753667007Z clusters/e17d3b79/namespaces/openshift-adp/autoscaling/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.753739668Z clusters/e17d3b79/namespaces/openshift-adp/autoscaling/horizontalpodautoscalers.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.753801269Z clusters/e17d3b79/namespaces/openshift-adp/batch/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.753874401Z clusters/e17d3b79/namespaces/openshift-adp/batch/cronjobs.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.753989333Z clusters/e17d3b79/namespaces/openshift-adp/batch/jobs.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.754041294Z clusters/e17d3b79/namespaces/openshift-adp/build.openshift.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.754111085Z clusters/e17d3b79/namespaces/openshift-adp/build.openshift.io/buildconfigs.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.754226958Z clusters/e17d3b79/namespaces/openshift-adp/build.openshift.io/builds.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.754289769Z clusters/e17d3b79/namespaces/openshift-adp/cdi.kubevirt.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.754360641Z clusters/e17d3b79/namespaces/openshift-adp/cdi.kubevirt.io/dataimportcrons.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.754475763Z clusters/e17d3b79/namespaces/openshift-adp/cdi.kubevirt.io/datasources.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.754618026Z clusters/e17d3b79/namespaces/openshift-adp/cdi.kubevirt.io/datavolumes.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.754681117Z clusters/e17d3b79/namespaces/openshift-adp/clone.kubevirt.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.754753228Z clusters/e17d3b79/namespaces/openshift-adp/clone.kubevirt.io/virtualmachineclones.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.75481523Z clusters/e17d3b79/namespaces/openshift-adp/core/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.754885811Z clusters/e17d3b79/namespaces/openshift-adp/core/configmaps.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.755040844Z clusters/e17d3b79/namespaces/openshift-adp/core/endpoints.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.755158106Z clusters/e17d3b79/namespaces/openshift-adp/core/events.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.757821299Z 
clusters/e17d3b79/namespaces/openshift-adp/core/persistentvolumeclaims.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.757960962Z clusters/e17d3b79/namespaces/openshift-adp/core/pods.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.758147356Z clusters/e17d3b79/namespaces/openshift-adp/core/replicationcontrollers.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.758294259Z clusters/e17d3b79/namespaces/openshift-adp/core/secrets.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.758491043Z clusters/e17d3b79/namespaces/openshift-adp/core/services.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.758619586Z clusters/e17d3b79/namespaces/openshift-adp/discovery.k8s.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.758701767Z clusters/e17d3b79/namespaces/openshift-adp/discovery.k8s.io/endpointslices.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.758738208Z clusters/e17d3b79/namespaces/openshift-adp/export.kubevirt.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.75883201Z clusters/e17d3b79/namespaces/openshift-adp/export.kubevirt.io/virtualmachineexports.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.758881241Z clusters/e17d3b79/namespaces/openshift-adp/hco.kubevirt.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.758956862Z clusters/e17d3b79/namespaces/openshift-adp/hco.kubevirt.io/hyperconvergeds.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.759027574Z clusters/e17d3b79/namespaces/openshift-adp/image.openshift.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.759124976Z clusters/e17d3b79/namespaces/openshift-adp/image.openshift.io/imagestreams.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.759209507Z clusters/e17d3b79/namespaces/openshift-adp/instancetype.kubevirt.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.759295739Z clusters/e17d3b79/namespaces/openshift-adp/instancetype.kubevirt.io/virtualmachineinstancetypes.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.759462442Z clusters/e17d3b79/namespaces/openshift-adp/instancetype.kubevirt.io/virtualmachinepreferences.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.759600665Z clusters/e17d3b79/namespaces/openshift-adp/k8s.ovn.org/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.759668036Z clusters/e17d3b79/namespaces/openshift-adp/k8s.ovn.org/egressfirewalls.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.75985291Z clusters/e17d3b79/namespaces/openshift-adp/k8s.ovn.org/egressqoses.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.759918571Z clusters/e17d3b79/namespaces/openshift-adp/kubevirt.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.760028314Z clusters/e17d3b79/namespaces/openshift-adp/kubevirt.io/kubevirts.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.760148986Z clusters/e17d3b79/namespaces/openshift-adp/kubevirt.io/virtualmachineinstancemigrations.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.760261318Z clusters/e17d3b79/namespaces/openshift-adp/kubevirt.io/virtualmachineinstancepresets.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.76037197Z clusters/e17d3b79/namespaces/openshift-adp/kubevirt.io/virtualmachineinstancereplicasets.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.760588495Z clusters/e17d3b79/namespaces/openshift-adp/kubevirt.io/virtualmachineinstances.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.760753508Z clusters/e17d3b79/namespaces/openshift-adp/kubevirt.io/virtualmachines.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.76082125Z clusters/e17d3b79/namespaces/openshift-adp/monitoring.coreos.com/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.760916801Z 
clusters/e17d3b79/namespaces/openshift-adp/monitoring.coreos.com/servicemonitors.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.760975222Z clusters/e17d3b79/namespaces/openshift-adp/networking.k8s.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.761065004Z clusters/e17d3b79/namespaces/openshift-adp/networking.k8s.io/networkpolicies.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.761132826Z clusters/e17d3b79/namespaces/openshift-adp/operators.coreos.com/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.761141496Z clusters/e17d3b79/namespaces/openshift-adp/operators.coreos.com/clusterserviceversions/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.761222467Z clusters/e17d3b79/namespaces/openshift-adp/operators.coreos.com/clusterserviceversions/clusterserviceversions.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.761545744Z clusters/e17d3b79/namespaces/openshift-adp/operators.coreos.com/subscriptions/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.761616015Z clusters/e17d3b79/namespaces/openshift-adp/operators.coreos.com/subscriptions/subscriptions.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.761687467Z clusters/e17d3b79/namespaces/openshift-adp/pods/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.761695617Z clusters/e17d3b79/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-bfjq7/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.761751828Z clusters/e17d3b79/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-bfjq7/openshift-adp-controller-manager-5c466f74-bfjq7.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.761902891Z clusters/e17d3b79/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-bfjq7/manager/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.761915041Z clusters/e17d3b79/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-bfjq7/manager/manager/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.761931741Z clusters/e17d3b79/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-bfjq7/manager/manager/logs/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.761979393Z clusters/e17d3b79/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-bfjq7/manager/manager/logs/current.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.762443502Z clusters/e17d3b79/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-bfjq7/manager/manager/logs/previous.insecure.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.762582654Z clusters/e17d3b79/namespaces/openshift-adp/pods/openshift-adp-controller-manager-5c466f74-bfjq7/manager/manager/logs/previous.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.762626286Z clusters/e17d3b79/namespaces/openshift-adp/policy/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.762699867Z clusters/e17d3b79/namespaces/openshift-adp/policy/poddisruptionbudgets.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.762749088Z clusters/e17d3b79/namespaces/openshift-adp/pool.kubevirt.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.762809419Z clusters/e17d3b79/namespaces/openshift-adp/pool.kubevirt.io/virtualmachinepools.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.76285662Z clusters/e17d3b79/namespaces/openshift-adp/route.openshift.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.762914841Z clusters/e17d3b79/namespaces/openshift-adp/route.openshift.io/routes.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.762961402Z clusters/e17d3b79/namespaces/openshift-adp/snapshot.kubevirt.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.763019203Z 
clusters/e17d3b79/namespaces/openshift-adp/snapshot.kubevirt.io/virtualmachinerestores.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.763118375Z clusters/e17d3b79/namespaces/openshift-adp/snapshot.kubevirt.io/virtualmachinesnapshotcontents.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.763221687Z clusters/e17d3b79/namespaces/openshift-adp/snapshot.kubevirt.io/virtualmachinesnapshots.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.763263758Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.763271698Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/backuprepositories/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.763325929Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/backuprepositories/backuprepositories.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.763413531Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/backups/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.763483433Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/backups/backups.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.763702957Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/backups/describe-empty-project-e2e-62670aa0-8711-11f0-8ef4-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.763821099Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/backups/describe-mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.763983373Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/backups/describe-mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.764104875Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/backups/describe-mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.764221867Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/backups/describe-mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.76432984Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/backups/describe-todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.764440912Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/backups/describe-todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.764494373Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/datadownloads/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.764584874Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/datadownloads/datadownloads.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.764667696Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/datauploads/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.764731937Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/datauploads/datauploads.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.764803169Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/downloadrequests/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.764878951Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/downloadrequests/downloadrequests.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.765044334Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/podvolumebackups/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.765131025Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/podvolumebackups/podvolumebackups.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.765228727Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/podvolumerestores/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.765292689Z 
clusters/e17d3b79/namespaces/openshift-adp/velero.io/podvolumerestores/podvolumerestores.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.76537936Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/restores/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.765448762Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/restores/describe-empty-project-e2e-62670aa0-8711-11f0-8ef4-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.765603035Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/restores/describe-mysql-261628ad-870b-11f0-8ef4-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.765792629Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/restores/describe-mysql-8089afb3-8710-11f0-8ef4-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.765903931Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/restores/describe-mysql-f01351dc-8710-11f0-8ef4-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.766023643Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/restores/describe-mysql-hooks-e2e-0f516fe4-8710-11f0-8ef4-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.766129665Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/restores/describe-ocp-datavolume-dd5d7276-8709-11f0-90d3-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.766238158Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/restores/describe-ocp-kubevirt-2dfd51a0-870a-11f0-90d3-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.76636235Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/restores/describe-ocp-kubevirt-68bfc160-8709-11f0-90d3-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.766537104Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/restores/describe-ocp-kubevirt-eb7fda34-8708-11f0-90d3-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.766695897Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/restores/describe-todolist-backup-b7461ea6-870f-11f0-8ef4-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.76685478Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/restores/describe-todolist-backup-c9de154c-870f-11f0-8ef4-0a580a81b6e7.txt [must-gather-92r5q] OUT 2025-09-01T09:00:19.766990902Z clusters/e17d3b79/namespaces/openshift-adp/velero.io/restores/restores.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.767111975Z clusters/e17d3b79/namespaces/openshift-cnv/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.767166166Z clusters/e17d3b79/namespaces/openshift-cnv/openshift-cnv.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.767233407Z clusters/e17d3b79/namespaces/openshift-cnv/apps.openshift.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.767285228Z clusters/e17d3b79/namespaces/openshift-cnv/apps.openshift.io/deploymentconfigs.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.76734667Z clusters/e17d3b79/namespaces/openshift-cnv/apps/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.767407021Z clusters/e17d3b79/namespaces/openshift-cnv/apps/daemonsets.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.767630045Z clusters/e17d3b79/namespaces/openshift-cnv/apps/deployments.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.768465032Z clusters/e17d3b79/namespaces/openshift-cnv/apps/replicasets.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.76935694Z clusters/e17d3b79/namespaces/openshift-cnv/apps/statefulsets.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.769415461Z clusters/e17d3b79/namespaces/openshift-cnv/autoscaling/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.769475842Z 
clusters/e17d3b79/namespaces/openshift-cnv/autoscaling/horizontalpodautoscalers.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.769553524Z clusters/e17d3b79/namespaces/openshift-cnv/batch/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.769618825Z clusters/e17d3b79/namespaces/openshift-cnv/batch/cronjobs.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.769775768Z clusters/e17d3b79/namespaces/openshift-cnv/batch/jobs.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.769820359Z clusters/e17d3b79/namespaces/openshift-cnv/build.openshift.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.76988325Z clusters/e17d3b79/namespaces/openshift-cnv/build.openshift.io/buildconfigs.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.769981192Z clusters/e17d3b79/namespaces/openshift-cnv/build.openshift.io/builds.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.770063564Z clusters/e17d3b79/namespaces/openshift-cnv/cdi.kubevirt.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.770112775Z clusters/e17d3b79/namespaces/openshift-cnv/cdi.kubevirt.io/dataimportcrons.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.770222837Z clusters/e17d3b79/namespaces/openshift-cnv/cdi.kubevirt.io/datasources.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.770321819Z clusters/e17d3b79/namespaces/openshift-cnv/cdi.kubevirt.io/datavolumes.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.77036224Z clusters/e17d3b79/namespaces/openshift-cnv/clone.kubevirt.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.770439671Z clusters/e17d3b79/namespaces/openshift-cnv/clone.kubevirt.io/virtualmachineclones.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.770517983Z clusters/e17d3b79/namespaces/openshift-cnv/core/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.770622665Z clusters/e17d3b79/namespaces/openshift-cnv/core/configmaps.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.772552473Z clusters/e17d3b79/namespaces/openshift-cnv/core/endpoints.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.772743797Z clusters/e17d3b79/namespaces/openshift-cnv/core/events.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.774263828Z clusters/e17d3b79/namespaces/openshift-cnv/core/persistentvolumeclaims.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.77436377Z clusters/e17d3b79/namespaces/openshift-cnv/core/pods.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.77585671Z clusters/e17d3b79/namespaces/openshift-cnv/core/replicationcontrollers.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.776001642Z clusters/e17d3b79/namespaces/openshift-cnv/core/secrets.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.776491272Z clusters/e17d3b79/namespaces/openshift-cnv/core/services.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.776685926Z clusters/e17d3b79/namespaces/openshift-cnv/discovery.k8s.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.776745888Z clusters/e17d3b79/namespaces/openshift-cnv/discovery.k8s.io/endpointslices.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.776928261Z clusters/e17d3b79/namespaces/openshift-cnv/export.kubevirt.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.776987772Z clusters/e17d3b79/namespaces/openshift-cnv/export.kubevirt.io/virtualmachineexports.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.777057474Z clusters/e17d3b79/namespaces/openshift-cnv/hco.kubevirt.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.777112895Z clusters/e17d3b79/namespaces/openshift-cnv/hco.kubevirt.io/hyperconvergeds.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.777209457Z clusters/e17d3b79/namespaces/openshift-cnv/image.openshift.io/ [must-gather-92r5q] OUT 
2025-09-01T09:00:19.777263538Z clusters/e17d3b79/namespaces/openshift-cnv/image.openshift.io/imagestreams.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.777316989Z clusters/e17d3b79/namespaces/openshift-cnv/instancetype.kubevirt.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.77736978Z clusters/e17d3b79/namespaces/openshift-cnv/instancetype.kubevirt.io/virtualmachineinstancetypes.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.777471562Z clusters/e17d3b79/namespaces/openshift-cnv/instancetype.kubevirt.io/virtualmachinepreferences.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.777553473Z clusters/e17d3b79/namespaces/openshift-cnv/k8s.ovn.org/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.777637135Z clusters/e17d3b79/namespaces/openshift-cnv/k8s.ovn.org/egressfirewalls.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.777780788Z clusters/e17d3b79/namespaces/openshift-cnv/k8s.ovn.org/egressqoses.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.777837469Z clusters/e17d3b79/namespaces/openshift-cnv/kubevirt.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.7778983Z clusters/e17d3b79/namespaces/openshift-cnv/kubevirt.io/kubevirts.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.778050013Z clusters/e17d3b79/namespaces/openshift-cnv/kubevirt.io/virtualmachineinstancemigrations.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.778158136Z clusters/e17d3b79/namespaces/openshift-cnv/kubevirt.io/virtualmachineinstancepresets.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.778249557Z clusters/e17d3b79/namespaces/openshift-cnv/kubevirt.io/virtualmachineinstancereplicasets.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.778412611Z clusters/e17d3b79/namespaces/openshift-cnv/kubevirt.io/virtualmachineinstances.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.778529723Z clusters/e17d3b79/namespaces/openshift-cnv/kubevirt.io/virtualmachines.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.778593254Z clusters/e17d3b79/namespaces/openshift-cnv/monitoring.coreos.com/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.778652926Z clusters/e17d3b79/namespaces/openshift-cnv/monitoring.coreos.com/servicemonitors.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.778735197Z clusters/e17d3b79/namespaces/openshift-cnv/networking.k8s.io/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.778788078Z clusters/e17d3b79/namespaces/openshift-cnv/networking.k8s.io/networkpolicies.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.778842029Z clusters/e17d3b79/namespaces/openshift-cnv/operators.coreos.com/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.778849109Z clusters/e17d3b79/namespaces/openshift-cnv/operators.coreos.com/clusterserviceversions/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.77889655Z clusters/e17d3b79/namespaces/openshift-cnv/operators.coreos.com/clusterserviceversions/clusterserviceversions.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.779573804Z clusters/e17d3b79/namespaces/openshift-cnv/operators.coreos.com/subscriptions/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.779640025Z clusters/e17d3b79/namespaces/openshift-cnv/operators.coreos.com/subscriptions/subscriptions.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.779703547Z clusters/e17d3b79/namespaces/openshift-cnv/pods/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.779712917Z clusters/e17d3b79/namespaces/openshift-cnv/pods/aaq-operator-6fbb7d69cd-glnzv/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.779770098Z clusters/e17d3b79/namespaces/openshift-cnv/pods/aaq-operator-6fbb7d69cd-glnzv/aaq-operator-6fbb7d69cd-glnzv.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.77987662Z 
clusters/e17d3b79/namespaces/openshift-cnv/pods/aaq-operator-6fbb7d69cd-glnzv/aaq-operator/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.77988348Z clusters/e17d3b79/namespaces/openshift-cnv/pods/aaq-operator-6fbb7d69cd-glnzv/aaq-operator/aaq-operator/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.7798874Z clusters/e17d3b79/namespaces/openshift-cnv/pods/aaq-operator-6fbb7d69cd-glnzv/aaq-operator/aaq-operator/logs/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.779946821Z clusters/e17d3b79/namespaces/openshift-cnv/pods/aaq-operator-6fbb7d69cd-glnzv/aaq-operator/aaq-operator/logs/current.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.780080594Z clusters/e17d3b79/namespaces/openshift-cnv/pods/aaq-operator-6fbb7d69cd-glnzv/aaq-operator/aaq-operator/logs/previous.insecure.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.780179936Z clusters/e17d3b79/namespaces/openshift-cnv/pods/aaq-operator-6fbb7d69cd-glnzv/aaq-operator/aaq-operator/logs/previous.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.780218367Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-6nxhk/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.780285578Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-6nxhk/bridge-marker-6nxhk.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.780356409Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-6nxhk/bridge-marker/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.78036661Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-6nxhk/bridge-marker/bridge-marker/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.78037662Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-6nxhk/bridge-marker/bridge-marker/logs/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.780426941Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-6nxhk/bridge-marker/bridge-marker/logs/current.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.780652175Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-6nxhk/bridge-marker/bridge-marker/logs/previous.insecure.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.780765448Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-6nxhk/bridge-marker/bridge-marker/logs/previous.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.780800738Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-r84j5/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.780863139Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-r84j5/bridge-marker-r84j5.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.780933541Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-r84j5/bridge-marker/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.780941361Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-r84j5/bridge-marker/bridge-marker/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.780945071Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-r84j5/bridge-marker/bridge-marker/logs/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.781034183Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-r84j5/bridge-marker/bridge-marker/logs/current.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.781174266Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-r84j5/bridge-marker/bridge-marker/logs/previous.insecure.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.781324389Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-r84j5/bridge-marker/bridge-marker/logs/previous.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.78136528Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-sht6v/ 
[must-gather-92r5q] OUT 2025-09-01T09:00:19.781426001Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-sht6v/bridge-marker-sht6v.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.781492952Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-sht6v/bridge-marker/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.781523273Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-sht6v/bridge-marker/bridge-marker/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.781531183Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-sht6v/bridge-marker/bridge-marker/logs/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.781584284Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-sht6v/bridge-marker/bridge-marker/logs/current.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.781682376Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-sht6v/bridge-marker/bridge-marker/logs/previous.insecure.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.781786398Z clusters/e17d3b79/namespaces/openshift-cnv/pods/bridge-marker-sht6v/bridge-marker/bridge-marker/logs/previous.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.781830339Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-apiserver-6f8b75499c-t97pz/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.78187878Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-apiserver-6f8b75499c-t97pz/cdi-apiserver-6f8b75499c-t97pz.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.781966862Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-apiserver-6f8b75499c-t97pz/cdi-apiserver/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.781974522Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-apiserver-6f8b75499c-t97pz/cdi-apiserver/cdi-apiserver/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.781979012Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-apiserver-6f8b75499c-t97pz/cdi-apiserver/cdi-apiserver/logs/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.782036603Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-apiserver-6f8b75499c-t97pz/cdi-apiserver/cdi-apiserver/logs/current.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.782165426Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-apiserver-6f8b75499c-t97pz/cdi-apiserver/cdi-apiserver/logs/previous.insecure.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.782261347Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-apiserver-6f8b75499c-t97pz/cdi-apiserver/cdi-apiserver/logs/previous.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.782303778Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-deployment-6dffc86989-l4pxv/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.782363549Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-deployment-6dffc86989-l4pxv/cdi-deployment-6dffc86989-l4pxv.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.782447201Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-deployment-6dffc86989-l4pxv/cdi-deployment/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.782456011Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-deployment-6dffc86989-l4pxv/cdi-deployment/cdi-deployment/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.782459622Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-deployment-6dffc86989-l4pxv/cdi-deployment/cdi-deployment/logs/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.782546093Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-deployment-6dffc86989-l4pxv/cdi-deployment/cdi-deployment/logs/current.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.800671685Z 
clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-deployment-6dffc86989-l4pxv/cdi-deployment/cdi-deployment/logs/previous.insecure.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.800780537Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-deployment-6dffc86989-l4pxv/cdi-deployment/cdi-deployment/logs/previous.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.800816888Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-operator-945765b7b-7czvz/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.800881139Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-operator-945765b7b-7czvz/cdi-operator-945765b7b-7czvz.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.801030862Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-operator-945765b7b-7czvz/cdi-operator/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.801043342Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-operator-945765b7b-7czvz/cdi-operator/cdi-operator/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.801048093Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-operator-945765b7b-7czvz/cdi-operator/cdi-operator/logs/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.801095334Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-operator-945765b7b-7czvz/cdi-operator/cdi-operator/logs/current.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.801997672Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-operator-945765b7b-7czvz/cdi-operator/cdi-operator/logs/previous.insecure.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.802098224Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-operator-945765b7b-7czvz/cdi-operator/cdi-operator/logs/previous.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.802134334Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-uploadproxy-5494fb6f58-jrgq6/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.802200006Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-uploadproxy-5494fb6f58-jrgq6/cdi-uploadproxy-5494fb6f58-jrgq6.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.802277667Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-uploadproxy-5494fb6f58-jrgq6/cdi-uploadproxy/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.802289007Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-uploadproxy-5494fb6f58-jrgq6/cdi-uploadproxy/cdi-uploadproxy/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.802295838Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-uploadproxy-5494fb6f58-jrgq6/cdi-uploadproxy/cdi-uploadproxy/logs/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.802378629Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-uploadproxy-5494fb6f58-jrgq6/cdi-uploadproxy/cdi-uploadproxy/logs/current.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.802550193Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-uploadproxy-5494fb6f58-jrgq6/cdi-uploadproxy/cdi-uploadproxy/logs/previous.insecure.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.802660045Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cdi-uploadproxy-5494fb6f58-jrgq6/cdi-uploadproxy/cdi-uploadproxy/logs/previous.log [must-gather-92r5q] OUT 2025-09-01T09:00:19.802685405Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cluster-network-addons-operator-67d7bf8dbf-77zvj/ [must-gather-92r5q] OUT 2025-09-01T09:00:19.802777167Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cluster-network-addons-operator-67d7bf8dbf-77zvj/cluster-network-addons-operator-67d7bf8dbf-77zvj.yaml [must-gather-92r5q] OUT 2025-09-01T09:00:19.803184325Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cluster-network-addons-operator-67d7bf8dbf-77zvj/cluster-network-addons-operator/ 
[must-gather-92r5q] OUT 2025-09-01T09:00:19.803193885Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cluster-network-addons-operator-67d7bf8dbf-77zvj/cluster-network-addons-operator/cluster-network-addons-operator/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.803197925Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cluster-network-addons-operator-67d7bf8dbf-77zvj/cluster-network-addons-operator/cluster-network-addons-operator/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.803275587Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cluster-network-addons-operator-67d7bf8dbf-77zvj/cluster-network-addons-operator/cluster-network-addons-operator/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.805046893Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cluster-network-addons-operator-67d7bf8dbf-77zvj/cluster-network-addons-operator/cluster-network-addons-operator/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.805160575Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cluster-network-addons-operator-67d7bf8dbf-77zvj/cluster-network-addons-operator/cluster-network-addons-operator/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.805180065Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cluster-network-addons-operator-67d7bf8dbf-77zvj/kube-rbac-proxy/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.805184725Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cluster-network-addons-operator-67d7bf8dbf-77zvj/kube-rbac-proxy/kube-rbac-proxy/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.805191145Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cluster-network-addons-operator-67d7bf8dbf-77zvj/kube-rbac-proxy/kube-rbac-proxy/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.805270897Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cluster-network-addons-operator-67d7bf8dbf-77zvj/kube-rbac-proxy/kube-rbac-proxy/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.80539872Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cluster-network-addons-operator-67d7bf8dbf-77zvj/kube-rbac-proxy/kube-rbac-proxy/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.805514972Z clusters/e17d3b79/namespaces/openshift-cnv/pods/cluster-network-addons-operator-67d7bf8dbf-77zvj/kube-rbac-proxy/kube-rbac-proxy/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.805563703Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-operator-5fb65f74f6-v9trq/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.805635204Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-operator-5fb65f74f6-v9trq/hco-operator-5fb65f74f6-v9trq.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.805764647Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-operator-5fb65f74f6-v9trq/hyperconverged-cluster-operator/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.805772667Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-operator-5fb65f74f6-v9trq/hyperconverged-cluster-operator/hyperconverged-cluster-operator/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.805870159Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-operator-5fb65f74f6-v9trq/hyperconverged-cluster-operator/hyperconverged-cluster-operator/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.806028942Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-operator-5fb65f74f6-v9trq/hyperconverged-cluster-operator/hyperconverged-cluster-operator/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.813014692Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-operator-5fb65f74f6-v9trq/hyperconverged-cluster-operator/hyperconverged-cluster-operator/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.813112954Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-operator-5fb65f74f6-v9trq/hyperconverged-cluster-operator/hyperconverged-cluster-operator/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.813165575Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-webhook-7d6cf5d865-2czqq/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.813212476Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-webhook-7d6cf5d865-2czqq/hco-webhook-7d6cf5d865-2czqq.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.813336558Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-webhook-7d6cf5d865-2czqq/hyperconverged-cluster-webhook/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.813348348Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-webhook-7d6cf5d865-2czqq/hyperconverged-cluster-webhook/hyperconverged-cluster-webhook/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.813352948Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-webhook-7d6cf5d865-2czqq/hyperconverged-cluster-webhook/hyperconverged-cluster-webhook/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.813401839Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-webhook-7d6cf5d865-2czqq/hyperconverged-cluster-webhook/hyperconverged-cluster-webhook/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.813591603Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-webhook-7d6cf5d865-2czqq/hyperconverged-cluster-webhook/hyperconverged-cluster-webhook/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.813693045Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hco-webhook-7d6cf5d865-2czqq/hyperconverged-cluster-webhook/hyperconverged-cluster-webhook/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.813727226Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-54d944895b-bx66w/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.813821888Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-54d944895b-bx66w/hostpath-provisioner-operator-54d944895b-bx66w.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.814177485Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-54d944895b-bx66w/hostpath-provisioner-operator/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.814187955Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-54d944895b-bx66w/hostpath-provisioner-operator/hostpath-provisioner-operator/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.814192045Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-54d944895b-bx66w/hostpath-provisioner-operator/hostpath-provisioner-operator/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.814261986Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-54d944895b-bx66w/hostpath-provisioner-operator/hostpath-provisioner-operator/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.814394319Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-54d944895b-bx66w/hostpath-provisioner-operator/hostpath-provisioner-operator/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.814497531Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hostpath-provisioner-operator-54d944895b-bx66w/hostpath-provisioner-operator/hostpath-provisioner-operator/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.814584813Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-7ffc87dffd-qnxtl/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.814640724Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-7ffc87dffd-qnxtl/hyperconverged-cluster-cli-download-7ffc87dffd-qnxtl.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.814737336Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-7ffc87dffd-qnxtl/server/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.814745786Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-7ffc87dffd-qnxtl/server/server/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.814749386Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-7ffc87dffd-qnxtl/server/server/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.814834888Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-7ffc87dffd-qnxtl/server/server/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.814973381Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-7ffc87dffd-qnxtl/server/server/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.815064433Z clusters/e17d3b79/namespaces/openshift-cnv/pods/hyperconverged-cluster-cli-download-7ffc87dffd-qnxtl/server/server/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.815111543Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-9v2zq/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.815170075Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-9v2zq/kube-cni-linux-bridge-plugin-9v2zq.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.815255326Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-9v2zq/cni-plugins/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.815264557Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-9v2zq/cni-plugins/cni-plugins/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.815268717Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-9v2zq/cni-plugins/cni-plugins/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.815323178Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-9v2zq/cni-plugins/cni-plugins/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.8154461Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-9v2zq/cni-plugins/cni-plugins/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.815573563Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-9v2zq/cni-plugins/cni-plugins/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.815604043Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-dnqvj/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.815670565Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-dnqvj/kube-cni-linux-bridge-plugin-dnqvj.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.815744836Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-dnqvj/cni-plugins/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.815754856Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-dnqvj/cni-plugins/cni-plugins/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.815758666Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-dnqvj/cni-plugins/cni-plugins/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.815816697Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-dnqvj/cni-plugins/cni-plugins/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.81593782Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-dnqvj/cni-plugins/cni-plugins/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.816029372Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-dnqvj/cni-plugins/cni-plugins/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.816074173Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-sbfgc/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.816137934Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-sbfgc/kube-cni-linux-bridge-plugin-sbfgc.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.816233536Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-sbfgc/cni-plugins/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.816241616Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-sbfgc/cni-plugins/cni-plugins/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.816245216Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-sbfgc/cni-plugins/cni-plugins/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.816391029Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-sbfgc/cni-plugins/cni-plugins/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.816664944Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-sbfgc/cni-plugins/cni-plugins/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.816772176Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kube-cni-linux-bridge-plugin-sbfgc/cni-plugins/cni-plugins/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.816808297Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-547c58c8dc-f7ckj/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.816869598Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-547c58c8dc-f7ckj/kubemacpool-cert-manager-547c58c8dc-f7ckj.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.81694923Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-547c58c8dc-f7ckj/manager/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.81695667Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-547c58c8dc-f7ckj/manager/manager/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.81696062Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-547c58c8dc-f7ckj/manager/manager/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.817038932Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-547c58c8dc-f7ckj/manager/manager/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.817262356Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-547c58c8dc-f7ckj/manager/manager/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.817363719Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-cert-manager-547c58c8dc-f7ckj/manager/manager/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.817400789Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-cfd4f668b-97w2v/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.81746269Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-cfd4f668b-97w2v/kubemacpool-mac-controller-manager-cfd4f668b-97w2v.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.817583203Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-cfd4f668b-97w2v/kube-rbac-proxy/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.817593833Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-cfd4f668b-97w2v/kube-rbac-proxy/kube-rbac-proxy/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.817597663Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-cfd4f668b-97w2v/kube-rbac-proxy/kube-rbac-proxy/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.817659064Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-cfd4f668b-97w2v/kube-rbac-proxy/kube-rbac-proxy/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.817781487Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-cfd4f668b-97w2v/kube-rbac-proxy/kube-rbac-proxy/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.817882609Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-cfd4f668b-97w2v/kube-rbac-proxy/kube-rbac-proxy/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.81791851Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-cfd4f668b-97w2v/manager/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.81793209Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-cfd4f668b-97w2v/manager/manager/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.81793576Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-cfd4f668b-97w2v/manager/manager/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.817995521Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-cfd4f668b-97w2v/manager/manager/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.818315047Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubemacpool-mac-controller-manager-cfd4f668b-97w2v/manager/manager/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.818386859Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-bccxf/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.81844281Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-bccxf/kubevirt-apiserver-proxy-f9b7fff64-bccxf.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.818544042Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-bccxf/kubevirt-apiserver-proxy/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.818554872Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-bccxf/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.818558392Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-bccxf/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.818620773Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-bccxf/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.818741416Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-bccxf/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.818842298Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-bccxf/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.818878599Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-zwwjp/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.81893929Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-zwwjp/kubevirt-apiserver-proxy-f9b7fff64-zwwjp.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819008791Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-zwwjp/kubevirt-apiserver-proxy/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819016731Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-zwwjp/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819022432Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-zwwjp/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819087743Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-zwwjp/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819206485Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-zwwjp/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819307547Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-apiserver-proxy-f9b7fff64-zwwjp/kubevirt-apiserver-proxy/kubevirt-apiserver-proxy/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819344578Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-vjqg2/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819406159Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-vjqg2/kubevirt-console-plugin-6b75855886-vjqg2.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819482191Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-vjqg2/kubevirt-console-plugin/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819493131Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-vjqg2/kubevirt-console-plugin/kubevirt-console-plugin/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819512131Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-vjqg2/kubevirt-console-plugin/kubevirt-console-plugin/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819576583Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-vjqg2/kubevirt-console-plugin/kubevirt-console-plugin/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819690405Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-vjqg2/kubevirt-console-plugin/kubevirt-console-plugin/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819783007Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-vjqg2/kubevirt-console-plugin/kubevirt-console-plugin/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819825857Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-x57hs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819886379Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-x57hs/kubevirt-console-plugin-6b75855886-x57hs.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.81996242Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-x57hs/kubevirt-console-plugin/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.819972111Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-x57hs/kubevirt-console-plugin/kubevirt-console-plugin/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.81997792Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-x57hs/kubevirt-console-plugin/kubevirt-console-plugin/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.820033162Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-x57hs/kubevirt-console-plugin/kubevirt-console-plugin/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.820148744Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-x57hs/kubevirt-console-plugin/kubevirt-console-plugin/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.820247956Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-console-plugin-6b75855886-x57hs/kubevirt-console-plugin/kubevirt-console-plugin/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.820289767Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-6c9bcccdbd-hjkmx/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.820352928Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-6c9bcccdbd-hjkmx/kubevirt-ipam-controller-manager-6c9bcccdbd-hjkmx.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.82043017Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-6c9bcccdbd-hjkmx/manager/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.82043708Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-6c9bcccdbd-hjkmx/manager/manager/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.82044092Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-6c9bcccdbd-hjkmx/manager/manager/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.820520222Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-6c9bcccdbd-hjkmx/manager/manager/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.820676974Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-6c9bcccdbd-hjkmx/manager/manager/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.820781707Z clusters/e17d3b79/namespaces/openshift-cnv/pods/kubevirt-ipam-controller-manager-6c9bcccdbd-hjkmx/manager/manager/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.820818527Z clusters/e17d3b79/namespaces/openshift-cnv/pods/ssp-operator-6c54766c5b-jmvnn/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.820878298Z clusters/e17d3b79/namespaces/openshift-cnv/pods/ssp-operator-6c54766c5b-jmvnn/ssp-operator-6c54766c5b-jmvnn.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.820979551Z clusters/e17d3b79/namespaces/openshift-cnv/pods/ssp-operator-6c54766c5b-jmvnn/manager/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.820986511Z clusters/e17d3b79/namespaces/openshift-cnv/pods/ssp-operator-6c54766c5b-jmvnn/manager/manager/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.821002591Z clusters/e17d3b79/namespaces/openshift-cnv/pods/ssp-operator-6c54766c5b-jmvnn/manager/manager/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.821068692Z clusters/e17d3b79/namespaces/openshift-cnv/pods/ssp-operator-6c54766c5b-jmvnn/manager/manager/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.822627524Z clusters/e17d3b79/namespaces/openshift-cnv/pods/ssp-operator-6c54766c5b-jmvnn/manager/manager/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.822741416Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-2wvkc/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.822803117Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-2wvkc/virt-api-96bcd44fc-2wvkc.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.822893369Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-2wvkc/virt-api/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.822905119Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-2wvkc/virt-api/virt-api/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.822910199Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-2wvkc/virt-api/virt-api/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.82296348Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-2wvkc/virt-api/virt-api/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.826385648Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-2wvkc/virt-api/virt-api/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.826492541Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-2wvkc/virt-api/virt-api/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.826544172Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-k8g2x/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.826610823Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-k8g2x/virt-api-96bcd44fc-k8g2x.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.826698355Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-k8g2x/virt-api/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.826708915Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-k8g2x/virt-api/virt-api/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.826712755Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-k8g2x/virt-api/virt-api/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.826767556Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-k8g2x/virt-api/virt-api/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.830144064Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-k8g2x/virt-api/virt-api/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.830246006Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-api-96bcd44fc-k8g2x/virt-api/virt-api/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.830285597Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-k8cz6/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.830342238Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-k8cz6/virt-controller-5c6f684699-k8cz6.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.830429099Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-k8cz6/virt-controller/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.830444689Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-k8cz6/virt-controller/virt-controller/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.83044977Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-k8cz6/virt-controller/virt-controller/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.830516471Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-k8cz6/virt-controller/virt-controller/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.830690645Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-k8cz6/virt-controller/virt-controller/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.830791896Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-k8cz6/virt-controller/virt-controller/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.830828497Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-mff6w/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.830892329Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-mff6w/virt-controller-5c6f684699-mff6w.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.8309725Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-mff6w/virt-controller/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.83098034Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-mff6w/virt-controller/virt-controller/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.83098659Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-mff6w/virt-controller/virt-controller/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.831043992Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-mff6w/virt-controller/virt-controller/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.831550012Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-mff6w/virt-controller/virt-controller/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.831648264Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-controller-5c6f684699-mff6w/virt-controller/virt-controller/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.831692205Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-b85dz/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.831743726Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-b85dz/virt-exportproxy-97cfb96bd-b85dz.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.831834187Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-b85dz/exportproxy/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.831843017Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-b85dz/exportproxy/exportproxy/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.831850278Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-b85dz/exportproxy/exportproxy/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.831904779Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-b85dz/exportproxy/exportproxy/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832025171Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-b85dz/exportproxy/exportproxy/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832128223Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-b85dz/exportproxy/exportproxy/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832167904Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-c8cc2/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832225855Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-c8cc2/virt-exportproxy-97cfb96bd-c8cc2.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832304207Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-c8cc2/exportproxy/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832315107Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-c8cc2/exportproxy/exportproxy/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832325677Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-c8cc2/exportproxy/exportproxy/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832380738Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-c8cc2/exportproxy/exportproxy/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832540142Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-c8cc2/exportproxy/exportproxy/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832640904Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-exportproxy-97cfb96bd-c8cc2/exportproxy/exportproxy/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832683234Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-gw7g9/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832738955Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-gw7g9/virt-handler-gw7g9.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832838017Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-gw7g9/virt-handler/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832845268Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-gw7g9/virt-handler/virt-handler/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832849128Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-gw7g9/virt-handler/virt-handler/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.832907829Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-gw7g9/virt-handler/virt-handler/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.833155394Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-gw7g9/virt-handler/virt-handler/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.833248225Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-gw7g9/virt-handler/virt-handler/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.833284786Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-gw7g9/virt-launcher/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.833292886Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-gw7g9/virt-launcher/virt-launcher/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.833296566Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-gw7g9/virt-launcher/virt-launcher/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.833367198Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-gw7g9/virt-launcher/virt-launcher/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.83348432Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-gw7g9/virt-launcher/virt-launcher/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.833646204Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-gw7g9/virt-launcher/virt-launcher/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.833694985Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-jg4pr/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.833756646Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-jg4pr/virt-handler-jg4pr.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.833851077Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-jg4pr/virt-handler/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.833861638Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-jg4pr/virt-handler/virt-handler/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.833867388Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-jg4pr/virt-handler/virt-handler/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.833920559Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-jg4pr/virt-handler/virt-handler/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.834157824Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-jg4pr/virt-handler/virt-handler/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.834251045Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-jg4pr/virt-handler/virt-handler/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.834292046Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-jg4pr/virt-launcher/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.834299937Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-jg4pr/virt-launcher/virt-launcher/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.834303677Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-jg4pr/virt-launcher/virt-launcher/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.834355288Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-jg4pr/virt-launcher/virt-launcher/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.83448286Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-jg4pr/virt-launcher/virt-launcher/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.834605503Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-jg4pr/virt-launcher/virt-launcher/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.834643073Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-sls6b/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.834702985Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-sls6b/virt-handler-sls6b.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.834789796Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-sls6b/virt-handler/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.834797276Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-sls6b/virt-handler/virt-handler/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.834801907Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-sls6b/virt-handler/virt-handler/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.834861618Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-sls6b/virt-handler/virt-handler/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.835426819Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-sls6b/virt-handler/virt-handler/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.835551051Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-sls6b/virt-handler/virt-handler/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.835589852Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-sls6b/virt-launcher/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.835595912Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-sls6b/virt-launcher/virt-launcher/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.835599452Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-sls6b/virt-launcher/virt-launcher/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.835664514Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-sls6b/virt-launcher/virt-launcher/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.835782816Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-sls6b/virt-launcher/virt-launcher/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.835882758Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-handler-sls6b/virt-launcher/virt-launcher/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.835920079Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-7n5bm/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.83598756Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-7n5bm/virt-operator-684b76dd8-7n5bm.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.836093272Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-7n5bm/virt-operator/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.836101682Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-7n5bm/virt-operator/virt-operator/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.836105482Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-7n5bm/virt-operator/virt-operator/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.836168834Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-7n5bm/virt-operator/virt-operator/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.8374653Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-7n5bm/virt-operator/virt-operator/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.837614683Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-7n5bm/virt-operator/virt-operator/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.837654433Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-spc2n/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.837712985Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-spc2n/virt-operator-684b76dd8-spc2n.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.837817107Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-spc2n/virt-operator/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.837824797Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-spc2n/virt-operator/virt-operator/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.837828437Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-spc2n/virt-operator/virt-operator/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.837884378Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-spc2n/virt-operator/virt-operator/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.838059572Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-spc2n/virt-operator/virt-operator/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.838164454Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-operator-684b76dd8-spc2n/virt-operator/virt-operator/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.838197024Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-l5xch/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.838262396Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-l5xch/virt-template-validator-57d89bf8bd-l5xch.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.838338847Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-l5xch/webhook/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.838345807Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-l5xch/webhook/webhook/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.838349507Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-l5xch/webhook/webhook/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.838411019Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-l5xch/webhook/webhook/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.838667634Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-l5xch/webhook/webhook/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.838775996Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-l5xch/webhook/webhook/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.838814127Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-q755w/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.838875908Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-q755w/virt-template-validator-57d89bf8bd-q755w.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.83895503Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-q755w/webhook/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.83896312Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-q755w/webhook/webhook/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.8389666Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-q755w/webhook/webhook/logs/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.839026351Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-q755w/webhook/webhook/logs/current.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.839146893Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-q755w/webhook/webhook/logs/previous.insecure.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.839247635Z clusters/e17d3b79/namespaces/openshift-cnv/pods/virt-template-validator-57d89bf8bd-q755w/webhook/webhook/logs/previous.log
[must-gather-92r5q] OUT 2025-09-01T09:00:19.839281886Z clusters/e17d3b79/namespaces/openshift-cnv/policy/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.839341477Z clusters/e17d3b79/namespaces/openshift-cnv/policy/poddisruptionbudgets.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.839411539Z clusters/e17d3b79/namespaces/openshift-cnv/pool.kubevirt.io/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.83946334Z clusters/e17d3b79/namespaces/openshift-cnv/pool.kubevirt.io/virtualmachinepools.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.839539461Z clusters/e17d3b79/namespaces/openshift-cnv/route.openshift.io/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.839592292Z clusters/e17d3b79/namespaces/openshift-cnv/route.openshift.io/routes.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.839685244Z clusters/e17d3b79/namespaces/openshift-cnv/snapshot.kubevirt.io/
[must-gather-92r5q] OUT 2025-09-01T09:00:19.839743015Z clusters/e17d3b79/namespaces/openshift-cnv/snapshot.kubevirt.io/virtualmachinerestores.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.839843327Z clusters/e17d3b79/namespaces/openshift-cnv/snapshot.kubevirt.io/virtualmachinesnapshotcontents.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.839945519Z clusters/e17d3b79/namespaces/openshift-cnv/snapshot.kubevirt.io/virtualmachinesnapshots.yaml
[must-gather-92r5q] OUT 2025-09-01T09:00:19.845893858Z
[must-gather-92r5q] OUT 2025-09-01T09:00:19.845911729Z sent 6,992 bytes received 1,669,024 bytes 3,352,032.00 bytes/sec
[must-gather-92r5q] OUT 2025-09-01T09:00:19.845916029Z total size is 23,507,235 speedup is 14.03
[must-gather ] OUT 2025-09-01T09:00:20.024462363Z namespace/openshift-must-gather-lwgjp deleted

Reprinting Cluster State:
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
ClusterID: e17d3b79-79c5-4c1c-969c-508913f341b5
ClientVersion: 4.17.10
ClusterVersion: Stable at "4.20.0-0.nightly-2025-08-31-160814"
ClusterOperators:
	clusteroperator/operator-lifecycle-manager is not upgradeable because ClusterServiceVersions blocking minor version upgrades to 4.21.0 or higher:
	- maximum supported OCP version for openshift-storage/odf-dependencies.v4.19.4-rhodf is 4.20
	- maximum supported OCP version for openshift-storage/odf-operator.v4.19.4-rhodf is 4.20

Checking for additional logs in /alabama/cspi/e2e/logs
Copying /alabama/cspi/e2e/logs to /logs/artifacts...
It_Backup_hooks_tests_Pre_exec_hook_tc-id_OADP-92_interop_smoke_Cassandra_app_with_Restic
It_datamover_DataMover_Backup_Restore_stateful_application_with_CSI_tc-id_OADP-440_interop_Cassandra_application
artifacts
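For reference, the file listing above follows the standard must-gather layout: clusters/<cluster-id>/namespaces/<namespace>/pods/<pod>/<container>/<container>/logs/{current,previous,previous.insecure}.log. A minimal sketch for re-running such a collection and pulling container logs out of the resulting tree; the image placeholder and the ./must-gather destination are assumptions, substitute whatever image and path your pipeline actually uses:

# Re-run the collection into a known directory (the image is a placeholder, not taken from this job).
oc adm must-gather --image=<must-gather-image> --dest-dir=./must-gather

# List every collected container log for the openshift-cnv namespace.
find ./must-gather -path '*/namespaces/openshift-cnv/pods/*/logs/current.log'

# Scan current and previous container logs for errors, printing the matching file names.
find ./must-gather -path '*/namespaces/openshift-cnv/pods/*/logs/*.log' \
  -exec grep -li 'error' {} +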
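The "Reprinting Cluster State" summary can also be regenerated against a live cluster with stock oc queries; a sketch, assuming a logged-in session (the jsonpath filters are standard oc/kubectl syntax, not anything specific to this job):

# ClusterID and the currently applied version, as shown in the summary.
oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}'
oc get clusterversion version -o jsonpath='{.status.history[0].version}{"\n"}'

# Client (oc) version.
oc version --client

# Why operator-lifecycle-manager reports it is not upgradeable.
oc get clusteroperator operator-lifecycle-manager \
  -o jsonpath='{.status.conditions[?(@.type=="Upgradeable")].message}{"\n"}'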