Type: Bug
Status: Rejected
Resolution: Not a Bug
Priority: Major
Severity: Important
Affects Versions: 4.18, 4.19
Category: Quality / Stability / Reliability
Sprint: Mewtwo Sprint 273
Description of problem:
During live upgrade testing of ODF from 4.18.2 to 4.18.3, the upgrade is stuck with two ocs-client-operator pods hanging in Init:0/1:

ocs-client-operator-console-7ff74f59b8-cjvph              0/1   Init:0/1   0   2m59s
ocs-client-operator-controller-manager-6c86c9f96f-24g88   0/2   Init:0/1   0   2m59s
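To see why the init containers are hanging, something like the following can help (pod names are taken from the listing above; this assumes the operator runs in the default openshift-storage namespace):

$ oc -n openshift-storage describe pod ocs-client-operator-console-7ff74f59b8-cjvph
$ oc -n openshift-storage get pod ocs-client-operator-controller-manager-6c86c9f96f-24g88 \
    -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{": "}{.state}{"\n"}{end}'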
After discussion with rhn-support-lgangava (ODF dev), he thinks this is more of an OLM issue, based on this comment:

"The issue is at OLM; we don't know why OLM didn't scale the deployment down even though the replicas are set to 0 in the CSV. Let me pull the relevant YAML files from the must-gather if possible. The above is from the OCP must-gather; in the ODF must-gather you can see that the replicas of the client CSV are set to 0, but the deployment wasn't scaled down to 0 by OLM. OLM kept trying to get the deployment into a ready state, which will never happen, so the deployment status moved to Timeout and OLM logged that it exceeded its progress deadline."
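As a rough way to confirm the mismatch described above, the replica count in the CSV install strategy can be compared against the live deployment. This sketch assumes the default openshift-storage namespace and a CSV named ocs-client-operator.v4.18.3 (adjust to whatever `oc get csv` reports):

# Replicas recorded in the CSV install strategy (0 in this case)
$ oc -n openshift-storage get csv ocs-client-operator.v4.18.3 \
    -o jsonpath='{.spec.install.spec.deployments[*].spec.replicas}'
# Replicas on the live deployment (never scaled down by OLM)
$ oc -n openshift-storage get deployment ocs-client-operator-controller-manager \
    -o jsonpath='{.spec.replicas}'
# Progressing condition; the reason becomes ProgressDeadlineExceeded on timeout
$ oc -n openshift-storage get deployment ocs-client-operator-controller-manager \
    -o jsonpath='{.status.conditions[?(@.type=="Progressing")].reason}'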
Version-Release number of selected component (if applicable):
OCP 4.18.11, and also seen on OCP 4.19.0-0.nightly-2025-04-04-023411; in both cases the ODF upgrade was from 4.18.2 to 4.18.3.
How reproducible:
Intermittent; roughly 1 out of 10-20 upgrade attempts.
Steps to Reproduce:
1. Install ODF 4.18.2.
2. Upgrade to 4.18.3.
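While the upgrade runs, the stuck state shows up by watching the operator pods (same namespace assumption as above), e.g.:

$ oc -n openshift-storage get pods -w | grep ocs-client-operator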
Actual results:
The ocs-client-operator pods are not scaled down and the ODF upgrade does not complete.
Expected results:
The ocs-client-operator pods are scaled down and the ODF upgrade completes successfully.
Additional info:
Must-gather logs from the ODF 4.18.2 to 4.18.3 upgrade:
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j228vu1cs33luma/j228vu1cs33luma_20250514T021046/logs/failed_testcase_ocs_logs_1747294010979/test_upgrade_ocs_logs/j228vu1cs33luma/