Type: Bug
Priority: Major
Affects Version: 4.21
Impact: Quality / Stability / Reliability
Status: Rejected
Resolution: Not a Bug
Description of problem:
The network cluster operator (co/network) reported Progressing=True solely because of a node reboot, which should not happen during a normal cluster upgrade.
{code:none}
Sep 26 02:18:34.932 W clusteroperator/network condition/Progressing reason/Deploying status/True DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
Sep 26 02:18:34.932 - 172s W clusteroperator/network condition/Progressing reason/Deploying status/True DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
Sep 26 02:21:39.722 W clusteroperator/network condition/Progressing reason/Deploying status/True Deployment "/openshift-network-console/networking-console-plugin" is not available (awaiting 1 nodes)
Sep 26 02:21:39.722 - 7s W clusteroperator/network condition/Progressing reason/Deploying status/True Deployment "/openshift-network-console/networking-console-plugin" is not available (awaiting 1 nodes)
Sep 26 02:25:17.041 W clusteroperator/network condition/Progressing reason/Deploying status/True DaemonSet "/openshift-network-operator/iptables-alerter" is not available (awaiting 1 nodes)
Sep 26 02:25:17.041 - 112s W clusteroperator/network condition/Progressing reason/Deploying status/True DaemonSet "/openshift-network-operator/iptables-alerter" is not available (awaiting 1 nodes)
Sep 26 02:27:50.325 W clusteroperator/network condition/Progressing reason/Deploying status/True Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
Sep 26 02:27:50.325 - 1s W clusteroperator/network condition/Progressing reason/Deploying status/True Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
Sep 26 02:29:03.024 W clusteroperator/network condition/Progressing reason/Deploying status/True DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
Sep 26 02:29:03.024 - 83s W clusteroperator/network condition/Progressing reason/Deploying status/True DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
Sep 26 02:30:30.476 W clusteroperator/network condition/Progressing reason/Deploying status/True Deployment "/openshift-network-console/networking-console-plugin" is not available (awaiting 1 nodes)
Sep 26 02:30:30.476 - 1s W clusteroperator/network condition/Progressing reason/Deploying status/True Deployment "/openshift-network-console/networking-console-plugin" is not available (awaiting 1 nodes)
Sep 26 02:32:28.292 W clusteroperator/network condition/Progressing reason/Deploying status/True DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
Sep 26 02:32:28.292 - 93s W clusteroperator/network condition/Progressing reason/Deploying status/True DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
Sep 26 02:37:59.474 W clusteroperator/network condition/Progressing reason/Deploying status/True DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
Sep 26 02:37:59.474 - 89s W clusteroperator/network condition/Progressing reason/Deploying status/True DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
{code:none}
The example job performs an upgrade from 4.20.0-0.ci-2025-09-23-172020 to 4.21.0-0.ci-2025-09-26-000607.
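To make the failure mode concrete, here is a minimal Go sketch of why a reboot trips the condition. This is hypothetical, simplified code, not the actual cluster-network-operator implementation; the dsStatus type and progressing function are invented for illustration. The point is that any reconciler which reports Progressing=True whenever a DaemonSet has unavailable pods will flip during a plain node reboot:
{code:none}
// Hypothetical, simplified sketch -- not the real cluster-network-operator code.
package main

import "fmt"

// dsStatus holds the DaemonSet fields that matter for this illustration
// (loosely mirroring appsv1.DaemonSetStatus).
type dsStatus struct {
	name              string
	numberUnavailable int32
}

// progressing mimics a naive rule: any unavailable DaemonSet pod at all
// is reported as a rollout in progress.
func progressing(sets []dsStatus) (bool, string) {
	for _, ds := range sets {
		if ds.numberUnavailable > 0 {
			return true, fmt.Sprintf("DaemonSet %q is not available (awaiting %d nodes)",
				ds.name, ds.numberUnavailable)
		}
	}
	return false, ""
}

func main() {
	// A rebooting node drains its pods, so each host-level DaemonSet briefly
	// shows one unavailable pod; under the naive rule that alone flips the
	// operator to Progressing=True even though no new version is rolling out.
	sets := []dsStatus{
		{name: "openshift-ovn-kubernetes/ovnkube-node", numberUnavailable: 1},
	}
	isProgressing, msg := progressing(sets)
	fmt.Println(isProgressing, msg)
}
{code:none}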
How reproducible:
Appears to reproduce consistently for 4.20 to 4.21 upgrades.
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
The cluster operator does not report Progressing=True solely because of a node reboot.
Additional info:
A Slack conversation on this bug.
Found a job failing for another reason:
[Monitor:legacy-cvo-invariants][bz-Networking] clusteroperator/network should stay Progressing=False while MCO is Progressing=True (1h11m11s)
{code:none}
4 (out of 22) unexpected clusteroperator state transitions while machine-config is progressing during the upgrade window from 2025-11-11T00:17:47Z to 2025-11-11T01:28:58Z. These did not match any known exceptions, so they cause this test-case to fail:
Nov 11 01:03:38.085 W clusteroperator/network condition/Progressing reason/MachineConfig status/True worker machine config pool in progressing state
Nov 11 01:03:38.085 - 1137s W clusteroperator/network condition/Progressing reason/MachineConfig status/True worker machine config pool in progressing state
Nov 11 01:22:35.525 W clusteroperator/network condition/Progressing reason/MachineConfig status/True master machine config pool in progressing state
Nov 11 01:22:35.525 - 343s W clusteroperator/network condition/Progressing reason/MachineConfig status/True master machine config pool in progressing state
0 unwelcome but acceptable clusteroperator state transitions while machine-config is progressing during the upgrade window from 2025-11-11T00:17:47Z to 2025-11-11T01:28:58Z, as desired.
{code:none}
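The shape of this invariant is essentially interval-overlap detection: collect the windows in which co/network reported Progressing=True and flag any that overlap the machine-config progressing window without matching a known exception. Below is a minimal Go sketch of that idea; the condInterval type and overlaps helper are hypothetical, not the actual openshift/origin monitor code, and the timestamps are copied from the failure output above:
{code:none}
// Hypothetical sketch of the invariant's shape -- not the actual
// openshift/origin monitor code.
package main

import (
	"fmt"
	"time"
)

type condInterval struct {
	from, to time.Time
	message  string
}

// overlaps reports whether two half-open time intervals intersect.
func overlaps(a, b condInterval) bool {
	return a.from.Before(b.to) && b.from.Before(a.to)
}

func main() {
	mustParse := func(s string) time.Time {
		t, err := time.Parse(time.RFC3339, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Window in which machine-config reported Progressing=True
	// (values taken from the failure output above).
	mcoWindow := condInterval{
		from: mustParse("2025-11-11T00:17:47Z"),
		to:   mustParse("2025-11-11T01:28:58Z"),
	}

	// Intervals in which clusteroperator/network reported Progressing=True.
	networkProgressing := []condInterval{
		{
			from:    mustParse("2025-11-11T01:03:38Z"),
			to:      mustParse("2025-11-11T01:22:35Z"),
			message: "worker machine config pool in progressing state",
		},
	}

	// Any network Progressing=True interval overlapping the MCO window that
	// matches no known exception fails the test case.
	for _, iv := range networkProgressing {
		if overlaps(iv, mcoWindow) {
			fmt.Printf("unexpected transition while MCO is progressing: %s\n", iv.message)
		}
	}
}
{code:none}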
Relates to: OTA-1637 "Fail CI if any Cluster Operator reports Progressing=True only up to cluster scaling or a node rebooting" (Closed)