- Bug
- Resolution: Done
- 4.22
Sippy AI-assisted description; please review details for accuracy.
Filed from: Test Regression Details
Test Name
[sig-node] [FeatureGate:ImageVolume] ImageVolume should report kubelet image volume metrics correctly [OCP-84149] [Suite:openshift/conformance/parallel]
Brief Overview
This test has an 83.02% pass rate, but 95.00% is required.
Statistics Section
Release: 4.22
Time Period: 2026-01-29T00:00:00Z to 2026-02-05T20:00:00Z
Success Rate: 83.02%
Successes: 32
Failures: 9
Flakes: 12
This is a new test in this release and must pass at a 95% success threshold, rather than being compared to historical data.
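As a quick sanity check on the figures above, the reported success rate is consistent with flakes being counted on the passing side. The snippet below is a minimal sketch of that arithmetic; the flakes-count-as-passes rule is an assumption inferred from the numbers, not a documented Sippy formula.

```go
package main

import "fmt"

func main() {
	// Counts taken from the Statistics Section above.
	successes, failures, flakes := 32.0, 9.0, 12.0

	// Assumption: flaked runs are counted on the passing side, which
	// reproduces the reported figure: (32 + 12) / 53 ≈ 83.02%.
	total := successes + failures + flakes
	passRate := (successes + flakes) / total * 100

	fmt.Printf("pass rate: %.2f%% (required: 95.00%%)\n", passRate)
}
```

Running this prints 83.02%, matching the Success Rate reported above and falling well short of the 95% threshold.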
Sample Failure Outputs
* Job Run: periodic-ci-openshift-release-master-ci-4.22-upgrade-from-stable-4.21-e2e-aws-ovn-upgrade (ID: 2018935663788822528)
  - time="2026-02-04T07:32:09Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:eaedea11aa namespace:openshift-etcd node:ip-10-0-118-41.ec2.internal pod:etcd-ip-10-0-118-41.ec2.internal]}" message="{ProbeError Readiness probe error: Get \"https://10.0.118.41:9980/readyz\": dial tcp 10.0.118.41:9980: connect: connection refused\nbody: \n map[firstTimestamp:2026-02-04T07:32:09Z lastTimestamp:2026-02-04T07:32:09Z reason:ProbeError]}"
  - time="2026-02-04T07:32:09Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:a2c782aea6 namespace:openshift-etcd node:ip-10-0-118-41.ec2.internal pod:etcd-ip-10-0-118-41.ec2.internal]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.118.41:9980/readyz\": dial tcp 10.0.118.41:9980: connect: connection refused map[firstTimestamp:2026-02-04T07:32:09Z lastTimestamp:2026-02-04T07:32:09Z reason:Unhealthy]}"
  - time="2026-02-04T07:32:37Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ip-10-0-118-41.ec2.internal\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ip-10-0-118-41.ec2.internal pod/etcd-ip-10-0-118-41.ec2.internal uid/e049a393-7384-4c27-962d-e7886fbc5f89 container/etcd mirror-uid/b98f13c64ca646b7cc2a28a951139a0a"
* Job Run: periodic-ci-openshift-release-master-ci-4.22-upgrade-from-stable-4.21-e2e-aws-ovn-upgrade (ID: 2018935658688548864)
  - time="2026-02-04T07:33:21Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ip-10-0-124-7.ec2.internal\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ip-10-0-124-7.ec2.internal pod/etcd-ip-10-0-124-7.ec2.internal uid/ea82f2b7-a930-4e89-9c1a-d7a66284964d container/etcd mirror-uid/56bb885a2792f65fe2a035e5f88a8853"
  - time="2026-02-04T07:33:29Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ip-10-0-124-7.ec2.internal\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ip-10-0-124-7.ec2.internal pod/etcd-ip-10-0-124-7.ec2.internal uid/011a6edd-e239-43cc-aff3-e60800dde692 container/etcd mirror-uid/aa2ced5fedb24a821ee58acfab6820b7"
* Job Run: periodic-ci-openshift-release-master-ci-4.22-upgrade-from-stable-4.21-e2e-aws-ovn-upgrade (ID: 2019057794946174976)
  - time="2026-02-04T15:53:42Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ip-10-0-23-51.us-west-1.compute.internal\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ip-10-0-23-51.us-west-1.compute.internal pod/etcd-ip-10-0-23-51.us-west-1.compute.internal uid/c1e72271-c46c-4eee-a3d2-41b72db3d5ed container/etcd mirror-uid/21a5c0ee60716587327e86f5772f9771"
  - time="2026-02-04T15:53:48Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ip-10-0-23-51.us-west-1.compute.internal\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ip-10-0-23-51.us-west-1.compute.internal pod/etcd-ip-10-0-23-51.us-west-1.compute.internal uid/2d818b7c-0efa-42c3-885c-175ebea4187d container/etcd mirror-uid/578aea18ac8a0ae842fcfc483934fc71"
* Job Run: periodic-ci-openshift-release-master-ci-4.22-upgrade-from-stable-4.21-e2e-aws-ovn-upgrade (ID: 2019057778986848256)
  - time="2026-02-04T16:01:41Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ip-10-0-28-172.us-west-2.compute.internal\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ip-10-0-28-172.us-west-2.compute.internal pod/etcd-ip-10-0-28-172.us-west-2.compute.internal uid/65d3d277-6ad9-4d23-b967-cb9cf8d3d0fa container/etcd mirror-uid/c37617b132e407a20dd4e4479373195e"
* Job Run: periodic-ci-openshift-release-master-ci-4.22-upgrade-from-stable-4.21-e2e-aws-ovn-upgrade (ID: 2019221856837439488)
  - time="2026-02-05T02:41:03Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ip-10-0-117-101.ec2.internal\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ip-10-0-117-101.ec2.internal pod/etcd-ip-10-0-117-101.ec2.internal uid/8ca75df6-0072-44a8-8a88-0c6c5b502926 container/etcd mirror-uid/2da1ecbf1a3e64894537f7d4be33a51b"
Links to Relevant Jobs
- periodic-ci-openshift-release-master-ci-4.22-upgrade-from-stable-4.21-e2e-aws-ovn-upgrade
Patterns and Insights
The test "[sig-node] [FeatureGate:ImageVolume] ImageVolume should report kubelet image volume metrics correctly [OCP-84149] [Suite:openshift/conformance/parallel]" is currently regressed, showing a pass rate of 83.02% against a required 95%. This is a new test in the 4.22 release.
Analysis of the failed job logs indicates a recurring pattern related to etcd pod availability and readiness probes. Common error messages include:
- "Readiness probe error: Get \"https://10.0.118.41:9980/readyz\": dial tcp 10.0.118.41:9980: connect: connection refused"
- "pod logged an error: container \"etcd\" in pod \"etcd-ip-10-0-X-Y.ec2.internal\" is not available"
- "pod logged an error: container \"etcd\" in pod \"etcd-ip-10-0-X-Y.ec2.internal\" is waiting to start: PodInitializing"
These errors suggest that the etcd pods are failing to start, becoming unavailable, or repeatedly failing their readiness probes with connection errors. This could point to underlying networking problems, resource contention, or issues with the etcd deployment itself, any of which could account for the test failures. The failures appear consistently across the observed job runs.
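For anyone triaging the probe errors above, a rough equivalent of the failing readiness check can be run by hand against the etcd readyz listener on port 9980. The sketch below is illustrative only: the node IP is copied from the sample failure output, and skipping TLS verification is a debugging shortcut, not how the kubelet probe is actually configured.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Node IP taken from the sample failure output above; substitute the node under investigation.
	url := "https://10.0.118.41:9980/readyz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip certificate verification for a quick manual check;
			// the real readiness probe uses the etcd static pod's configuration.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get(url)
	if err != nil {
		// "connection refused" here matches the ProbeError events in the logs,
		// i.e. nothing is listening on the readyz port while etcd restarts.
		fmt.Println("readyz check failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("readyz status:", resp.Status)
}
```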
Filed by: sdodson@redhat.com
- duplicates: TRT-2540 Nightly blocked on single node job failing FeatureGate:ImageVolume test
- Review