Bug
Resolution: Won't Do
Normal
premerge, 4.20.0
Quality / Stability / Reliability
Rejected
I've noticed this being an occasional flake in our SNO jobs. Digging deeper, I found a link between this test failure and high memory utilization on the control plane. I'm wondering whether ingress is simply more sensitive to memory pressure, or whether it's just the first component to fall on its sword in these cases.
Some sample failing jobs that exhibit this:
- https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/openshift_cluster-monitoring-operator/2616/pull-ci-openshift-cluster-monitoring-operator-main-e2e-aws-ovn-single-node/1943208677741694976
- https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/openshift_cluster-etcd-operator/1425/pull-ci-openshift-cluster-etcd-operator-main-e2e-aws-ovn-single-node/1942649849195270144
- https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/29965/pull-ci-openshift-origin-main-e2e-aws-ovn-single-node-upgrade/1942293821450620928
Full context is available here: https://redhat-internal.slack.com/archives/C018KQE33MF/p1752164604853579
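To correlate the failures with memory pressure, a minimal sketch like the one below can query the cluster's Prometheus for node memory utilization; on SNO the single node is the control plane. The Prometheus route and bearer token here are placeholders (assumptions), not values from the affected clusters:
```python
# Sketch only: check control-plane memory utilization via the in-cluster Prometheus.
# PROM_URL and TOKEN are assumed placeholders -- substitute the monitoring route and a
# bearer token from the affected cluster.
import requests

PROM_URL = "https://prometheus-k8s-openshift-monitoring.apps.example.com"  # assumed route
TOKEN = "sha256~REPLACE_ME"  # assumed bearer token

# Fraction of memory in use per node, from standard node_exporter metrics.
QUERY = (
    "1 - sum by (instance) (node_memory_MemAvailable_bytes)"
    " / sum by (instance) (node_memory_MemTotal_bytes)"
)

resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": QUERY},
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify=False,  # CI clusters often use self-signed certs
)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    instance = result["metric"].get("instance", "unknown")
    used_fraction = float(result["value"][1])
    print(f"{instance}: {used_fraction:.1%} memory in use")
```
Running the same query as a range query over the test window in the failing jobs above would show whether the pathological ingress events line up with memory spikes.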
–
Auto-generated
The following test is failing more than expected:
[sig-arch] events should not repeat pathologically for ns/openshift-ingress
See the Sippy test details for additional context.