Story
Resolution: Done
Major
Future Sustainability
time="2025-01-23T11:06:42.94Z" level=info msg="refreshing materialized view" matview=prow_test_report_7d_matview
time="2025-01-23T11:06:42.94Z" level=info msg="refreshing materialized view" matview=prow_test_report_2d_matview
time="2025-01-23T12:25:01.76Z" level=info msg="refreshed materialized view concurrently" elapsed=1h18m18.820190633s matview=prow_test_report_2d_matview
time="2025-01-23T12:25:01.76Z" level=info msg="refreshing materialized view" matview=prow_job_runs_report_matview
time="2025-01-23T12:29:15.6Z" level=info msg="refreshed materialized view concurrently" elapsed=1h22m32.660132526s matview=prow_test_report_7d_matview
Refresh times have skyrocketed; these were under 20 minutes until recently.

There are now 130k tests in the db, and 45k of them are "Run multi-stage" tests, which we thought we were ignoring. Several others produce randomly generated test names, for example:
name | resource controlplane.operator.openshift.io.podnetworkconnectivitychecks/network-check-source-ip-10-0-119-249-to-network-check-target-ip-10-0-126-232 -n openshift-network-diagnostics has been updated too often
name | user ip-10-0-21-189/ovnkube@038aea1608e0 (linux/amd64) kubernetes/v0.31.1 must not produce too many apiserver handler panics
There are probably many more. We need ignore rules, cleanup of the existing rows, and TRT-1825.
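The generated names above all embed volatile fragments (EC2-style node names, pod IPs, short image digests), so an ignore rule could match on those. A minimal sketch of such a filter, assuming we screen test names before they are inserted (the patterns and function name are hypothetical, derived only from the two examples in this ticket):

```python
import re

# Hypothetical patterns for the volatile fragments seen in the example names:
# EC2-style node names (ip-10-0-119-249), short image digests (@038aea1608e0),
# and raw dotted IPv4 addresses.
VOLATILE_PATTERNS = [
    re.compile(r"\bip-\d{1,3}-\d{1,3}-\d{1,3}-\d{1,3}\b"),  # EC2-style host names
    re.compile(r"@[0-9a-f]{12}\b"),                          # short image digests
    re.compile(r"\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"),  # dotted IPv4 addresses
]

def is_volatile_test_name(name: str) -> bool:
    """Return True when a test name embeds a host/IP/digest fragment and
    should be ignored (or normalized) rather than stored as a distinct test."""
    return any(p.search(name) for p in VOLATILE_PATTERNS)
```

A normalizer that replaces the matched fragments with a fixed token would be an alternative to outright ignoring, at the cost of keeping the (collapsed) rows around.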
relates to: TRT-1984 Reconsider Test Report / Analysis Design (Closed)