Type: Story
Resolution: Done
Priority: Major
We want to store the risk analysis as a spyglass artifact, but sippy does not import job runs until later, possibly an hour or more after the job finishes. How could sippy analyze a job it does not yet know about?
One option would be to push the test failures up to sippy, re-using the same API with a slightly different entry point, and return the result.
Another option would be to push up the results from prow steps and store the job officially in the sippy db.
Design Notes:
- In origin's options_monitor_events.go we write out the job run data.
- There we could also write out a file with just the failed test names, or, if acceptable, actually make the call to sippy and write out the risk level.
- Many jobs call openshift-tests twice, which would mean two sippy risk files that would have to be merged. This is sub-optimal; it is probably best to write a simple JSON list of failed tests and use a separate workflow post step to submit it to sippy and publish the risk analysis artifact (see the sketches after these notes).
- Perhaps only do so if the SIPPY_RISK_ANALYSIS env var is set to 1.
- Alternatively, we can write just the test names and make the sippy call in a post step.
- The event intervals file has all the e2e test data, for example:

  {
    "level": "Info",
    "locator": "e2e-test/\"[sig-arch][Early] Managed cluster should [apigroup:config.openshift.io] start all core operators [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\"",
    "message": "finishedStatus/Passed",
    "from": "2022-10-11T07:44:23Z",
    "to": "2022-10-11T07:44:23Z"
  },
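
A minimal sketch of the extraction half, assuming the intervals land in an e2e-events.json artifact and that failures carry a finishedStatus/Failed message (both the file name and the message format are assumptions inferred from the example above):

  // Hypothetical sketch: pull failed test names out of the event-intervals
  // file and write them as a simple JSON list for a later post step.
  package main

  import (
  	"encoding/json"
  	"os"
  	"strings"
  )

  // eventInterval mirrors the interval entries shown above.
  type eventInterval struct {
  	Level   string `json:"level"`
  	Locator string `json:"locator"`
  	Message string `json:"message"`
  	From    string `json:"from"`
  	To      string `json:"to"`
  }

  func main() {
  	data, err := os.ReadFile("e2e-events.json") // assumed artifact name
  	if err != nil {
  		panic(err)
  	}
  	var intervals []eventInterval
  	if err := json.Unmarshal(data, &intervals); err != nil {
  		panic(err)
  	}

  	var failed []string
  	for _, ev := range intervals {
  		// Only e2e-test intervals carry results; the message encodes the
  		// final status (finishedStatus/Passed, finishedStatus/Failed, ...).
  		if !strings.HasPrefix(ev.Locator, "e2e-test/") {
  			continue
  		}
  		if strings.Contains(ev.Message, "finishedStatus/Failed") {
  			// The quoted test name follows the e2e-test/ prefix.
  			name := strings.Trim(strings.TrimPrefix(ev.Locator, "e2e-test/"), "\"")
  			failed = append(failed, name)
  		}
  	}

  	out, err := json.MarshalIndent(failed, "", "  ")
  	if err != nil {
  		panic(err)
  	}
  	// One file per openshift-tests invocation; the post step merges them.
  	if err := os.WriteFile("test-failures-summary.json", out, 0644); err != nil {
  		panic(err)
  	}
  }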
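
And a sketch of the corresponding post step, gated on SIPPY_RISK_ANALYSIS. The test-failures-summary*.json glob, the sippy endpoint URL, and the request/response shapes are placeholders for illustration, not sippy's actual API:

  // Hypothetical post-step sketch: merge the per-invocation failure lists,
  // submit them to a sippy risk-analysis entry point, and store the
  // response as the spyglass artifact.
  package main

  import (
  	"bytes"
  	"encoding/json"
  	"io"
  	"net/http"
  	"os"
  	"path/filepath"
  )

  func main() {
  	if os.Getenv("SIPPY_RISK_ANALYSIS") != "1" {
  		return // analysis not requested for this job
  	}

  	// Merge failure lists from every openshift-tests invocation,
  	// de-duplicating test names across the files.
  	matches, err := filepath.Glob("test-failures-summary*.json")
  	if err != nil {
  		panic(err)
  	}
  	seen := map[string]bool{}
  	var failed []string
  	for _, path := range matches {
  		data, err := os.ReadFile(path)
  		if err != nil {
  			panic(err)
  		}
  		var names []string
  		if err := json.Unmarshal(data, &names); err != nil {
  			panic(err)
  		}
  		for _, n := range names {
  			if !seen[n] {
  				seen[n] = true
  				failed = append(failed, n)
  			}
  		}
  	}

  	// Push the failures to sippy rather than waiting for the import;
  	// the URL below is a placeholder for whatever entry point sippy exposes.
  	body, _ := json.Marshal(map[string]any{"failedTests": failed})
  	resp, err := http.Post("https://sippy.example.com/api/risk_analysis",
  		"application/json", bytes.NewReader(body))
  	if err != nil {
  		panic(err)
  	}
  	defer resp.Body.Close()

  	risk, err := io.ReadAll(resp.Body)
  	if err != nil {
  		panic(err)
  	}
  	if err := os.WriteFile("risk-analysis.json", risk, 0644); err != nil {
  		panic(err)
  	}
  }

Keeping the sippy call in the post step means openshift-tests itself needs no network path to sippy, and the merge handles the two-invocation case noted above.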
Issue Links:
- is depended on by: TRT-602 Store sippy failure risk analysis in job artifacts (Closed)