Type: Story
Resolution: Obsolete
Priority: Normal
Pitched by David Eads: could we come up with a way to run fewer informing jobs regularly against payload jobs (which almost no one looks at), but hit sprintly accepted payloads with heavy testing? Around 20 runs per informing job was floated. The net cost might be less than what we spend today.
The goal is to detect regressions in sprintly payloads; we could potentially apply similar logic to aggregation.
How could we implement this?
How could the data be visualized? Could we pin a baseline for comparison?
Is this done prior to payload acceptance, or slowly over a week, as something that would find problems after the payload was accepted and sent out?
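One possible shape for the comparison, sketched below: treat the pinned baseline's per-test pass/fail counts as the reference and flag tests whose failure rate in the candidate payload's ~20 runs is significantly higher. Everything here (the data shapes, the function name, the choice of Fisher's exact test, the alpha threshold) is an illustrative assumption, not an existing implementation:

```python
# Hypothetical sketch: flag tests whose failure rate in a candidate
# payload's ~20 runs is significantly worse than a pinned baseline.
# The data shapes, names, and alpha threshold are illustrative assumptions.
from scipy.stats import fisher_exact

def regressed_tests(baseline, candidate, alpha=0.05):
    """baseline/candidate: dict mapping test name -> (passes, failures)."""
    flagged = []
    for test, (b_pass, b_fail) in baseline.items():
        c_pass, c_fail = candidate.get(test, (0, 0))
        if c_pass + c_fail == 0:
            continue  # test never ran against the candidate payload
        # One-sided Fisher's exact test; odds ratio > 1 means the
        # candidate's failure odds exceed the baseline's.
        _, p = fisher_exact([[b_pass, b_fail], [c_pass, c_fail]],
                            alternative="greater")
        if p < alpha:
            flagged.append((test, p))
    return sorted(flagged, key=lambda t: t[1])

# Example: 95/100 baseline passes vs 12/20 candidate passes gets flagged.
print(regressed_tests({"e2e-aws": (95, 5)}, {"e2e-aws": (12, 8)}))
```

A one-sided test keeps the focus on regressions only, and pinning a fixed baseline (rather than a rolling one) would make the visualization question above a straight side-by-side of pass rates.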
is blocked by: TRT-365 Add script to categorize payload rejections (Closed)