• Type: Sub-task
    • Resolution: Done
    • Priority: Major
    • Sprints: NetObserv - Sprint 238, NetObserv - Sprint 239, NetObserv - Sprint 240, NetObserv - Sprint 241, NetObserv - Sprint 242, NetObserv - Sprint 243

      For dynamic baselining, we want the following high-level behavior:

      • Perf test is executed for a given workload
      • If doing a performance baseline, dynamically fetch the current baseline for that workload from Elasticsearch, using some tooling (potentially a custom Python script)
        • If the test passes (based on the tolerance config), update the baseline for that workload with the new UUID, using some tooling (potentially a custom Python script)
        • Otherwise, fail and do not update the baseline
      • If not running a baseline, skip entirely and just output statistics as normal
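      The flow above can be sketched in Python. This is illustrative only: the tolerance format, field names, and the `fetch_baseline`/`update_baseline` hooks (stand-ins for the Elasticsearch tooling) are assumptions, not the actual NetObserv scripts.

      ```python
      def within_tolerance(current, baseline, tolerance_pct):
          """Pass if `current` is no worse than `baseline` by more than tolerance_pct percent.

          Assumes higher values are worse (e.g. latency); invert the check for
          throughput-style metrics.
          """
          if baseline == 0:
              return current == 0
          return (current - baseline) / baseline * 100.0 <= tolerance_pct


      def run_baseline_check(workload, new_uuid, current_value,
                             fetch_baseline, update_baseline, tolerance_pct=10.0):
          """Fetch the stored baseline for `workload`, compare, and update on pass.

          `fetch_baseline` / `update_baseline` are hypothetical hooks standing in
          for the Elasticsearch queries; on failure the baseline is left untouched.
          """
          baseline = fetch_baseline(workload)  # e.g. query an ES index by workload name
          if within_tolerance(current_value, baseline["value"], tolerance_pct):
              # Test passed: promote the new run's UUID as the baseline.
              update_baseline(workload, new_uuid, current_value)
              return True
          return False


      # In-memory stand-in for the Elasticsearch baseline index, for illustration.
      store = {"ingest-1k-flows": {"uuid": "abc-123", "value": 100.0}}

      passed = run_baseline_check(
          "ingest-1k-flows", "def-456", 105.0,
          fetch_baseline=lambda w: store[w],
          update_baseline=lambda w, u, v: store.update({w: {"uuid": u, "value": v}}),
      )
      ```

      Here a 5% regression against a 10% tolerance passes, so the stored baseline is replaced with the new run's UUID; a larger regression would fail and leave the baseline unchanged.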

              nweinber1 Nathan Weinberg
              Mehul Modi