As a developer/tester, I want a set of automated tests that run against the performance environment after changes land, so we can check that we didn't degrade performance. These tests could also push results to a graph of some sort to show us trends over time.
The tests would also run weekly against the master branch to check performance.
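A minimal sketch of what such a timed test could look like, using only the standard library. The `timed`/`record_result` helpers and the JSON results file are illustrative assumptions, not part of any existing IQE tooling; a real job would time actual ingest/summary runs and feed a dashboard instead of a local file:

```python
import json
import time
from pathlib import Path


def timed(func, *args, **kwargs):
    """Run func and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    return result, time.perf_counter() - start


def record_result(path: Path, test_name: str, elapsed: float) -> None:
    """Append one timing sample; a graphing job can read these to show trends."""
    samples = json.loads(path.read_text()) if path.exists() else []
    samples.append({"test": test_name, "seconds": elapsed, "ts": time.time()})
    path.write_text(json.dumps(samples, indent=2))


if __name__ == "__main__":
    # Stand-in workload for a real ingest/summary run.
    _, elapsed = timed(lambda: sum(range(1_000_000)))
    record_result(Path("perf_results.json"), "demo_ingest", elapsed)
```

Appending one sample per run keeps the raw data around, so the trend graph can be regenerated or re-bucketed later without re-running tests.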
UX Requirements
- N/A
UI Requirements
- N/A
Documentation Requirements
- N/A
Backend Requirements
QE Requirements
- QE needs this capability to support performance testing over the long term.
Additional Information and Assumptions
- We should be able to leverage the performance cluster and a Jenkins job running IQE
- We can leverage nise static data files and the work started here: https://github.com/project-koku/nise/tree/master/utility
- We can adapt the existing IQE tests to use larger datasets
- https://docs.google.com/document/d/17sWmckckzALCPwSQabpVQv_9Ks0L_96r9bbNs1Cy4Ps/edit#
Acceptance Criteria
- We have a subset of tests covering all source types that report ingest/summary times
- We have a subset of tests for checking API response timings (preferably built around API calls made by the UI)
- We have a dashboard somewhere with graphs/results for quick comparisons
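For the API response-timing criterion, one way to make "timings" a pass/fail check is a percentile budget. The sketch below uses only the standard library; the sample count, the 95th-percentile cut, and the budget value are illustrative choices, not values from this ticket:

```python
import statistics
import time


def measure_latency(call, samples=20):
    """Invoke `call` repeatedly and return elapsed wall-clock times in seconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        call()
        times.append(time.perf_counter() - start)
    return times


def within_budget(times, p95_budget_seconds):
    """True when the 95th-percentile latency stays under the budget."""
    # quantiles(n=20) yields 19 cut points; the last one is the 95th percentile.
    p95 = statistics.quantiles(times, n=20)[-1]
    return p95 <= p95_budget_seconds


if __name__ == "__main__":
    # Replace the lambda with a real API call made by the UI.
    times = measure_latency(lambda: time.sleep(0.001))
    print("p95 within budget:", within_budget(times, p95_budget_seconds=0.5))
```

Gating on a high percentile rather than the mean keeps the check sensitive to tail-latency regressions, which are usually what UI users notice first.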