Type: Epic
Resolution: Unresolved
Summary: Benchmarks
Status: To Do
Progress: 0% To Do, 0% In Progress, 100% Done
Goals
Create benchmarks in the following categories:
- Quick benchmarks that can easily be run in ad-hoc settings by developers:
  - to evaluate the performance impact of code changes during development.
  - to drive profiling of code for optimization work.
- CI pre-commit benchmarks: a standard benchmark, based on the quick benchmarks above, to test for regressions before PRs can merge.
- CI occasional (daily/weekly) benchmarks: longer-running throughput and overload tests.
- Benchmarks built as a set of components that can be configured to implement different scenarios, and extended in the future (see the sketch after this list).
- Benchmarks for individual components (Vector, Fluentd, etc.) and for the in-cluster collector.
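As an illustration of the component-based design mentioned above, here is a minimal sketch of how configurable benchmark components might compose into scenarios. All names (Scenario, LogSource, CountingSink) and parameters are hypothetical, not an existing API in this epic or in ViaQ/cluster-logging-collector-benchmarks:

```python
# Illustrative sketch only: every class and parameter here is hypothetical.
import time


class LogSource:
    """Generates synthetic log lines at a configurable rate and size."""

    def __init__(self, lines_per_sec: int, line_bytes: int):
        self.lines_per_sec = lines_per_sec
        self.line = "x" * line_bytes

    def run(self, duration_sec: float, sink) -> int:
        """Feed lines into the sink for the given duration; return lines sent."""
        deadline = time.monotonic() + duration_sec
        sent = 0
        while time.monotonic() < deadline:
            sink.write(self.line)
            sent += 1
            time.sleep(1.0 / self.lines_per_sec)
        return sent


class CountingSink:
    """Counts delivered lines so throughput and loss can be computed."""

    def __init__(self):
        self.received = 0

    def write(self, line: str):
        self.received += 1


class Scenario:
    """Wires configurable components together; swap parts for new scenarios."""

    def __init__(self, source: LogSource, sink: CountingSink):
        self.source = source
        self.sink = sink

    def run(self, duration_sec: float) -> dict:
        sent = self.source.run(duration_sec, self.sink)
        return {"sent": sent, "received": self.sink.received,
                "lost": sent - self.sink.received}


if __name__ == "__main__":
    # Quick ad-hoc run: small load, short duration, human-readable result.
    result = Scenario(LogSource(1000, 256), CountingSink()).run(5.0)
    print(result)
```

In a quick ad-hoc run, a developer would tweak the source rate or swap in a different sink, then compare the printed results before and after a code change; the same components, configured with higher loads and longer durations, would serve the throughput and overload scenarios.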
Non-Goals
It is more important to get a basic set of easy-to-use, automated benchmarks in place than to cover every possible scenario. The goal of this epic is to establish a baseline set of benchmarks quickly, to be improved by future stories and epics.
Motivation
We need to:
- Measure the performance impact of changes, to avoid regressions and to optimize effectively.
- Provide numbers that give realistic guidelines and set user expectations.
Acceptance Criteria
- Developers can easily run benchmarks on work in progress and get a quick better/worse signal for a change or experiment.
- CI rejects PRs that cause a significant performance regression.
  (Note: beware of false rejections due to variations in the test environment; see the sketch below.)
- Occasional (daily/weekly) CI jobs provide a performance statistics report with warnings of significant regressions.
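To make the pre-commit gate robust against environment noise, the rejection rule could require both a statistical signal and a practical one before failing a PR. A minimal sketch, assuming benchmark results are collected as lists of throughput samples; the 3-sigma and 5% thresholds are placeholder assumptions, not values from this epic:

```python
# Hypothetical regression check: flag a regression only when the candidate's
# mean throughput falls well outside the baseline's observed variation.
import statistics


def is_regression(baseline: list[float], candidate: list[float],
                  sigmas: float = 3.0, min_drop_pct: float = 5.0) -> bool:
    """Return True if candidate throughput is significantly below baseline.

    Requires both a statistical signal (candidate mean more than `sigmas`
    standard deviations below the baseline mean) and a practical one (a drop
    of at least `min_drop_pct` percent), so that run-to-run noise alone does
    not reject a PR.
    """
    base_mean = statistics.mean(baseline)
    base_sd = statistics.stdev(baseline)
    cand_mean = statistics.mean(candidate)
    drop_pct = 100.0 * (base_mean - cand_mean) / base_mean
    return (cand_mean < base_mean - sigmas * base_sd
            and drop_pct >= min_drop_pct)


if __name__ == "__main__":
    baseline = [10_100, 9_900, 10_050, 9_950, 10_000]  # lines/sec, 5 runs
    candidate = [9_300, 9_250, 9_400]
    print("regression:", is_regression(baseline, candidate))
```

Requiring several baseline runs and a minimum absolute drop keeps a single noisy run from blocking a PR, at the cost of missing very small regressions; those would still surface in the daily/weekly statistics reports.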
Notes
This work should build on the work already done in ViaQ/cluster-logging-collector-benchmarks.
Issue Links
- relates to: LOG-1732 Evaluate Performance Impact of Change (Closed)