We run microbenchmark performance tests against JTA, and we have Jenkins CI jobs (btny-pulls-performance and narayana-performance-version-comparison) that do regression testing against different versions of the product.
These tests use JMH to drive the runs, and the results of each run are reported as a CSV file in the following example format:
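The original example output is not preserved here, but JMH's CSV result format uses a fixed header row followed by one data row per benchmark. A representative row, using the 217253 TPS figure discussed below (the benchmark name, thread/sample counts, and error value are placeholders, not real job output), looks like:

```csv
"Benchmark","Mode","Threads","Samples","Score","Score Error (99.9%)","Unit"
"com.example.jta.TxnBenchmark.run","thrpt",1,20,217253.000,1432.100,"ops/s"
```

The second version's results file has the same layout, with its own Score and Score Error values.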
This example shows a comparison between versions 5.5.32.Final and 5.9.0.Final. The data shows that we measured using the "thrpt" mode (throughput, in our case transactions per second, or TPS), and the Score column shows a TPS of 217253 versus 216552. There is also a Score Error column, which I understand gives the upper and lower error bars for the measurement.
None of our jobs currently takes the error bars into account when reporting regressions. What we need to do is check the Score Error to see whether two runs are within tolerance of each other before flagging a regression.
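One way to apply that tolerance is to treat each score as an interval (Score ± Score Error) and only report a regression when the two intervals do not overlap. This is a minimal sketch of that idea; the class and method names are illustrative, not taken from the Jenkins jobs:

```java
public class RegressionCheck {

    /**
     * Returns true only when the new score is worse than the old score
     * beyond what the error bars can explain, i.e. the two intervals
     * (score - error, score + error) are disjoint and the new one is lower.
     */
    static boolean isRegression(double oldScore, double oldError,
                                double newScore, double newError) {
        double oldLower = oldScore - oldError; // worst plausible old TPS
        double newUpper = newScore + newError; // best plausible new TPS
        return newUpper < oldLower;
    }

    public static void main(String[] args) {
        // Using the figures above (217253 vs 216552 TPS): with error bars
        // of, say, ±1500 the intervals overlap, so no regression is reported.
        System.out.println(isRegression(217253, 1500, 216552, 1500)); // false
    }
}
```

With this check in place, a small drop that sits inside the error bars (as in the 217253 vs 216552 example) would no longer be flagged.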
An example of how to run a regression test manually is as follows:
The results are written to the named CSV file.
Then do the same for the second version.
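Once both CSV files exist, the Score and Score Error values can be pulled out programmatically and fed into the tolerance check described above. This sketch assumes JMH's default CSV column order (Benchmark, Mode, Threads, Samples, Score, Score Error, Unit) and that benchmark names contain no commas:

```java
public class JmhCsv {

    /** Score and error parsed from one JMH CSV data row. */
    record Result(double score, double error) {}

    /**
     * Parse one data row of JMH's CSV output. JMH quotes string fields,
     * so we strip the quotes and split on commas, then read column 4
     * (Score) and column 5 (Score Error), counting from zero.
     */
    static Result parseRow(String row) {
        String[] cols = row.replace("\"", "").split(",");
        return new Result(Double.parseDouble(cols[4]),
                          Double.parseDouble(cols[5]));
    }

    public static void main(String[] args) {
        Result r = parseRow(
            "\"com.example.jta.TxnBenchmark.run\",\"thrpt\",1,20,217253.0,1432.1,\"ops/s\"");
        System.out.println(r.score() + " +/- " + r.error());
    }
}
```

A comparison job would parse the matching row from each version's file (skipping the header line) and only report a regression when the two intervals are disjoint.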