Details
- Feature Request
- Resolution: Unresolved
- Minor
Description
Motivation: when developing constraints, it is useful to run the solver with a fixed calculation budget every time a constraint is added or modified, to measure the impact on score calculation speed.
Tests that rely on stepLimit are not a reliable way to limit such runs; it is better to use calculateCountLimit instead.
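A termination based on the score calculation count is reproducible across runs, unlike one based on wall-clock time or steps. A minimal solver config sketch, assuming the `calculateCountLimit` termination element name used in this issue (the exact element name may differ between versions):

```xml
<!-- Sketch: stop solving after a fixed number of score calculations,
     so the elapsed time directly reflects per-calculation cost. -->
<solver>
  <termination>
    <calculateCountLimit>100000</calculateCountLimit>
  </termination>
</solver>
```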
- Adjust the docs accordingly (see chapter 5, IIRC) and document calculateCountLimit benchmarks/tests instead of stepLimit benchmarks/tests.
- Give this approach a good name (the old name is "stepLimit").
- Do a "find in path" for the old name to update any remaining references.
- Maybe remove the benchmark config from some examples; they might not be worth their weight.
- Can we avoid the benchmark XML entirely? Something like BenchmarkFactory.createStepLimitBenchmark(solverConfig).build(myDataset)?
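To make the last point concrete, here is a minimal, self-contained sketch of what such a programmatic benchmark API could look like. Every class and method here (SolverConfig, Benchmark, BenchmarkFactory.createCalculateCountLimitBenchmark) is a hypothetical stand-in written for illustration, not an existing API; the real design would reuse the solver's own config types.

```java
import java.util.ArrayList;
import java.util.List;

public class BenchmarkFactorySketch {

    // Hypothetical stand-in for the solver configuration, holding only
    // the calculation budget relevant to this issue.
    static class SolverConfig {
        final long calculateCountLimit;
        SolverConfig(long calculateCountLimit) {
            this.calculateCountLimit = calculateCountLimit;
        }
    }

    // Hypothetical benchmark bound to one solver config; datasets are
    // attached via a fluent build(...) call, as proposed above.
    static class Benchmark {
        final SolverConfig solverConfig;
        final List<Object> datasets = new ArrayList<>();
        Benchmark(SolverConfig solverConfig) {
            this.solverConfig = solverConfig;
        }
        Benchmark build(Object dataset) {
            datasets.add(dataset);
            return this;
        }
    }

    // Hypothetical factory mirroring the proposed fluent entry point,
    // renamed from "stepLimit" to "calculateCountLimit".
    static class BenchmarkFactory {
        static Benchmark createCalculateCountLimitBenchmark(SolverConfig solverConfig) {
            return new Benchmark(solverConfig);
        }
    }

    public static void main(String[] args) {
        SolverConfig solverConfig = new SolverConfig(100_000L);
        Benchmark benchmark = BenchmarkFactory
                .createCalculateCountLimitBenchmark(solverConfig)
                .build("myDataset");
        System.out.println(benchmark.datasets.size());
    }
}
```

The appeal of this shape is that adding a constraint and re-running the fixed-budget benchmark needs no XML edit at all, only a rebuild of the same config object.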