-
Enhancement
-
Resolution: Done
-
Major
-
None
-
None
-
False
-
-
False
-
-
UPerf shows a regression of 4% (204'000 -> 196'000).
The culprit is Average: apparently, using an AtomicReferenceArray for the samples killed performance.
The new design simply uses a sample array of a given capacity. When a value is added, a random index is computed and the value is written to the sample array at that index.
If more than one write happens to the same index at exactly the same time, the write might be incorrect (as doubles are not written atomically). The same holds for reads: a read might be corrupted by a concurrent write to the same index.
These failures occur less often as capacity increases and can be tolerated, because avoiding locking is what yields the speedup.
After all, an average is an approximation, so this should be fine.
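The lock-free sampling design described above could be sketched as follows. This is a minimal illustration, not the actual JGroups Average implementation: the class and method names are assumptions, and the fill-then-replace indexing strategy is one plausible way to populate the array. The deliberate data races on the plain double[] (unsynchronized writes and the racy counter) are exactly the tolerated failures discussed above.

```java
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch of the lock-free sampled average described above.
// Names and the exact indexing policy are assumptions, not JGroups' API.
public class SampledAverage {
    protected final double[] samples; // plain array: no locks, no atomics
    protected int count;              // values added so far (racy, approximate)

    public SampledAverage(int capacity) {
        samples = new double[capacity];
    }

    // Fills the array sequentially until full, then overwrites a random
    // index. Concurrent writes to the same index may race (doubles are
    // not written atomically); this is tolerated by design.
    public void add(double value) {
        int index = count < samples.length
            ? count
            : ThreadLocalRandom.current().nextInt(samples.length);
        samples[index] = value;
        if (count < samples.length)
            count++; // racy increment; only caps the divisor early on
    }

    // Averages over the filled portion of the array. A concurrent write
    // may corrupt a single sample; the resulting error is rare and small,
    // which is acceptable since an average is an approximation anyway.
    public double average() {
        int n = Math.min(count, samples.length);
        if (n == 0)
            return 0.0;
        double total = 0;
        for (int i = 0; i < n; i++)
            total += samples[i];
        return total / n;
    }
}
```

Compared to an AtomicReferenceArray, each add() is a single plain array store plus a random-number draw, with no CAS loops, no boxing of doubles into objects, and no memory fences on the hot path.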