Type: Task
Resolution: Done
Priority: Normal
SWATCH-688 should fix all memory bloat in the tally process when fully implemented. There will still be cases where the Kafka message commit takes too long and causes failures. The tradeoff we are looking to investigate here is:
- Assume our worst-case memory/processing structure: a customer using 100% VDC hypervisors/guests, with 8 guests per hypervisor.
- How does memory usage scale as the number of hypervisors/guests in a tally run grows?
- If we are no longer failing due to memory usage, at what scale do we start to fail due to the Kafka commit timeout? (See the consumer config sketch below.)
We expect some negative performance impact, but not large (<30% increase in duration should be fine).
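For context on the commit-timeout failure mode, below is a minimal sketch of the plain Kafka consumer settings that typically bound how long a consumer can spend processing before the group considers it dead and the eventual commit fails. The property names are standard Kafka client configs; the values, group id, topic name, and the `runTally` helper are illustrative assumptions, not the tally service's actual wiring.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TallyConsumerTimeoutSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "tally-worker");            // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        // Manual commits: the offset is only committed after the tally work finishes,
        // so a long tally run directly delays the commit.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        // If the gap between poll() calls exceeds this, the broker assumes the consumer
        // is dead and revokes its partitions, so the later commit fails. This is the
        // ceiling the "Kafka commit timeout" question is probing.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "300000"); // 5 minutes (Kafka default)

        // Fewer records per poll means less work (and memory) per poll interval.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("tally-tasks")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    runTally(record.value()); // must finish within max.poll.interval.ms
                }
                consumer.commitSync(); // fails if the group has already rebalanced us out
            }
        }
    }

    private static void runTally(String taskPayload) {
        // Placeholder for the actual tally processing.
    }
}
```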
PR: https://github.com/RedHatInsights/rhsm-subscriptions/pull/1778
FYI, if the image build expires, it can be retriggered with a `/retest` comment on the PR.
Related to: SWATCH-685 Split instance update and Tally into two transactions (Closed)