Details
- Type: Bug
- Resolution: Unresolved
- Affects Version: 4.15
Description
Description of problem:
While running perf/scale tests and upgrades on 250- and 500-node clusters, we see that kube-apiserver (KAS) memory usage jumps almost 2.5x while the KAS pods are restarting during the upgrade. We had some success by setting an aggressive GOMEMLIMIT (60% of the total memory limit). Our sizing guidelines are mainly driven by this usage spike, and we wanted to check whether anything could be done to reduce it.
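As an illustration of the workaround mentioned above, the sketch below computes a GOMEMLIMIT at 60% of a container memory limit and exports it for the process. The 10 GiB figure is an assumed example, not a value from this report; in practice the limit would come from the pod's actual memory request/limit.

```shell
# Hypothetical sketch: derive GOMEMLIMIT as 60% of an assumed container limit.
# 10 GiB is an example value only.
CONTAINER_LIMIT_BYTES=$((10 * 1024 * 1024 * 1024))

# Go accepts a plain byte count in GOMEMLIMIT; the runtime then treats it as
# a soft memory limit and triggers GC more aggressively as usage approaches it.
GOMEMLIMIT=$((CONTAINER_LIMIT_BYTES * 60 / 100))
export GOMEMLIMIT

echo "$GOMEMLIMIT"
```

Note that GOMEMLIMIT is a soft limit: the runtime GCs harder near the limit but will still exceed it rather than deadlock, so it caps the spike without guaranteeing the pod stays under its cgroup limit.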
Version-Release number of selected component (if applicable):
OCP 4.15
How reproducible:
Always
Steps to Reproduce:
1. Load up the cluster with cluster-density-v2
2. Start upgrades
3. Monitor KAS memory usage
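For step 3, one way to watch KAS memory during the upgrade (assuming cluster monitoring is enabled; `container_memory_working_set_bytes` is the standard cAdvisor metric and `openshift-kube-apiserver` the usual namespace) is a Prometheus query along these lines:

```
sum by (pod) (
  container_memory_working_set_bytes{namespace="openshift-kube-apiserver", container="kube-apiserver"}
)
```

Graphing this per pod across the upgrade window makes the restart-time spike visible relative to steady-state usage.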