- Spike
- Resolution: Done
- Normal
- 3
PODAUTO - Sprint 260
An advanced customer will know what to do with these resource and rate overrides because they'll know where they're capping out, but a less advanced customer will probably want some guidance on what to set the overrides to – e.g. "how do I tell if I am in a large cluster?" or "what is the correlation between the number of nodes/objects/pods and the settings of these tunables?"
I'd like to get to something like "we suggest X resources per Y of something" and/or "the VPA needs X API query rate per Y of something", but to get there we probably need to do some testing.
- We discussed making the VPA "automatically scale itself" once we figure that out, but that is outside the scope of this card. For now we just need to figure it out and document some guidance.
- We also discussed removing the limits entirely, but:
- we believe we're required (as an openshift operator) to have a limit
- at least on the resource limit, we don't want to end up with "burstable" QoS, and
- on the rate limit, the limit is there to protect the cluster: rate-limiting the VPA just makes it work more slowly without otherwise damaging it, whereas an unlimited VPA could definitely hinder the cluster with too many requests
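For context, the two kinds of overrides under discussion (from PODAUTO-77 and PODAUTO-78) are set on the operator's `VerticalPodAutoscalerController` resource. The sketch below is illustrative only: the concrete values are placeholders, not tested recommendations, and the exact field layout under `deploymentOverrides` should be checked against the operator's current CRD.

```yaml
# Illustrative sketch only -- values are placeholders, not tested guidance.
apiVersion: autoscaling.openshift.io/v1
kind: VerticalPodAutoscalerController
metadata:
  name: default
  namespace: openshift-vertical-pod-autoscaler
spec:
  deploymentOverrides:
    recommender:
      container:
        # Resource override (PODAUTO-78): raise the cap for a large cluster,
        # keeping requests == limits so the pod stays in the Guaranteed QoS
        # class rather than Burstable.
        resources:
          requests:
            cpu: 100m
            memory: 500Mi
          limits:
            cpu: 100m
            memory: 500Mi
        # Rate override (PODAUTO-77): API client QPS/burst flags.
        args:
          - --kube-api-qps=30.0
          - --kube-api-burst=40.0
```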
Acceptance Criteria:
- A document exists that contains override configuration examples
- A document exists that contains some guidance on how to choose override values for a given cluster size/load
- is related to:
  - PODAUTO-243 Allow the VPA to scale itself (New)
  - PODAUTO-240 Create a blog post about the VPA benchmarking spike (Review)
- relates to:
  - PODAUTO-77 Allow cluster admins to specify VPA API client rates (Closed)
  - PODAUTO-78 Allow cluster admins to specify CPU & Memory requests and limits of VPA controllers (Closed)