- Type: Feature
- Resolution: Done
- Priority: Critical
Story: As a developer or user, I want to be able to adjust the resource requests of Quay components so that I can run in smaller test clusters, or request more resources upfront to avoid partially degraded Quay pods.
Background: A default deployment with all features turned on needs a minimum of 10 CPU cores allocated; with auto-scaling turned off and a minimum replica count of 2, the requirement drops to 5.5 CPU cores. Because these are dedicated cores, the Quay components will fail to schedule if that amount of resources is not available in the cluster. This is too large to even schedule a Quay deployment in typical PoC clusters. On the other hand, since the Quay pods run so many different processes, CPU and especially memory consumption can rise significantly, to the point where some processes are OOM-killed by the kernel. This leaves behind partially degraded Quay pods.
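For reference, a minimal generic sketch (not an actual Quay manifest; names, image, and values are placeholders) of how requests and limits drive the two failure modes described above: the scheduler only places a pod on a node that can reserve the full request, and a container that exceeds its memory limit is OOM-killed.

```yaml
# Generic illustration of requests vs. limits; all names and values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest
      resources:
        requests:
          cpu: "2"        # the scheduler reserves 2 full cores; if no node has that
          memory: "4Gi"   # much unreserved capacity, the pod stays Pending
        limits:
          cpu: "2"
          memory: "8Gi"   # a process pushing past this memory limit is OOM-killed,
                          # leaving the pod partially degraded
```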
Acceptance criteria:
- resource request and limit overrides exist for all components that run pods (see the configuration sketch after this list):
  - Quay
  - Clair
  - Mirroring Workers
  - Postgres (Clair & Quay)
- it must be possible to specify an override in a way that removes / does not set any resource requests and limits
- without any overrides, the current default values apply
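The exact schema is up to the implementation; the following is a sketch of what a per-component override on the QuayRegistry CR could look like, assuming an overrides.resources field per component (field names and values are illustrative, not the final API).

```yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
spec:
  components:
    - kind: quay
      managed: true
      overrides:            # illustrative field names; the shipped schema may differ
        resources:
          requests:
            cpu: "500m"
            memory: "2Gi"
          limits:
            memory: "4Gi"
    - kind: clair
      managed: true
      overrides:
        resources: {}       # one possible way to express "set no requests/limits at all"
```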
Optional:
- review the default requests to be in line with the following:
  - Quay components request only as many CPU cores as they minimally need to start up by default
  - Quay components request only as much RAM as they need for an idle baseline by default
Notes:
- https://docs.openshift.com/container-platform/4.9/nodes/pods/nodes-pods-priority.html
- https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/
- Look for a stress test so we can evaluate the chosen values (see the sketch at the end of this section)
- is impacted by:
  - PROJQUAY-5455 Pod crashing consistently due to memory limit. (Closed)
- relates to:
  - PROJQUAY-5454 Allow pod memory limit to be increased. (Closed)
  - PROJQUAY-3060 Ability to override resource limits/requests for pods (Closed)
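For the stress-test note above, a throwaway Job along these lines could be used to sanity-check a candidate memory limit; the stress image (as used in the Kubernetes memory-limit documentation examples) and all numbers are placeholders, not an endorsed test setup.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: memory-stress-check   # hypothetical name
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: stress
          image: polinux/stress               # community stress image used in the Kubernetes docs
          command: ["stress"]
          args: ["--vm", "1", "--vm-bytes", "3G", "--vm-hang", "60"]   # try to allocate ~3Gi
          resources:
            requests:
              memory: "2Gi"
            limits:
              memory: "2Gi"                   # allocating beyond this should end in OOMKilled
```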