Feature Request
Resolution: Done
Critical
The Problem
Not every workload can rely on CPU and memory consumption to decide whether it needs to be scaled up or down. For example, a web app exposing an HTTP API needs to scale based on incoming traffic (the number of HTTP requests).
Autoscaling is a key feature of Kubernetes. There are different options for scaling: you can scale the workload itself (the number of replicas or the resources attached to it) or the number of nodes in your cluster. This feature focuses on scaling a workload by increasing or decreasing its number of replicas, i.e., the Horizontal Pod Autoscaler.
The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The controller obtains metrics from either the resource metrics API (CPU and memory) or the custom metrics API (all other metrics). Currently, OpenShift only ships an implementation of the resource metrics API, which is available out of the box with a standard OpenShift installation. It does not provide an implementation of the custom metrics API, so customers cannot scale on any other metric, which makes it hard to operate reliable, high-SLA applications with minimal to no downtime during, for example, peak traffic.
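As a point of reference, this is roughly what replica-based autoscaling looks like today with the out-of-the-box resource metrics: a minimal `autoscaling/v2` HorizontalPodAutoscaler manifest targeting CPU utilization. The workload name `webapp` and the replica/utilization numbers are placeholders, not part of this request.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  # Deployment to scale; "webapp" is a placeholder name
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    # Resource metrics (CPU/memory) are the only ones OpenShift
    # serves out of the box today
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```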
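For illustration, a KEDA-style ScaledObject sketches what scaling on a custom metric could look like once a custom metrics implementation (the Custom Metric Autoscaler referenced below) is available. The Prometheus address, query, and threshold here are hypothetical values, not a committed API surface for this feature.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: webapp-scaler
spec:
  scaleTargetRef:
    name: webapp            # placeholder Deployment name
  minReplicaCount: 2
  maxReplicaCount: 20
  triggers:
    # Scale on HTTP request rate from a Prometheus query;
    # server address and query are illustrative only
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.example:9090
        query: sum(rate(http_requests_total[2m]))
        threshold: "100"
```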
is incorporated by:
- OCPNODE-1366 GA support for "Custom Metric Autoscaler" (Closed)
- OBSDA-1 Allow autoscaling via custom metrics (Closed)
- OCPSTRAT-484 Custom Metric Autoscaler (CMA) (Closed)

is related to:
- OCPNODE-708 Tech preview of KEDA (Closed)
- OCPSTRAT-484 Custom Metric Autoscaler (CMA) (Closed)