OpenShift Request For Enhancement · RFE-2116

Allow autoscaling via custom metrics



      The Problem

      Not every workload can rely on CPU and memory consumption to determine whether it needs to be scaled up or down. For example, a web application exposing an HTTP API needs to scale based on incoming traffic (the number of HTTP requests).

      Autoscaling is a key feature of Kubernetes. There are different options for scaling either your workload (the number of replicas or the resources attached to it) or the number of nodes in your cluster. This feature focuses on scaling workloads by increasing or decreasing the number of replicas, i.e. the Horizontal Pod Autoscaler (HPA).
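
      For reference, the following is a minimal sketch of the kind of HorizontalPodAutoscaler that already works out of the box, since it relies only on the resource metrics API. The autoscaling/v2 API version and the Deployment name "frontend" are assumptions for illustration; older clusters may expose autoscaling/v2beta2 instead.

      # Minimal sketch of a CPU-based HPA (works today via the resource metrics API).
      # The target Deployment name "frontend" and the replica bounds are illustrative.
      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: frontend-cpu
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: frontend
        minReplicas: 2
        maxReplicas: 10
        metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 75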

      The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The controller manager obtains metrics from either the resource metrics API (CPU and memory) or the custom metrics API (all other metrics). Currently, OpenShift only exposes an implementation of the resource metrics API, which is available out of the box with a standard OpenShift installation. It does not provide an implementation of the custom metrics API, so customers cannot use any other metric to reliably operate high-SLA applications with minimal to no downtime during, for example, peak times.
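
      By contrast, the sketch below shows what customers would like to express: scaling the same hypothetical "frontend" Deployment on a pod-level metric such as requests per second. The metric name "http_requests_per_second" and the target value are assumptions; without a custom metrics API implementation (for example, a Prometheus adapter) serving /apis/custom.metrics.k8s.io, the HPA controller has nowhere to read this metric from, which is exactly the gap this RFE describes.

      # Sketch only: requires a custom metrics API implementation that OpenShift
      # does not ship today. Metric name and target value are illustrative.
      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: frontend-http
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: frontend
        minReplicas: 2
        maxReplicas: 20
        metrics:
        - type: Pods
          pods:
            metric:
              name: http_requests_per_second
            target:
              type: AverageValue
              averageValue: "100"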
