OpenShift Bugs / OCPBUGS-27469

"Deploy Image" with "Serverless Deployment", Scaling "Min Pods"/"Max Pods" should set "autoscaling.knative.dev/min-scale"/max-scale not minScale/maxScale


    • Moderate
    • No
    • ODC Sprint 3251
    • 1
    • Rejected
    • False
      * Previously, the annotations to set scale bound values were set to `autoscaling.knative.dev/maxScale` and `autoscaling.knative.dev/minScale`. With this update, the annotations to set scale bound values are updated to `autoscaling.knative.dev/min-scale` and `autoscaling.knative.dev/max-scale` to determine the minimum and maximum numbers of replicas that can serve an application at any given time. You can set scale bounds for an application to help prevent cold starts or control computing costs. (link:https://issues.redhat.com/browse/OCPBUGS-27469[*OCPBUGS-27469*])
    • Bug Fix
    • In Progress

      Description of problem:

      Creating a Serverless Deployment with the "Scaling" options "Min Pods"/"Max Pods" set uses the deprecated Knative annotations "autoscaling.knative.dev/minScale" / "maxScale".

      The correct current annotations are "autoscaling.knative.dev/min-scale" / "max-scale".

      The same problem affects "autoscaling.knative.dev/targetUtilizationPercentage", which should be "autoscaling.knative.dev/target-utilization-percentage".
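The renaming follows Knative's move from camelCase to kebab-case annotation names. As a minimal illustrative helper (hypothetical, not the console's actual code), the current key can be derived mechanically from a deprecated one:

```python
import re

def to_current_key(key: str) -> str:
    """Convert a deprecated camelCase autoscaling annotation key
    (e.g. autoscaling.knative.dev/minScale) to its current
    kebab-case form (autoscaling.knative.dev/min-scale)."""
    prefix, _, name = key.rpartition("/")
    # Insert a hyphen before each interior capital letter, then lowercase.
    kebab = re.sub(r"(?<!^)(?=[A-Z])", "-", name).lower()
    return f"{prefix}/{kebab}"
```

For example, `to_current_key("autoscaling.knative.dev/targetUtilizationPercentage")` yields `autoscaling.knative.dev/target-utilization-percentage`.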

      Prerequisites (if any, like setup, operators/versions):

      Serverless operator

      Steps to Reproduce

      1. Install the Serverless Operator
      2. Create a KnativeServing resource in the knative-serving namespace
      3. Create a test "foobar" namespace
      4. Go to <console>/deploy-image/ns/foobar
      5. Use gcr.io/knative-samples/helloworld-go as the "Image name from external registry" (or any web server image listening on :8080)
      6. Choose "Serverless Deployment" as the "Resource type"
      7. Click on "Scaling" in "Click on the names to access advanced options for ..."
      8. Set "2" for "Min Pods" and "3" for "Max Pods"
      9. Click "Create"

      Actual results:

      The created ksvc resource has

      spec:
        template:
          metadata:
            annotations:
              autoscaling.knative.dev/maxScale: "3"
              autoscaling.knative.dev/minScale: "2"
              autoscaling.knative.dev/targetUtilizationPercentage: "70"

      Expected results:

      The created ksvc should have

      spec:
        template:
          metadata:
            annotations:
              autoscaling.knative.dev/max-scale: "3"
              autoscaling.knative.dev/min-scale: "2"
              autoscaling.knative.dev/target-utilization-percentage: "70"
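A fix amounts to emitting the kebab-case keys instead of the deprecated ones. A small sketch (assuming the annotations are available as a plain dict, as from a parsed manifest) that rewrites the actual annotations above into the expected form:

```python
# Deprecated camelCase keys mapped to their current kebab-case equivalents.
RENAMES = {
    "autoscaling.knative.dev/minScale": "autoscaling.knative.dev/min-scale",
    "autoscaling.knative.dev/maxScale": "autoscaling.knative.dev/max-scale",
    "autoscaling.knative.dev/targetUtilizationPercentage":
        "autoscaling.knative.dev/target-utilization-percentage",
}

def migrate_annotations(annotations: dict) -> dict:
    """Return a copy with deprecated keys replaced; other keys pass through."""
    return {RENAMES.get(k, k): v for k, v in annotations.items()}
```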
      

      Reproducibility (Always/Intermittent/Only Once): Always

      Build Details:

      4.14.8

      Workaround:

      None required at the moment; the current Serverless release still supports the deprecated "minScale"/"maxScale" annotations.

      Additional info:

      https://docs.openshift.com/serverless/1.31/knative-serving/autoscaling/serverless-autoscaling-developer-scale-bounds.html

      https://issues.redhat.com/browse/SRVKS-910

       

            rh-ee-lprabhu Lokananda Prabhu
            maschmid@redhat.com Marek Schmidt
            Sanket Pathak Sanket Pathak
            Votes: 0
            Watchers: 6
