  Knative Serving / SRVKS-936

activator close to memory limit, OOMKilled during soak test


    • Type: Bug
    • Resolution: Done
    • Priority: Critical
    • Fix Version/s: 1.31.0
    • Affects Version/s: 1.23.0, 1.25.0, 1.30.0
      Running Serverless with a large number of Knative services can make the Knative activator pods run close to their default memory limit of 600MB. They can be restarted (OOMKilled) if memory consumption reaches the limit. Requests and limits for the activator deployment can be configured through the KnativeServing custom resource as follows:
      apiVersion: operator.knative.dev/v1alpha1
      kind: KnativeServing
      metadata:
        name: knative-serving
        namespace: knative-serving
      spec:
        deployments:
        - name: activator
          resources:
          - container: activator
            requests:
              cpu: 300m
              memory: 60Mi
            limits:
              cpu: 1000m
              memory: 1000Mi
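      To roll this out, the snippet above can be applied to the cluster and the resulting deployment checked. The file name knative-serving.yaml below is just a placeholder for wherever the KnativeServing resource is kept, and the jsonpath query is only a sketch of one way to verify the override:

      # Apply the KnativeServing override shown above (placeholder file name)
      oc apply -f knative-serving.yaml

      # Verify that the activator container picked up the new requests/limits
      oc get deployment activator -n knative-serving \
        -o jsonpath='{.spec.template.spec.containers[?(@.name=="activator")].resources}'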

      During a soak test with about 212 ksvcs in total on the cluster (under a load of about 14 MBps according to the total knative-serving-ingress Receive Bandwidth), the activators run close to their default memory limit of 600MB and are sometimes OOMKilled:

      oc get pods -n knative-serving 
      NAME                                     READY   STATUS    RESTARTS      AGE
      activator-774d4ff4b8-4l5vp               2/2     Running   1 (35h ago)   36h
      activator-774d4ff4b8-x29hn               2/2     Running   2 (31h ago)   36h
      autoscaler-d5f4ccf94-fn4hp               2/2     Running   0             36h
      autoscaler-d5f4ccf94-gwb45               2/2     Running   0             36h
      autoscaler-hpa-6d5f65fc85-4s2hw          2/2     Running   0             36h
      autoscaler-hpa-6d5f65fc85-vg77m          2/2     Running   0             36h
      controller-7dcdc9c96b-8jfzk              2/2     Running   0             36h
      controller-7dcdc9c96b-fhz9c              2/2     Running   0             36h
      domain-mapping-5d6666dd64-27tbq          2/2     Running   0             36h
      domain-mapping-5d6666dd64-k9px8          2/2     Running   0             36h
      domainmapping-webhook-575f6887b5-5czsl   2/2     Running   0             36h
      domainmapping-webhook-575f6887b5-jfm6c   2/2     Running   0             36h
      webhook-5cf8d5ccfd-87c4s                 2/2     Running   0             36h
      webhook-5cf8d5ccfd-rgngp                 2/2     Running   0             36h
      
      oc describe pod -n knative-serving activator-774d4ff4b8-4l5vp | grep Reason
            Reason:       OOMKilled
      
      oc describe pod -n knative-serving activator-774d4ff4b8-x29hn | grep Reason
            Reason:       OOMKilled
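      As a quick way to keep an eye on this, the checks below sketch how to watch activator memory against the limit and list the last termination reason per pod; they assume the activator pods carry the usual app=activator label and that pod metrics are available on the cluster:

      # Current memory usage of the activator pods (needs pod metrics available)
      oc adm top pods -n knative-serving -l app=activator

      # Last termination reason for each activator container (e.g. OOMKilled)
      oc get pods -n knative-serving -l app=activator \
        -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[?(@.name=="activator")].lastState.terminated.reason}{"\n"}{end}'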
      

              Assignee: Reto Lehmann (rh-ee-rlehmann)
              Reporter: Marek Schmidt (maschmid@redhat.com)
              Votes: 0
              Watchers: 6
