  Distributed Tracing / TRACING-3173

jaeger-operator pod restarting with OOMKilled with the default memory value


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • rhosdt-2.9
    • None
    • Jaeger
    • None
    • Tracing Sprint # 237
    • Important
    • Customer Facing

      DESCRIPTION

      The `jaeger-operator` pod shipped with "Red Hat OpenShift distributed tracing platform" version 1.42.0-5 is continually restarting with an OOMKilled status under the default memory and CPU requests and limits.

      $ oc get pods
      NAME                               READY   STATUS    RESTARTS   AGE
      jaeger-operator-5bfdc966bf-28kt4   2/2     Running   161        25h
      
      $ oc get pod -n openshift-distributed-tracing -o name -l name=jaeger-operator -o yaml  |grep -i oomkilled
                reason: OOMKilled 
      
      $ oc get pod -n openshift-distributed-tracing -o name -l name=jaeger-operator -o yaml  |egrep -i "( requests| limits)" -A 2
              limits:
                cpu: 500m
                memory: 512Mi
              requests:
                cpu: 100m
                memory: 128Mi
      --
              limits:
                cpu: 500m
                memory: 128Mi
              requests:
                cpu: 5m
                memory: 64Mi

      If the memory limit is increased to 1.5Gi, there are no more restarts or OOMKills.
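      As a workaround, the operator's resource limits can be raised through the OLM Subscription that installs it, using the `spec.config.resources` stanza. The sketch below is only illustrative: it assumes the Subscription is named `jaeger-product` and lives in the `openshift-distributed-tracing` namespace. Verify the real names with `oc get subscriptions -n openshift-distributed-tracing` before applying.

      # Illustrative Subscription override: raises the jaeger-operator memory
      # limit to 1536Mi so the pod no longer hits the default 512Mi limit.
      # The Subscription name, channel, and source below are assumptions.
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: jaeger-product
        namespace: openshift-distributed-tracing
      spec:
        channel: stable
        name: jaeger-product
        source: redhat-operators
        sourceNamespace: openshift-marketplace
        config:
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 1536Mi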

      VERSIONS

      • OCP 4.10
      • Red Hat OpenShift distributed tracing platform 1.42.0-5

      What happens?

      The Jaeger operator pod restarts with OOMKilled because it hits the default `limits.memory` value of 512Mi, which makes the operator mostly unusable.

      What's expected

      The Jaeger operator pod does not restart with OOMKilled: the default values are high enough that the pod never hits `limits.memory` and gets restarted.
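      A quick way to confirm the expected behaviour is to watch the restart count and the last termination reason of the operator pod (namespace and label taken from the commands above); the restart count should stay stable and no OOMKilled reason should be reported:

      $ oc get pods -n openshift-distributed-tracing -l name=jaeger-operator
      $ oc get pod -n openshift-distributed-tracing -l name=jaeger-operator \
          -o jsonpath='{.items[0].status.containerStatuses[*].lastState.terminated.reason}'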

            rhn-support-iblancas Israel Blancas Alvarez
            rhn-support-ocasalsa Oscar Casal Sanchez
            Ishwar Kanse
            Votes: 1
            Watchers: 5

              Created:
              Updated:
              Resolved: