- Bug
- Resolution: Done
- Major
- Tracing Sprint # 237
- Important
- Customer Facing
DESCRIPTION
The `jaeger-operator` pod distributed with "Red Hat OpenShift distributed tracing platform" version 1.42.0-5 is continually restarting with OOMKilled status under the default memory and CPU assignments.
$ oc get pods
NAME                               READY   STATUS    RESTARTS   AGE
jaeger-operator-5bfdc966bf-28kt4   2/2     Running   161        25h

$ oc get pod -n openshift-distributed-tracing -o name -l name=jaeger-operator -o yaml | grep -i oomkilled
      reason: OOMKilled

$ oc get pod -n openshift-distributed-tracing -o name -l name=jaeger-operator -o yaml | egrep -i "( requests| limits)" -A 2
    limits:
      cpu: 500m
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 128Mi
--
    limits:
      cpu: 500m
      memory: 128Mi
    requests:
      cpu: 5m
      memory: 64Mi
If the memory limit is increased to 1.5G, there are no more restarts/OOMKilled events.
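A possible workaround until the defaults are fixed (not part of the original report, and assuming the operator was installed through OLM) is to override the operator pod resources via the Subscription's `spec.config.resources`. The subscription name `jaeger-product` below is an assumption; check it first with `oc get subscription -n openshift-distributed-tracing`:

```
# Sketch only: raise the jaeger-operator memory limit through the OLM Subscription
# config so the pod is not OOMKilled at the 512Mi default.
$ oc patch subscription jaeger-product -n openshift-distributed-tracing \
    --type merge \
    -p '{"spec":{"config":{"resources":{"requests":{"cpu":"100m","memory":"128Mi"},"limits":{"cpu":"500m","memory":"1536Mi"}}}}}'
```

OLM then regenerates the operator Deployment with a 1536Mi (1.5Gi) limit, matching the value reported above to stop the restarts.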
VERSIONS
- OCP 4.10
- Red Hat OpenShift distributed tracing platform 1.42.0-5
What happens?
The Jaeger operator pod is restarted with OOMKilled after hitting the default `limits.memory` value of 512Mi, making the operator mostly unusable.
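The termination reason can also be read directly from the pod status, without grepping the full YAML:

```
# Prints the last termination reason of each container in the operator pod.
$ oc get pod -n openshift-distributed-tracing -l name=jaeger-operator \
    -o jsonpath='{.items[0].status.containerStatuses[*].lastState.terminated.reason}'
```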
What's expected
The Jaeger operator pod does not restart with OOMKilled; the default resource values should be high enough that the `limits.memory` threshold is never hit and the pod is not restarted.
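The cloned issue TRACING-3204 below suggests the likely direction of the fix: keep `resources.requests` but drop `resources.limits` from the operator Deployment. Transposed to the jaeger-operator container, that would look roughly like the following sketch (not the shipped CSV/Deployment; the container name is assumed and the request values are the current defaults shown above):

```
containers:
  - name: jaeger-operator
    resources:
      requests:
        cpu: 100m      # scheduling hint is kept
        memory: 128Mi
      # no limits stanza, so the container is not OOMKilled at the 512Mi cgroup limit
```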
- is cloned by: TRACING-3204 Remove resource limits for Tempo Operator but keep the resource.requests (Closed)
- links to: RHSA-2023:117866 Red Hat OpenShift distributed tracing 2.9.0 operator/operand containers