- Bug
- Resolution: Obsolete
- Normal
- 4.13.z
- Quality / Stability / Reliability
- Moderate
Description of problem:
A pod is not respecting the memory limit set on its deployment. In one of the deployments, limits.memory is set to 512Mi, but the pod's reported memory utilization is 727.1Mi:
~~~
$ oc adm top pods -n publix-s0eauths-dev
NAME                                       CPU(cores)   MEMORY(bytes)
eauth-authorization-api-74c6c45d75-vzfb6   5m           727.1Mi
~~~
The pod is expected not to consume more memory than the limit. If consumption exceeds the limit, the pod should be OOM-killed, yet it remains in the Running state.
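As a first diagnostic step, it may help to compare the limit from the deployment with the limit the kubelet actually enforces in the container's cgroup. The commands below are a sketch only (pod name taken from the report above; the cgroup file path differs between cgroup v1 and v2):

```shell
# Read the enforced memory limit from inside the container (sketch, not run here):
#   oc rsh -n publix-s0eauths-dev eauth-authorization-api-74c6c45d75-vzfb6 \
#     cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # cgroup v1
#   oc rsh -n publix-s0eauths-dev eauth-authorization-api-74c6c45d75-vzfb6 \
#     cat /sys/fs/cgroup/memory.max                     # cgroup v2

# The 512Mi limit from the deployment, converted to bytes for comparison:
echo $((512 * 1024 * 1024))   # 536870912
```

If the cgroup value matches 536870912 bytes, the limit is being applied and the discrepancy lies in how usage is being measured or aggregated rather than in limit enforcement.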
Version-Release number of selected component (if applicable):
OCP 4.13.z
Actual results:
The pod is consuming more memory than the limit set in the deployment.
Expected results:
The pod should not consume more memory than the limit set in the deployment.
Additional info:
Below are the deployment YAML and pod YAML for reference.
Deployment YAML:
~~~
resources:
  limits:
    cpu: 100m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 128Mi
~~~
Pod YAML:
~~~
name: eauth-authorization-api
ports:
- containerPort: 8080
  name: web-api-port
  protocol: TCP
resources:
  limits:
    cpu: 100m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 128Mi
~~~
The customer re-deployed the pod, but the memory consumption is still higher than the limit:
~~~
$ oc adm top pods -n publix-s0eauths-dev
NAME                                       CPU(cores)   MEMORY(bytes)
eauth-authorization-api-854ddbcdf6-p9w2t   5m           752Mi
~~~