Details
- Bug
- Resolution: Unresolved
- Major
- 2.13.2 GA
- Important
Description
Setup:
- OCP cluster-version: 4.12, 4.13
- 3scale-operator-bundle-container:
- v4.12: registry-proxy.engineering.redhat.com/rh-osbs/iib:490901
- v4.13: registry-proxy.engineering.redhat.com/rh-osbs/iib:490917
- 3scale-apicast-operator-bundle-container:
- v4.12: registry-proxy.engineering.redhat.com/rh-osbs/iib:490924
- v4.13: registry-proxy.engineering.redhat.com/rh-osbs/iib:490948
- 3scale version: 2.13.3
Reproduction steps:
- Deploy the 3scale operator and the APIcast operator on the OCP cluster
- Check the deployment status using $ oc get pods -n 3scale
Expected result:
- The backend-listener pod should be in the Running state
Actual result:
- The backend-listener pod restarts multiple times
Further Observation:
1. Using $ oc describe pod <backend-listener-pod> -n 3scale, the following is observed:
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Mon, 15 May 2023 03:58:06 +0000
Finished: Mon, 15 May 2023 06:18:25 +0000
Ready: True
Restart Count: 57
Limits:
cpu: 1
memory: 700Mi
Requests:
cpu: 500m
memory: 550Mi
2. The currently running backend-listener pod also shows a last state of Terminated with reason OOMKilled (exit code 137), meaning the pod used more memory than its limit allows.
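The OOMKilled condition above can also be detected programmatically from the pod status JSON (the same data oc get pod -o json returns). A minimal sketch; the helper name and the sample pod data are illustrative, mirroring the restart count and last-state values from the describe output above:

```python
def oomkilled_containers(pod):
    """Return (container name, restartCount) pairs for containers
    whose last state was Terminated with reason OOMKilled."""
    hits = []
    for cs in pod.get("status", {}).get("containerStatuses", []):
        last = cs.get("lastState", {}).get("terminated", {})
        if last.get("reason") == "OOMKilled" and last.get("exitCode") == 137:
            hits.append((cs["name"], cs.get("restartCount", 0)))
    return hits

# Sample status mirroring the observation above; the pod name is made up.
pod = {
    "metadata": {"name": "backend-listener-1-abcde"},
    "status": {
        "containerStatuses": [
            {
                "name": "backend-listener",
                "restartCount": 57,
                "lastState": {
                    "terminated": {"reason": "OOMKilled", "exitCode": 137}
                },
            }
        ]
    },
}

print(oomkilled_containers(pod))  # [('backend-listener', 57)]
```

Running this check across all pods in the namespace would flag any container hitting its memory limit, not just backend-listener.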
Workaround:
- Updated the APIManager spec field resourceRequirementsEnabled to false; the issue is no longer observed.
- Reference: https://github.com/3scale/3scale-operator/blob/master/doc/apimanager-reference.md
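For reference, the workaround corresponds to an APIManager resource along these lines. This is a sketch, not the exact manifest from the cluster: the metadata name and wildcardDomain are placeholder assumptions; resourceRequirementsEnabled is the spec field documented in the apimanager-reference linked above.

```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample      # placeholder name
  namespace: 3scale
spec:
  wildcardDomain: example.com  # placeholder domain
  # Disables the operator-managed CPU/memory requests and limits,
  # so backend-listener is no longer capped at 700Mi and OOMKilled.
  resourceRequirementsEnabled: false
```

Note that disabling resource requirements removes the limits entirely rather than raising them, so it works around the restarts but is not a sizing fix.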