Type: Bug
Resolution: Done
Priority: Critical
Affects Version/s: 1.23.0, 1.25.0, 1.30.0
During a soak test with about 212 Knative Services (ksvcs) in total on the cluster (a load of about 14 MB/s according to the total knative-serving-ingress Receive Bandwidth),
the activators are running close to their default memory limit of 600MB and are sometimes getting OOMKilled:
oc get pods -n knative-serving
NAME                                     READY   STATUS    RESTARTS      AGE
activator-774d4ff4b8-4l5vp               2/2     Running   1 (35h ago)   36h
activator-774d4ff4b8-x29hn               2/2     Running   2 (31h ago)   36h
autoscaler-d5f4ccf94-fn4hp               2/2     Running   0             36h
autoscaler-d5f4ccf94-gwb45               2/2     Running   0             36h
autoscaler-hpa-6d5f65fc85-4s2hw          2/2     Running   0             36h
autoscaler-hpa-6d5f65fc85-vg77m          2/2     Running   0             36h
controller-7dcdc9c96b-8jfzk              2/2     Running   0             36h
controller-7dcdc9c96b-fhz9c              2/2     Running   0             36h
domain-mapping-5d6666dd64-27tbq          2/2     Running   0             36h
domain-mapping-5d6666dd64-k9px8          2/2     Running   0             36h
domainmapping-webhook-575f6887b5-5czsl   2/2     Running   0             36h
domainmapping-webhook-575f6887b5-jfm6c   2/2     Running   0             36h
webhook-5cf8d5ccfd-87c4s                 2/2     Running   0             36h
webhook-5cf8d5ccfd-rgngp                 2/2     Running   0             36h

oc describe pod -n knative-serving activator-774d4ff4b8-4l5vp | grep Reason
      Reason:       OOMKilled

oc describe pod -n knative-serving activator-774d4ff4b8-x29hn | grep Reason
      Reason:       OOMKilled
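As a possible mitigation (a minimal sketch, assuming the cluster is managed through the Knative Operator's KnativeServing custom resource; the apiVersion and the 1Gi value are examples, not values taken from this report), the activator's memory limit can be raised with a system deployment override:

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  deployments:
  - name: activator            # system deployment to override
    resources:
    - container: activator     # container whose resources are adjusted
      limits:
        memory: 1Gi            # example value, raised above the default 600MB limit

The linked SRVKS-937 covers documenting this kind of system deployment configuration.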
- is documented by: SRVKS-937 [DOC] Update system deployment config docs (Closed)
- is related to: SRVKS-1093 net-kourier-controller OOMKilled on soak test with default limits (Closed)
- links to