- Bug
- Resolution: Done
- Blocker
- None
- 1.2.0, 1.3.0, 1.4.0
- None
Knative on Kubernetes (like GKE) can consistently scale from zero in around 2 seconds. On OpenShift, it always seems to take at least 10 seconds.
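For comparison purposes, cold-start latency can be measured by timing the first request to a Service that has already scaled back to zero. A minimal sketch in Python (the route below is a placeholder for whatever Service is under test, e.g. test-selector in bbrowning-sandbox):

import time
import requests

# Placeholder route for a Knative Service that has scaled to zero;
# substitute the actual route of the Service under test.
SERVICE_URL = "http://test-selector.bbrowning-sandbox.example.com"

start = time.monotonic()
resp = requests.get(SERVICE_URL, timeout=60)
elapsed = time.monotonic() - start

# The first request after scale-to-zero includes pod scheduling, container
# creation, and CNI setup, so the elapsed time approximates the cold-start
# latency being compared above.
print(f"status={resp.status_code} cold-start latency={elapsed:.1f}s")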
The gist at https://gist.githubusercontent.com/bbrowning/b2d9ae321e0f0aaf304a26368278f97a/raw/c443412e1ead83aaaed8dbb9c6dc8e1202f1a337/gistfile1.txt contains logs captured from the worker node while scaling a Knative Service from zero.
Specifically, notice the 7-second gap below; we need more logging to determine where that time is going:
Dec 17 03:28:30 ip-10-0-174-45 hyperkube[2277]: I1217 03:28:30.132130 2277 manager.go:1011] Added container: "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b71d398_207d_11ea_8028_0a1263433394.slice/crio-a40b6ac715335fcd3227fb2e5c84f14ae52cd319d703f8718873cecfba2f81a8.scope" (aliases: [k8s_POD_test-selector-74fc74c8d8-xl4b7_bbrowning-sandbox_4b71d398-207d-11ea-8028-0a1263433394_0 a40b6ac715335fcd3227fb2e5c84f14ae52cd319d703f8718873cecfba2f81a8], namespace: "crio")
Dec 17 03:28:37 ip-10-0-174-45 crio[1889]: 2019-12-17T03:28:37Z [verbose] Add: bbrowning-sandbox:test-selector-74fc74c8d8-xl4b7:openshift-sdn:eth0 {"cniVersion":"0.3.1","interfaces":[{"name":"eth0","sandbox":"/proc/3285126/ns/net"}],"ips":[{"version":"4","interface":0,"address":"10.128.2.78/23"}],"routes":[{"dst":"0.0.0.0/0","gw":"10.128.2.1"},{"dst":"224.0.0.0/4"},{"dst":"10.128.0.0/14"}],"dns":{}}
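One way to locate gaps like this across the full gist is to compute the delta between consecutive journalctl timestamps. A rough sketch, assuming every line carries the syslog-style "Dec 17 03:28:30" prefix shown above and that the gist has been saved locally as gistfile1.txt:

from datetime import datetime

GAP_THRESHOLD_SECONDS = 5.0
YEAR = 2019  # journalctl's short timestamps omit the year

def parse_ts(line):
    # "Dec 17 03:28:30" occupies the first 15 characters of each line.
    return datetime.strptime(f"{YEAR} {line[:15]}", "%Y %b %d %H:%M:%S")

with open("gistfile1.txt") as f:
    lines = [l for l in f if l.strip()]

prev = parse_ts(lines[0])
for line in lines[1:]:
    cur = parse_ts(line)
    gap = (cur - prev).total_seconds()
    if gap >= GAP_THRESHOLD_SECONDS:
        # Report any line that starts after an unusually long silence,
        # like the CNI "Add" entry roughly 7 seconds after the container
        # was added above.
        print(f"{gap:.0f}s gap before: {line.strip()[:120]}")
    prev = cur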
This will likely end up being something we need to track down in OpenShift itself, OpenShift SDN, and/or cri-o.