Type: Story
Resolution: Done
Priority: Normal
Description of problem:
Based on discussions with Peter, Mrunal, and Harshil, it's my understanding that crun imposes significantly lower overhead than runc. I compared the smallest memory limit at which podman can start a container under each runtime to confirm this: crun can start with as little as 384KiB, while runc requires 7680KiB to start reliably on an OpenShift 4.14 host.

podman --runtime /usr/bin/runc run --rm --memory 7680K fedora echo it works
podman --runtime /usr/bin/crun run --rm --memory 384K fedora echo it works

However, cri-o enforces a 12MiB minimum, so the overhead required to successfully launch a pod remains significant despite the more efficient runtime.
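The per-runtime minimum above can be found by binary-searching the --memory value. This is a hypothetical sketch, not part of the original report: the probe image, runtime paths, and 64KiB stopping granularity are assumptions.

```shell
#!/bin/sh
# probe RUNTIME KIB: try to run `echo` under the runtime with the given
# memory limit; succeeds only if the container starts. (Assumed invocation,
# mirroring the podman commands above.)
probe() {
  podman --runtime "/usr/bin/$1" run --rm --memory "${2}K" fedora echo it works >/dev/null 2>&1
}

# min_memory_kib RUNTIME LO HI: binary search between a known-failing limit
# LO and a known-working limit HI (both in KiB), to 64KiB granularity.
min_memory_kib() {
  runtime="$1"; lo="$2"; hi="$3"
  while [ $((hi - lo)) -gt 64 ]; do
    mid=$(( (lo + hi) / 2 ))
    if probe "$runtime" "$mid"; then
      hi=$mid            # container started: the minimum is at or below mid
    else
      lo=$mid            # container failed: the minimum is above mid
    fi
  done
  echo "$hi"
}

# Example usage (requires podman plus both runtimes installed):
# min_memory_kib crun 64 8192
# min_memory_kib runc 64 8192
```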
Version-Release number of selected component (if applicable):
Tested on 4.14.5
How reproducible:
Steps to Reproduce:
1. Launch the following pod under each runtime:

apiVersion: v1
kind: Pod
metadata:
  name: stress
spec:
  containers:
  - name: stress
    image: polinux/stress-ng
    # Just spin & wait forever
    command: ["/bin/bash", "-c", "--"]
    args: ["stress-ng --vm 1 --vm-bytes 256M --vm-keep -t 3600s -v"] # 3600s means the stress test will run for 1 hour
    resources:
      limits:
        memory: "296Mi"

2. Check pod logs for whether or not the forked process is getting OOM killed.
3. Adjust the limit up or down as appropriate to determine the overhead.
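Steps 2 and 3 can be scripted; the grep pattern used here to spot an OOM kill in the stress-ng output is an assumption and may need tuning for your log format:

```shell
#!/bin/sh
# oom_killed: read pod log text on stdin and succeed if it looks like a
# worker was OOM-killed. (Assumed pattern; stress-ng and kernel messages vary.)
oom_killed() {
  grep -qiE 'oom|killed'
}

# Steps 2-3 (require cluster access; the manifest above saved as stress-pod.yaml):
# oc apply -f stress-pod.yaml
# sleep 30
# if oc logs stress | oom_killed; then
#   echo "limit too low: raise memory and retry"
# else
#   echo "limit sufficient: lower memory and retry to narrow the overhead"
# fi
```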
Actual results:
Both runtimes require approximately 40Mi of overhead
Expected results:
When using crun, the required limit overhead should be lower, reflecting its smaller overall footprint
Additional info:
Despite the limit overhead requirement being the same, Peter points out that actual memory consumption on the system should still be reduced with crun: in steady state, during the burst at pod startup, and in the cost of periodic probes. So there is still a benefit to using crun over runc today; it just isn't directly reflected in the usable portion of the pod's resource limits.