Story
Resolution: Done
User Story
As an OpenShift cluster admin,
I want to understand the resource footprint of the projected resource CSI driver,
So that I can deploy the CSI driver in resource-constrained environments (3-node, single-node).
Acceptance Criteria
- An e2e test suite that measures the driver's CPU and memory footprint in an idle state (no shares, no pods referencing shares).
  - Must include artificial creation of customer namespaces in addition to the OpenShift ones (https://issues.redhat.com/browse/BUILD-256).
  - This is a scale-testing exercise.
- Idle-state CPU and memory footprints are within acceptable/tolerable limits.
- Produce a report along a few dimensions:
  - Can we continue to listen to every namespace by default, with a few exceptions where we ignore?
  - Or do we have to pivot and start with a small set of namespaces we do watch, requiring the user to update the driver's config to listen to more?
  - Ideally, provide some guidance on the "cost" of adding X namespaces if we have to pivot.
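The artificial-namespace step above could be scripted along these lines. This is only an illustrative sketch: the `scale-test` name prefix and the `csi-driver-scale-test` label are hypothetical conventions, not anything defined by this story.

```python
def namespace_manifest(index: int, prefix: str = "scale-test") -> str:
    """Render a minimal Namespace manifest for one artificial namespace.

    The prefix and label below are illustrative placeholders; the label
    makes bulk cleanup after the scale run straightforward.
    """
    return (
        "apiVersion: v1\n"
        "kind: Namespace\n"
        "metadata:\n"
        f"  name: {prefix}-{index:04d}\n"
        "  labels:\n"
        "    csi-driver-scale-test: \"true\"\n"
    )


def manifests(count: int, prefix: str = "scale-test") -> str:
    """Concatenate `count` manifests into one multi-document YAML stream."""
    return "---\n".join(namespace_manifest(i, prefix) for i in range(count))
```

The resulting stream could be piped into `oc apply -f -` to create the namespaces, and a matching `oc delete namespace -l csi-driver-scale-test=true` would tear them down after the footprint measurement.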
Docs Impact
None.
Notes
What is an acceptable resource footprint for the driver in an idle state? For example, less than 50m CPU and 100MB of memory?
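A budget check against `kubectl top pod`-style quantities could be sketched as follows. The 50m/100MB defaults are taken from the example in the note above, not from an agreed budget, and `within_budget` is a hypothetical helper name.

```python
def parse_cpu(quantity: str) -> float:
    """Parse a Kubernetes CPU quantity ('50m', '0.5', '1') into cores."""
    if quantity.endswith("m"):
        return float(quantity[:-1]) / 1000.0
    return float(quantity)


def parse_mem(quantity: str) -> int:
    """Parse a Kubernetes memory quantity ('100Mi', '100M', '512Ki') into bytes."""
    suffixes = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30,
                "K": 10**3, "M": 10**6, "G": 10**9}
    # Try the two-character binary suffixes before the one-character decimal ones.
    for suffix, factor in sorted(suffixes.items(), key=lambda s: -len(s[0])):
        if quantity.endswith(suffix):
            return int(float(quantity[: -len(suffix)]) * factor)
    return int(quantity)


def within_budget(cpu: str, mem: str,
                  cpu_limit: str = "50m", mem_limit: str = "100M") -> bool:
    """Check a measured idle footprint against the example budget from the note."""
    return (parse_cpu(cpu) <= parse_cpu(cpu_limit)
            and parse_mem(mem) <= parse_mem(mem_limit))
```

For instance, an observed idle footprint of 40m CPU and 90Mi memory would pass this example budget, while 60m CPU would not.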
Derek Carr from OCP core engineering/architecture teed up some guidance on this subject earlier this year.
This effort needs to re-engage with that and see what the latest guidance and support is.
QE agrees this should be a "joint" effort, rather than development producing some new change and QE then verifying it.