Type: Bug
Resolution: Unresolved
Priority: Normal
Affects Version: 4.21
Severity: Moderate
Description of problem:
Some storage containers do not have `readOnlyRootFilesystem: true` set in their securityContext. The affected containers, listed as `<deployment name> <container name>` and grouped by platform, are below; a sketch of the expected securityContext follows the list.
- All kube-rbac-proxy sidecars in all drivers
- AWS HyperShift (without EFS):
  - aws-ebs-csi-driver-controller token-minter
  - aws-ebs-csi-driver-operator token-minter
  - aws-ebs-csi-driver-operator aws-ebs-csi-driver-operator
- vSphere:
  - all vmware-vsphere-csi-driver-controller containers
  - all vmware-vsphere-csi-driver-webhook containers
- Azure HyperShift:
  - azure-disk-csi-driver-operator azure-disk-csi-driver-operator
  - azure-file-csi-driver-operator azure-file-csi-driver-operator
- GCE (without Filestore):
  - all gcp-pd-csi-driver-controller containers
- OpenStack HyperShift:
  - openstack-cinder-csi-driver-operator openstack-cinder-csi-driver-operator
  - openstack-manila-csi-controllerplugin csi-driver-nfs
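What the fix should look like, as a minimal sketch (the container name is taken from the list above; the surrounding fields are illustrative context, not an actual manifest from any repo):

```
spec:
  template:
    spec:
      containers:
      - name: token-minter  # one of the affected containers listed above
        securityContext:
          readOnlyRootFilesystem: true
```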
Version-Release number of selected component (if applicable):
- for non-HyperShift clusters: registry.ci.openshift.org/ocp/release:4.21.0-0.ci-2025-11-25-210640
- for the HyperShift clusters: an arbitrary current version
Steps to reproduce:
Get deployments.json from CI artifacts (e.g. here) and run this jq:

```
# Parse JSON and find containers without readOnlyRootFilesystem: true
jq -r '
  .items[]
  | .metadata.name as $deployment
  | .metadata.namespace as $namespace
  | .spec.template.spec.containers[]
  | select((.securityContext.readOnlyRootFilesystem // false) != true)
  | "\($namespace)/\($deployment) \(.name)"
' deployments.json
```
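On a live (non-HyperShift) cluster the same input file can be produced without CI artifacts; a sketch assuming the drivers run in the usual openshift-cluster-csi-drivers namespace:

```
# Dump the CSI driver deployments in the same List format the jq filter expects
oc get deployments -n openshift-cluster-csi-drivers -o json > deployments.json
```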
For HyperShift, I ended up converting the YAML files from the hosted control plane namespace dump into deployments.json:

```
(
  echo '{"apiVersion": "v1", "items": '
  jq -s '.' < <(for f in *.yaml; do yq -o=json "$f"; done)
  echo '}'
) > deployments.json
```
blocks:
- STOR-2560 [csi-driver-operators] Configure containers to set readOnlyRootFilesystem to true [starting in OCP 4.20] (Dev Complete)