Description of problem:
A cluster running 4.10.52 had three aws-ebs-csi-driver-node pods begin to consume multiple GiB of memory, causing heavy memory pressure on their nodes because the pods have no memory limit. All other aws-ebs-csi-driver-node pods remained in the normal 50-85MiB range:

NAME                                            CPU(cores)   MEMORY(bytes)
aws-ebs-csi-driver-controller-59867579b-d6s2q   0m           397Mi
aws-ebs-csi-driver-controller-59867579b-t4wgq   0m           276Mi
aws-ebs-csi-driver-node-4rmvk                   0m           53Mi
aws-ebs-csi-driver-node-5799f                   0m           50Mi
aws-ebs-csi-driver-node-6dpvg                   0m           59Mi
aws-ebs-csi-driver-node-6ldzk                   0m           65Mi
aws-ebs-csi-driver-node-6mbk5                   0m           54Mi
aws-ebs-csi-driver-node-bkvsr                   0m           50Mi
aws-ebs-csi-driver-node-c2fb2                   0m           62Mi
aws-ebs-csi-driver-node-f422m                   0m           61Mi
aws-ebs-csi-driver-node-lwzbb                   6m           1940Mi
aws-ebs-csi-driver-node-mjznt                   0m           53Mi
aws-ebs-csi-driver-node-pczsj                   0m           62Mi
aws-ebs-csi-driver-node-pmskn                   0m           3493Mi
aws-ebs-csi-driver-node-qft8w                   0m           68Mi
aws-ebs-csi-driver-node-v5bpx                   11m          2076Mi
aws-ebs-csi-driver-node-vn8km                   0m           84Mi
aws-ebs-csi-driver-node-ws6hx                   0m           73Mi
aws-ebs-csi-driver-node-xsk7k                   0m           59Mi
aws-ebs-csi-driver-node-xzwlh                   0m           55Mi
aws-ebs-csi-driver-operator-8c5ffb6d4-fk6zk     5m           88Mi

Deleting the affected pods caused them to be recreated with normal memory consumption.
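The per-pod figures above are "top pods"-style metrics output. As a hedged illustration only (not part of the original report), the same check could be scripted against the Kubernetes metrics API; the namespace, pod-name prefix, and 500Mi threshold below are assumptions and would need to be adjusted for the cluster being inspected:

package main

import (
    "context"
    "fmt"
    "strings"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/tools/clientcmd"
    metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
)

func main() {
    // Load kubeconfig from the default location (~/.kube/config).
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    mc, err := metricsclient.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Assumed namespace and alert threshold.
    const namespace = "openshift-cluster-csi-drivers"
    const thresholdMiB = 500

    pods, err := mc.MetricsV1beta1().PodMetricses(namespace).List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, pm := range pods.Items {
        if !strings.HasPrefix(pm.Name, "aws-ebs-csi-driver-node-") {
            continue
        }
        // Sum memory usage across all containers in the pod.
        var usedMiB int64
        for _, c := range pm.Containers {
            usedMiB += c.Usage.Memory().Value() / (1024 * 1024)
        }
        if usedMiB > thresholdMiB {
            fmt.Printf("%s is using %dMi -- well above the usual 50-85Mi\n", pm.Name, usedMiB)
        }
    }
}

Deleting any pod flagged this way lets its DaemonSet recreate it, which matches the workaround described above.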
Version-Release number of selected component (if applicable):
4.10.52
How reproducible:
Unknown
Issue links:
- blocks: OCPBUGS-13820 Excessive memory consumption of aws-ebs-csi-driver-node pods (for 4.11) (Closed)
- clones: OCPBUGS-13820 Excessive memory consumption of aws-ebs-csi-driver-node pods (for 4.11) (Closed)
- is blocked by: OCPBUGS-13887 Verify that upstream gRPC issue #4632 (leak of net.Conn) is fixed in 4.13 (Closed) (see the sketch after this list)
- is cloned by: OCPBUGS-13887 Verify that upstream gRPC issue #4632 (leak of net.Conn) is fixed in 4.13 (Closed)
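The linked verification bug attributes the growth to upstream gRPC issue #4632 (a leaked net.Conn inside gRPC itself). As a generic, hedged illustration only -- not the upstream issue's code path and not the driver's actual code -- the sketch below shows how one leaked gRPC connection per health probe produces this kind of slow, unbounded memory growth in a long-running node plugin; the endpoint and probe interval are made up:

// Illustrative only: every probe dials a new gRPC connection and never closes
// it, so each iteration leaves a ClientConn (and its underlying net.Conn and
// buffers) alive, and the process's memory grows without bound.
package main

import (
    "context"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func probeLoop(endpoint string) {
    for {
        // A new connection is opened on every probe...
        conn, err := grpc.Dial(endpoint, grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err == nil {
            ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
            _, _ = healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
            cancel()
            // ...but conn.Close() is never called, so each probe leaks the
            // connection. The fix is `defer conn.Close()` or reusing a single
            // long-lived connection.
        }
        time.Sleep(10 * time.Second)
    }
}

func main() {
    // Hypothetical endpoint; CSI node plugins typically listen on a unix socket.
    probeLoop("unix:///var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock")
}

Whichever layer the leak actually lives in, the observable symptom matches the report: memory climbs steadily while CPU stays near zero, until the pod is deleted and recreated.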