Issue Type: Bug
Resolution: Done-Errata
Priority: Major
Affects Versions: 4.12, 4.11
Severity: Moderate
Sprint: SDN Sprint 240, SDN Sprint 241
-
I haven't gone back to pin down all affected versions, but I wouldn't be surprised if we've had this exposure for a while. On a 4.12.0-ec.2 cluster, we have:
cluster:usage:resources:sum{resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io"}
currently clocking in around 67983. I've gathered a dump with:
$ oc --as system:admin -n openshift-network-diagnostics get podnetworkconnectivitychecks.controlplane.operator.openshift.io | gzip >checks.gz
Many of these reference nodes that no longer exist (the cluster is aggressively autoscaled, with nodes coming and going all the time). We should fix garbage collection on this resource to avoid consuming excessive amounts of memory in the Kube API server and etcd as they attempt to list the large resource set.
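As a rough way to size the leak (not part of the original report, just a sketch), something like the following should work. It assumes node names appear verbatim in the check names (e.g. ...-to-network-check-target-<node>), and it only counts checks that reference no currently live node at all, so it is at best a lower-bound estimate; the /tmp file names are illustrative:

# Collect the names of nodes that currently exist.
$ oc get nodes -o name | sed 's|^node/||' | sort >/tmp/live-nodes
# Collect the names of all PodNetworkConnectivityChecks.
$ oc --as system:admin -n openshift-network-diagnostics get podnetworkconnectivitychecks.controlplane.operator.openshift.io -o name | sed 's|.*/||' >/tmp/all-checks
# Count checks whose names mention no live node (likely leaked).
$ grep -v -F -f /tmp/live-nodes /tmp/all-checks | wc -l

Note that checks whose source node still exists but whose target node is gone will still match a live node name and be skipped by the grep, so the true number of stale checks is likely higher than this count.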
- clones: OCPBUGS-1341 Node churn leaks PodNetworkConnectivityChecks (Closed)
- depends on: OCPBUGS-1341 Node churn leaks PodNetworkConnectivityChecks (Closed)
- links to: RHBA-2023:4905 OpenShift Container Platform 4.13.z bug fix update