Bug
Resolution: Done-Errata
Major
4.12, 4.11
Moderate
None
3
OCP VE Sprint 225, OCP VE Sprint 226, OCP VE Sprint 227, OCP VE Sprint 228, OCP VE Sprint 229, OCP VE Sprint 230, OCP VE Sprint 231, OCP VE Sprint 232, OCP VE Sprint 233, OCP VE Sprint 234, OCP VE Sprint 235
11
Rejected
False
N/A
Release Note Not Required
I haven't gone back to pin down all affected versions, but I wouldn't be surprised if we've had this exposure for a while. On a 4.12.0-ec.2 cluster, we have:
cluster:usage:resources:sum{resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io"}
currently clocking in around 67983. I've gathered a dump with:
$ oc --as system:admin -n openshift-network-diagnostics get podnetworkconnectivitychecks.controlplane.operator.openshift.io | gzip >checks.gz
Many, many of these reference nodes that no longer exist (the cluster is aggressively autoscaled, with nodes coming and going all the time). We should fix garbage collection on this resource to avoid consuming excessive amounts of memory in the Kube API server and etcd as they attempt to list such a large resource set.
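As a rough sketch of the filtering step such a garbage collector would need: given the set of live node names and the check names, select the checks whose target node is gone. This assumes each check name embeds its target node after a `-to-network-check-target-` marker, which matches typical check names but should be verified; the real fix belongs in the cluster-network-operator controller, not a one-off script.

```python
# Hypothetical stale-check filter. Assumption: check names look like
# "network-check-source-<pod>-to-network-check-target-<node>".
TARGET_MARKER = "-to-network-check-target-"

def stale_checks(check_names, live_nodes):
    """Return check names whose target node is not in live_nodes."""
    live = set(live_nodes)
    stale = []
    for name in check_names:
        # partition() yields ("", "", name) when the marker is absent,
        # so unrecognized names are left alone rather than deleted.
        _, sep, node = name.partition(TARGET_MARKER)
        if sep and node not in live:
            stale.append(name)
    return stale
```

A controller would feed this from a node informer and delete the returned objects; skipping names that don't match the convention keeps the cleanup conservative.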
- is blocked by: SDN-3636 Kube 1.26 rebase for CNO (Closed)
- is cloned by: OCPBUGS-17721 [release-4.13] Node churn leaks PodNetworkConnectivityChecks (Closed)
- is depended on by: OCPBUGS-17721 [release-4.13] Node churn leaks PodNetworkConnectivityChecks (Closed)
- links to: RHEA-2023:5006 rpm