Bug
Resolution: Unresolved
Logging 5.8.z, Logging 5.9.z, Logging 6.0.z, Logging 6.1.z, Logging 6.2.z, Logging 6.3.z, Logging 6.4.z
Incidents & Support
NEW
Bug Fix
Important
Description of problem:
Large queries to the KubeAPI affect cluster performance.
It is generally recommended to limit queries so that they do not return "all" objects, since such queries put load on the KubeAPI and the control plane.
However, every Vector collector always lists and watches all the pods available in the cluster, which creates an impact even in cases where that information is not needed. Some examples:
1. A second ClusterLogForwarder CR is created to collect logs only from a specific namespace. Two collector pods then run per node, both asking the KubeAPI for all the pods and metadata in the cluster.
2. A single ClusterLogForwarder CR that does not collect application logs, only infrastructure logs.
3. A single ClusterLogForwarder CR that only collects logs from pods in a few specific namespaces (see the CR sketch after this list).
4. etc.
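As a concrete illustration of case 3, here is a minimal sketch of a ClusterLogForwarder that selects application logs from a single namespace, assuming the Logging 5.x API (logging.openshift.io/v1); the input/pipeline names and the namespace are placeholders. Even with a CR like this, the collector currently lists and watches far more pods than it forwards logs for:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  inputs:
    # Only application logs from one namespace are requested.
    - name: my-namespace-logs
      application:
        namespaces:
          - my-namespace
  pipelines:
    - name: forward-my-namespace
      inputRefs:
        - my-namespace-logs
      outputRefs:
        - default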
In https://issues.redhat.com/browse/LOG-7535, some filters were shared that can be used to make more targeted queries to the KubeAPI and reduce this impact.
Note: by default, every collector pod asks the KubeAPI for the pods running on the same node as the collector.
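The exact filters shared in LOG-7535 are not reproduced here, but as an illustration of the kind of filtering that is possible, Vector's kubernetes_logs source exposes selector options such as extra_field_selector and extra_label_selector. A minimal sketch, assuming a YAML Vector configuration and placeholder namespace/label values:

sources:
  k8s_app_logs:
    type: kubernetes_logs
    # Vector already restricts the pod list/watch to the local node.
    # These extra selectors narrow it further to the pods actually needed;
    # the values below are placeholders, not the filters from LOG-7535.
    extra_field_selector: "metadata.namespace=my-namespace"
    extra_label_selector: "app.kubernetes.io/name=my-app"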
Version-Release number of selected component (if applicable):
Cluster Logging 5 and 6
Vector collector
How reproducible:
Always
Steps to Reproduce:
- Deploy Logging with the Vector collector and configure it to collect only the logs from a single namespace, or only infrastructure logs.
Actual results:
Checking the KubeAPI audit logs shows that the serviceAccount used to run the collector pods lists and watches all the pods running in the cluster.
Each collector pod asks for all the pods running on its node, when the query should only cover the pods whose logs are actually needed.
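One way to confirm this from the KubeAPI audit logs, assuming a Logging 5.x deployment where the collector runs with the logcollector service account in openshift-logging (adjust the service account and paths to match the actual deployment):

oc adm node-logs --role=master --path=kube-apiserver/audit.log \
  | grep 'system:serviceaccount:openshift-logging:logcollector' \
  | grep -E '"verb":"(list|watch)"' \
  | grep '"resource":"pods"'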
Expected results:
Implement filters in Vector so that it lists and watches from the KubeAPI only the pods selected in the ClusterLogForwarder.