Bug
Resolution: Done
Blocker
Logging 5.5.0
False
None
False
NEW
OBSDA-108 - Distribute an alternate Vector Log Collector
VERIFIED
Log Collection - Sprint 222, Log Collection - Sprint 223
Version of components:
Clusterlogging.v5.5.0
Elasticsearch-operator.v5.5.0
Server Version: 4.11.0-0.nightly-2022-08-02-014045
Kubernetes Version: v1.24.0+9546431
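For reference, the component versions above can be gathered with standard commands (a minimal sketch; assumes the operators are installed in their default namespaces):
$ oc get csv -n openshift-logging
$ oc get csv -n openshift-operators-redhat
$ oc version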
Description of the problem:
When using Vector as the collector, the CPU/memory requests/limits cannot be configured: the values set in the ClusterLogging instance are not applied to the collector daemonset.
Steps to reproduce the issue:
- Create a ClusterLogging instance with Vector as the collector and the CPU/memory requests/limits parameters set:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
  annotations:
    logging.openshift.io/preview-vector-collector: enabled
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy:
      application:
        maxAge: 10h
      infra:
        maxAge: 10h
      audit:
        maxAge: 10h
    elasticsearch:
      nodeCount: 1
      storage: {}
      resources:
        limits:
          memory: "4Gi"
        requests:
          memory: "1Gi"
      proxy:
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
      redundancyPolicy: "ZeroRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  collection:
    type: "vector"
    resources:
      limits:
        memory: 736Mi
        cpu: 100m
      requests:
        cpu: 100m
        memory: 736Mi
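The instance can then be created by saving the manifest to a file and applying it (the file name here is only illustrative):
$ oc apply -f clusterlogging-vector.yaml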
- Check the collector daemonset; no requests/limits are set on it:
$ oc describe ds collector | grep -iE "requests|limits" -A 4
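The missing values can also be confirmed by reading the container resources straight from the daemonset spec (a minimal sketch; assumes the collector container is the first container in the pod template):
$ oc get ds collector -n openshift-logging -o jsonpath='{.spec.template.spec.containers[0].resources}'
With Vector as the collector this is expected to come back empty, matching the missing requests/limits above.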
- Create the ClusterLogging instance with Fluentd as the collector and notice that the requests/limits are set correctly:
$ oc describe daemonset collector | grep -iE "requests|limits" -A 4
  Limits:
    cpu:     100m
    memory:  736Mi
  Requests:
    cpu:     100m
    memory:  736Mi
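For comparison, the expected (not observed) result for the Vector-based daemonset, once the resources are honored, would mirror the values from the ClusterLogging spec above:
  Limits:
    cpu:     100m
    memory:  736Mi
  Requests:
    cpu:     100m
    memory:  736Mi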