Type: Bug
Resolution: Done
Fix Version: Logging 5.5.0
Status: VERIFIED
Links to: OBSDA-108 - Distribute an alternate Vector Log Collector
Sprint: Logging (Core) - Sprint 218
Version-Release number of selected component (if applicable):
Logging 5.5
Server Version: 4.10.0-0.nightly-2022-04-27-034457
Kubernetes Version: v1.23.5+70fb84c
Description of the problem:
When using Vector as the collector, journald logs are not sent to the log store.
How reproducible:
Always
Steps to reproduce:
1. Deploy the Cluster Logging and Elasticsearch operators.
2. Create a ClusterLogging instance:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
  annotations:
    logging.openshift.io/preview-vector-collector: enabled
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy:
      application:
        maxAge: 10h
      infra:
        maxAge: 10h
      audit:
        maxAge: 10h
    elasticsearch:
      nodeCount: 1
      storage: {}
      resources:
        limits:
          memory: "4Gi"
        requests:
          memory: "1Gi"
      proxy:
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
      redundancyPolicy: "ZeroRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  collection:
    logs:
      type: "vector"
      vector: {}
3. Check for journald logs in Elasticsearch:

$ oc rsh elasticsearch-cdm-n0pfqkzd-1-6b674977f-nzr7v
Defaulted container "elasticsearch" out of: elasticsearch, proxy
sh-4.4$ es_util --query=infra*/_search?pretty -d '
> {
>   "query": {
>     "exists": {
>       "field": "_SYSTEMD_INVOCATION_ID"
>     }
>   }
> }'
{
  "took" : 10,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : null,
    "hits" : [ ]
  }
}
sh-4.4$
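The pass/fail condition in step 3 can also be checked programmatically. A minimal Python sketch is below; the function name `journald_logs_missing` is illustrative (not part of the product), and it only inspects a parsed `_search` response like the one captured above. Note it tolerates both the integer `hits.total` returned by Elasticsearch 6.x (as in this reproduction) and the object form returned by 7.x.

```python
import json

def journald_logs_missing(search_response: dict) -> bool:
    """Return True when the exists query on _SYSTEMD_INVOCATION_ID found
    no documents, i.e. the journald-logs-not-collected bug is reproduced.

    search_response is the parsed JSON body of the _search reply.
    """
    total = search_response["hits"]["total"]
    # Elasticsearch 6.x returns an int here; 7.x returns
    # {"value": N, "relation": "eq"}.
    if isinstance(total, dict):
        total = total["value"]
    return total == 0

# The response captured in the reproduction above: zero hits.
sample = json.loads("""
{
  "took": 10, "timed_out": false,
  "_shards": {"total": 1, "successful": 1, "skipped": 0, "failed": 0},
  "hits": {"total": 0, "max_score": null, "hits": []}
}
""")
print(journald_logs_missing(sample))  # True -> bug reproduced
```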