-
Bug
-
Resolution: Done
-
Critical
-
Logging 5.4.0
-
False
-
False
-
NEW
-
VERIFIED
-
Before this update, a change to the path from which the collector reads container logs caused certain records to be forwarded to the wrong indices. With this update, the collector uses the correct configuration and forwards records to the correct indices.
-
-
Logging (Core) - Sprint 213, Logging (Core) - Sprint 214
Description of problem:
Logs from openshift-* projects are sent to the app* index:
$ oc exec elasticsearch-cdm-7gic0ozw-1-7458ffb455-kw6qz -- es_util --query=app*/_search?pretty | grep "namespace_name"
Defaulted container "elasticsearch" out of: elasticsearch, proxy
  "namespace_name" : "openshift-image-registry",
  "namespace_name" : "openshift-image-registry",
  "namespace_name" : "openshift-multus",
  "namespace_name" : "openshift-multus",
  "namespace_name" : "openshift-kube-storage-version-migrator",
  "namespace_name" : "openshift-kube-storage-version-migrator",
  "namespace_name" : "openshift-kube-storage-version-migrator",
  "namespace_name" : "openshift-kube-storage-version-migrator",
  "namespace_name" : "openshift-kube-storage-version-migrator",
  "namespace_name" : "openshift-kube-storage-version-migrator",
In fluent.conf, I see that the source directory has been changed from `/var/log/containers/` to `/var/log/pods/`, but I'm not sure whether this is the root cause:
http://pastebin.test.redhat.com/1023797
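For context, index routing in the collector is driven by the tag that the tail source derives from the log file path. The snippet below is a simplified, hypothetical sketch of that mechanism (not the actual generated fluent.conf; the plugin choices, labels, and match patterns are illustrative only), showing how changing the source path without updating the path-based match patterns could let openshift-* records fall through to the application catch-all:

# Hypothetical, simplified sketch -- not the generated fluent.conf.
# With tag kubernetes.*, a file such as
#   /var/log/containers/<pod>_<namespace>_<container>-<id>.log
# is tagged kubernetes.var.log.containers.<pod>_<namespace>_<container>-<id>.log
<source>
  @type tail
  path "/var/log/containers/*.log"
  tag kubernetes.*
  read_from_head true
</source>

# Infrastructure namespaces are selected by matching the namespace embedded
# in that path-derived tag. If the source path moves to /var/log/pods/
# (where the layout is <namespace>_<pod>_<uid>/<container>/), this pattern
# no longer matches and openshift-* records fall through to the catch-all below.
<match kubernetes.var.log.containers.*_openshift-*_*.log>
  @type relabel
  @label @INFRASTRUCTURE
</match>

<match kubernetes.**>
  @type relabel
  @label @APPLICATION
</match>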
Version-Release number of selected component (if applicable):
cluster-logging.5.4.0-37
How reproducible:
Always
Steps to Reproduce:
- deploy logging
- check data in ES (see the query sketch below)
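A sketch of the "check data in ES" step, reusing the es_util helper from the description above; the label selector and the app*/infra* index names are assumptions based on a default Elasticsearch log store deployment:

# Hypothetical verification sketch; label selector and index names are assumptions.
ES_POD=$(oc -n openshift-logging get pods -l component=elasticsearch -o name | head -1)

# With the fix, no openshift-* namespaces should show up in the app* indices...
oc -n openshift-logging exec "$ES_POD" -c elasticsearch -- \
  es_util --query='app*/_search?pretty' | grep '"namespace_name" : "openshift-'

# ...and they should appear under the infra* indices instead.
oc -n openshift-logging exec "$ES_POD" -c elasticsearch -- \
  es_util --query='infra*/_search?pretty' | grep '"namespace_name" : "openshift-'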
Actual results:
Expected results:
Additional info:
No such issue occurs when testing with cluster-logging.5.4.0-36.