- Bug
- Resolution: Done
- Blocker
- Logging 5.4.0
- False
- None
- False
- NEW
- OBSDA-7 - Adopting Loki as an alternative to Elasticsearch to support more lightweight, easier to manage/operate storage scenarios
- VERIFIED
Logs cannot be forwarded to LokiStack. The collector raises the error "api/logs/v1/audit/loki/api/v1/push 302 Found failed to find token":
2022-03-31 07:26:15 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kubernetes.var.log.pods.openshift-kube-scheduler_openshift-kube-scheduler-ip-10-0-217-255.us-east-2.compute.internal_13badf26d710439f68c4bf3ff091fc47.wait-for-host-port.0.log:default" location=nil tag="kubernetes.var.log.pods.openshift-kube-scheduler_openshift-kube-scheduler-ip-10-0-217-255.us-east-2.compute.internal_13badf26d710439f68c4bf3ff091fc47.wait-for-host-port.0.log" time=2022-03-31 07:26:15.211746473 +0000 record={"time"=>"2022-03-31T03:09:34.905786179+00:00", "stream"=>"stdout", "logtag"=>"P", "log"=>"Waiting for port :10259 and :10251 to be released."}
2022-03-31 07:26:15 +0000 [warn]: [loki_audit] failed to POST http://lokistack-sample-gateway-http.openshift-logging.svc:8080/api/logs/v1/audit/loki/api/v1/push (302 Found failed to find token)
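For context, the failure appears with a forwarder that points an audit pipeline at the LokiStack gateway seen in the log above. Below is a minimal sketch; the gateway URL is taken from the warning, while the ClusterLogForwarder name "instance" and the output/pipeline names are illustrative assumptions, not taken from this report:

```sh
# Sketch only: forward audit logs to the LokiStack gateway's audit tenant endpoint.
# Names ("loki-audit", "audit-to-loki") are illustrative assumptions.
oc apply -f - <<'EOF'
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: loki-audit
      type: loki
      url: http://lokistack-sample-gateway-http.openshift-logging.svc:8080/api/logs/v1/audit
  pipelines:
    - name: audit-to-loki
      inputRefs:
        - audit
      outputRefs:
        - loki-audit
EOF
```

The collector's loki output appends /loki/api/v1/push to the configured tenant URL, which lines up with the failing request path in the warning above.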
The issue has been fixed in https://github.com/openshift/cluster-logging-operator/pull/1355
and cherry-picked to Logging 5.4 in https://github.com/openshift/cluster-logging-operator/pull/1418.
We still need to run some downstream tasks:
- Build logging-fluentd:1.14.5
- Use logging-fluentd:1.14.5 in Logging 5.4 (see the verification sketch after this list)
- And .....
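Once the rebuilt image lands, a quick check that the collector actually runs the new fluentd could look like the sketch below. The daemonset/container name "collector" and the jsonpath are assumptions about the 5.4 deployment and may need adjusting:

```sh
# Assumed names: daemonset "collector" with container "collector" in openshift-logging.
# Show which collector image the daemonset currently uses.
oc -n openshift-logging get ds collector \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'

# Print the fluentd version from a running collector pod (expecting 1.14.5).
oc -n openshift-logging exec ds/collector -c collector -- fluentd --version
```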