-
Bug
-
Resolution: Done
-
Critical
-
Logging 5.2.2
-
False
-
False
-
NEW
-
VERIFIED
-
-
Logging (LogExp) - Sprint 210, Logging (LogExp) - Sprint 211, Logging (LogExp) - Sprint 212, Logging (LogExp) - Sprint 214, Logging (LogExp) - Sprint 215, Logging (LogExp) - Sprint 216, Logging (LogExp) - Sprint 217
Based on RHBZ #1970942, we finally updated the Cluster Logging environment to version v5.2.2-21 and checked whether the issue is still present.
The problem is still present: as soon as http.max_header_size: 128kb is set, users are unable to connect to Elasticsearch when going through the elasticsearch-proxy, while queries sent directly to Elasticsearch still work.
No matter whether the client is fluentd or Kibana, nothing can connect until we set the elasticsearch-operator to Unmanaged and raise http.max_header_size to *512kb* or more. Once that change is applied, everything works like a charm and no issue is found.
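A rough sketch of the workaround described above, for reference. The resource kind, names, and namespace are assumptions based on a typical OpenShift cluster-logging deployment, not taken from this report; only the Unmanaged state and the 512kb value come from the text.

```yaml
# Hypothetical sketch: take the CR out of operator management so a manual
# Elasticsearch setting is not reverted by the elasticsearch-operator.
# (Resource name/namespace assumed; adjust to the actual deployment.)
apiVersion: logging.openshift.io/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: openshift-logging
spec:
  managementState: Unmanaged
---
# Then raise the maximum HTTP header size in the elasticsearch.yml
# configuration the cluster uses (value from this report):
# http.max_header_size: 512kb
```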
We therefore set the elasticsearch-proxy log level to trace to take a closer look at the requests and see what is generating that much header data. But all requests, whether from fluentd or from users via Kibana, look OK and are about 1 kb in size.
The only thing that is big (around 4 MB) is what appears to be the cache of available namespaces that elasticsearch-proxy regularly refreshes.
- clones
-
LOG-1899 http.max_header_size set to 128kb causes communication with elasticsearch to stop working
- Closed