Type: Bug
Resolution: Done
Priority: Normal
Status: NEW
Sprint: Log Storage - Sprint 238, Log Storage - Sprint 239
Severity: Moderate
Description of problem:
With LOG-1899 the namespace UUIDs were removed from the request HTTP header, which dramatically reduced its size.
Yet on large-scale OpenShift Container Platform 4 clusters the header can still grow far enough to reach the currently allowed maximum HTTP header size.
- OpenShift Container Platform 4 clusters with roughly 3500 namespaces start to see the problem
In that case the error below is reported and interaction with Kibana fails:
{"message":"HTTP header is larger than 131072 bytes.: [too_long_http_header_exception] HTTP header is larger than 131072 bytes.","statusCode":400,"error":"Bad Request"}
Since `http.max_header_size: 128kb` is not enough, we urgently need this value to be doubled to prevent the issue from occurring.
- Right now, `elasticsearch` is set to the Unmanaged state so that the HTTP header size can be increased manually and things work again (see the sketch below).
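For reference, a minimal sketch of that manual workaround, assuming the default `openshift-logging` namespace, a ClusterLogging instance named `instance`, and that `elasticsearch.yml` is carried in the `elasticsearch` configmap; exact resource names and configmap layout may differ per deployment:
```
# Stop the operator from reconciling the logging stack so the manual change sticks
oc -n openshift-logging patch clusterlogging/instance --type merge \
  -p '{"spec":{"managementState":"Unmanaged"}}'

# Raise the allowed HTTP header size in the Elasticsearch configuration
# (edit elasticsearch.yml in the configmap and set, e.g., http.max_header_size: 256kb)
oc -n openshift-logging edit configmap/elasticsearch

# Restart the Elasticsearch pods so they pick up the new setting
oc -n openshift-logging delete pod -l component=elasticsearch
```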
Version-Release number of selected component (if applicable):
OpenShift Container Platform 4 - Cluster Logging 5.6.7
How reproducible:
Always
Steps to Reproduce:
- Install OpenShift Container Platform 4 with Cluster Logging 5.6.7
- Create 3500+ namespaces with log-producing workloads (see the sketch after this list for one way to script this)
- As a user who is able to access all those namespaces, try to access Elasticsearch (e.g. through Kibana)
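A rough sketch of how the namespace/workload creation step could be scripted; the namespace prefix, image, and user name `log-reader` are illustrative assumptions, not taken from the report:
```
# Create 3500 namespaces, each with a small log-producing workload, and grant
# a test user ("log-reader") view access to every one of them
for i in $(seq 1 3500); do
  oc create namespace "log-test-${i}"
  oc -n "log-test-${i}" create deployment chatter --image=busybox \
    -- /bin/sh -c 'while true; do echo hello; sleep 5; done'
  oc -n "log-test-${i}" adm policy add-role-to-user view log-reader
done
```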
Actual results:
{"message":"HTTP header is larger than 131072 bytes.: [too_long_http_header_exception] HTTP header is larger than 131072 bytes.","statusCode":400,"error":"Bad Request"}
Expected results:
No error is reported; Kibana and Elasticsearch remain usable.
Additional info:
Manually increasing `http.max_header_size` solves the problem but leaves `elasticsearch` in the Unmanaged state. Also, the fix from LOG-1899 worked for a good amount of time, which means the only feasible approach to solve the problem now is to increase the `http.max_header_size` default value.
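As a quick sanity check of the workaround (again assuming the setting lives in the `elasticsearch` configmap's `elasticsearch.yml`), one can confirm whether the increased value is actually in place, e.g. after the operator reconciles:
```
# No output means no explicit http.max_header_size is set and Elasticsearch
# falls back to its 128kb default
oc -n openshift-logging get configmap/elasticsearch -o yaml | grep -i max_header_size
```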
Clones: LOG-4243 - HTTP request header again too big, causing interaction with elasticsearch to fail (Closed)