OpenShift Logging / LOG-4243

HTTP request header again too big, causing interaction with elasticsearch to fail


    • Before this update, very large OCP clusters with more than 8000 namespaces caused Elasticsearch to reject queries because the list of namespaces was larger than the http.max_header_size setting. With this update, raising the default from 128kb to 512kb resolves the issue and search queries against Elasticsearch continue to work.
    • Bug Fix
    • Log Storage - Sprint 238, Log Storage - Sprint 239
    • Moderate

      Description of problem:

      With LOG-1899, the namespace UUID was removed from the request, which dramatically reduced the size of the HTTP request header.

      Yet on large-scale OpenShift Container Platform 4 clusters the header size can still grow massively and eventually exceed the value currently allowed for the HTTP header size.

      • OpenShift Container Platform 4 clusters with around 3500 namespaces start to see the problem
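      A rough back-of-the-envelope check (a hypothetical sketch, not part of the original report) illustrates why this order of magnitude hits the limit: the forwarded namespace list grows with the number of namespace names a user can access.

      # Approximate the size of the namespace list that ends up in the
      # forwarded HTTP header by summing the bytes of all namespace names.
      oc get namespaces -o jsonpath='{.items[*].metadata.name}' | wc -c
      # ~3500 namespaces at an assumed ~35-40 bytes per name is roughly 130 KB,
      # which is already above the current default of 128kb (131072 bytes).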

      Once the limit is exceeded, the error below is reported and interaction with Kibana fails.

      {"message":"HTTP header is larger than 131072 bytes.: [too_long_http_header_exception] HTTP header is larger than 131072 bytes.","statusCode":400,"error":"Bad Request"}
      

      Since http.max_header_size: 128kb is not sufficient, we urgently need this default to be at least doubled to prevent the issue from occurring.

      • Right now, `elasticsearch` is set to the Unmanaged state so that the HTTP
        header size can be increased manually and queries work again; a sketch of
        this workaround follows below.
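      For reference, a sketch of the manual workaround mentioned above; the resource and ConfigMap names are assumptions based on a default openshift-logging deployment and may differ:

      # Assumption: the Elasticsearch CR and the ConfigMap holding
      # elasticsearch.yml are both named "elasticsearch" in openshift-logging.
      oc -n openshift-logging patch elasticsearch/elasticsearch --type merge \
        -p '{"spec":{"managementState":"Unmanaged"}}'
      # Raise http.max_header_size in the operator-generated elasticsearch.yml,
      # then restart the Elasticsearch pods so the new value takes effect.
      oc -n openshift-logging edit configmap/elasticsearch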

      Version-Release number of selected component (if applicable):

      OpenShift Container Platform 4 - Cluster Logging 5.6.7

      How reproducible:

      Always

      Steps to Reproduce:

      1. Install OpenShift Container Platform 4 with Cluster Logging 5.6.7
      2. Create more than 3500 namespaces, each with a workload that produces logs (see the helper sketch after these steps)
      3. As a user who can access all of those namespaces, query Elasticsearch (for example through Kibana)
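      A minimal helper for step 2 (a sketch only; the namespace prefix is arbitrary, and a log-producing workload still needs to be deployed into each namespace):

      # Create enough namespaces in bulk to exceed the threshold.
      for i in $(seq 1 3600); do
        oc create namespace "header-test-${i}"
      done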

      Actual results:

      {"message":"HTTP header is larger than 131072 bytes.: [too_long_http_header_exception] HTTP header is larger than 131072 bytes.","statusCode":400,"error":"Bad Request"}
      

      Expected results:

      No error is reported and queries against Elasticsearch succeed.

      Additional info:

      Manually increasing http.max_header_size solves the problem but requires leaving elasticsearch in the Unmanaged state. Also, the fix from LOG-1899 worked for a good amount of time, which means the only feasible approach to solve the problem now is to increase the http.max_header_size default value.
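      To check whether a non-default http.max_header_size is currently active, something like the following can be used (a sketch, assuming the default pod label and the es_util query helper shipped in the OpenShift Elasticsearch pods):

      # Explicitly set (non-default) values show up in the node settings.
      ES_POD=$(oc -n openshift-logging get pods -l component=elasticsearch \
        -o jsonpath='{.items[0].metadata.name}')
      oc -n openshift-logging exec -c elasticsearch "${ES_POD}" -- \
        es_util --query="_nodes/settings?pretty" | grep -i max_header_size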
