OpenShift Bugs / OCPBUGS-43703

[release-4.12] Look for an option to identify double header encoding which cause disruption with HAProxy 2.6

      * Previously, when upgrading to {product-title} version 4.14, HAProxy 2.6 enforced strict RFC 7230 compliance and rejected requests with multiple `Transfer-Encoding` headers. Duplicate `Transfer-Encoding` headers were configured at the application level, so the requests resulted in `502 Bad Gateway` errors and service disruptions. With this release, cluster administrators can use a procedure to proactively detect applications that would send duplicate `Transfer-Encoding` headers before upgrading their clusters. This allows administrators to mitigate the issue in advance and prevents service disruption. (link:https://issues.redhat.com/browse/OCPBUGS-43703[*OCPBUGS-43703*])
      -------
      ### Release Note: Detecting Duplicate Transfer-Encoding Headers Before Upgrading to OpenShift 4.14

      #### Summary:
      A critical issue has been identified where upgrading to OpenShift 4.14 (which includes HAProxy 2.6) causes service disruptions due to duplicate `Transfer-Encoding` headers configured at the application level. HAProxy's stricter behaviour is compliant with RFC 7230, but it rejects the affected messages with a `502 Bad Gateway` error, impacting customer services.

      To address this, cluster administrators can now use a procedure to proactively detect applications that are sending duplicate `Transfer-Encoding` headers before upgrading. This allows administrators to mitigate the issue in advance, preventing service disruption.

      #### Key Points:
      - **Problem**: When upgrading to OpenShift 4.14, HAProxy 2.6 enforces strict RFC 7230 compliance, rejecting requests with multiple `Transfer-Encoding` headers. This causes disruptions in services that rely on misconfigured applications.
      - **Solution**: Cluster administrators are advised to follow the procedure outlined in [Red Hat Solution 7055002](https://access.redhat.com/solutions/7055002) to identify and rectify duplicate `Transfer-Encoding` headers before upgrading. This proactive step helps avoid the potential for widespread service failures.

      #### Next Steps:
      To assist with this, the following procedure is recommended:
      1. **Pre-Upgrade Detection**: Use the newly introduced method to scan for duplicate `Transfer-Encoding` headers in routes and applications running in the cluster before upgrading to OpenShift 4.14.
      2. **Metrics and Monitoring**: Check for metrics related to duplicate headers (`haproxy_backend_duplicate_te_header_total`) to identify problematic routes.
      3. **Mitigation**: Address applications with duplicate headers prior to upgrading, ensuring compliance with the new HAProxy 2.6 restrictions.

      Following this approach will help prevent service disruption post-upgrade, allowing a smoother transition to OpenShift 4.14.
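The pre-upgrade detection step above can be sketched as a small shell helper. This is a sketch only, not the procedure from Red Hat Solution 7055002: the `check_headers` function and the `oc` loop in the comment are illustrative assumptions about how an administrator might flag backends whose responses carry more than one `Transfer-Encoding` header.

```shell
#!/bin/sh
# Sketch only: flag response-header dumps that contain more than one
# Transfer-Encoding header, which HAProxy 2.6 rejects. The helper name
# and the oc-based loop below are assumptions, not a documented procedure.

check_headers() {
  # $1 is a file of response headers captured with `curl -sk -D <file>`.
  # grep -c prints 0 on no match but exits non-zero, hence `|| true`.
  n=$(grep -ci '^transfer-encoding:' "$1" || true)
  if [ "$n" -gt 1 ]; then
    echo "duplicate transfer-encoding headers: $n"
  else
    echo "ok"
  fi
}

# Example per-route probe (requires network access and an `oc login` session):
#   oc get routes -A -o jsonpath='{range .items[*]}{.spec.host}{"\n"}{end}' |
#   while read -r host; do
#     curl -sk -D /tmp/hdrs -o /dev/null "https://${host}/"
#     printf '%s: ' "$host"; check_headers /tmp/hdrs
#   done
```

Running the commented loop before the upgrade gives a per-route report that can be cross-checked against the `haproxy_backend_duplicate_te_header_total` metric mentioned above.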

      This is a clone of issue OCPBUGS-43095. The following is the description of the original issue:

      Description of problem:

      With HAProxy 2.6, duplicate `Transfer-Encoding` headers configured at the application level are causing disruption to critical customer services.
      The cluster admin is asking whether there is a procedure to identify those application headers from OCP, in order to anticipate the potential issue reported in solution https://access.redhat.com/solutions/7055002 before upgrading the cluster to OCP 4.14.
      This is a blocker for the biggest accounts, as Support Exceptions are being requested to deploy an older version of HAProxy on OCP 4.14 clusters.

      Version-Release number of selected component (if applicable):

          4.14

      How reproducible:

          Reported at slack thread https://redhat-internal.slack.com/archives/CCH60A77E/p1724247463732519

      Steps to Reproduce:

      1. With an OCP 4.14 route:
      $ curl -kv --header 'Content-Type: application/json' --data '{"code": 0}' -H "transfer-encoding: chunked" -H "transfer-encoding: chunked" --resolve httpd-ex-sunbro.apps.sunbro3.lab.psi.pnq2.redhat.com:80:10.74.208.127 http://httpd-ex-sunbro.apps.sunbro3.lab.psi.pnq2.redhat.com
      ...
      > Content-Type: application/json 
      > transfer-encoding: chunked 
      > transfer-encoding: chunked 
      > 
      < HTTP/1.1 400 Bad request 
      < Content-length: 90 
      < Cache-Control: no-cache 
      < Connection: close 
      < Content-Type: text/html     
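The reproduction above can be wrapped in a quick pass/fail check. The route host is the placeholder from the report, and `classify_status` is an illustrative helper, not part of any product tooling:

```shell
# Sketch: send the duplicate-header request from the reproduction and
# classify the HTTP status code. ROUTE_HOST is the placeholder host from
# the report; classify_status is an assumed helper for readability.
ROUTE_HOST="httpd-ex-sunbro.apps.sunbro3.lab.psi.pnq2.redhat.com"

classify_status() {
  case "$1" in
    200)     echo "pass" ;;
    400|502) echo "rejected by router" ;;
    *)       echo "unexpected: $1" ;;
  esac
}

# Requires network access to the route (uncomment to run):
# status=$(curl -sk -o /dev/null -w '%{http_code}' \
#   --data '{"code": 0}' -H 'Content-Type: application/json' \
#   -H 'transfer-encoding: chunked' -H 'transfer-encoding: chunked' \
#   "http://${ROUTE_HOST}")
# classify_status "$status"
```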

      Actual results:

          The request is rejected with a `400 Bad Request` error (`502 Bad Gateway` when the duplicate headers arrive in the application's response).

      Expected results:

      200 OK    

      Additional info:

          

              amcdermo@redhat.com Andrew McDermott
              openshift-crt-jira-prow OpenShift Prow Bot
              Shudi Li