Expected behaviour
When sending a request with a body of any size, APIcast forwards the request to the upstream successfully.
Current behaviour
When sending a request with a body of 1.3MB or larger, the request is rejected with an HTTP 413 status code and APIcast logs the error seen below.
2025-10-21T10:20:23.276569439Z 2025/10/21 10:20:23 [error] 31#31: *33332 client intended to send too large body: 1330817 bytes, client: X.X.X.X, server: _, request: "POST /foo/bar?externalID=123456789 HTTP/1.1", host: "<REDACTED>", referrer: "https://<REDACTED>/"
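A failing request of this shape can be reproduced with a synthetic payload. This is a sketch: the payload path is arbitrary, the gateway host is a placeholder (the real one is redacted in the log above), and the curl line is left commented so it can be adapted to the environment.

```shell
# Create a 1.4MB payload, slightly above the ~1.3MB size in the error log.
dd if=/dev/zero of=/tmp/payload.bin bs=1024 count=1400 2>/dev/null
wc -c < /tmp/payload.bin

# Send it as a POST body through the gateway (placeholder host):
# curl -sS -o /dev/null -w '%{http_code}\n' -X POST \
#   --data-binary @/tmp/payload.bin \
#   'https://<gateway-host>/foo/bar?externalID=123456789'
```

If the 413 comes from APIcast, the status code is returned before the upstream sees the request, which matches the access-log observation below.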
Actions taken so far
- Tried to isolate the upstream server by sending requests directly from the APIcast pod; all requests succeeded.
- Verified whether HAProxy enforces a request-body size check by default; as the KCS article explains, this is not possible with HAProxy, which rules out the OpenShift routes.
- Set up a reproducer with the same gateway configuration and request headers and body, but all requests succeeded; I was unable to trigger an HTTP 413 from APIcast or any OpenShift component.
- Verified with the customer that APIcast's traffic does not go through a proxy, and also that forcing it through the proxy exhibits the same pattern of behaviour; this rules out the corporate proxy.
- Originally the behaviour alternated between successful and failed requests; since yesterday, 20th October, the customer is unable to reproduce any successful requests and every request fails.
Tests executed in reproducer
Test A conditions
- A simple request with a 1.4MB file sent in the body.
- The same request, with the payload policy configured with a 0-byte limit.
- The same request and payload policy, with a CORS policy configured identically to yours.
The above ensured that the gateway configuration was as close to the customer's as possible.
Test B conditions
- The same request and gateway configuration as Test A, but after each request adding a header from the customer's example failing requests from last week's testing.
- Added the customisation documented here to ensure nothing was being missed.
In both scenarios I sent multiple requests sequentially, both to see whether I could trigger the alternating success/failure behaviour and to rule out false positives.
All tests passed and I was unable to reproduce the same error logs or the HTTP 413 status code.
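The sequential runs described above can be sketched as a simple loop. The host, path, and payload file are placeholders, and the curl line is commented out so the sketch runs without a live gateway; with a real gateway, intermittent 413s would show up as differing status codes across attempts.

```shell
# Repeat the same oversized POST and record each status code, looking
# for the intermittent 413s reported by the customer.
for i in $(seq 1 10); do
  printf 'attempt %s: ' "$i"
  # Placeholder request; substitute your gateway host and API path:
  # curl -sS -o /dev/null -w '%{http_code}' -X POST \
  #   --data-binary @/tmp/payload.bin \
  #   'https://<gateway-host>/foo/bar?externalID=123456789'
  echo
done
```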
What is also strange is that the logs suggest no requests are reaching the authorisation logic, nor is any attempt made to forward them to the upstream server: they are rejected immediately. The access log shows that APIcast itself is generating the 413 status code, despite the gateway being configured not to check the body size (the directive client_max_body_size 0; is present). This was also checked on the container directly:
$ more /opt/app-root/src/http.d/core.conf
client_max_body_size 0
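For reference, client_max_body_size is the standard nginx directive behind this check. A minimal sketch of how it behaves (not the gateway's actual configuration; the listen port and upstream are placeholders):

```nginx
server {
    listen 8080;

    # 0 disables request-body size checking entirely; any positive value
    # (e.g. 1m) makes nginx reject larger bodies with HTTP 413 before the
    # request reaches any upstream.
    client_max_body_size 0;

    location / {
        proxy_pass http://upstream-backend;  # placeholder upstream
    }
}
```

With the directive set to 0 as shown in core.conf, nginx should never emit "client intended to send too large body", which is why the logged error is so surprising.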
See private comments for log extracts from the environment showing the error logs and access log.
The request from Support is: where else should we look, and how can we determine the source of the HTTP 413 status code if, as we suspect, it is not truly APIcast?