Type: Bug
Resolution: Done
Priority: Critical
Affects Version: OSSM 2.4.5
When allowPartialMessage=true is set together with maxRequestBytes in extensionProviders, Envoy responds with `413 - request_payload_too_large` more and more often as the maxRequestBytes value increases.
Once maxRequestBytes is set to `1048577` or more, all requests fail with `413 - request_payload_too_large`, even though the file doesn't exceed maxRequestBytes (see the statistics below).
Let's say I have set an external authorizer (in .spec.meshConfig) like this:
```yaml
extensionProviders:
  - envoyExtAuthzGrpc:
      includeRequestBodyInCheck:
        allowPartialMessage: true
        maxRequestBytes: 1024
        packAsBytes: true
      port: 9191
      service: external-authz-grpc.local
    name: sample-ext-authz-grpc
```
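For context, such a provider is then referenced from an AuthorizationPolicy with action CUSTOM. A minimal sketch, assuming the target workload from the access log below (`file-uploader` in namespace `file-upload-test`); the selector label and path pattern are assumptions for illustration:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ext-authz
  namespace: file-upload-test      # namespace taken from the access log below
spec:
  selector:
    matchLabels:
      app: file-uploader           # assumed workload label
  action: CUSTOM
  provider:
    name: sample-ext-authz-grpc    # must match the extension provider name above
  rules:
    - to:
        - operation:
            paths: ["/file-upload/*"]   # assumed path, matching the curl request
```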
all my requests with a 4MB file (3976877 bytes) passed [since allowPartialMessage is set to true; otherwise they would fail with request_payload_too_large]
curl -XPOST -F 'data=@pdf2.pdf' http://istio-ingressgateway-istio-system.apps-crc.testing/file-upload/upload?key=TESTUSER
However, when I start increasing the value of `maxRequestBytes`, the number of payload_too_large responses increases as well.
With 1MB (1000000) maxRequestBytes, ~50% of responses are payload_too_large.
With 2MB (2000000) maxRequestBytes, ~100% of responses are payload_too_large (after 1048577 bytes, 100% of requests fail; see the statistics below).
Also, when I set maxRequestBytes to more than the size of the file (e.g. 10MB), I still get payload_too_large, which shouldn't happen, since the request body doesn't exceed maxRequestBytes. Interestingly, when I set `allowPartialMessage: false`, it starts working (since the max request bytes are not exceeded).
So this issue happens only when allowPartialMessage is set to true and maxRequestBytes is set to 1048577 bytes (1 MiB + 1) or more, regardless of whether the request body exceeds maxRequestBytes.
Log from istio-proxy container:
[2024-01-31T09:11:27.387Z] "POST /file-upload/upload?key=TESTUSER HTTP/1.1" 413 - request_payload_too_large - "-" 1350396 17 9 - "192.168.130.1,10.217.0.1" "curl/8.2.1" "f7d9278d-1314-904e-8fd2-1f576d679875" "istio-ingressgateway-istio-system.apps-crc.testing" "-" inbound|3000|| - 10.217.0.132:3000 10.217.0.1:0 outbound_.3000_._.file-uploader.file-upload-test.svc.cluster.local -
I can reproduce this with OCP 4.14 and OSSM 2.4, with the external authorizer running as a separate pod as well as a separate container in the same pod as the application (the setup from the original customer case ticket).
I have prepared a reproducer (see the Steps to Reproduce section) for both cases, where the external authorizer runs as:
- a separate pod: ReproducerExtAuthAsPod.zip
- a separate container in the same pod as the application (original customer case): ReproducerExtAuthAsContainer.zip
==================================
Statistics:
3MB PDF file, `./reproducer.sh` with 100 requests (a minimal sketch of such a loop follows the list below):
- maxRequestBytes = 1000 (1KB)
  Upload successful: 100, Payload Too Large: 0
- maxRequestBytes = 10000 (10KB)
  Upload successful: 100, Payload Too Large: 0
- maxRequestBytes = 100000 (100KB)
  Upload successful: 97, Payload Too Large: 3
- maxRequestBytes = 1000000 (1MB)
  Upload successful: 49, Payload Too Large: 51
- maxRequestBytes = 1040000 (1.04MB)
  Upload successful: 27, Payload Too Large: 73
- maxRequestBytes = 1048576 (exactly 1 MiB) <--- (looks like we are close to a threshold: ~99% of responses are payload_too_large, and with the next value all requests fail; 30000-request run of reproducer.sh)
  Upload successful: 254, Payload Too Large: 29746
- maxRequestBytes = 1048577 (1 MiB + 1) <--- (from here on, all requests fail; 30000-request run of reproducer.sh)
  Upload successful: 0, Payload Too Large: 30000
- maxRequestBytes = 2000000 (2MB)
  Upload successful: 0, Payload Too Large: 100
- maxRequestBytes = 10000000 (10MB) [this should work, since the file doesn't exceed maxRequestBytes]
  Upload successful: 0, Payload Too Large: 100
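For reference, a minimal sketch of such a request loop, assuming the same curl upload as above (the actual `reproducer.sh` in the attached zips may differ):

```sh
#!/bin/sh
# Sketch of a reproducer loop (assumed): send N uploads and count
# successes vs. 413 request_payload_too_large responses.
N=${1:-100}
URL='http://istio-ingressgateway-istio-system.apps-crc.testing/file-upload/upload?key=TESTUSER'
ok=0
too_large=0
i=1
while [ "$i" -le "$N" ]; do
  code=$(curl -s -o /dev/null -w '%{http_code}' -XPOST -F 'data=@pdf2.pdf' "$URL")
  if [ "$code" = "413" ]; then
    too_large=$((too_large + 1))
  else
    ok=$((ok + 1))
  fi
  i=$((i + 1))
done
echo "Upload successful: $ok, Payload Too Large: $too_large"
```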
==================
When I tried a bigger file (e.g. 200MB), the request failed after 9-14MB of uploaded bytes according to the curl progress meter:
```
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  4  204M  100    17    4 10.1M   1900  1131M --:--:-- --:--:-- --:--:-- 1265M
```
but that doesn't matter, since it also happens with small files (e.g. 4MB), where all bytes were uploaded:
```
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 3883k  100    17  100 3883k   1825   407M --:--:-- --:--:-- --:--:--  421M
```
Causes:
- OSSM-6080 Rebuild `ext-authz` P/Z images (Closed)
- OSSM-5272 While uploading a file to application pod, Envoy sends 413 request_payload_too_large even though allow_partial_message: true is set (Closed)

Is related to:
- OSSM-8251 MTT: Istio proxy returns 413 request_payload_too_large when allow_partial_message: true is set and maxRequestBytes is more than 1048577 bytes (Closed)