Issue: When a custom error page is configured in APIcast, the Logging policy is not effective.
Reproducible: Yes, always reproducible using the following steps.
Step 1: Install and deploy a self-managed APIcast using the APIcast operator.
Step 2: Follow [1] and enable the Logging policy for any product (for this product, configure a slow backend that responds after 65 seconds so an HTTP_GATEWAY_TIMEOUT error can be generated), enable logs in JSON format, and use the following json_object_config:
"key": "host.name", "value": "host", "value_type": "liquid"
"key": "@timestamp", "value": "time_iso8601 | slice: 0,18.msec| slice: -4,15+00:00", "value_type": "liquid"
"key": "service.id", "value": "service.id", "value_type": "liquid"
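For context, a Logging policy entry in the product's policy chain typically looks like the following sketch (the `logging` policy name, `builtin` version, and `enable_json_logs` flag follow the APIcast logging-policy schema; the `@timestamp` value is copied from the config above as-is):

```json
{
  "name": "logging",
  "version": "builtin",
  "configuration": {
    "enable_json_logs": true,
    "json_object_config": [
      { "key": "host.name",  "value": "host", "value_type": "liquid" },
      { "key": "@timestamp", "value": "time_iso8601 | slice: 0,18.msec| slice: -4,15+00:00", "value_type": "liquid" },
      { "key": "service.id", "value": "service.id", "value_type": "liquid" }
    ]
  }
}
```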
Step 3: Deploy a custom error page as follows.
$ cat custom-error-config-env.lua
Contents of the "custom-error-config-env" secret:
io.output("/opt/app-root/src/apicast.d/custom_error.conf")
io.write("error_page 504 @json_response;\n",
         "location @json_response {\n",
         "  internal;\n",
         "  content_by_lua_block {\n",
         "    ngx.print(\"{'message':'someone is taking way too much time','status':'504'}\");\n",
         "    ngx.exit(ngx.HTTP_GATEWAY_TIMEOUT);\n",
         "  }\n",
         "}\n")
io.close()
$ oc create secret generic custom-error --from-file=./custom-error-config-env.lua
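For reference, assembling the io.write arguments above, the custom_error.conf that the Lua snippet writes inside the pod should look roughly like this sketch:

```nginx
error_page 504 @json_response;
location @json_response {
  internal;
  content_by_lua_block {
    ngx.print("{'message':'someone is taking way too much time','status':'504'}");
    ngx.exit(ngx.HTTP_GATEWAY_TIMEOUT);
  }
}
```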
Step 4: Reference this secret (custom-error) in the APIcast YAML and save the changes, e.g.
apicast-test yaml:
spec:
  adminPortalCredentialsRef:
    name: 3scaleportal
  customEnvironments:
    - secretRef:
        name: custom-error
  deploymentEnvironment: staging
  httpsPort: 8443
  logLevel: notice
  replicas: 1
Step 5: Test an API call using curl and check the APIcast logs.
Request:
$ curl -v -k "https://apicast.apps.cndsno3.cnc.example.com:443/myapp/slow.jsp?user_key=1973e4bc2520ef441b5db9c004690962"
Response:
< HTTP/2 504
< server: openresty
< date: Mon, 20 May 2024 07:05:18 GMT
< content-type: text/plain
<
* Connection #0 to host apicast.apps.cndsno3.cnc.example.com left intact
{'message':'someone is taking way too much time','status':'504'}
APIcast logs:
2024/05/20 07:05:18 [error] 25#25: *13 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.128.0.2, server: _, request: "GET /myapp/slow.jsp?user_key=1973e4bc2520ef441b5db9c004690962 HTTP/2.0", upstream: "http://tomcat.example.com:8080/myapp/slow.jsp?user_key=1973e4bc2520ef441b5db9c004690962", host: "apicast.apps.cndsno3.cnc.example.com"
[20/May/2024:07:05:18 +0000] apicast.apps.cndsno3.cnc.example.com:8443 10.128.0.2:60064 "GET /myapp/slow.jsp?user_key=1973e4bc2520ef441b5db9c004690962 HTTP/2.0" 504 64 (60.318)
Conclusion:
We did NOT see any messages in JSON format in the APIcast pod logs.
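The presence or absence of JSON-formatted entries in the pod logs can be checked mechanically. A minimal sketch (the `json_log_lines` helper is hypothetical, not part of APIcast):

```python
import json

def json_log_lines(log_text):
    """Return only the pod-log lines that parse as JSON objects."""
    found = []
    for line in log_text.splitlines():
        line = line.strip()
        if not line.startswith("{"):
            continue  # plain nginx access/error log line, not JSON
        try:
            obj = json.loads(line)
        except ValueError:
            continue  # starts with "{" but is not valid JSON
        if isinstance(obj, dict):
            found.append(obj)
    return found

# Sample mixing a plain error-log line with a JSON entry (as seen in Step 7):
logs = """\
2024/05/20 07:05:18 [error] 25#25: *13 upstream timed out (110: Connection timed out)
{"host.name":"apicast.apps.cndsno3.cnc.example.com","service.id":"2","@timestamp":"2024-05-20T07:50:22.442+00:00"}
"""
entries = json_log_lines(logs)
print(len(entries))               # 1 when JSON logging works, 0 when it does not
print(entries[0]["service.id"])   # "2"
```

Running this over `oc logs` output from Step 5 would return an empty list, matching the conclusion above.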
Step 6: Redeploy APIcast with the custom error page configuration removed.
apicast-test yaml:
spec:
  adminPortalCredentialsRef:
    name: 3scaleportal
  deploymentEnvironment: staging
  httpsPort: 8443
  logLevel: notice
  replicas: 1
Step 7: Test the API call again using curl and check the APIcast logs.
Request:
$ curl -v -k "https://apicast.apps.cndsno3.cnc.example.com:443/myapp/slow.jsp?user_key=1973e4bc2520ef441b5db9c004690962"
Response:
< HTTP/2 504
< server: openresty
< date: Mon, 20 May 2024 07:50:22 GMT
< content-type: text/plain
<
* Connection #0 to host apicast.apps.cndsno3.cnc.example.com left intact
APIcast logs:
$ oc logs -f deployment/apicast-apicast-test
{"host.name":"apicast.apps.cndsno3.cnc.example.com","service.id":"2","@timestamp":"2024-05-20T07:50:22.442+00:00"}
Conclusion:
We CAN see messages in JSON format in the APIcast pod logs.