- Bug
- Resolution: Done
- Normal
- None
- Logging 6.2.z
- Future Sustainability
- 3
- False
-
- False
- NEW
- NEW
- Release Note Not Required
-
-
- Log Collection - Sprint 269
- Moderate
Description of problem:
Errors like the following are observed in the Vector collector pods when the 256 KB MAX_EVENT_SIZE limit is reached while forwarding logs to CloudWatch:
2025-04-11T06:38:37.368665Z ERROR sink{component_kind="sink" component_id=output_cloudwatch component_type=aws_cloudwatch_logs}: vector::internal_events::aws_cloudwatch_logs: Encoded event is too long. size=354390 max_size=262094 error_code="message_too_long" error_type="encoder_failed" stage="processing" internal_log_rate_limit=true
The error is raised in "aws_cloudwatch_logs.rs", where MAX_EVENT_SIZE is defined as a constant set to 256 * 1024 in this line.
A few days ago, Amazon CloudWatch Logs increased the maximum log event size to 1 MB, as described in the following link. MAX_EVENT_SIZE in Vector should therefore be raised to 1 MB when forwarding logs to CloudWatch.
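As an illustration of the requested change, a minimal sketch is shown below; the constant name and old value come from the error and source reference above, while the surrounding file contents are assumed:

// Hypothetical excerpt of aws_cloudwatch_logs.rs showing the requested change.
// Old limit: encoded events above 256 KiB fail with "message_too_long".
// pub const MAX_EVENT_SIZE: usize = 256 * 1024;
// Proposed limit, matching the new CloudWatch Logs maximum event size of 1 MB.
pub const MAX_EVENT_SIZE: usize = 1024 * 1024;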
Version-Release number of selected component (if applicable):
All Logging v5 versions and Logging 6.0.z, 6.1.z, and 6.2.z
How reproducible:
Always
Steps to Reproduce:
- Configure the log collector to forward logs to CloudWatch
- Generate an event in an application where the event size is larger than 256 KB (see the sketch after this list)
- Review the collector logs
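For the second step, a minimal sketch of a hypothetical test application that writes one oversized log line to stdout (the ~300 KiB size is an arbitrary value above the current limit, not taken from the original report):

fn main() {
    // Emit a single ~300 KiB line; once collected and forwarded to CloudWatch,
    // the encoded event exceeds the current 256 KiB MAX_EVENT_SIZE and should
    // trigger the "message_too_long" error shown under "Actual results".
    let oversized_line = "x".repeat(300 * 1024);
    println!("{oversized_line}");
}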
Actual results:
The event fails to be sent, and the following error is logged:
2025-04-11T06:38:37.368665Z ERROR sink{component_kind="sink" component_id=output_cloudwatch component_type=aws_cloudwatch_logs}: vector::internal_events::aws_cloudwatch_logs: Encoded event is too long. size=354390 max_size=262094 error_code="message_too_long" error_type="encoder_failed" stage="processing" internal_log_rate_limit=true
Expected results:
The event is forwarded whenever it does not exceed the new Amazon CloudWatch maximum log event size of 1 MB.
Additional info:
- clones
- LOG-7013 Increase MAX_EVENT_SIZE from 256KB to 1MB for CloudWatch output
- Closed