- Bug
- Resolution: Done
- Major
- None
- False
- False
- NEW
- OBSDA-53 - Add support for using Kerberos authentication for Kafka inside the Log Forwarding API
- VERIFIED
- Before this change, the collector could log a warning that the chunk byte limit was exceeded for an emitted event stream. This change allows you to tune the readline limit to resolve the issue, as advised by the upstream documentation.
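A rough sketch of the tuning in question is shown below; it is not the configuration generated by the operator, and the path, position file, tag, parser, and the value 250 are illustrative assumptions. It only shows where the `read_lines_limit` setting from the upstream fluentd `in_tail` documentation would apply.

```
# Illustrative sketch of the in_tail source tuning (not the operator-generated
# config). Lowering read_lines_limit caps how many lines are read per I/O
# loop, so each emitted event stream stays under the chunk byte limit.
<source>
  @type tail
  path /var/log/containers/*.log        # assumed example path
  pos_file /var/log/containers.log.pos  # assumed example position file
  tag kubernetes.*
  read_lines_limit 250                  # lower than the default, per upstream advice
  <parse>
    @type none
  </parse>
</source>
```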
- Logging (Core) - Sprint 216
- Proposed title of this feature request
max_send_limit_bytes modifiable via API
- What is the nature and description of the request
When `buffer_chunk_size` is configured in fluentd, the following warning is written to the log file:
2020-11-20 09:26:28 +0000 [warn]: chunk bytes limit exceeds for an emitted event stream: 1471470bytes
This is caused by the `read_lines_limit` setting in the `in_tail` plugin of fluentd. To avoid the warning, it is advised to set `read_lines_limit` to a smaller value, as can be found here. However, this variable cannot be set via the settings of the openshift-logging operator. For customers who need to set `buffer_chunk_size` to match their external backend, it would be beneficial to be able to configure this setting as well and avoid the unnecessary warnings. Furthermore, customers are experiencing data loss because of this.
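For context, a minimal sketch of the buffer side of this interaction follows, under assumed values: the forward output, the backend host, the buffer path, and the 1m chunk size are illustrative and not the operator-generated configuration. When a single emitted event stream is larger than the configured chunk size, fluentd logs the warning quoted above.

```
# Illustrative sketch of the matching output and its buffer (not the
# operator-generated config). The chunk size limit here is what the
# "chunk bytes limit exceeds" warning is measured against.
<match kubernetes.**>
  @type forward                    # placeholder output type for this sketch
  <server>
    host log-backend.example.com   # hypothetical external backend
    port 24224
  </server>
  <buffer>
    @type file
    path /var/lib/fluentd/buffer   # assumed buffer path
    chunk_limit_size 1m            # an event stream larger than this triggers the warning
  </buffer>
</match>
```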
- List any affected packages or components.
- fluentd
- clones
- LOG-1415 Allow users to tune fluentd
- Closed
- links to