Observability and Data Analysis Program
OBSDA-360

Allow users to configure collector kafka output message size



      Proposed title of this feature request

      `max_send_limits_bytes` configurable in fluentd
      `max_bytes` in Vector

      What is the nature and description of the request?

      If a user configures fluentd/Vector to send to Kafka according to the OpenShift documentation, the broker must accept the default message size; otherwise it rejects the messages and the collector logs an error such as:

      2020-11-17 12:40:01 +0000 [warn]: failed to flush the buffer. retry_time=27 next_retry_seconds=2020-11-17 12:45:11 +0000 chunk="5b44b9c6214a46a7c939c9244ef724b9" error_class=Kafka::MessageSizeTooLarge error="Kafka::MessageSizeTooLarge"
      

      This could be addressed on the server side, but in some environments users are unable or unwilling to change the broker configuration, since Kafka does not handle large messages well.

      Another way would be to make the fluentd setting `max_send_limits_bytes` (or the Vector equivalent, `batch.max_bytes`) configurable by the user. This implies that messages exceeding the limit are dropped, but that should be the decision of the client/user.
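      For illustration, the underlying collector settings would look roughly like this. This is a sketch of a hand-written fluent-plugin-kafka output block, not the OpenShift-generated configuration; the option name `max_send_limit_bytes`, the broker address, and the topic are assumptions for the example.

      ```
      # fluentd output using fluent-plugin-kafka (kafka2 output)
      <match **>
        @type kafka2
        brokers kafka-broker:9092        # assumed broker address
        default_topic logs               # assumed topic
        # Records larger than this are dropped by the client
        # instead of being rejected by the broker:
        max_send_limit_bytes 1000000
      </match>
      ```

      The Vector equivalent would be the `batch.max_bytes` option on the kafka sink, e.g. `max_bytes = 1000000` under `[sinks.kafka_out.batch]` in TOML; again, the sink name and value are illustrative only.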

      Another option would be to let the user define a maximum size for each log message read. The part of the message exceeding that size would not be sent, but the rest would be, so a truncated version of the log message is still delivered.

      List any affected packages or components.

      • fluentd
      • Vector

              jamparke@redhat.com Jamie Parker
              rhn-support-tmicheli Tobias Michelis