  OpenShift Logging / LOG-1186

Add keepalive option for fluent forward config


Details

      Previously, the logging collector was creating more socket connections than necessary. With this fix, the logging collector re-uses the existing socket connection to send logs.

    Description

      What is the problem that your customer is facing?

      Currently there is no way to set the keepalive option in the fluent forward output configuration when forwarding logs to an external fluentd.

      By default, keepalive is set to false, which means a new socket is created for every chunk sent. I feel that this is inefficient and would prefer to reuse the existing socket connection to send logs.

      This is especially wasteful because, with keepalive set to false, fluentd doesn't seem to close the open sockets even when they are no longer being used; they only get closed when the eventual network TCP timeout is reached.
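      For illustration only, a minimal sketch of a fluentd forward output with keepalive enabled, based on the standard out_forward plugin options (keepalive, keepalive_timeout); the host, port, and timeout values below are placeholders, not taken from this issue:

        # Hypothetical forward output (not from this issue): reuse one connection
        # instead of opening a new socket for every flushed chunk.
        <match **>
          @type forward
          keepalive true            # default is false: one socket per chunk
          keepalive_timeout 30s     # close idle keepalive connections (placeholder value)
          <server>
            host fluentd.example.com   # placeholder external fluentd host
            port 24224                 # default fluent forward port
          </server>
        </match>

      With keepalive enabled, an idle-connection timeout such as keepalive_timeout also bounds how long unused sockets stay open, rather than waiting for the network TCP timeout.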

      What are your expectations for this feature?

      Enable keepalive by default.

      Have you done this before and/or outside of support and if yes, how? (Optional)

      We have tested this using the legacy ConfigMap method of configuring forwarding, and it works the way we expect it to.
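      As a rough, hypothetical sketch of the kind of entry used for that test, assuming the legacy secure-forward.conf ConfigMap format from earlier OpenShift Logging releases (the ConfigMap layout, host, and values here are assumptions, not taken from this issue):

        # Hypothetical content of the secure-forward.conf key in the legacy ConfigMap
        # (exact layout varies by OpenShift Logging release).
        <store>
          @type forward
          keepalive true                        # reuse the existing socket between chunks
          keepalive_timeout 30s                 # placeholder idle timeout
          <server>
            host external-fluentd.example.com   # placeholder external fluentd host
            port 24224
          </server>
        </store>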

      Doc Considerations

      See https://issues.redhat.com/browse/RHDEVDOCS-2750


            People

              rhn-engineering-aconway Alan Conway
              cvogel1 Christian Heidenreich
              Qiaoling Tang Qiaoling Tang