Observability and Data Analysis Program / OBSDA-352

Cluster LogForwarder to send Logs to Kafka with SASL authentication

    • Type: Feature
    • Resolution: Done
    • Priority: Normal
    • PM Logging

      1. Proposed title of this feature request
      Send Logs to Kafka With SASL authentication
      2. What is the nature and description of the request?
      Currently, only the TLS method is available to send logs to Kafka. The customer has requested SASL authentication for log forwarding.
      3. Why does the customer need this? (List the business requirements here)
      Currently, the customer forwards logs to Kafka using Fluentd instances with a custom ConfigMap, and this method will be deprecated. They are unable to switch to the ClusterLogForwarder because it only offers plaintext, SSL without authentication, or SSL with certificate-based authentication. Hence the customer wants SASL authentication in log forwarding.
      4. List any affected packages or components.



            Oscar Arribas Arribas added a comment - jamparke@redhat.com , is that supported in Vector in RHOL 5.5 and newer? or in a newer release?

            Jamie Parker added a comment - This is complete in Cluster Logging for Vector:   https://github.com/openshift/cluster-logging-operator/blob/master/docs/features/collection.adoc#authorization-and-authentication
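Per the linked collection.adoc, Vector-based collection supports SASL through a secret referenced by the Kafka output. The following is an illustrative sketch only; the broker URL, topic, and secret name (kafka-sasl-credentials) are placeholders, and the exact secret keys (e.g. username, password, sasl.enable, sasl.mechanisms) should be verified against the docs for the RHOL release in use:

```yaml
# Hypothetical ClusterLogForwarder sketch for Kafka with SASL.
# Broker host, topic, and secret name below are examples, not from the ticket.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: kafka-sasl
      type: kafka
      url: tls://broker.example.com:9093/app-topic
      secret:
        name: kafka-sasl-credentials   # secret carrying the SASL credentials/settings
  pipelines:
    - name: app-logs-to-kafka
      inputRefs:
        - application
      outputRefs:
        - kafka-sasl
```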


            Christian Heidenreich (Inactive) added a comment - The only solution is to either run another Fluentd instance, configure it as you want (it's yours), and set up ours to send to this new instance (similar to what customers did in 3.11); or go unmanaged and make the modifications you need directly. But bear in mind that "unmanaged" means we can't support you anymore.


            Rogerio Ferreira (Inactive) added a comment - Do we have any alternative, some workaround? The migration has already started. We were thinking of using Kafka Connect to consume directly from Elastic, as we cannot let the migration become unviable ...


            Christian Heidenreich (Inactive) added a comment - So PLAIN should be available soon with this ticket: LOG-1369, which is part of the epic LOG-1022. Let me know if this fits your customer's expectations.


            Rogerio Ferreira (Inactive) added a comment - He uses SASL Plain, sorry.


            Christian Heidenreich (Inactive) added a comment - SASL is not really enough for me to understand the requirements. Again, Fluentd supports three types under SASL: GSSAPI, Plain, and SCRAM. Which one is your customer using?
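For context on the three SASL types mentioned above, a custom (unmanaged) Fluentd using fluent-plugin-kafka would typically carry the SASL settings in the output block. This is an illustrative sketch with placeholder broker, topic, and credentials; option names should be verified against the fluent-plugin-kafka version in use:

```
# Hypothetical fluent-plugin-kafka output for SASL/PLAIN over TLS.
# Broker, topic, and credentials below are examples, not from the ticket.
<match app.**>
  @type kafka2
  brokers broker.example.com:9093
  default_topic app-topic
  # SASL/PLAIN credentials
  username my-user
  password my-password
  sasl_over_ssl true
  # For SASL/SCRAM instead, set the SCRAM mechanism, e.g.:
  # scram_mechanism sha256
</match>
```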

            Rogerio Ferreira (Inactive) added a comment - I removed the client's name so as not to make some questions public ...  


            Rogerio Ferreira (Inactive) added a comment - We cannot change the authentication method for now.


            Rogerio Ferreira (Inactive) added a comment - The bank is migrating to OpenShift 4.6. It used IBM Cloud Private, which uses ELK in place of EFK, and Logstash allowed this path.

              jamparke@redhat.com Jamie Parker
              rhn-support-puraut Purab Raut (Inactive)
              Votes: 11
              Watchers: 16