
LOG-3314: [fluentd] The passphrase cannot be enabled when forwarding logs to Kafka

    • Release Note Text: Before this update, users could not enable a passphrase when forwarding logs to Kafka, which posed a security risk because sensitive information could be exposed. With this update, the passphrase can be enabled for Kafka log forwarding, ensuring secure transmission of log data and protecting it from unauthorized access.

    • Release Note Type: Bug Fix
    • Sprint: Log Collection - Sprint 235, Log Collection - Sprint 236, Log Collection - Sprint 237

      Description of problem:

      The passphrase is not rendered into fluentd.conf when a passphrase key is present in the forwarder secret.

      Note that there is already a related known issue upstream: https://github.com/fluent/fluent-plugin-kafka/issues/382
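
      As a quick sanity check (a sketch; the secret name and key come from the reproduction steps below), the passphrase key can be read back from the forwarder secret:

        # Sketch: confirm the forwarder secret actually carries the passphrase key
        oc -n openshift-logging get secret kafka-fluentd \
          -o jsonpath='{.data.passphrase}' | base64 -d; echo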

      Version-Release number of selected component (if applicable):

      Logging 5.x

      How reproducible:

      always

      Steps to Reproduce:

      1. Deploy Kafka with ssl.client.auth=required:
        git clone git@gitlab.cee.redhat.com:anli/aosqe-tools.git
        cd logging/log_template/kafka/kafka-2.4.1/
        sh 01_create-pki-cluster-client_passphase.sh
        sh 10_deploy-kafka-plaintext-sasl_ssl.sh
      2. Use a certificate with a passphrase to forward logs to Kafka (the rendered configuration can then be checked with the sketch below):
        sh 20_create-clf-kafka-mutual_sasl_ssl_passphase.sh
        # oc create secret generic kafka-fluentd --from-file=ca-bundle.crt=ca/ca_bundle.crt --from-file=tls.crt=client/client.crt --from-file=tls.key=client/client.key --from-literal=username=${kafka_user_name} --from-literal=password=${kafka_user_password} --from-literal=sasl_over_ssl=true --from-literal=sasl.enable=true --from-literal=sasl.mechanisms=PLAIN --from-literal=passphrase=aosqe2021 -n openshift-logging
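
      After creating the secret, the configuration rendered by the operator can be inspected (a sketch: the pod name is taken from logs later in this issue, and the in-pod config path /etc/fluent/fluent.conf is an assumption; adjust both for your cluster):

        # Sketch: look for the passphrase option in the rendered fluentd config
        oc -n openshift-logging exec collector-h5ng5 -c collector -- \
          grep -n 'ssl_client_cert_key_password' /etc/fluent/fluent.conf
        # With this bug present the grep prints nothing; after the fix it should
        # print one line per Kafka output.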

      Actual results:

      #fluent.conf
      <label @KAFKA_APP>
        <match **>
          @type kafka2
          @id kafka_app
          brokers kafka.openshift-logging.svc.cluster.local:9093
          default_topic clo-topic
          use_event_time true
          username "#{File.exists?('/var/run/ocp-collector/secrets/kafka-fluentd/username') ? open('/var/run/ocp-collector/secrets/kafka-fluentd/username','r') do |f|f.read end : ''}"
          password "#{File.exists?('/var/run/ocp-collector/secrets/kafka-fluentd/password') ? open('/var/run/ocp-collector/secrets/kafka-fluentd/password','r') do |f|f.read end : ''}"
          ssl_client_cert_key '/var/run/ocp-collector/secrets/kafka-fluentd/tls.key'
          ssl_client_cert '/var/run/ocp-collector/secrets/kafka-fluentd/tls.crt'
          ssl_ca_cert '/var/run/ocp-collector/secrets/kafka-fluentd/ca-bundle.crt'
          sasl_over_ssl true
          <format>
            @type json
           .....
      </label>
      

      Expected results:

      #fluent.conf
      <label @KAFKA_APP>
        <match **>
          @type kafka2
         .....
          ssl_client_cert_key '/var/run/ocp-collector/secrets/kafka-fluentd/tls.key'
          ssl_client_cert_key_password "#{File.exists?('/var/run/ocp-collector/secrets/kafka-fluentd/passphrase') ? open('/var/run/ocp-collector/secrets/kafka-fluentd/passphrase','r') do |f|f.read end : ''}"
      
             .....
      </label>
      

      Additional info:


      Comments:

            Jeffrey Cantrill added a comment - This issue requires Release Notes Text. Please modify the Release Note Text or set the Release Note Type to "None".

            Errata Tool added a comment -

            Since the problem described in this issue should be resolved in a recent advisory, it has been closed.

            For information on the advisory (Moderate: Logging Subsystem 5.7.2 - Red Hat OpenShift security update), and where to find the updated files, follow the link below.

            If the solution does not work for you, open a new bug report.
            https://access.redhat.com/errata/RHSA-2023:3495


            Vitalii Parfonov added a comment - anli@redhat.com do you have any update about this issue?

            GitLab CEE Bot added a comment - CPaaS Service Account mentioned this issue in a merge request of openshift-logging / Log Collection Midstream on branch openshift-logging-5.7-rhel-8_upstream_feef2536018551260fe1e311501d4ff7:

            Updated US source to: 22bd590 Merge pull request #2034 from vparfonov/release-5.7-LOG-3314

            Vitalii Parfonov added a comment - anli@redhat.com We didn't add anything related to scram_mechanism by default, but it is added in your test case: --from-literal=sasl.mechanisms=PLAIN

            Anping Li added a comment (edited) -

            Why is scram_mechanism "PLAIN" enabled by default? If scram_mechanism isn't set, fluentd sends logs to Kafka with the PLAIN mechanism without problems.

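            A minimal sketch of the workaround implied above (untested): recreate the secret from the reproduction steps with the sasl.mechanisms key omitted, so that no scram_mechanism line is rendered:

              # Sketch (untested): same secret as in the reproduction steps, minus
              # sasl.mechanisms; per the observation above, plain SASL then works.
              oc -n openshift-logging delete secret kafka-fluentd
              oc -n openshift-logging create secret generic kafka-fluentd \
                --from-file=ca-bundle.crt=ca/ca_bundle.crt \
                --from-file=tls.crt=client/client.crt \
                --from-file=tls.key=client/client.key \
                --from-literal=username=${kafka_user_name} \
                --from-literal=password=${kafka_user_password} \
                --from-literal=sasl_over_ssl=true \
                --from-literal=sasl.enable=true \
                --from-literal=passphrase=aosqe2021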

            Vitalii Parfonov added a comment (edited) -

            Thanks anli@redhat.com, the missing line break is fixed here: https://github.com/openshift/cluster-logging-operator/pull/2034
            As for PLAIN not being supported, that needs checking.
            UPD:
            This is a limitation of ruby-kafka: https://github.com/zendesk/ruby-kafka/blob/v1.5.0/lib/kafka/sasl/scram.rb#L9

            MECHANISMS = {
                    "sha256" => "SCRAM-SHA-256",
                    "sha512" => "SCRAM-SHA-512",
                  }
            
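            In fluent-plugin-kafka terms, that table means scram_mechanism only accepts the keys above; a hypothetical snippet for a broker that genuinely requires SCRAM (assuming the plugin passes scram_mechanism straight through to ruby-kafka):

              <match **>
                @type kafka2
                # hypothetical: ruby-kafka accepts only the keys of the table above,
                # so PLAIN must not be passed through scram_mechanism
                scram_mechanism sha512   # SCRAM-SHA-512; sha256 maps to SCRAM-SHA-256
              </match>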


            Anping Li added a comment (edited) -

            After splitting the line, the collector pod raised "SCRAM mechanism PLAIN is not supported.", as shown below.
            After removing scram_mechanism "PLAIN", fluentd works fine.

            $ oc logs collector-h5ng5
            Defaulted container "collector" out of: collector, logfilesmetricexporter
            POD_IPS: 10.131.0.80, PROM_BIND_IP: 0.0.0.0
            Setting each total_size_limit for 1 buffers to 20525125632 bytes
            Setting queued_chunks_limit_size for each buffer to 2446
            Setting chunk_limit_size for each buffer to 8388608
            /var/lib/fluentd/pos/journal_pos.json exists, checking if yajl parser able to parse this json file without any error.
            ruby 2.7.6p219 (2022-04-12 revision c9c2245c0a) [x86_64-linux]
            RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.900000 (default value: 2.000000)
            checking if /var/lib/fluentd/pos/journal_pos.json a valid json by calling yajl parser
            2023-06-01 06:08:40 +0000 [warn]: '@' is the system reserved prefix. It works in the nested configuration for now but it will be rejected: @timestamp
            2023-06-01 06:08:40 +0000 [warn]: '@' is the system reserved prefix. It works in the nested configuration for now but it will be rejected: @timestamp
            /usr/local/share/gems/gems/fluent-plugin-elasticsearch-5.2.2/lib/fluent/plugin/elasticsearch_compat.rb:8: warning: already initialized constant TRANSPORT_CLASS
            /usr/local/share/gems/gems/fluent-plugin-elasticsearch-5.2.2/lib/fluent/plugin/elasticsearch_compat.rb:3: warning: previous definition of TRANSPORT_CLASS was here
            /usr/local/share/gems/gems/fluent-plugin-elasticsearch-5.2.2/lib/fluent/plugin/elasticsearch_compat.rb:25: warning: already initialized constant SELECTOR_CLASS
            /usr/local/share/gems/gems/fluent-plugin-elasticsearch-5.2.2/lib/fluent/plugin/elasticsearch_compat.rb:20: warning: previous definition of SELECTOR_CLASS was here
            2023-06-01 06:08:42 +0000 [error]: unexpected error error_class=Kafka::SaslScramError error="SCRAM mechanism PLAIN is not supported."
              2023-06-01 06:08:42 +0000 [error]: /usr/local/share/gems/gems/ruby-kafka-1.5.0/lib/kafka/sasl/scram.rb:22:in `block in initialize'
            
            


            Anping Li added a comment (edited) -

            ssl_client_cert_key_password and scram_mechanism are rendered on a single line:

              <match **>
                @type kafka2
                @id kafka_app
                brokers kafka.openshift-logging.svc.cluster.local:9093
                default_topic clo-topic
                use_event_time true
                username "#{File.exists?('/var/run/ocp-collector/secrets/kafka-fluentd/username') ? open('/var/run/ocp-collector/secrets/kafka-fluentd/username','r') do |f|f.read end : ''}"
                password "#{File.exists?('/var/run/ocp-collector/secrets/kafka-fluentd/password') ? open('/var/run/ocp-collector/secrets/kafka-fluentd/password','r') do |f|f.read end : ''}"
                ssl_client_cert_key '/var/run/ocp-collector/secrets/kafka-fluentd/tls.key'
                ssl_client_cert '/var/run/ocp-collector/secrets/kafka-fluentd/tls.crt'
                ssl_ca_cert '/var/run/ocp-collector/secrets/kafka-fluentd/ca-bundle.crt'
                sasl_over_ssl true
                ssl_client_cert_key_password "#{File.exists?('/var/run/ocp-collector/secrets/kafka-fluentd/passphrase') ? open('/var/run/ocp-collector/secrets/kafka-fluentd/passphrase','r') do |f|f.read end : ''}"scram_mechanism "PLAIN"
                  ...
                 </match>
            
            $ oc logs collector-6bgq4
            Defaulted container "collector" out of: collector, logfilesmetricexporter
            POD_IPS: 10.128.2.78, PROM_BIND_IP: 0.0.0.0
            Setting each total_size_limit for 1 buffers to 20525125632 bytes
            Setting queued_chunks_limit_size for each buffer to 2446
            Setting chunk_limit_size for each buffer to 8388608
            /var/lib/fluentd/pos/journal_pos.json exists, checking if yajl parser able to parse this json file without any error.
            ruby 2.7.6p219 (2022-04-12 revision c9c2245c0a) [x86_64-linux]
            RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.900000 (default value: 2.000000)
            checking if /var/lib/fluentd/pos/journal_pos.json a valid json by calling yajl parser
            /usr/local/share/gems/gems/fluentd-1.14.6/lib/fluent/config/basic_parser.rb:92:in `parse_error!': expected end of line at fluent.conf line 347,201 (Fluent::ConfigParseError)
            346:     sasl_over_ssl true
            347:     ssl_client_cert_key_password "#{File.exists?('/var/run/ocp-collector/secrets/kafka-fluentd/passphrase') ? open('/var/run/ocp-collector/secrets/kafka-fluentd/passphrase','r') do |f|f.read end : ''}"scram_mechanism "PLAIN"
            
                 ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
            348:     <format>
            	from /usr/local/share/gems/gems/fluentd-1.14.6/lib/fluent/config/v1_parser.rb:133:in `parse_element'
            	from /usr/local/share/gems/gems/fluentd-1.14.6/lib/fluent/config/v1_parser.rb:96:in `parse_element'
            	from /usr/local/share/gems/gems/fluentd-1.14.6/lib/fluent/config/v1_parser.rb:96:in `parse_element'
            	from /usr/local/share/gems/gems/fluentd-1.14.6/lib/fluent/config/v1_parser.rb:44:in `parse!'
            	from /usr/local/share/gems/gems/fluentd-1.14.6/lib/fluent/config/v1_parser.rb:33:in `parse'
            	from /usr/local/share/gems/gems/fluentd-1.14.6/lib/fluent/config.rb:58:in `parse'
            	from /usr/local/share/gems/gems/fluentd-1.14.6/lib/fluent/config.rb:39:in `build'
            	from /usr/local/share/gems/gems/fluentd-1.14.6/lib/fluent/supervisor.rb:618:in `initialize'
            	from /usr/local/share/gems/gems/fluentd-1.14.6/lib/fluent/command/fluentd.rb:362:in `new'
            	from /usr/local/share/gems/gems/fluentd-1.14.6/lib/fluent/command/fluentd.rb:362:in `<top (required)>'
            	from /usr/share/rubygems/rubygems/core_ext/kernel_require.rb:83:in `require'
            	from /usr/share/rubygems/rubygems/core_ext/kernel_require.rb:83:in `require'
            	from /usr/local/share/gems/gems/fluentd-1.14.6/bin/fluentd:15:in `<top (required)>'
            	from /usr/local/bin/fluentd:23:in `load'
            	from /usr/local/bin/fluentd:23:in `<main>'
            [anli@preserve-docker-slave kafka-2.4.1]$ 
            
            


            GitLab CEE Bot added a comment - CPaaS Service Account mentioned this issue in a merge request of openshift-logging / Log Collection Midstream on branch openshift-logging-5.7-rhel-8_upstream_5992592a94f0861ef6c7dae08b09b5d5: Updated 2 upstream sources
