OpenShift Logging / LOG-4009

TLS configuration for multiple Kafka brokers is not created in Vector

    • Bug Fix
      Prior to this update, TLS configuration was not generated for the Vector log collector when forwarding logs to a Kafka destination if the broker URLs were specified only in the kafka/brokers list, without the output's 'url' field. With this update, TLS configuration is generated in that situation.
    • Steps to Reproduce

      1) Deploy CLO version 5.6.4.
      2) Create a ClusterLogging CR using Vector as the collector.
      3) Create a secret and a ClusterLogForwarder (CLF) instance with multiple Kafka brokers:

      spec:
        outputs:
        - kafka:
            brokers:
            - tls://xxxx1:9092/
            - tls://xxxx2:9092/
            - tls://xxxx3:9092/
            topic: jb-openshift
          name: out-audit-logs
          secret:
            name: jn-audit-kafka
          type: kafka
        pipelines:
        - inputRefs:
          - application
          name: kafka
          outputRefs:
          - out-audit-logs
      

      4) Access a Vector pod and check the Kafka config section in the vector.toml file: no [sinks.out_audit_logs.tls] section is generated (ca_file is not defined), bootstrap_servers has a stray leading comma, and the only TLS section present belongs to the unrelated Prometheus metrics output:

      # Kafka config
      [sinks.out_audit_logs]
      type = "kafka"
      inputs = ["kafka"]
      bootstrap_servers = ",xxxx1:9092,xxxx2:9092,xxxx3:9092"
      topic = "jb-openshift"
      
      [sinks.prometheus_output.tls]
      enabled = true
      key_file = "/etc/collector/metrics/tls.key"
      crt_file = "/etc/collector/metrics/tls.crt"
      

      However, if we follow the same process with a single Kafka broker specified via the url field, the TLS configuration does appear in the vector.toml file:

      spec:
        outputs:
        - name: out-audit-logs
          secret:
            name: jn-audit-kafka
          type: kafka
          url: tls://xxxx:9093/app-topic
        pipelines:
        - inputRefs:
          - application
          name: kafka
          outputRefs:
          - out-audit-logs
      
      [sinks.out_audit_logs.tls]
      enabled = true
      key_file = "/var/run/ocp-collector/secrets/jn-audit-kafka/tls.key"
      crt_file = "/var/run/ocp-collector/secrets/jn-audit-kafka/tls.crt"
      ca_file = "/var/run/ocp-collector/secrets/jn-audit-kafka/ca-bundle.crt"
      
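      Per the release note, after the fix the multi-broker output should receive an analogous TLS section derived from the jn-audit-kafka secret. A sketch of the expected result, not verbatim operator-generated output:

```toml
# Expected after the fix: TLS section for the multi-broker kafka sink,
# with certificate paths mounted from the jn-audit-kafka secret (sketch).
[sinks.out_audit_logs.tls]
enabled = true
key_file = "/var/run/ocp-collector/secrets/jn-audit-kafka/tls.key"
crt_file = "/var/run/ocp-collector/secrets/jn-audit-kafka/tls.crt"
ca_file = "/var/run/ocp-collector/secrets/jn-audit-kafka/ca-bundle.crt"
```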
    • Log Collection - Sprint 235, Log Collection - Sprint 236
    • Moderate

      Description of problem:

      After creating a ClusterLogForwarder instance with multiple Kafka brokers and TLS, the TLS configuration is not fully generated in the vector.toml file.

      In vector logs, we can see the following error:

      ERROR rdkafka::client: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): xxxxxxx:9092/bootstrap: Disconnected while requesting ApiVersion: might be caused by incorrect security.protocol configuration (connecting to a SSL listener?) or broker version is < 0.10 (see api.version.request) (after 2ms in state APIVERSION_QUERY, 1 identical error(s) suppressed)
      

      Version-Release number of selected component (if applicable):

      cluster-logging.v5.6.4

      Actual results:

      The TLS configuration is not added to the Vector configuration.

      Expected results:

      Correct TLS configuration is generated in Vector.

              syedriko_sub@redhat.com Sergey Yedrikov
              acandelp Adrian Candel
              Anping Li Anping Li
              Votes: 0
              Watchers: 5
