- Bug
- Resolution: Done
- Blocker
- Logging 6.0.0
- NEW
- OBSDA-550 - Updated APIs for Logging 6.0
- VERIFIED
- Release Note Not Required
- Log Collection - Sprint 257
- Critical
Description of problem:
When authentication.sasl is specified in the ClusterLogForwarder (CLF), kafka.sasl.enabled is not set in the generated vector.toml, and the collector pods raise the errors below:
#collector-psb86.logs
2024-07-30T08:43:11.200564Z ERROR sink{component_kind="sink" component_id=output_amq_kafka component_type=kafka}: vector_common::internal_event::component_events_dropped: Events dropped intentional=false count=1 reason="Service call failed. No retries or retries exhausted." internal_log_rate_limit=true
2024-07-30T08:43:11.200588Z ERROR sink{component_kind="sink" component_id=output_amq_kafka component_type=kafka}: vector_common::internal_event::service: Internal log [Service call failed. No retries or retries exhausted.] is being suppressed to avoid flooding.
2024-07-30T08:43:11.200599Z ERROR sink{component_kind="sink" component_id=output_amq_kafka component_type=kafka}: vector_common::internal_event::component_events_dropped: Internal log [Events dropped] is being suppressed to avoid flooding.
2024-07-30T08:43:11.200885Z ERROR rdkafka::client: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down
2024-07-30T08:43:21.201289Z WARN librdkafka: librdkafka: FAIL [thrd:my-cluster-kafka-bootstrap.amq-aosqe.svc:9092/bootstrap]: my-cluster-kafka-bootstrap.amq-aosqe.svc:9092/bootstrap: Disconnected: verify that security.protocol is correctly configured, broker might require SASL authentication (after 234ms in state UP, 3 identical error(s) suppressed)
2024-07-30T08:43:21.201887Z ERROR rdkafka::client: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): my-cluster-kafka-bootstrap.amq-aosqe.svc:9092/bootstrap: Disconnected: verify that security.protocol is correctly configured, broker might require SASL authentication (after 234ms in state UP, 3 identical error(s) suppressed)
How reproducible:
Always
Steps to Reproduce:
- Forward logs to Kafka with the following ClusterLogForwarder:

  apiVersion: observability.openshift.io/v1
  kind: ClusterLogForwarder
  metadata:
    name: collector
  spec:
    managementState: Managed
    outputs:
    - kafka:
        authentication:
          sasl:
            mechanism: SCRAM-SHA-512
            password:
              key: username
              secretName: kafka-user
            username:
              key: password
              secretName: my-user
        tuning:
          compression: snappy
          delivery: atLeastOnce
        url: tcp://my-cluster-kafka-bootstrap.amq-aosqe.svc:9091/topic-logging-app
      name: amq-kafka
      type: kafka
    pipelines:
    - inputRefs:
      - application
      - audit
      - infrastructure
      name: pipe1
      outputRefs:
      - amq-kafka
    serviceAccount:
      name: logcollector
- Check the collector pod logs and vector.toml
Actual results:
Logs cannot be forwarded to Kafka.
vector.toml
[sinks.output_amq_kafka.sasl]
username = "SECRET[kubernetes_secret.kafka-user/username]"
password = "SECRET[kubernetes_secret.my-user/password]"
mechanism = "SCRAM-SHA-512"
Expected results:
[sinks.output_amq_kafka.sasl]
enabled = true
username = "SECRET[kubernetes_secret.kafka-user/username]"
password = "SECRET[kubernetes_secret.my-user/password]"
mechanism = "SCRAM-SHA-512"
Additional info:
- Relates to: LOG-5112 - BrokerTransportFailure Error for Kafka brokers when using Vector as collector (Closed)