Details
- Type: Bug
- Resolution: Unresolved
- Priority: Minor
- Affects Version: 1.5.0.Final
- Fix Version: None
Description
We are running the Debezium MySQL connector against a MySQL database and producing to an AWS MSK Kafka cluster with IAM auth enabled. When configuring the connector's SASL mechanism we get an error.
The cluster runs Kafka version 2.8.0 and the Debezium connector is version 1.5.0.Final.
Here is our `connect-distributed.properties`:
```properties
bootstrap.servers=BROKER_URLs
group.id=SOME_GROUP_ID
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.topic=criteria-test-debezium-offset
offset.storage.replication.factor=3
offset.storage.partitions=50
config.storage.topic=criteria-test-debezium-config
config.storage.replication.factor=3
config.storage.partitions=1
status.storage.topic=SOME_TOPIC
status.storage.replication.factor=3
status.storage.partitions=10
rest.advertised.port=8083
plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
plugin.path=/usr/share/java,/usr/share/confluent-hub-components,/usr/share/connect
consumer.auto.offset.reset=earliest
producer.max.request.size=5000000

# AWS IAM auth
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
producer.security.protocol=SASL_SSL
producer.sasl.mechanism=AWS_MSK_IAM
producer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
producer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
```
And our `debezium-conf.json`:
```json
{
  "name": "SOME_NAME",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "tasks.max": "1",
    "database.hostname": "SOME_HOST",
    "database.port": "3306",
    "database.user": "USER",
    "database.password": "PASS",
    "database.whitelist": "DB",
    "table.whitelist": "SOME_TABLES",
    "database.server.id": "ID",
    "database.server.name": "NAME",
    "database.history.kafka.bootstrap.servers": "BROKERS",
    "database.history.kafka.topic": "SOME_TOPIC",
    "include.schema.changes": true,
    "database.history.store.only.monitored.tables.ddl": true,
    "snapshot.mode": "schema_only",
    "snapshot.locking.mode": "minimal",
    "inconsistent.schema.handling.mode": "warn",
    "database.history.skip.unparseable.ddl": true,
    "database.history.security.protocol": "SASL_SSL",
    "database.history.sasl.mechanism": "AWS_MSK_IAM",
    "database.history.sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
    "database.history.sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler",
    "database.history.producer.security.protocol": "SASL_SSL",
    "database.history.producer.sasl.mechanism": "AWS_MSK_IAM",
    "database.history.producer.sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
    "database.history.producer.sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler",
    "database.history.consumer.security.protocol": "SASL_SSL",
    "database.history.consumer.sasl.mechanism": "AWS_MSK_IAM",
    "database.history.consumer.sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
    "database.history.consumer.sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
  }
}
```
And the error we get:
```
[2021-05-20 21:12:17,859] ERROR [Consumer clientId=consumer-1, groupId=GROUP_ID] Connection to node 2 (NODE_ADDRESS) failed authentication due to: [faa50eb6-1847-4574-b711-0f424495e187]: Too many connects (org.apache.kafka.clients.NetworkClient:737)
[2021-05-20 21:12:17,860] ERROR [Worker clientId=connect-1, groupId=GROUP_ID] Uncaught exception in herder work thread, exiting: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:253)
org.apache.kafka.common.errors.SaslAuthenticationException: [faa50eb6-1847-4574-b711-0f424495e187]: Too many connects
[2021-05-20 21:12:17,862] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:66)
[2021-05-20 21:12:17,865] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:241)
[2021-05-20 21:12:17,866] INFO Stopped http_8083@25d2f66{HTTP/1.1,[http/1.1]}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:341)
```
Our setup is based on AWS's documentation: https://aws.amazon.com/blogs/big-data/securing-apache-kafka-is-easy-and-familiar-with-iam-access-control-for-amazon-msk/
We have already tested a simple standalone consumer and producer against the cluster successfully.
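For reference, that standalone test used a plain client configuration along these lines (a sketch based on the AWS blog post linked above; file name and exact layout are assumptions):

```properties
# client.properties — IAM auth settings used for the console producer/consumer test;
# the same four keys as in our connect-distributed.properties above.
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
```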