Bug
Resolution: Done
Major
3.2.0.Final
Bug report
What Debezium connector do you use and what version?
MongoDB connector version 3.2
What is the connector configuration?
name = mongo-users
connector.class = io.debezium.connector.mongodb.MongoDbConnector
tasks.max = 1
topic.prefix = data
mongodb.connection.string = mongodb://mongo:27017?authSource=admin&replicaSet=rs0
mongodb.ssl.enabled = false
filters.match.mode = literal
collection.include.list = test.users
capture.mode = change_streams_update_full
transforms = unwrap
transforms.unwrap.type = io.debezium.connector.mongodb.transforms.ExtractNewDocumentState
transforms.unwrap.array.encoding = document
transforms.unwrap.delete.tombstone.handling.mode = rewrite
transforms.unwrap.add.fields = op,ts_ms
What is the captured database version and mode of deployment?
MongoDB 8.0 in Docker. Same behavior observed on Atlas.
What behavior do you expect?
The connector does not crash when processing documents with nested structs in arrays of structs.
What behavior do you see?
The connector crashes when processing documents with nested structs in arrays of structs, like this one:
{ foo: [{ bar: { baz: 'whatever' } }] }
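For reference, a minimal sketch of inserting such a document with the MongoDB Java driver (the class name Repro is mine; the connection string and the test.users collection are taken from the config above):

import java.util.List;

import org.bson.Document;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;

public class Repro {
    public static void main(String[] args) {
        // Connection string mirrors the connector configuration above.
        try (MongoClient client = MongoClients.create(
                "mongodb://mongo:27017/?authSource=admin&replicaSet=rs0")) {
            MongoCollection<Document> users =
                    client.getDatabase("test").getCollection("users");
            // A struct nested inside an array of structs -- the shape that
            // crashes ExtractNewDocumentState.
            users.insertOne(new Document("foo",
                    List.of(new Document("bar", new Document("baz", "whatever")))));
        }
    }
}

A single insert like this is enough to kill the task once the change stream picks it up.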
Do you see the same behaviour using the latest released Debezium version?
Yes, the bug was introduced in the latest 3.2 release. It does not exist in 3.1.3.
Do you have the connector logs, ideally from start till finish?
Yes
Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:212)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:230)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:156)
    at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:54)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.sendRecords(AbstractWorkerSourceTask.java:401)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.execute(AbstractWorkerSourceTask.java:367)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:77)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:237)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: java.lang.ClassCastException: class java.util.LinkedHashMap cannot be cast to class org.bson.BsonValue (java.util.LinkedHashMap is in module java.base of loader 'bootstrap'; org.bson.BsonValue is in unnamed module of loader org.apache.kafka.connect.runtime.isolation.PluginClassLoader @b7f23d9)
    at io.debezium.connector.mongodb.transforms.MongoDataConverter.k(MongoDataConverter.java:400)
    at io.debezium.connector.mongodb.transforms.MongoDataConverter.schema(MongoDataConverter.java:348)
    at io.debezium.connector.mongodb.transforms.MongoDataConverter.buildSchema(MongoDataConverter.java:212)
    at io.debezium.connector.mongodb.transforms.ExtractNewDocumentState.newRecord(ExtractNewDocumentState.java:281)
    at io.debezium.connector.mongodb.transforms.ExtractNewDocumentState.doApply(ExtractNewDocumentState.java:234)
    at io.debezium.transforms.AbstractExtractNewRecordState.apply(AbstractExtractNewRecordState.java:104)
    at org.apache.kafka.connect.runtime.TransformationStage.apply(TransformationStage.java:57)
    at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:54)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:180)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:214)
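For what it's worth, the failing cast itself is easy to show in isolation (a standalone sketch, not the actual converter code path):

import java.util.LinkedHashMap;
import java.util.Map;

import org.bson.BsonValue;

public class CastDemo {
    public static void main(String[] args) {
        // A plain java.util.LinkedHashMap does not implement org.bson.BsonValue,
        // so this cast throws the same ClassCastException as in the trace above.
        Object nested = new LinkedHashMap<String, Object>(Map.of("baz", "whatever"));
        BsonValue value = (BsonValue) nested; // java.lang.ClassCastException
        System.out.println(value);
    }
}

So it looks like the schema-building path in MongoDataConverter receives the nested document as a raw map instead of a BsonDocument when it sits inside an array.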
How to reproduce the issue using our tutorial deployment?
Not with your tutorial deployment, since it doesn't work for me (a Mongo primary is never elected), but I created a small reproduction with all the instructions in the README:
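Unrelated to the bug itself, but in case it helps with the tutorial deployment issue: a sketch of forcing a single-node replica set to elect a primary via the Java driver (this assumes a bare mongod started with --replSet rs0; the host name and member layout are illustrative, not taken from my reproduction):

import java.util.List;

import org.bson.Document;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class InitReplicaSet {
    public static void main(String[] args) {
        // directConnection=true lets us talk to the node before a primary exists.
        try (MongoClient client = MongoClients.create(
                "mongodb://mongo:27017/?directConnection=true")) {
            client.getDatabase("admin").runCommand(
                    new Document("replSetInitiate",
                            new Document("_id", "rs0")
                                    .append("members", List.of(
                                            new Document("_id", 0)
                                                    .append("host", "mongo:27017")))));
        }
    }
}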