Details
Type: Bug
Resolution: Obsolete
Priority: Major
Affects Version: 2.0.0.Alpha1
Description
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
For bug reports, provide this information, please:
What Debezium connector do you use and what version?
MongoDB connector, version 2.0.0.Alpha1
What is the connector configuration?
{ "collection.include.list": "...", "connector.class": "io.debezium.connector.mongodb.MongoDbConnector", "consumer.override.max.partition.fetch.bytes": "67108864", "database.include.list": "...", "errors.log.enable": "true", "errors.log.include.messages": "true", "errors.tolerance": "all", "key.converter": "org.apache.kafka.connect.json.JsonConverter", "mongodb.hosts": "...", "mongodb.name": "...", "mongodb.password": "...", "mongodb.ssl.enabled": "true", "mongodb.ssl.invalid.hostname.allowed": "true", "mongodb.user": "...", "name": "Mongo-Source", "producer.override.max.request.size": "67108864", "snapshot.fetch.size": "100", "topic.creation.default.partitions": "10", "topic.creation.default.replication.factor": "-1", "value.converter": "org.apache.kafka.connect.json.JsonConverter" }
What is the captured database version and mode of deployment?
(E.g. on-premises, with a specific cloud provider, etc.)
MongoDB Atlas - MongoDB 4.2.21 Enterprise
What behaviour do you expect?
The connector should be able to read change stream events larger than 16 MB (see MongoDB's change stream production recommendations):
https://www.mongodb.com/docs/manual/administration/change-streams-production-recommendations/
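For servers that cannot split large events, one commonly suggested workaround is to trim the change event with an aggregation pipeline on the change stream so the assembled document stays under the 16 MB BSON limit. A minimal standalone sketch with the plain MongoDB Java driver, assuming a placeholder connection string and database name, and assuming that dropping fullDocument is acceptable downstream; this illustrates the server-side workaround only, not a Debezium 2.0.0.Alpha1 configuration option:

import java.util.List;

import org.bson.Document;

import com.mongodb.client.ChangeStreamIterable;
import com.mongodb.client.MongoChangeStreamCursor;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.model.changestream.ChangeStreamDocument;

public class TrimmedChangeStream {
    public static void main(String[] args) {
        // Placeholder connection string; change streams need a replica set.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            // Drop the potentially huge post-image so the assembled change
            // event stays under the 16 MB BSON limit. Which fields are safe
            // to drop depends on what downstream consumers need.
            List<Document> pipeline =
                    List.of(new Document("$project", new Document("fullDocument", 0)));

            ChangeStreamIterable<Document> stream =
                    client.getDatabase("mydb").watch(pipeline); // placeholder db name

            try (MongoChangeStreamCursor<ChangeStreamDocument<Document>> cursor = stream.cursor()) {
                while (cursor.hasNext()) {
                    ChangeStreamDocument<Document> event = cursor.next();
                    System.out.println(event.getOperationType() + " " + event.getDocumentKey());
                }
            }
        }
    }
}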
What behaviour do you see?
The connector fails with the error below and cannot proceed past the oversized event.
Do you see the same behaviour using the latest released Debezium version?
We are already on the latest release, 2.0.0.Alpha1.
Do you have the connector logs, ideally from start till finish?
(You might be asked later to provide DEBUG/TRACE level log)
org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.
    at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:50)
    at io.debezium.connector.mongodb.MongoDbStreamingChangeEventSource.streamChangesForReplicaSet(MongoDbStreamingChangeEventSource.java:134)
    at io.debezium.connector.mongodb.MongoDbStreamingChangeEventSource.execute(MongoDbStreamingChangeEventSource.java:103)
    at io.debezium.connector.mongodb.MongoDbStreamingChangeEventSource.execute(MongoDbStreamingChangeEventSource.java:59)
    at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:174)
    at io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:141)
    at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:109)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.ConnectException: Error while attempting to read from oplog on '...:27017,...:27017,...:27017'
    at io.debezium.connector.mongodb.MongoDbStreamingChangeEventSource.lambda$establishConnectionToPrimary$3(MongoDbStreamingChangeEventSource.java:182)
    at io.debezium.connector.mongodb.ConnectionContext$MongoPrimary.execute(ConnectionContext.java:292)
    at io.debezium.connector.mongodb.MongoDbStreamingChangeEventSource.streamChangesForReplicaSet(MongoDbStreamingChangeEventSource.java:122)
    ... 10 more
Caused by: com.mongodb.MongoQueryException: Query failed with error code 10334 and error message 'Executor error during getMore :: caused by :: BSONObj size: 27451666 (0x1A2E112) is invalid. Size must be between 0 and 16793600(16MB) First element: _id: { _data: "8262BD91F1000000A52B022C0100296E5A100473A5DEC1409E46D29394F55BA1C4672146645F696400645FA049D38A5DA5166C43132D0004" }' on server ...:27017
    at com.mongodb.internal.operation.QueryHelper.translateCommandException(QueryHelper.java:29)
    at com.mongodb.internal.operation.QueryBatchCursor.lambda$getMore$1(QueryBatchCursor.java:282)
    at com.mongodb.internal.operation.QueryBatchCursor$ResourceManager.executeWithConnection(QueryBatchCursor.java:512)
    at com.mongodb.internal.operation.QueryBatchCursor.getMore(QueryBatchCursor.java:270)
    at com.mongodb.internal.operation.QueryBatchCursor.tryHasNext(QueryBatchCursor.java:223)
    at com.mongodb.internal.operation.QueryBatchCursor.lambda$tryNext$0(QueryBatchCursor.java:206)
    at com.mongodb.internal.operation.QueryBatchCursor$ResourceManager.execute(QueryBatchCursor.java:397)
    at com.mongodb.internal.operation.QueryBatchCursor.tryNext(QueryBatchCursor.java:205)
    at com.mongodb.internal.operation.ChangeStreamBatchCursor$3.apply(ChangeStreamBatchCursor.java:102)
    at com.mongodb.internal.operation.ChangeStreamBatchCursor$3.apply(ChangeStreamBatchCursor.java:98)
    at com.mongodb.internal.operation.ChangeStreamBatchCursor.resumeableOperation(ChangeStreamBatchCursor.java:195)
    at com.mongodb.internal.operation.ChangeStreamBatchCursor.tryNext(ChangeStreamBatchCursor.java:98)
    at com.mongodb.client.internal.MongoChangeStreamCursorImpl.tryNext(MongoChangeStreamCursorImpl.java:78)
    at io.debezium.connector.mongodb.MongoDbStreamingChangeEventSource.readChangeStream(MongoDbStreamingChangeEventSource.java:340)
    at io.debezium.connector.mongodb.MongoDbStreamingChangeEventSource.lambda$streamChangesForReplicaSet$0(MongoDbStreamingChangeEventSource.java:124)
    at io.debezium.connector.mongodb.ConnectionContext$MongoPrimary.execute(ConnectionContext.java:288)
    ... 11 more
Feature request or enhancement
For feature requests or enhancements, provide this information, please:
Which use case/requirement will be addressed by the proposed feature?
There should be some way to ignore, or at least log, such a record (a poison pill) instead of stopping the whole connector.
Implementation ideas (optional)
Something similar to a dead-letter queue (DLQ): push the details of the bad record or oversized document to another topic and keep looping through the remaining records, as sketched below.
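A minimal sketch of what such handling could look like, assuming a hypothetical side topic named mongo-dlq, placeholder broker address and resume token, and hand-rolled plumbing around the MongoDB Java driver and a Kafka producer; this is not existing Debezium behaviour:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import com.mongodb.MongoQueryException;

public class OversizedEventDlqSketch {

    // Error code 10334 is what the server returned in the log above
    // ("BSONObj size ... is invalid").
    private static final int BSON_OBJ_TOO_LARGE = 10334;

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            try {
                // ... advance the change stream cursor here (tryNext/next) ...
            } catch (MongoQueryException e) {
                if (e.getErrorCode() == BSON_OBJ_TOO_LARGE) {
                    // Divert a small summary of the poison pill to the side
                    // topic instead of failing the whole connector; the key
                    // below is a placeholder for the event's resume token.
                    producer.send(new ProducerRecord<>("mongo-dlq",
                            "<resume-token>", e.getErrorMessage()));
                } else {
                    throw e;
                }
            }
        }
    }
}

Skipping past the bad event would additionally need the resume token of the last successfully processed change, which the connector already tracks in its offsets.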