Details
- Type: Bug
- Resolution: Unresolved
- Priority: Major
Description
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
For bug reports, provide this information, please:
What Debezium connector do you use and what version?
2.5.1.Final running in DebeziumEngine
What is the connector configuration?
Worker configuration (from the startup log, 2024-01-31 07:27:09; per-line timestamps elided):

access.control.allow.methods =
access.control.allow.origin =
admin.listeners = null
auto.include.jmx.reporter = true
bootstrap.servers = [localhost:9092]
client.dns.lookup = use_all_dns_ips
config.providers = []
connector.client.config.override.policy = All
header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
key.converter = class org.apache.kafka.connect.json.JsonConverter
listeners = [http://:8083]
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
offset.flush.interval.ms = 1000
offset.flush.timeout.ms = 5000
offset.storage.file.filename = /tmp/cdc-state-offset6773325094166695702/offset.dat
offset.storage.partitions = null
offset.storage.replication.factor = null
offset.storage.topic =
plugin.path = null
response.http.headers.config =
rest.advertised.host.name = null
rest.advertised.listener = null
rest.advertised.port = null
rest.extension.classes = []
ssl.cipher.suites = null
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
task.shutdown.graceful.timeout.ms = 5000
topic.creation.enable = true
topic.tracking.allow.reset = true
topic.tracking.enable = true
value.converter = class org.apache.kafka.connect.json.JsonConverter

Connector configuration (from the BaseSourceTask startup log, 2024-01-31 07:27:10; masked values left as logged):

connector.class = io.debezium.connector.mongodb.MongoDbConnector
collection.include.list = ******
max.queue.size = 8192
mongodb.connection.mode = replica_set
mongodb.password = ********
mongodb.connection.string = ********
capture.mode = change_streams_update_full_with_pre_image
mongodb.ssl.enabled = true
tombstones.on.delete = false
topic.prefix = ******
offset.storage.file.filename = /tmp/cdc-state-offset6773325094166695702/offset.dat
decimal.handling.mode = string
mongodb.task.id = 0
internal.mongodb.internal.task.connection.strings = mongodb+srv://*****/?retryWrites=false&provider=airbyte&tls=true
mongodb.authsource = admin
errors.retry.delay.initial.ms = 299
value.converter = org.apache.kafka.connect.json.JsonConverter
key.converter = org.apache.kafka.connect.json.JsonConverter
offset.storage = org.apache.kafka.connect.storage.FileOffsetBackingStore
max.queue.size.in.bytes = 268435456
mongodb.user = AIRBYTE_USER
errors.retry.delay.max.ms = 300
offset.flush.timeout.ms = 5000
heartbeat.interval.ms = 10000
offset.flush.interval.ms = 1000
key.converter.schemas.enable = false
errors.max.retries = 0
name = ****
value.converter.schemas.enable = false
max.batch.size = 2048
snapshot.mode = never
database.include.list = *****
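For context, since we run the connector embedded via DebeziumEngine rather than in Kafka Connect, the relevant subset of the above configuration is assembled as plain Java properties. A minimal sketch of that subset (values such as the name, paths, and masked settings are placeholders, not our real values):

```java
import java.util.Properties;

public class EngineConfigSketch {
    // Builds only the properties relevant to this report; masked values from
    // the log above are replaced with placeholders.
    public static Properties buildProps() {
        Properties props = new Properties();
        props.setProperty("name", "mongodb-source"); // masked in the log
        props.setProperty("connector.class", "io.debezium.connector.mongodb.MongoDbConnector");
        props.setProperty("capture.mode", "change_streams_update_full_with_pre_image");
        props.setProperty("snapshot.mode", "never");
        props.setProperty("heartbeat.interval.ms", "10000"); // expect a heartbeat every 10 s
        props.setProperty("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore");
        props.setProperty("offset.storage.file.filename", "/tmp/offsets.dat"); // placeholder path
        props.setProperty("offset.flush.interval.ms", "1000");
        props.setProperty("max.batch.size", "2048");
        props.setProperty("max.queue.size", "8192");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildProps().getProperty("heartbeat.interval.ms"));
    }
}
```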
What is the captured database version and mode of deployment?
(E.g. on-premises, with a specific cloud provider, etc.)
MongoDB Atlas (cloud).
What behaviour do you expect?
With heartbeat.interval.ms = 10000, we expect to receive a heartbeat message roughly every 10 seconds.
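In our handler we identify heartbeats by their destination topic, which Debezium by default names with the `__debezium-heartbeat` prefix followed by the connector's topic.prefix. A minimal sketch of that check (the example topic names are placeholders for our masked prefix):

```java
public class HeartbeatCheck {
    // Default value of Debezium's heartbeat.topics.prefix setting.
    static final String HEARTBEAT_PREFIX = "__debezium-heartbeat";

    // True when a record's destination topic marks it as a Debezium heartbeat.
    public static boolean isHeartbeat(String destinationTopic) {
        return destinationTopic != null && destinationTopic.startsWith(HEARTBEAT_PREFIX);
    }

    public static void main(String[] args) {
        System.out.println(isHeartbeat("__debezium-heartbeat.myprefix")); // true
        System.out.println(isHeartbeat("myprefix.mydb.mycollection"));    // false
    }
}
```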
What behaviour do you see?
Only one, or at most two, heartbeats are emitted, at seemingly random times.
As a result, after a long period of waiting we have no way of telling whether we should wrap up our read.
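The workaround this forces on us is a watchdog: if neither a data event nor a heartbeat arrives within some multiple of heartbeat.interval.ms, assume the stream is idle and wrap up the read. A hypothetical sketch of that logic (the class, its names, and the grace factor are our own, not a Debezium API):

```java
public class ReadWatchdog {
    private final long heartbeatIntervalMs;
    private final long graceFactor;
    private long lastEventAtMs;

    public ReadWatchdog(long heartbeatIntervalMs, long graceFactor, long nowMs) {
        this.heartbeatIntervalMs = heartbeatIntervalMs;
        this.graceFactor = graceFactor;
        this.lastEventAtMs = nowMs;
    }

    // Call for every record received, heartbeat or data event alike.
    public void onEvent(long nowMs) {
        lastEventAtMs = nowMs;
    }

    // True once no event of any kind has arrived for graceFactor * interval.
    public boolean shouldWrapUp(long nowMs) {
        return nowMs - lastEventAtMs > graceFactor * heartbeatIntervalMs;
    }
}
```

With reliable heartbeats every 10 s this watchdog would never fire spuriously; with the behaviour described above it cannot distinguish "idle stream" from "heartbeats silently stopped".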
Do you see the same behaviour using the latest released Debezium version?
(Ideally, also verify with latest Alpha/Beta/CR version)
Yes, tested with the latest release, 2.5.1.Final.
Do you have the connector logs, ideally from start till finish?
(You might be asked later to provide DEBUG/TRACE level log)
<Your answer>
How to reproduce the issue using our tutorial deployment?
<Your answer>
Feature request or enhancement
For feature requests or enhancements, provide this information, please:
Which use case/requirement will be addressed by the proposed feature?
<Your answer>
Implementation ideas (optional)
<Your answer>