Type: Bug
Resolution: Obsolete
Priority: Major
Version: 2.7.1.Final
Severity: Important
Bug report
What Debezium connector do you use and what version?
Embedded Oracle connector, version 2.7.0
What is the connector configuration?
Configuration.create()
    .with("database.hostname", defaultDebeziumProperties.database.hostname)
.with("database.port", defaultDebeziumProperties.database.port)
.with("database.user", defaultDebeziumProperties.database.user)
.with("database.password", defaultDebeziumProperties.database.password)
.with("database.dbname", defaultDebeziumProperties.database.dbname)
.with("table.include.list", defaultDebeziumProperties.tables)
.with("schema.history.internal.store.only.captured.tables.ddl", true)
.with("internal.log.mining.read.only", true)
.with("log.mining.archive.log.only.mode", true)
.with("log.mining.strategy", "online_catalog")
.with("log.mining.query.filter.mode", "in")
.with("log.mining.batch.size.min", 50000)
.with("log.mining.batch.size.max", 2000000)
.with("log.mining.batch.size.default", 500000)
.with("log.mining.sleep.time.min.ms", 1000)
.with("log.mining.sleep.time.max.ms", 3000)
.with("log.mining.sleep.time.default.ms", 1500)
.with("event.processing.failure.handling.mode", "warn")
.with("poll.interval.ms", 5000)
.with("max.queue.size", 49152)
.with("max.batch.size", 16384)
.with("query.fetch.size", 50000)
.with("errors.max.retries", 0)
.with("schema.history.internal.store.only.captured.databases.ddl", true)
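For context, a minimal sketch of how a configuration like the one above is wired into the Debezium embedded engine. The offset-storage settings and the table/column names are assumptions added to make the fragment self-contained, not taken from my actual deployment:

```java
// Hypothetical wiring of the configuration above into the embedded engine.
// Class names follow the Debezium embedded API (io.debezium.engine).
import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EmbeddedOracleConnector {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("name", "mfi.debezium");
        props.setProperty("connector.class", "io.debezium.connector.oracle.OracleConnector");
        // Assumed offset storage for a standalone run; any backing store works.
        props.setProperty("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore");
        props.setProperty("offset.storage.file.filename", "/tmp/offsets.dat");
        // ... the database.* and log.mining.* settings shown above go here ...

        DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine
                .create(Json.class)
                .using(props)
                .notifying(record -> System.out.println(record.value()))
                .build();

        // The engine runs on its own thread and streams changes until closed.
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(engine);
    }
}
```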
What is the captured database version and mode of deployment?
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.18.0.0.0 deployed on OCI
What behavior do you expect?
The mining batch size should adjust within the configured bounds so the connector catches up and stays current with incoming messages.
What behavior do you see?
The connector has fallen behind and is using the current SCN as the end SCN, so earlier messages were lost.
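My reading of the "Max batch size too small" DEBUG message, as a simplified sketch (this is not the actual Debezium implementation; the method name and exact logic are assumptions): the next mining window normally ends at start SCN plus the current batch size, but when even the configured maximum batch size cannot reach the current SCN, the connector uses the current SCN as the end SCN instead.

```java
// Simplified sketch of adaptive LogMiner window sizing, inferred from the
// DEBUG log message; not the actual Debezium code.
public class BatchSizeSketch {
    static final long MIN = 50_000;    // log.mining.batch.size.min
    static final long MAX = 2_000_000; // log.mining.batch.size.max

    /** Returns the end SCN for the next mining window. */
    static long endScn(long startScn, long currentScn, long batchSize) {
        long top = startScn + batchSize;
        if (top >= currentScn) {
            // The window already covers everything up to the current SCN.
            return currentScn;
        }
        if (batchSize >= MAX) {
            // Lag exceeds even the maximum batch size: this is where the
            // "Max batch size too small, using current SCN ... as end SCN"
            // message would be logged.
            return currentScn;
        }
        return top;
    }

    public static void main(String[] args) {
        // Lag far larger than MAX: the window jumps to the current SCN.
        System.out.println(endScn(100, 10_000_000, MAX));
        // Small lag: the window is simply capped at the current SCN.
        System.out.println(endScn(100, 50_000, MIN));
    }
}
```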
Do you see the same behavior using the latest released Debezium version?
I am currently using the latest 2.7 release.
Do you have the connector logs, ideally from start till finish?
{"time":"2024-07-19T09:14:39.294-03:00","message":"Oracle Session UGA 2.52MB (max = 13.53MB), PGA 554.85MB (max = 570.79MB)","logger_name":"io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource","thread_name":"debezium-oracleconnector-mfi.debezium-change-event-source-coordinator","level":"DEBUG","dbz.connectorName":"mfi.debezium","dbz.databaseName":"DATABASE","dbz.connectorType":"Oracle","dbz.taskId":"0","dbz.connectorContext":"streaming"}
{"time":"2024-07-18T13:04:44.304-03:00","message":"Max batch size too small, using current SCN 161955624544 as end SCN.","logger_name":"io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource","thread_name":"debezium-oracleconnector-mfi.debezium-change-event-source-coordinator","level":"DEBUG","dbz.connectorName":"mfi.debezium","dbz.databaseName":"DATABASE","dbz.connectorType":"Oracle","dbz.taskId":"0","dbz.connectorContext":"streaming"}
How to reproduce the issue using our tutorial deployment?
Send 100,000 new rows to the table from time to time; after a while (about 20 minutes) the connector starts to fall behind.
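A sketch of the kind of load generator used for the reproduction. The table name, column names, and payload shape are placeholders, not the actual schema:

```java
import java.util.ArrayList;
import java.util.List;

public class LoadGenerator {
    /** Builds one batch of rows for a placeholder two-column table. */
    static List<Object[]> buildBatch(int rows) {
        List<Object[]> batch = new ArrayList<>(rows);
        for (int i = 0; i < rows; i++) {
            batch.add(new Object[] { i, "payload-" + i });
        }
        return batch;
    }

    public static void main(String[] args) {
        List<Object[]> batch = buildBatch(100_000);
        System.out.println(batch.size());
        // In practice the batch is written via JDBC, e.g.:
        // PreparedStatement ps =
        //     conn.prepareStatement("INSERT INTO TEST_TABLE (ID, DATA) VALUES (?, ?)");
        // for (Object[] row : batch) {
        //     ps.setInt(1, (Integer) row[0]);
        //     ps.setString(2, (String) row[1]);
        //     ps.addBatch();
        // }
        // ps.executeBatch();
    }
}
```

Repeating this batch every few minutes is enough to build up the SCN lag described above.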