Issue Type: Bug
Resolution: Not a Bug
Priority: Major
Affects Version: 3.1.1.Final
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
Oracle patching was performed on our 2-node RAC cluster after we upgraded to Debezium 3.1.1, and we noticed that every time one node (Node 1 or Node 2) was down, Debezium failed because it could not find the SCN. We see the following errors:
io.debezium.DebeziumException: Redo Thread 1 stopped at SCN 21463124507, but logs detected using SCN 21463125157.
at io.debezium.connector.oracle.logminer.LogFileCollector.logException(LogFileCollector.java:567)
at io.debezium.connector.oracle.logminer.LogFileCollector.isClosedThreadConsistent(LogFileCollector.java:358)
*The above error shows up in the logs 6 times.*
Caused by: io.debezium.connector.oracle.logminer.LogFileNotFoundException: None of the log files contain offset SCN: 21463125157, re-snapshot is required.
However, when we later restart the connector, it works fine even without re-snapshotting; it simply continues from where it left off.
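Since a plain restart recovers in our case, the manual restart could in principle be automated with the embedded engine's completion callback. A minimal sketch, assuming the standard DebeziumEngine API; the wrapper class, retry delay, and event handling are our own illustration, not Debezium's:

```java
import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical wrapper: restarts the embedded engine whenever it
// completes with an error, instead of staying down until a manual restart.
public class RestartingEngine {

    private final Properties props;  // the connector configuration shown above
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    RestartingEngine(Properties props) {
        this.props = props;
    }

    void start() {
        DebeziumEngine<ChangeEvent<String, String>> engine =
            DebeziumEngine.create(Json.class)
                .using(props)
                .notifying(record -> {
                    // forward the change event downstream (illustrative)
                })
                .using((success, message, error) -> {
                    // CompletionCallback: on abnormal completion, wait and restart
                    if (!success) {
                        try {
                            TimeUnit.SECONDS.sleep(30);  // crude backoff, illustrative
                        }
                        catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                        executor.submit(this::start);
                    }
                })
                .build();
        executor.submit(engine);
    }
}
```

This only papers over the failure; the retry/backoff settings discussed below would be the cleaner fix if they apply.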
What Debezium connector do you use and what version?
implementation 'io.debezium:debezium-api:3.1.1.Final'
implementation 'io.debezium:debezium-embedded:3.1.1.Final'
implementation 'org.apache.kafka:connect-runtime:3.8.0'
implementation 'io.debezium:debezium-connector-oracle:3.1.1.Final'
implementation "com.oracle.database.jdbc:ojdbc11:23.5.0.24.07"
implementation 'io.debezium:debezium-storage-kafka:3.1.1.Final'
What is the connector configuration?
decimalHandlingMode: string
name: <<NAME>>
class: io.debezium.connector.oracle.OracleConnector
offsetTopic: <<OFFSETTOPICNAME>>
offsetTopicNumberOfPartitions: 1
offsetTopicReplicationFactor: 1
heartbeatIntervalInMsecs: 100000
heartbeatQuery: SELECT * FROM MYTABLE WHERE ROWNUM <= 1
database.query.timeout.ms: 600000

.with("name", wmsDbzConnectorName)
.with("connector.class", wmsDbzConnectorClass)
.with("offset.storage", "org.apache.kafka.connect.storage.KafkaOffsetBackingStore")
.with("offset.storage.topic", debeziumConnectorOffsetTopic)
.with(DistributedConfig.OFFSET_STORAGE_PARTITIONS_CONFIG, debeziumConnectorOffsetTopicPartitions)
.with(DistributedConfig.OFFSET_STORAGE_REPLICATION_FACTOR_CONFIG, debeziumConnectorOffsetTopicReplicationFactor)
.with("offset.flush.interval.ms", "60000")
.with("database.hostname", wmsDbHost)
.with("database.port", port)
.with("database.user", username)
.with("database.password", password)
.with("database.dbname", dbName)
.with("schema.include.list", schemaList)
.with("table.include.list", tableList)
.with("include.schema.changes", "false")
.with("topic.prefix", topicPrefix)
.with("database.server.name", dbserverName)
.with("snapshot.mode", snapshotMode) // It is set as initial
.with("converter.schemas.enable", "false")
.with("decimal.handling.mode", decimalHandlingMode)
.with("heartbeat.interval.ms", heartbeatInterval)
.with("heartbeat.action.query", heartbeatActionQuery)
.with("database.query.timeout", 600000)
.with("schema.history.internal.kafka.topic", schemaTopic)
.with("schema.history.internal.kafka.bootstrap.servers", schemaBootstrapServers)
.with("schema.history.internal.consumer.security.protocol", schemaSecurityProtocol)
.with("schema.history.internal.consumer.ssl.keystore.type", schemaSslKeyStoreType)
.with("schema.history.internal.consumer.ssl.keystore.location", schemaSslKeystoreLocation)
.with("schema.history.internal.consumer.ssl.keystore.password", schemaSslKeystorePassword)
.with("schema.history.internal.consumer.ssl.truststore.type", schemaSslTrustStoreType)
.with("schema.history.internal.consumer.ssl.truststore.location", schemaSslTruststoreLocation)
.with("schema.history.internal.consumer.ssl.truststore.password", schemaSslTruststorePassword)
.with("schema.history.internal.consumer.ssl.endpoint.identification.algorithm", sslEndpointAlgorithm)
.with("schema.history.internal.producer.security.protocol", schemaSecurityProtocol)
.with("schema.history.internal.producer.ssl.keystore.type", schemaSslKeyStoreType)
.with("schema.history.internal.producer.ssl.keystore.location", schemaSslKeystoreLocation)
.with("schema.history.internal.producer.ssl.keystore.password", schemaSslKeystorePassword)
.with("schema.history.internal.producer.ssl.truststore.type", schemaSslTrustStoreType)
.with("schema.history.internal.producer.ssl.truststore.location", schemaSslTruststoreLocation)
.with("schema.history.internal.producer.ssl.truststore.password", schemaSslTruststorePassword)
.with("schema.history.internal.producer.ssl.endpoint.identification.algorithm", sslEndpointAlgorithm)
.with("bootstrap.servers", schemaBootstrapServers)
.with("security.protocol", schemaSecurityProtocol)
.with("ssl.keystore.location", schemaSslKeystoreLocation)
.with("ssl.keystore.password", schemaSslKeystorePassword)
.with("ssl.truststore.location", schemaSslTruststoreLocation)
.with("ssl.truststore.password", schemaSslTruststorePassword)
.with("ssl.endpoint.identification.algorithm", sslEndpointAlgorithm)
What is the captured database version and mode of deployment?
(E.g. on-premises, with a specific cloud provider, etc.)
The database is Oracle 19c, running in a data center.
The dev database server runs on a 2-node RAC. The archived log files (which LogMiner reads) are available to both RAC nodes via a shared path.
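Given the error "None of the log files contain offset SCN", one diagnostic is to check from each node whether archived logs covering that SCN are actually visible for both redo threads. A minimal JDBC sketch against the standard V$ARCHIVED_LOG view; the connection placeholders are ours, and the SCN is the one from the error above:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Diagnostic sketch: list archived logs whose SCN range covers the SCN
// that Debezium reported as missing, grouped by redo thread. If one
// thread returns no rows while a node is down, the shared archive path
// is not exposing that thread's logs to this node.
public class ArchivedLogCheck {
    public static void main(String[] args) throws Exception {
        long scn = 21463125157L;  // SCN from the error message
        String sql = "SELECT THREAD#, SEQUENCE#, NAME FROM V$ARCHIVED_LOG "
                   + "WHERE FIRST_CHANGE# <= ? AND NEXT_CHANGE# > ? "
                   + "ORDER BY THREAD#, SEQUENCE#";
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//<<HOST>>:1521/<<SERVICE>>",  // placeholder
                 "<<USER>>", "<<PASSWORD>>");                      // placeholder
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, scn);
            ps.setLong(2, scn);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("thread=%d seq=%d file=%s%n",
                        rs.getLong(1), rs.getLong(2), rs.getString(3));
                }
            }
        }
    }
}
```

Running this against each node while one node is down would show whether the failure is a visibility problem or a timing problem.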
What behavior do you expect?
We were expecting the Debezium connector to continue running on the remaining node even when one node is down.
What behavior do you see?
As described in the bug report above: it fails after some time and gives up, complaining that the log file is not found. (Maybe because we did not set the following values and it falls back to the defaults, ccranfor@redhat.com?)
.with(OracleConnectorConfig.LOG_MINING_LOG_QUERY_MAX_RETRIES.name(), 25)
.with(OracleConnectorConfig.LOG_MINING_LOG_BACKOFF_INITIAL_DELAY_MS.name(), 1000)
.with(OracleConnectorConfig.LOG_MINING_LOG_BACKOFF_MAX_DELAY_MS.name(), 60000)
Do you see the same behaviour using the latest released Debezium version?
So far we have tried only 3.1.1. It worked in lower environments (where the data load is small), but it failed in the high-traffic environment.
Do you have the connector logs, ideally from start till finish?
Yes
How to reproduce the issue using our tutorial deployment?
Unknown
Feature request or enhancement
For feature requests or enhancements, provide this information, please:
Which use case/requirement will be addressed by the proposed feature?
Continuous flow of Debezium events without interruption.
Implementation ideas (optional)
Perhaps this will help?
.with(OracleConnectorConfig.LOG_MINING_LOG_QUERY_MAX_RETRIES.name(), 25)
.with(OracleConnectorConfig.LOG_MINING_LOG_BACKOFF_INITIAL_DELAY_MS.name(), 1000)
.with(OracleConnectorConfig.LOG_MINING_LOG_BACKOFF_MAX_DELAY_MS.name(), 60000)
But ccranfor@redhat.com, per the blog post at https://debezium.io/blog/2025/07/16/oracle-does-not-contain-scn/, the property name carries an internal prefix, whereas the constant we use in version 3.1.1 (shown above) has no internal prefix. Can you clarify this part?
Will setting those three properties in the higher environment get it to work?
clones: DBZ-9021 Debezium Oracle Connector stopped with ORA-00604: error and never recovered (Closed)