Type: Enhancement
Resolution: Unresolved
Priority: Major
Relates to this discussion: zulip
Report
Our DBA recently ran the query below to create a historical extract. They insert all the relevant rows into a table and then create a flat file from the resulting table:
CREATE TABLE mms.branch_sku_day_27102025 TABLESPACE INDEX_MMS NOLOGGING AS
SELECT * FROM mms.branch_sku_day
WHERE last_update_date >= TO_DATE('2025-08-18', 'YYYY-MM-DD');
This is a very large table, with around 28 million rows per partition (partitioned by day). Once they realised how much data was being processed, they killed the query. We're uncertain how much data this generated, but it only ran for about 6 minutes.
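For a rough sense of scale, assuming the 27102025 suffix on the extract table reflects a run date of 2025-10-27: the predicate covers roughly 70 daily partitions, so the CTAS would scan on the order of 70 × 28 million, or about 2 billion rows.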
On the Debezium side, the connector ended up crashing due to ORA-00310, which is a retriable error (this happens to us sometimes when there are many log switches in quick succession). After the restart, queries started taking very long to run.
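For anyone reproducing this, the log switch rate and redo volume during the burst can be checked with a standard query against v$archived_log (not connector-specific; adjust the time window as needed):
SELECT TRUNC(first_time, 'HH24') AS hour,
       COUNT(*) AS log_switches,
       ROUND(SUM(blocks * block_size) / 1024 / 1024 / 1024, 1) AS redo_gb
FROM v$archived_log
WHERE first_time >= SYSDATE - 1
GROUP BY TRUNC(first_time, 'HH24')
ORDER BY 1;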
What Debezium connector do you use and what version?
Debezium Oracle Connector, version 3.2.0.Final
What is the connector configuration?
These are our batch settings:
debezium.source.log.mining.batch.size.default=100000
debezium.source.log.mining.batch.size.min=10000
debezium.source.log.mining.batch.size.max=750000
debezium.source.log.mining.sleep.time.default.ms=200
debezium.source.log.mining.sleep.time.min.ms=0
debezium.source.log.mining.sleep.time.max.ms=1000
debezium.source.log.mining.sleep.time.increment.ms=100
debezium.source.log.mining.strategy=online_catalog
debezium.source.log.mining.query.filter.mode=in
debezium.source.log.mining.transaction.retention.ms=43200000
debezium.source.query.fetch.size=100000
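For context, a hedged example of values that widen the mining window ceiling and slow the backoff during bursts (illustrative only; same properties as above, not values we have tested):
debezium.source.log.mining.batch.size.max=2000000
debezium.source.log.mining.sleep.time.max.ms=3000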
What is the captured database version and mode of deployment?
Oracle 19c with Debezium Server. The redo logs are 9 GB each.
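For reference, the online redo log sizes can be confirmed with a standard query against v$log:
SELECT group#, thread#, ROUND(bytes / 1024 / 1024 / 1024, 1) AS size_gb, status
FROM v$log
ORDER BY group#;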
The hope is that the log mining range can be computed in a different way that better handles this kind of redo burst. Once we get past this transaction in the logs, Debezium's behaviour seems more normal, even though we remain many logs behind.
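For illustration, a minimal sketch of the range computation involved, assuming the connector advances a window from the last processed SCN and caps it by the batch size (this mirrors the min/default/max batch settings above, not the connector's actual code; :last_processed_scn and :batch_size are hypothetical bind variables):
SELECT LEAST(d.current_scn, :last_processed_scn + :batch_size) AS mining_end_scn
FROM v$database d;
With a redo burst like the CTAS above, a fixed cap of this kind means many mining passes are needed to cross the same SCN range, which may be related to the slowdown observed after the restart.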