Details
Type: Bug
Resolution: Unresolved
Priority: Major
Description
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
During an incremental snapshot, if incremental.snapshot.chunk.size is smaller than the number of rows in the tracked table and the primary key (single or composite) contains a VARCHAR2 column, Debezium cannot calculate the start position of the next chunk. In this case Debezium writes only the data returned by the first chunk to the Kafka topic and then stops the incremental snapshot as if all data had been snapshotted. As a result, such tables can only be migrated during the initial snapshot, or when incremental.snapshot.chunk.size is greater than the number of rows in the table.
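To illustrate where this breaks: an incremental snapshot reads the table in chunks using keyset pagination, where the maximum primary-key value of one chunk becomes the lower bound of the next. The query below is a simplified, hypothetical sketch of that pattern (it is not Debezium's actual generated SQL; table and bind names are examples). With a VARCHAR2 key, the bound for the second chunk is apparently never resolved, so the second query returns no rows:

```sql
-- Simplified keyset-pagination pattern used per chunk (illustrative only):
SELECT *
  FROM TEST_SCHEMA.MY_TABLE
 WHERE ID > :last_chunk_max_id     -- max PK value seen in the previous chunk
 ORDER BY ID
 FETCH FIRST 100 ROWS ONLY;       -- incremental.snapshot.chunk.size
```

If :last_chunk_max_id cannot be derived correctly from a VARCHAR2 key, this query returns an empty result set, which matches the "No data returned by the query" log line below.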
What Debezium connector do you use and what version?
Oracle Debezium connector, version 1.9.2.
I have tested the same scenario with version 2.0.1.Final and got the same error.
What is the connector configuration?
{
  "name": "migrationOracleScore",
  "database.hostname": "<DB_HOST>",
  "database.port": "<DB_PORT>",
  "tasks.max": "1",
  "connector.class": "io.debezium.connector.oracle.OracleConnector",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "key.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "true",
  "key.converter.schemas.enable": "true",
  "database.user": "<USERNAME>",
  "database.password": "<PASSWORD>",
  "database.db.name": "<DB_NAME>",
  "database.server.name": "my-migration",
  "database.url": "<DB_URL>",
  "schema.include.list": "MIGRATION,TEST_SCHEMA",
  "table.exclude.list": "MIGRATION\\.(?!DEBEZIUM_SIGNAL).+",
  "poll.interval.ms": "60000",
  "incremental.snapshot.chunk.size": "100",
  "database.history.kafka.bootstrap.servers": "<BOOTSTRAP_SERVERS>",
  "database.history.kafka.topic": "my-migration-history-topic",
  "database.history.skip.unparseable.ddl": "true",
  "signal.data.collection": "<DB_NAME>.MIGRATION.DEBEZIUM_SIGNAL",
  "time.precision.mode": "connect",
  "converters": "boolean",
  "boolean.type": "io.debezium.connector.oracle.converters.NumberOneToBooleanConverter",
  "boolean.selector": ".*\\.(IS_.*|HAS_.*|SHOULD_.*|IN_.*|OUT.*|INCLUDE_.*)",
  "lob.enabled": "true",
  "snapshot.mode": "schema_only",
  "decimal.handling.mode": "precise",
  "transforms": "filterSignalTable,unwrap",
  "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
  "transforms.unwrap.drop.tombstones": "false",
  "transforms.unwrap.add.headers": "scn,commit_scn",
  "transforms.unwrap.delete.handling.mode": "drop",
  "transforms.filterSignalTable.type": "org.apache.kafka.connect.transforms.Filter",
  "transforms.filterSignalTable.predicate": "signalTopicPredicate",
  "predicates": "signalTopicPredicate",
  "predicates.signalTopicPredicate.type": "org.apache.kafka.connect.transforms.predicates.TopicNameMatches",
  "predicates.signalTopicPredicate.pattern": "my-migration\\.MIGRATION\\.DEBEZIUM_SIGNAL"
}
What is the captured database version and mode of deployment?
(E.g. on-premises, with a specific cloud provider, etc.)
Deployed on a single Kafka instance.
What behavior do you expect?
The incremental snapshot should continue from the row where the previous chunk finished.
What behavior do you see?
Only the rows returned by the first chunk are sent to Kafka. After that, Debezium stops the incremental snapshot.
Do you see the same behaviour using the latest released Debezium version?
(Ideally, also verify with latest Alpha/Beta/CR version)
Yes, I reproduced the same error with version 2.0.1.Final.
Do you have the connector logs, ideally from start till finish?
INFO Requested 'INCREMENTAL' snapshot of data collections '[<TEST_TABLE_NAME>]' (io.debezium.pipeline.signal.ExecuteSnapshot:54)
[2022-12-15 12:52:00,024] INFO Incremental snapshot for table '<TEST_TABLE_NAME>' will end at position [999976FFE93B4AF4B9151E366A263FFD] (io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource:309)
[2022-12-15 12:52:04,599] INFO 1230 records sent during previous 00:02:15.341, last recorded offset: {commit_scn=19495916240006, transaction_id=null, incremental_snapshot_maximum_key=aced0005757200135b4c6a6176612e6c616e672e4f626a6563743b90ce589f1073296c0200007870000000017400203939393937364646453933423441463442393135314533363641323633464644, snapshot_scn=19495916214009, incremental_snapshot_collections=<TEST_TABLE_NAME>, incremental_snapshot_primary_key=aced0005757200135b4c6a6176612e6c616e672e4f626a6563743b90ce589f1073296c0200007870000000017400204142343830334431463838303442413738443443373030314237363437314543, scn=19495916240005} (io.debezium.connector.common.BaseSourceTask:182)
[2022-12-15 12:52:04,920] WARN [Producer clientId=connector-producer-astreya-migration-oracle-source-0] Error while fetching metadata with correlation id 24 : {my-migration.<TEST_TABLE_NAME>=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient:1100)
[2022-12-15 12:52:06,914] INFO No data returned by the query, incremental snapshotting of table '<TEST_TABLE_NAME>' finished (io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource:315)
[2022-12-15 12:52:06,931] INFO Connection gracefully closed (io.debezium.jdbc.JdbcConnection:956)
[2022-12-15 12:52:10,324] INFO Skipping read chunk because snapshot is not running (io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource:272)
How to reproduce the issue using our tutorial deployment?
1) Create a table in Oracle with a VARCHAR2 primary key.
2) Set incremental.snapshot.chunk.size to a value smaller than the number of rows in the table.
3) Trigger an incremental snapshot by inserting a row into the debezium_signal table.
4) Check that only incremental.snapshot.chunk.size messages appear in the relevant topic.
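The steps above can be sketched as the following Oracle SQL, assuming a hypothetical test table VARCHAR_PK_TEST and the MIGRATION.DEBEZIUM_SIGNAL signal table from the configuration above (the chunk size of 100 also comes from the configuration; the signal payload follows Debezium's documented execute-snapshot format):

```sql
-- 1) Table whose primary key is a VARCHAR2 column (names are examples)
CREATE TABLE TEST_SCHEMA.VARCHAR_PK_TEST (
  ID  VARCHAR2(32) PRIMARY KEY,
  VAL NUMBER
);

-- 2) Populate with more rows than incremental.snapshot.chunk.size (100)
INSERT INTO TEST_SCHEMA.VARCHAR_PK_TEST (ID, VAL)
  SELECT RAWTOHEX(SYS_GUID()), LEVEL FROM DUAL CONNECT BY LEVEL <= 1000;
COMMIT;

-- 3) Trigger the incremental snapshot via the signal table
INSERT INTO MIGRATION.DEBEZIUM_SIGNAL (ID, TYPE, DATA)
  VALUES ('signal-1', 'execute-snapshot',
          '{"data-collections": ["TEST_SCHEMA.VARCHAR_PK_TEST"]}');
COMMIT;
```

With this setup, only the first 100 rows show up in the topic before the snapshot is reported as finished.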