- Bug
- Resolution: Done
- Major
- 2.5.1.Final
- None
- False
- None
- False
Bug report
For bug reports, provide this information, please:
What Debezium connector do you use and what version?
debezium-connector-oracle version 2.5.1.Final
What is the connector configuration?
{ "name": "source-test-connector", "config": { "connector.class": "io.debezium.connector.oracle.OracleConnector", "tasks.max": "1", "database.hostname": "oracle", "database.port": "1521", "database.user": "c##dbzuser", "database.password": "dbz", "database.dbname": "orclcdb", "database.pdb.name": "orclpdb1", "database.connection.adapter": "logminer", "topic.prefix": "dbz", "lob.enabled": "true", "schema.name.adjustment.mode": "avro", "table.include.list": "C##DBZUSER.TEST_TABLE", "column.include.list": "C##DBZUSER.TEST_TABLE.ID,C##DBZUSER.TEST_TABLE.TEXT", "include.schema.changes": "false", "schema.history.internal.kafka.bootstrap.servers" : "kafka:9092", "schema.history.internal.kafka.topic": "schema-changes.test", "heartbeat.interval.ms": "60000", "log.mining.strategy": "online_catalog", "log.mining.query.filter.mode": "in", "custom.metric.tags": "connector=source-test-connector", "key.converter": "org.apache.kafka.connect.json.JsonConverter", "key.converter.schemas.enable": "true", "value.converter": "org.apache.kafka.connect.json.JsonConverter", "value.converter.schemas.enable": "true" } }
What is the captured database version and mode of deployment?
Oracle Database 19, Docker
What behaviour do you expect?
The Oracle connector ignores reselection for excluded CLOB/BLOB columns.
What behaviour do you see?
The Oracle connector does not ignore reselection for CLOB/BLOB columns that have been excluded via the column.include.list/column.exclude.list connector properties.
Do you see the same behaviour using the latest released Debezium version?
Yes
Do you have the connector logs, ideally from start till finish?
Caused by: org.apache.kafka.connect.errors.DataException: DATA is not a valid field name
    at org.apache.kafka.connect.data.Struct.lookupField(Struct.java:254)
    at org.apache.kafka.connect.data.Struct.get(Struct.java:74)
    at io.debezium.connector.oracle.BaseChangeRecordEmitter.getReselectColumns(BaseChangeRecordEmitter.java:122)
    at io.debezium.connector.oracle.BaseChangeRecordEmitter.emitUpdateAsPrimaryKeyChangeRecord(BaseChangeRecordEmitter.java:77)
    at io.debezium.relational.RelationalChangeRecordEmitter.emitUpdateRecord(RelationalChangeRecordEmitter.java:128)
    at io.debezium.relational.RelationalChangeRecordEmitter.emitChangeRecords(RelationalChangeRecordEmitter.java:53)
    at io.debezium.pipeline.EventDispatcher.dispatchDataChangeEvent(EventDispatcher.java:271)
    ... 19 more
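For context on the stack trace: because DATA is not listed in column.include.list, the change event's value schema contains only the ID and TEXT fields, so looking up DATA on the Kafka Connect Struct fails. A minimal, self-contained sketch that reproduces just that lookup failure (the schema name and field types are illustrative assumptions, not the connector's exact output):

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.errors.DataException;

public class ExcludedColumnLookupDemo {

    public static void main(String[] args) {
        // Value schema as it looks when DATA is not captured: only ID and TEXT
        // have fields, the excluded CLOB column has no field at all.
        Schema valueSchema = SchemaBuilder.struct()
                .name("dbz.C__DBZUSER.TEST_TABLE.Value")
                .field("ID", Schema.INT32_SCHEMA)
                .field("TEXT", Schema.OPTIONAL_STRING_SCHEMA)
                .build();

        Struct value = new Struct(valueSchema)
                .put("ID", 1)
                .put("TEXT", "text");

        try {
            // Looking up a field that is not part of the schema throws the
            // DataException seen in the connector log.
            value.get("DATA");
        }
        catch (DataException e) {
            // Prints: DATA is not a valid field name
            System.out.println(e.getMessage());
        }
    }
}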
How to reproduce the issue using our tutorial deployment?
1. Create a new table:
CREATE TABLE c##dbzuser.test_table (
    id NUMBER(10) NOT NULL PRIMARY KEY,
    text VARCHAR2(400),
    data CLOB
);
2. Insert a new record into the table:
INSERT INTO c##dbzuser.test_table (id, text, data) VALUES (1, 'text', TO_CLOB('data'));
COMMIT;
3. Create a new connector with excluded clob column:
curl -X POST -H "Accept:application/json" -H "Content-Type:application/json" http://localhost:8083/connectors -d '
{
  "name": "source-test-connector",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "tasks.max": "1",
    "database.hostname": "oracle",
    "database.port": "1521",
    "database.user": "c##dbzuser",
    "database.password": "dbz",
    "database.dbname": "orclcdb",
    "database.pdb.name": "orclpdb1",
    "database.connection.adapter": "logminer",
    "topic.prefix": "dbz",
    "lob.enabled": "true",
    "schema.name.adjustment.mode": "avro",
    "table.include.list": "C##DBZUSER.TEST_TABLE",
    "column.include.list": "C##DBZUSER.TEST_TABLE.ID,C##DBZUSER.TEST_TABLE.TEXT",
    "include.schema.changes": "false",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
    "schema.history.internal.kafka.topic": "schema-changes.test",
    "heartbeat.interval.ms": "60000",
    "log.mining.strategy": "online_catalog",
    "log.mining.query.filter.mode": "in",
    "custom.metric.tags": "connector=source-test-connector",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "true",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "true"
  }
}'
4. Update an existing record in the table:
UPDATE c##dbzuser.test_table SET id = 2 WHERE id = 1;
COMMIT;
5. Check the status of the created connector, e.g. via the Kafka Connect REST API endpoint GET /connectors/source-test-connector/status (a small scripted version of this check is sketched after the results below):
Expected result: the connector is running.
Actual result: the connector has failed with the error: org.apache.kafka.connect.errors.DataException: DATA is not a valid field name
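The status check from step 5 can also be scripted; a minimal sketch using only the JDK HTTP client, assuming the Kafka Connect REST API is reachable at http://localhost:8083 and the connector name used above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConnectorStatusCheck {

    public static void main(String[] args) throws Exception {
        // Kafka Connect REST API: GET /connectors/{name}/status returns the connector
        // and task states as JSON (e.g. RUNNING, or FAILED with a trace).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors/source-test-connector/status"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body());
    }
}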
Implementation ideas (optional)
Skip reselection for CLOB/BLOB columns that have been excluded via column.include.list/column.exclude.list, i.e. only reselect LOB columns that are actually present in the emitted record schema (see the sketch below).
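A minimal illustration of that idea, deliberately not tied to the actual BaseChangeRecordEmitter code; the class and helper names are hypothetical, and the LOB type check is simplified:

import java.util.ArrayList;
import java.util.List;

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.Struct;

import io.debezium.relational.Column;
import io.debezium.relational.Table;

// Hypothetical helper, not the Debezium implementation.
public class ReselectColumnFilter {

    /**
     * Returns the names of LOB columns that are candidates for reselection:
     * only columns that still have a field in the emitted value schema are
     * considered, so columns dropped by column.include.list / column.exclude.list
     * are skipped instead of triggering a failed field lookup.
     */
    public static List<String> reselectableLobColumns(Table table, Struct newValue) {
        List<String> result = new ArrayList<>();
        Schema valueSchema = newValue.schema();
        for (Column column : table.columns()) {
            boolean isLob = isLobColumn(column);
            boolean isCaptured = valueSchema.field(column.name()) != null; // excluded columns have no field
            if (isLob && isCaptured) {
                result.add(column.name());
            }
        }
        return result;
    }

    private static boolean isLobColumn(Column column) {
        // Simplified type-name check for illustration; the connector has its own type handling.
        String typeName = column.typeName();
        return "CLOB".equalsIgnoreCase(typeName)
                || "NCLOB".equalsIgnoreCase(typeName)
                || "BLOB".equalsIgnoreCase(typeName);
    }
}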
links to:
- RHEA-2024:129636 Red Hat build of Debezium 2.5.4 release