Type: Bug
Resolution: Done
Priority: Major
Affects Versions: 2.5.4.Final, 2.6.2.Final
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
For bug reports, provide this information, please:
What Debezium connector do you use and what version?
Debezium Connector for Oracle - version 2.5.4.Final
Running on Strimzi / OpenShift
What is the connector configuration?
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-oracle-debezium-connector
  labels:
    strimzi.io/cluster: connect-cluster
spec:
  class: io.debezium.connector.oracle.OracleConnector
  tasksMax: 1
  autoRestart:
    enabled: false
    maxRestarts: 5
  config:
    key.converter: io.confluent.connect.avro.AvroConverter
    key.converter.schema.registry.url: https://schema-registry-url/
    value.converter: io.confluent.connect.avro.AvroConverter
    value.converter.schema.registry.url: https://schema-registry-url/
    database.user: ${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbUsername}
    database.password: ${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbPassword}
    database.url: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=DB_HOSTNAME)(PORT=1521)))(CONNECT_DATA=(SID=ASYRHE01)))
    database.dbname: ASYRHE01
    topic.prefix: ch-syrius-uk-streams-api.cdc
    schema.include.list: HEVPROD
    table.include.list: HEVPROD.TABLE1,HEVPROD.TABLE2,HEVPROD.TABLE3
    snapshot.max.threads: 10
    time.precision.mode: connect
    decimal.handling.mode: string
    schema.name.adjustment.mode: avro
    field.name.adjustment.mode: avro
    log.mining.batch.size.default: 100000
    log.mining.batch.size.min: 10000
    log.mining.batch.size.max: 200000
    heartbeat.interval.ms: 300000
    notification.enabled.channels: log,sink
    notification.sink.topic.name: __debezium-notification.ch-syrius-uk-streams-api.cdc
    schema.history.internal.kafka.bootstrap.servers: confluent-kafka-bootstrap-server
    schema.history.internal.kafka.topic: ch-syrius-uk-streams-api.cdc.database-schema-history
    schema.history.internal.store.only.captured.databases.ddl: true
    schema.history.internal.store.only.captured.tables.ddl: true
    schema.history.internal.producer.security.protocol: SASL_SSL
    schema.history.internal.producer.sasl.jaas.config: ${env:SASL_JAAS_CONFIG}
    schema.history.internal.producer.sasl.mechanism: PLAIN
    schema.history.internal.consumer.security.protocol: SASL_SSL
    schema.history.internal.consumer.sasl.jaas.config: ${env:SASL_JAAS_CONFIG}
    schema.history.internal.consumer.sasl.mechanism: PLAIN
    signal.enabled.channels: source,kafka
    signal.data.collection: '"ASYRHE01"."DBZ"."__debezium_signal"'
    signal.kafka.bootstrap.servers: confluent-kafka-bootstrap-server
    signal.kafka.consumer.offset.commit.enabled: true
    signal.kafka.groupId: connect-ch-syrius.cdc-signal
    signal.kafka.topic: __debezium-signaling.ch-syrius-uk-streams-api.cdc
    signal.consumer.security.protocol: SASL_SSL
    signal.consumer.sasl.jaas.config: ${env:SASL_JAAS_CONFIG}
    signal.consumer.sasl.mechanism: PLAIN
    signal.consumer.key.converter: org.apache.kafka.connect.storage.StringConverter
    signal.consumer.value.converter: org.apache.kafka.connect.json.JsonConverter
    incremental.snapshot.watermarking.strategy: insert_delete
    transforms: Filter
    transforms.Filter.type: org.apache.kafka.connect.transforms.Filter
    transforms.Filter.predicate: IsSignal
    predicates: IsSignal
    predicates.IsSignal.type: org.apache.kafka.connect.transforms.predicates.TopicNameMatches
    predicates.IsSignal.pattern: .*\.__debezium_signal
What is the captured database version and mode of deployment?
(E.g. on-premises, with a specific cloud provider, etc.)
DB running on Oracle Database 19c Enterprise Edition
Deployment: SaaS solution hosted on Swisscom infrastructure
What behaviour do you expect?
When creating an execute-snapshot signaling message using either the source or kafka channels, the ad-hoc incremental snapshot should be executed.
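For reference, the source-channel signal we expect to work would be inserted roughly like this (the id/type/data column layout follows the standard Debezium signal table; the schema, table, and data-collection names are taken from our configuration, and the exact values are illustrative):

```sql
-- Hypothetical execute-snapshot signal insert for the source channel.
-- Note the quoted identifiers: "__debezium_signal" starts with underscores
-- and is therefore not a valid unquoted Oracle identifier.
INSERT INTO "DBZ"."__debezium_signal" (id, type, data)
VALUES ('ad-hoc-1',
        'execute-snapshot',
        '{"data-collections": ["HEVPROD.TABLE1"], "type": "incremental"}');
```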
What behaviour do you see?
Log signal messages created from both the source and kafka channels work. However, ad-hoc snapshot signals do not. This is probably because of the table name we have in our setup (DBZ.__debezium_signal). The INSERT SQL query that Debezium builds is not valid from an Oracle syntax point of view: the quotes around the schema and table names are removed in the Debezium code, leading to an invalid SQL query (INSERT INTO DBZ.__debezium_signal instead of INSERT INTO "DBZ"."__debezium_signal").
I found a workaround by setting the signal.data.collection property this way: '"""ASYRHE01"""."""DBZ"""."""__debezium_signal"""'
With this value, one can trigger an ad-hoc snapshot from the kafka channel, but the source channel does not work (probably because the SELECT statement cannot be built from this form of the property).
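For context, the equivalent execute-snapshot signal sent through the kafka channel is a record on the signaling topic whose key matches the topic.prefix value (ch-syrius-uk-streams-api.cdc in our setup). The payload below is a sketch based on the Debezium signaling documentation, with the data-collection name taken from our configuration:

```json
{
  "type": "execute-snapshot",
  "data": {
    "data-collections": ["HEVPROD.TABLE1"],
    "type": "incremental"
  }
}
```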
Suggestion to fix this (in the getSignalTableName method): add quotes around the schema and table names:
return "\"" + tableId.schema() + "\".\"" + tableId.table() + "\"" ;
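As a minimal, self-contained sketch of the suggested quoting (the class and method names here are illustrative stand-ins, not Debezium's actual code; only the quoting logic mirrors the one-line suggestion above):

```java
// Illustrative sketch of the proposed fix: quote schema and table so
// Oracle treats names such as __debezium_signal (leading underscores)
// as valid identifiers.
public class SignalTableQuoting {

    static String getSignalTableName(String schema, String table) {
        return "\"" + schema + "\".\"" + table + "\"";
    }

    public static void main(String[] args) {
        // prints "DBZ"."__debezium_signal"
        System.out.println(getSignalTableName("DBZ", "__debezium_signal"));
    }
}
```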
Do you see the same behaviour using the latest released Debezium version?
(Ideally, also verify with latest Alpha/Beta/CR version)
Yes
Do you have the connector logs, ideally from start till finish?
(You might be asked later to provide DEBUG/TRACE level log)
Yes, I can provide them if needed
How to reproduce the issue using our tutorial deployment?
<Your answer>
Links to:
RHEA-2024:139598 Red Hat build of Debezium 2.5.4 release