- Bug
- Resolution: Duplicate
- Major
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
For bug reports, provide this information, please:
What Debezium connector do you use and what version?
2.4.0.Alpha2: Oracle-source and jdbc-sink
What is the connector configuration?
source:
{
  "name": "vk_nau28_src",
  "connector.class": "io.debezium.connector.oracle.OracleConnector",
  "tasks.max": "1",
  "database.hostname": "***",
  "database.port": "1521",
  "database.user": "debezium",
  "database.password": "***",
  "database.dbname": "NAUMENT1",
  "database.connection.adapter": "logminer",
  "schema.history.internal.kafka.topic": "vk_nau28_src.schema-changes",
  "schema.history.internal.kafka.bootstrap.servers": "broker1:29092,broker3:29092,broker3:29092",
  "schema.history.internal.store.only.captured.tables.ddl": "true",
  "schema.history.internal.store.only.captured.databases.ddl": "true",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter.schema.registry.url": "http://naument-sr:8081",
  "key.converter": "io.confluent.connect.avro.AvroConverter",
  "key.converter.schema.registry.url": "http://naument-sr:8081",
  "errors.log.enable": "true",
  "snapshot.lock.timeout.ms": "5000",
  "include.schema.changes": "true",
  "snapshot.mode": "always",
  "decimal.handling.mode": "precise",
  "lob.enabled": "true",
  "datatype.propagate.source.type": ".*",
  "log.mining.session.max.ms": "120000",
  "topic.prefix": "vk_nau28",
  "topic.creation.enable": "true",
  "topic.creation.default.partitions": "1",
  "topic.creation.default.include": "vk_nau28\\.*",
  "topic.creation.default.replication.factor": "1",
  "topic.creation.default.compression.type": "lz4",
  "topic.creation.default.retention.ms": "432000000",
  "table.include.list": "DEBEZIUM.GBC_TBL_SERVICECALL_NC28"
}
sink:
{
  "name": "vk_nau28_sink",
  "connector.class": "io.debezium.connector.jdbc.JdbcSinkConnector",
  "connection.url": "jdbc:postgresql://***:5438/db_ods_test?currentSchema=naument1",
  "connection.username": "debeziumt",
  "connection.password": "***",
  "auto.evolve": "true",
  "auto.create": "true",
  "tasks.max": "1",
  "topics.regex": "vk_nau28.DEBEZIUM.GBC_TBL_SERVICECALL_NC28",
  "table.name.format": "vk_nau28_tbl_servicecall",
  "insert.mode": "upsert",
  "delete.enabled": "true",
  "primary.key.mode": "record_key",
  "quote.identifiers": "true",
  "schema.evolution": "basic",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "key.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter.schema.registry.url": "http://naument-sr:8081",
  "key.converter.schema.registry.url": "http://naument-sr:8081"
}
What is the captured database version and mode of deployment?
(E.g. on-premises, with a specific cloud provider, etc.)
source db: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
target db: PostgreSQL 13.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5), 64-bit
What behaviour do you expect?
<Your answer>
What behaviour do you see?
Initial source-table DDL (Oracle):
CREATE TABLE "DEBEZIUM"."GBC_TBL_SERVICECALL_NC28" (
  "ID" NUMBER(19,0) NOT NULL ENABLE,
  "CREATION_DATE" TIMESTAMP (6) NOT NULL ENABLE,
  "CLAIM_TRANSFERDATE" DATE,
  "TITLE" VARCHAR2(4000 CHAR),
  "CLIENT_EMAIL" VARCHAR2(255 CHAR),
  "CLAIM_SUMRETURN" FLOAT(126),
  "CLAIM_POSTADDRESS" CLOB,
  "NEW_NUMBER" NUMBER(19,0),
  "NEW_VARCHAR" VARCHAR2(4000),
  PRIMARY KEY ("ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "USERS" ENABLE,
  SUPPLEMENTAL LOG DATA (ALL) COLUMNS
) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"
  LOB ("CLAIM_POSTADDRESS") STORE AS SECUREFILE (
    TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 8192 NOCACHE LOGGING NOCOMPRESS KEEP_DUPLICATES
    STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT));
Then I ran ALTER commands to add two columns and populated their values:
ALTER TABLE DEBEZIUM.GBC_TBL_SERVICECALL_NC28 ADD (new_date date);
ALTER TABLE DEBEZIUM.GBC_TBL_SERVICECALL_NC28 ADD (new_ts TIMESTAMP);
UPDATE DEBEZIUM.GBC_TBL_SERVICECALL_NC28 SET new_date = sysdate, new_ts = systimestamp WHERE id = 4;
Then I get an error in the sink connector:
Hibernate: ALTER TABLE "naument1"."vk_nau28_tbl_servicecall" ADD "NEW_TS" timestamp(6) NULL ADD "NEW_DATE" timestamp(6) NULL
2023-08-22 16:00:21,659 WARN  || SQL Error: 0, SQLState: 42601 [org.hibernate.engine.jdbc.spi.SqlExceptionHelper]
2023-08-22 16:00:21,659 ERROR || ERROR: syntax error at or near "ADD"
  Position: 82 [org.hibernate.engine.jdbc.spi.SqlExceptionHelper]
2023-08-22 16:00:21,665 ERROR || Failed to process record: Failed to process a sink record [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
org.apache.kafka.connect.errors.ConnectException: Failed to process a sink record
    at io.debezium.connector.jdbc.JdbcChangeEventSink.execute(JdbcChangeEventSink.java:82)
    at io.debezium.connector.jdbc.JdbcSinkConnectorTask.put(JdbcSinkConnectorTask.java:93)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:587)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:336)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:237)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:206)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: jakarta.persistence.PersistenceException: Converting `org.hibernate.exception.SQLGrammarException` to JPA `PersistenceException` : JDBC exception executing SQL [ALTER TABLE "naument1"."vk_nau28_tbl_servicecall" ADD "NEW_TS" timestamp(6) NULL ADD "NEW_DATE" timestamp(6) NULL]
    at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:165)
    at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:175)
    at org.hibernate.query.spi.AbstractQuery.executeUpdate(AbstractQuery.java:654)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.alterTableIfNeeded(JdbcChangeEventSink.java:205)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.checkAndApplyTableChangesIfNeeded(JdbcChangeEventSink.java:127)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.execute(JdbcChangeEventSink.java:78)
    ... 13 more
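For context (my reading of the error, not an official analysis): PostgreSQL requires each added column in a multi-column ALTER TABLE to have its own ADD clause separated by commas, while the statement generated by the sink joins the two ADD clauses with no separator, which is what triggers the syntax error at the second "ADD". A sketch of the failing statement versus a form PostgreSQL would accept, using the table and column names from the log above:

-- Generated by the sink connector; rejected with SQLState 42601:
ALTER TABLE "naument1"."vk_nau28_tbl_servicecall"
  ADD "NEW_TS" timestamp(6) NULL
  ADD "NEW_DATE" timestamp(6) NULL;

-- Valid PostgreSQL syntax: one ADD clause per column, comma-separated:
ALTER TABLE "naument1"."vk_nau28_tbl_servicecall"
  ADD COLUMN "NEW_TS" timestamp(6) NULL,
  ADD COLUMN "NEW_DATE" timestamp(6) NULL;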
Do you see the same behaviour using the latest released Debezium version?
(Ideally, also verify with latest Alpha/Beta/CR version)
2.4.0.Alpha2
Do you have the connector logs, ideally from start till finish?
(You might be asked later to provide DEBUG/TRACE level log)
yes
How to reproduce the issue using our tutorial deployment?
<Your answer>
Feature request or enhancement
For feature requests or enhancements, provide this information, please:
Which use case/requirement will be addressed by the proposed feature?
<Your answer>
Implementation ideas (optional)
<Your answer>
Duplicates: DBZ-6999 ALTER TABLE fails when adding multiple columns to JDBC sink target (Closed)