-
Bug
-
Resolution: Duplicate
-
Major
-
None
-
1.9.5.Final
-
None
-
False
-
None
-
False
-
Important
Bug report
"DDL statement couldn't be parsed" exception during incremental snapshot execution
What Debezium connector do you use and what version?
Oracle connector, 1.9.5.Final
What is the connector configuration?
{
"name": "oracle-customers-connector",
"config": {
"connector.class": "io.debezium.connector.oracle.OracleConnector",
"tasks.max": "1",
"database.server.name": "${database.server.name}",
"database.url": "${database.url}",
"database.history.kafka.bootstrap.servers": "${bootstrap.servers}",
"database.history.kafka.topic": "${history.topic}",
"database.dbname": "${database.name}",
"database.user": "${database.user}",
"database.password": "${database.password}",
"table.include.list": "DB.DEVICE,DB.CUSTOMER,DB.ADDRESS,DB.ECHANNEL,DB.PROFILE_PERSON,DB.KOM,DB.NMON,DB.INCOME,DB.LINKS,DB.DEBEZIUM_SIGNAL",
"signal.data.collection": "DBINSTANCE.DB.DEBEZIUM_SIGNAL",
"incremental.snapshot.chunk.size": "49152",
"snapshot.mode": "initial",
"snapshot.select.statement.overrides.DB.LINKS": "SELECT a.* from DB.LINKS a where a.is_active = 1",
"snapshot.select.statement.overrides.DB.ADDRESS": "select a.* from DB.ADDRESS a where a.is_active = 1",
"snapshot.select.statement.overrides.DB.ECHANNEL": "select a.* from DB.ECHANNEL a where a.is_active = 1",
"snapshot.select.statement.overrides.DB.PROFILE_PERSON": "select a.* from DB.PROFILE_PERSON a",
"snapshot.select.statement.overrides.DB.KOM": "select a.* from DB.KOM a",
"snapshot.select.statement.overrides.DB.NMON": "select a.* from DB.NMON a",
"snapshot.select.statement.overrides.DB.INCOME": "select a.* from DB.INCOME a where a.is_active = 1",
"snapshot.select.statement.overrides.DB.CUSTOMER": "select a.* from DB.CUSTOMER a",
"snapshot.select.statement.overrides.DB.DEVICE": "select a.* from DB.DEVICE a where a.is_active = 1",
"decimal.handling.mode": "double",
"security.protocol": "SSL",
"ssl.truststore.location": "/******/******/kafka_truststore.jks",
"ssl.truststore.password": "${truststore.password}",
"ssl.keystore.location": "/******/******/client_keystore.jks",
"ssl.keystore.password": "${keystore.password}",
"ssl.key.password": "${ssl.key.password}",
"database.history.producer.security.protocol": "SSL",
"database.history.producer.ssl.truststore.location": "/******/******/kafka_truststore.jks",
"database.history.producer.ssl.truststore.password": "${ruststore.password}",
"database.history.producer.ssl.keystore.location": "/******/******/client_keystore.jks",
"database.history.producer.ssl.keystore.password": "${ssl.keystore.password}",
"database.history.producer.ssl.key.password": "${ssl.key.password}",
"database.history.consumer.security.protocol": "SSL",
"database.history.consumer.ssl.truststore.location": "/******/******/kafka_truststore.jks",
"database.history.consumer.ssl.truststore.password": "${ssl.truststore.password}",
"database.history.consumer.ssl.keystore.location": "/******/******/client_keystore.jks",
"database.history.consumer.ssl.keystore.password": "${ssl.keystore.password}",
"database.history.consumer.ssl.key.password": "${ssl.key.password}",
"key.converter": "io.confluent.connect.avro.AvroConverter",
"key.converter.schema.registry.url": "${schema.registry.url}",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "${schema.registry.url}",
"basic.auth.credentials.source": "USER_INFO",
"basic.auth.user.info": "${schema.registry.user}:${schema.registry.password}",
"key.converter.schema.registry.ssl.truststore.location": "/******/******/kafka_truststore.jks",
"key.converter.schema.registry.ssl.truststore.password": "${ssl.truststore.password}",
"key.converter.schema.registry.ssl.keystore.location": "/******/******/client_keystore.jks",
"key.converter.schema.registry.ssl.keystore.password": "${ssl.keystore.password}",
"key.converter.schema.registry.ssl.key.password": "${ssl.key.password}",
"value.converter.schema.registry.ssl.truststore.location": "/******/******/kafka_truststore.jks",
"value.converter.schema.registry.ssl.truststore.password": "${ssl.truststore.password}",
"value.converter.schema.registry.ssl.keystore.location": "/******/******/client_keystore.jks",
"value.converter.schema.registry.ssl.keystore.password": "${ssl.keystore.password}",
"value.converter.schema.registry.ssl.key.password": "${ssl.key.password}"
}
}
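For context on how the snapshot in question is started: with `signal.data.collection` set as above, Debezium incremental snapshots are triggered by inserting a row into the signaling table. The sketch below follows the documented signal-table schema (`id`, `type`, `data` columns); the `id` value and the `data-collections` list are illustrative assumptions, not taken from our deployment.

```sql
-- Hedged sketch: trigger an ad-hoc incremental snapshot via the signaling
-- table configured above (signal.data.collection = DBINSTANCE.DB.DEBEZIUM_SIGNAL).
-- The id is an arbitrary unique string; data-collections is illustrative.
INSERT INTO DB.DEBEZIUM_SIGNAL (id, type, data)
VALUES ('ad-hoc-1',
        'execute-snapshot',
        '{"data-collections": ["DB.CUSTOMER", "DB.ADDRESS"]}');
COMMIT;
```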
What is the captured database version and mode of deployment?
(E.g. on-premises, with a specific cloud provider, etc.)
Oracle 19c, on AWS EC2
What behaviour do you expect?
The incremental snapshot runs to completion without interruption.
What behaviour do you see?
An exception is thrown during incremental snapshot execution and the connector stops.
Do you see the same behaviour using the latest released Debezium version?
Not tested; we can only use version 1.9.5.Final.
Do you have the connector logs, ideally from start till finish?
(You might be asked later to provide DEBUG/TRACE level log)
"trace": "org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.\n\tat io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:50)\n\tat io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:222)\n\tat io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:60)\n\tat io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:174)\n\tat io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:141)\n\tat io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:109)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:829)\nCaused by: Multiple parsing errors\nio.debezium.text.ParsingException: DDL statement couldn't be parsed. Please open a Jira issue with the statement '-- Add/modify columns \nalter table DOCUMENTS modify is_electronic default on null 0\n;'\nextraneous input 'on' expecting
\nio.debezium.text.ParsingException: DDL statement couldn't be parsed. Please open a Jira issue with the statement '-- Add/modify columns \nalter table DOCUMENTS modify is_electronic default on null 0\n;'\nextraneous input '0' expecting
{'DISABLE', 'ENABLE', ';'}\n\tat io.debezium.antlr.AntlrDdlParser.throwParsingException(AntlrDdlParser.java:372)\n\tat io.debezium.antlr.AntlrDdlParser.parse(AntlrDdlParser.java:95)\n\tat io.debezium.connector.oracle.antlr.OracleDdlParser.parse(OracleDdlParser.java:68)\n\tat io.debezium.connector.oracle.OracleSchemaChangeEventEmitter.emitSchemaChangeEvent(OracleSchemaChangeEventEmitter.java:84)\n\tat io.debezium.pipeline.EventDispatcher.dispatchSchemaChangeEvent(EventDispatcher.java:302)\n\tat io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.handleSchemaChange(AbstractLogMinerEventProcessor.java:587)\n\tat io.debezium.connector.oracle.logminer.processor.memory.MemoryLogMinerEventProcessor.handleSchemaChange(MemoryLogMinerEventProcessor.java:213)\n\tat io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.processRow(AbstractLogMinerEventProcessor.java:278)\n\tat io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.processResults(AbstractLogMinerEventProcessor.java:242)\n\tat io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.process(AbstractLogMinerEventProcessor.java:188)\n\tat io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:210)\n\t... 9 more\n
How to reproduce the issue using our tutorial deployment?
I don't know; it's an intermittent problem.
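That said, the trace points at a specific trigger: the failing statement uses Oracle's `DEFAULT ON NULL` clause (available since Oracle 12c), which the 1.9.5 DDL parser rejects. A minimal sketch of the kind of DDL involved, with the table and column names taken from the stack trace — executing such a statement against a captured table while the connector is streaming should reproduce the ParsingException:

```sql
-- Sketch of the DDL shape the parser rejects (see DBZ-5605).
-- Names are copied from the stack trace; the statement itself is
-- valid Oracle 12c+ syntax.
ALTER TABLE DOCUMENTS MODIFY is_electronic DEFAULT ON NULL 0;
```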
Duplicates:
DBZ-5605 Oracle DDL does not support DEFAULT ON NULL (Closed)