Debezium / DBZ-5956

"DDL statement couldn't be parsed" exception during incremental snapshot execution


Details

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major
    • Fix Version/s: None
    • Affects Version/s: 1.9.5.Final
    • Component/s: oracle-connector

    Description

      Bug report

      "DDL statement couldn't be parsed" exception during incremental snapshot execution

      What Debezium connector do you use and what version?

      oracle-connector, 1.9.5.Final

      What is the connector configuration?

      {
        "name": "oracle-customers-connector",
        "config": {
          "connector.class": "io.debezium.connector.oracle.OracleConnector",
          "tasks.max": "1",
          "database.server.name": "${database.server.name}",
          "database.url": "${database.url}",
          "database.history.kafka.bootstrap.servers": "${bootstrap.servers}",
          "database.history.kafka.topic": "${history.topic}",
          "database.dbname": "${database.name}",
          "database.user": "${database.user}",
          "database.password": "${database.password}",
          "table.include.list": "DB.DEVICE,DB.CUSTOMER,DB.ADDRESS,DB.ECHANNEL,DB.PROFILE_PERSON,DB.KOM,DB.NMON,DB.INCOME,DB.LINKS,DB.DEBEZIUM_SIGNAL",
          "signal.data.collection": "DBINSTANCE.DB.DEBEZIUM_SIGNAL",
          "incremental.snapshot.chunk.size": "49152",
          "snapshot.mode": "initial",
          "snapshot.select.statement.overrides.DB.LINKS": "SELECT a.* from DB.LINKS a where a.is_active = 1",
          "snapshot.select.statement.overrides.DB.ADDRESS": "select a.* from DB.ADDRESS a where a.is_active = 1",
          "snapshot.select.statement.overrides.DB.ECHANNEL": "select a.* from DB.ECHANNEL a where a.is_active = 1",
          "snapshot.select.statement.overrides.DB.PROFILE_PERSON": "select a.* from DB.PROFILE_PERSON a",
          "snapshot.select.statement.overrides.DB.KOM": "select a.* from DB.KOM a",
          "snapshot.select.statement.overrides.DB.NMON": "select a.* from DB.NMON a",
          "snapshot.select.statement.overrides.DB.INCOME": "select a.* from DB.INCOME a where a.is_active = 1",
          "snapshot.select.statement.overrides.DB.CUSTOMER": "select a.* from DB.CUSTOMER a",
          "snapshot.select.statement.overrides.DB.DEVICE": "select a.* from DB.DEVICE a where a.is_active = 1",
          "decimal.handling.mode": "double",
          "security.protocol": "SSL",
          "ssl.truststore.location": "/******/******/kafka_truststore.jks",
          "ssl.truststore.password": "${truststore.password}",
          "ssl.keystore.location": "/******/******/client_keystore.jks",
          "ssl.keystore.password": "${keystore.password}",
          "ssl.key.password": "${ssl.key.password}",
          "database.history.producer.security.protocol": "SSL",
          "database.history.producer.ssl.truststore.location": "/******/******/kafka_truststore.jks",
          "database.history.producer.ssl.truststore.password": "${truststore.password}",
          "database.history.producer.ssl.keystore.location": "/******/******/client_keystore.jks",
          "database.history.producer.ssl.keystore.password": "${ssl.keystore.password}",
          "database.history.producer.ssl.key.password": "${ssl.key.password}",
          "database.history.consumer.security.protocol": "SSL",
          "database.history.consumer.ssl.truststore.location": "/******/******/kafka_truststore.jks",
          "database.history.consumer.ssl.truststore.password": "${ssl.truststore.password}",
          "database.history.consumer.ssl.keystore.location": "/******/******/client_keystore.jks",
          "database.history.consumer.ssl.keystore.password": "${ssl.keystore.password}",
          "database.history.consumer.ssl.key.password": "${ssl.key.password}",
          "key.converter": "io.confluent.connect.avro.AvroConverter",
          "key.converter.schema.registry.url": "${schema.registry.url}",
          "value.converter": "io.confluent.connect.avro.AvroConverter",
          "value.converter.schema.registry.url": "${schema.registry.url}",
          "basic.auth.credentials.source": "USER_INFO",
          "basic.auth.user.info": "${schema.registry.user}:${schema.registry.password}",
          "key.converter.schema.registry.ssl.truststore.location": "/******/******/kafka_truststore.jks",
          "key.converter.schema.registry.ssl.truststore.password": "${ssl.truststore.password}",
          "key.converter.schema.registry.ssl.keystore.location": "/******/******/client_keystore.jks",
          "key.converter.schema.registry.ssl.keystore.password": "${ssl.keystore.password}",
          "key.converter.schema.registry.ssl.key.password": "${ssl.key.password}",
          "value.converter.schema.registry.ssl.truststore.location": "/******/******/kafka_truststore.jks",
          "value.converter.schema.registry.ssl.truststore.password": "${ssl.truststore.password}",
          "value.converter.schema.registry.ssl.keystore.location": "/******/******/client_keystore.jks",
          "value.converter.schema.registry.ssl.keystore.password": "${ssl.keystore.password}",
          "value.converter.schema.registry.ssl.key.password": "${ssl.key.password}"
        }
      }
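
      For context, the incremental snapshot involved here is the signal-driven kind: it is started by inserting a row into the signalling table named in signal.data.collection above. A minimal sketch of such a trigger, assuming the documented id/type/data layout of the signal table; the id value and the target collection are illustrative:

      -- Ad-hoc incremental snapshot trigger (sketch).
      -- 'ad-hoc-1' is an arbitrary unique identifier; the data-collections
      -- entry uses the same database.schema.table form as signal.data.collection.
      INSERT INTO DB.DEBEZIUM_SIGNAL (id, type, data)
      VALUES ('ad-hoc-1',
              'execute-snapshot',
              '{"data-collections": ["DBINSTANCE.DB.CUSTOMER"]}');
      COMMIT;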

      What is the captured database version and mode of deployment?

      (E.g. on-premises, with a specific cloud provider, etc.)

      Oracle 19c, on AWS EC2

      What behaviour do you expect?

      The incremental snapshot completes without interruption.

      What behaviour do you see?

      An exception is thrown during incremental snapshot execution and the connector stops (see the trace below).

      Do you see the same behaviour using the latest released Debezium version?

      Not tested; we can only use version 1.9.5.Final.

      Do you have the connector logs, ideally from start till finish?

      (You might be asked later to provide DEBUG/TRACE level log)
      "trace": "org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.\n\tat io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:50)\n\tat io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:222)\n\tat io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:60)\n\tat io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:174)\n\tat io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:141)\n\tat io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:109)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:829)\nCaused by: Multiple parsing errors\nio.debezium.text.ParsingException: DDL statement couldn't be parsed. Please open a Jira issue with the statement '-- Add/modify columns \nalter table DOCUMENTS modify is_electronic default on null 0\n;'\nextraneous input 'on' expecting

      {'ABORT', 'ABS', 'ACCESS', 'ACCESSED', 'ACCOUNT', 'ACL', 'ACOS', 'ACTION', 'ACTIONS', 'ACTIVATE', 'ACTIVE', 'ACTIVE_COMPONENT', 'ACTIVE_DATA', 'ACTIVE_FUNCTION', 'ACTIVE_TAG', 'ACTIVITY', 'ADAPTIVE_PLAN', 'ADD', 'ADD_COLUMN', 'ADD_GROUP', 'ADD_MONTHS', 'ADJ_DATE', 'ADMIN', 'ADMINISTER', 'ADMINISTRATOR', 'ADVANCED', 'ADVISE', 'ADVISOR', 'AFD_DISKSTRING', 'AFTER', 'AGENT', 'AGGREGATE', 'A', 'ALIAS', 'ALL', 'ALLOCATE', 'ALLOW', 'ALL_ROWS', 'ALWAYS', 'ANALYZE', 'ANCILLARY', 'AND_EQUAL', 'ANOMALY', 'ANSI_REARCH', 'BINARY', 'BINARY_DOUBLE', 'BINARY_DOUBLE_INFINITY', 'BINARY_DOUBLE_NAN', 'BINARY_FLOAT', 'BINARY_FLOAT_INFINITY', 'BINARY_FLOAT_NAN', 'CELL_FLASH_CACHE', 'CERTIFICATE', 'CFILE', 'CHAINED', 'CHANGE', 'CHANGE_DUPKEY_ERROR_INDEX', 'CHARACTER', 'CHAR', 'CHAR_CS', 'CHARTOROWID', 'CHECK_ACL_REWRITE', 'CHECKPOINT', 'CHILD', 'CHOOSE', 'CHR', 'CHUNK', 'CLASS', 'CLASSIFIER', 'CLEANUP', 'CLEAR', 'C', 'CLIENT', 'CLOB', 'CLONE', 'CLOSE_CACHED_OPEN_CURSORS', 'CLOSE', 'CLUSTER_BY_ROWID', 'CLUSTER', 'CLUSTER_DETAILS', 'CLUSTER_DISTANCE', 'CLUSTER_ID', 'CLUSTERING', 'CLUSTERING_FACTOR', 'CLUSTER_PROBABILITY', 'CLUSTER_SET', 'COALESCE', 'COALESCE_SQ', 'COARSE', 'CO_AUTH_IND', 'COLD', 'COLLECT', 'COLUMNAR', 'COLUMN_AUTH_INDICATOR', 'COLUMN', 'COLUMNS', 'COLUMN_STATS', 'COLUMN_VALUE', 'COMMENT', 'COMMIT', 'COMMITTED', 'COMMON_DATA', 'COMPACT', 'COMPATIBILITY', 'COMPILE', 'COMPLETE', 'COMPLIANCE', 'COMPONENT', 'COMPONENTS', 'COMPOSE', 'COMPOSITE', 'COMPOSITE_LIMIT', 'COMPOUND', 'COMPUTE', 'CONCAT', 'CON_DBID_TO_ID', 'CONDITIONAL', 'CONDITION', 'CONFIRM', 'CONFORMING', 'CON_GUID_TO_ID', 'CON_ID', 'CON_NAME_TO_ID', 'CONNECT_BY_CB_WHR_ONLY', 'CONNECT_BY_COMBINE_SW', 'CONNECT_BY_COST_BASED', 'CONNECT_BY_ELIM_DUPS', 'CONNECT_BY_FILTERING', 'CONNECT_BY_ISCYCLE', 'CONNECT_BY_ISLEAF', 'CONNECT_BY_ROOT', 'CONNECT_TIME', 'CONSIDER', 'CONSISTENT', 'CONSTANT', 'CONST', 'CONSTRAINT', 'CONSTRAINTS', 'CONSTRUCTOR', 'CONTAINER', 'CONTAINER_DATA', 'CONTAINERS', 'CONTENT', 'CONTENTS', 'CONTEXT', 'CONTINUE', 'CONTROLFILE', 'CON_UID_TO_ID', 'CONVERT', 'COOKIE', 'COPY', 'CORR_K', 'CORR_S', 'CORRUPTION', 'CORRUPT_XID_ALL', 'CORRUPT_XID', 'COS', 'COSH', 'COST', 'COST_XML_QUERY_REWRITE', 'COUNT', 'COVAR_POP', 'COVAR_SAMP', 'CPU_COSTING', 'CPU_PER_CALL', 'CPU_PER_SESSION', 'CRASH', 'CREATE_FILE_DEST', 'CREATE_STORED_OUTLINES', 'CREATION', 'CREDENTIAL', 'CRITICAL', 'CROSS', 'CROSSEDITION', 'CSCONVERT', 'CUBE_AJ', 'CUBE', 'CUBE_GB', 'CUBE_SJ', 'CUME_DISTM', 'CURRENT', 'CURRENT_DATE', 'CURRENT_SCHEMA', 'CURRENT_TIME', 'CURRENT_TIMESTAMP', 'CURRENT_USER', 'CURRENTV', 'CURSOR', 'CURSOR_SHARING_EXACT', 'CURSOR_SPECIFIC_SEGMENT', 'CUSTOMDATUM', 'CV', 'CYCLE', 'DANGLING', 'DATABASE', 'DATA', 'DATAFILE', 'DATAFILES', 'FULL_OUTER_JOIN_TO_OUTER', 'FUNCTION', 'FUNCTIONS', 'GATHER_OPTIMIZER_STATISTICS', 'GATHER_PLAN_STATISTICS', 'GBY_CONC_ROLLUP', 'GBY_PUSHDOWN', 'GENERATED', 'GET', 'GLOBAL', 'GLOBALLY', 'GLOBAL_NAME', 'GLOBAL_TOPIC_ENABLED', 'GROUP_BY', 'GROUP_ID', 'GROUPING', 'GROUPING_ID', 'GROUPS', 'GUARANTEED', 'GUARANTEE', 'GUARD', 'HASH_AJ', 'HASH', 'HASHKEYS', 'HASH_SJ', 'HEADER', 'HEAP', 'HELP', 'HEXTORAW', 'HEXTOREF', 'HIDDEN', 'HIDE', 'HIERARCHY', 'HIGH', 'HINTSET_BEGIN', 'HINTSET_END', 'HOT', 'HOUR', 'HWM_BROKERED', 'HYBRID', 'IDENTIFIER', 'IDENTITY', 'IDGENERATORS', 'ID', 'IDLE_TIME', 'IF', 'IGNORE', 'IGNORE_OPTIM_EMBEDDED_HINTS', 'IGNORE_ROW_ON_DUPKEY_INDEX', 'IGNORE_WHERE_CLAUSE', 'ILM', 'IMMEDIATE', 'IMPACT', 'IMPORT', 'INACTIVE', 'INCLUDE', 'INCLUDE_VERSION', 'INCLUDING', 'INCREMENTAL', 'INCREMENT', 'INCR', 'INDENT', 
'INDEX_ASC', 'INDEX_COMBINE', 'INDEX_DESC', 'INDEXED', 'INDEXES', 'INDEX_FFS', 'INDEX_FILTER', 'INDEXING', 'INDEX_JOIN', 'INDEX_ROWS', 'INDEX_RRS', 'INDEX_RS_ASC', 'INDEX_RS_DESC', 'INDEX_RS', 'INDEX_SCAN', 'INDEX_SKIP_SCAN', 'INDEX_SS_ASC', 'INDEX_SS_DESC', 'INDEX_SS', 'INDEX_STATS', 'INDEXTYPE', 'INDEXTYPES', 'INDICATOR', 'INDICES', 'INFINITE', 'INFORMATIONAL', 'INHERIT', 'INITCAP', 'INITIAL', 'INITIALIZED', 'INITIALLY', 'INITRANS', 'INLINE', 'INLINE_XMLTYPE_NT', 'INMEMORY', 'IN_MEMORY_METADATA', 'INMEMORY_PRUNING', 'INNER', 'INOUT', 'INPLACE', 'INSERTCHILDXMLAFTER', 'INSERTCHILDXMLBEFORE', 'INSERTCHILDXML', 'YEARS', 'YEAR', 'YES', 'YMINTERVAL_UNCONSTRAINED', 'ZONEMAP', 'ZONE', 'PREDICTION', 'PREDICTION_BOUNDS', 'PREDICTION_COST', 'PREDICTION_DETAILS', 'PREDICTION_PROBABILITY', 'PREDICTION_SET', 'CUME_DIST', 'DENSE_RANK', 'LISTAGG', 'PERCENT_RANK', 'PERCENTILE_CONT', 'PERCENTILE_DISC', 'RANK', 'AVG', 'CORR', 'COVAR_', 'DECODE', 'LAG', 'LEAD', 'MAX', 'MEDIAN', 'MIN', 'NTILE', 'NVL', 'RATIO_TO_REPORT', 'REGR_', 'ROUND', 'ROW_NUMBER', 'SUBSTR', 'TO_CHAR', 'TRIM', 'SUM', 'STDDEV', 'VAR_', 'VARIANCE', 'LEAST', 'GREATEST', 'TO_DATE', NATIONAL_CHAR_STRING_LIT, '.', UNSIGNED_INTEGER, APPROXIMATE_NUM_LIT, CHAR_STRING, DELIMITED_ID, '(', '+', '-', BINDVAR, ':', '_', REGULAR_ID}

      io.debezium.text.ParsingException: DDL statement couldn't be parsed. Please open a Jira issue with the statement '-- Add/modify columns
      alter table DOCUMENTS modify is_electronic default on null 0
      ;'
      extraneous input '0' expecting

      {'DISABLE', 'ENABLE', ';'}

          at io.debezium.antlr.AntlrDdlParser.throwParsingException(AntlrDdlParser.java:372)
          at io.debezium.antlr.AntlrDdlParser.parse(AntlrDdlParser.java:95)
          at io.debezium.connector.oracle.antlr.OracleDdlParser.parse(OracleDdlParser.java:68)
          at io.debezium.connector.oracle.OracleSchemaChangeEventEmitter.emitSchemaChangeEvent(OracleSchemaChangeEventEmitter.java:84)
          at io.debezium.pipeline.EventDispatcher.dispatchSchemaChangeEvent(EventDispatcher.java:302)
          at io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.handleSchemaChange(AbstractLogMinerEventProcessor.java:587)
          at io.debezium.connector.oracle.logminer.processor.memory.MemoryLogMinerEventProcessor.handleSchemaChange(MemoryLogMinerEventProcessor.java:213)
          at io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.processRow(AbstractLogMinerEventProcessor.java:278)
          at io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.processResults(AbstractLogMinerEventProcessor.java:242)
          at io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.process(AbstractLogMinerEventProcessor.java:188)
          at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:210)
          ... 9 more

      How to reproduce the issue using our tutorial deployment?

      Unknown; the problem is intermittent (see the reconstructed statement below).
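
      For reference, the statement the parser rejects can be replayed directly; it is reconstructed verbatim from the ParsingException above. A sketch of the reproduction (the DOCUMENTS table itself is not described in the report and is assumed to exist in the captured database):

      -- Reconstructed from the ParsingException above. The 1.9.5 Oracle DDL
      -- grammar stops at the DEFAULT ON NULL clause ("extraneous input 'on'"),
      -- so mining this statement aborts the connector.
      alter table DOCUMENTS modify is_electronic default on null 0;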


      People

          • Assignee: Unassigned
          • Reporter: Dmytrii Shabotin