DBZ-5108: Debezium connector failed with CREATE TABLE statement

Details

    Description

      In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.

      Bug report

      For bug reports, provide this information, please:

      What Debezium connector do you use and what version?

      Debezium Oracle connector, version 1.9.2.

      What is the connector configuration?

      {
          "name": "dfc-debezium-oracle-cdc-source-connector-v5",
          "config": {
              "snapshot.locking.mode": "shared",
              "connector.class": "io.debezium.connector.oracle.OracleConnector",
              "rac.nodes": "****,****",
              "database.history.consumer.sasl.jaas.config": "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"*************\" password=\"******************\";",
              "tasks.max": "1",
              "database.history.kafka.topic": "dfc-database-perf-test-history-internal-topic",
              "transforms": "dropPrefix",
              "database.history.consumer.ssl.truststore.password": "********************",
              "schema.include.list": "SCHEMA1,SCHEMA2",
              "transforms.dropPrefix.regex": "dbzdfcsrvr.(.*)",
              "database.history.consumer.security.protocol": "SASL_SSL",
              "database.history.consumer.ssl.truststore.location": "/opt/kafka/external-configuration/connector-external-config/truststore.jks",
              "log.mining.strategy": "redo_log_catalog",
              "provide.transaction.metadata": "false",
              "database.history.consumer.ssl.endpoint.identification.algorithm": "",
              "poll.interval.ms": "30000",
              "value.converter": "org.apache.kafka.connect.json.JsonConverter",
              "key.converter": "org.apache.kafka.connect.json.JsonConverter",
              "database.history.producer.sasl.jaas.config": "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"*********\" password=\"****************\";",
              "database.history.producer.sasl.mechanism": "SCRAM-SHA-512",
              "transforms.dropPrefix.replacement": "$1",
              "database.dbname": "DBNAME",
              "database.user": "DBZUSER",
              "database.history.producer.ssl.endpoint.identification.algorithm": "",
              "database.connection.adapter": "logminer",
              "database.history.producer.security.protocol": "SASL_SSL",
              "database.history.kafka.bootstrap.servers": "kite-ent-kafka-bootstrap:9093",
              "database.history.producer.ssl.truststore.location": "/opt/kafka/external-configuration/connector-external-config/truststore.jks",
              "database.url": "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=hostname)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=SERVICE_SVC)))",
              "time.precision.mode": "connect",
              "database.server.name": "dbzdfcsrvr",
              "transforms.dropPrefix.type": "org.apache.kafka.connect.transforms.RegexRouter",
              "database.port": "1521",
              "key.converter.schemas.enable": "true",
              "database.history.producer.ssl.truststore.password": "********************",
              "database.password": "***************",
              "name": "dfc-debezium-oracle-cdc-source-connector-v5",
              "value.converter.schemas.enable": "true",
              "table.include.list": "SCHEMA1.TABLE_NAME",
              "database.history.consumer.sasl.mechanism": "SCRAM-SHA-512",
              "snapshot.mode": "schema_only"
          }
      }
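
      For context, the dropPrefix transform strips the server-name prefix from topic names via RegexRouter. The topic names below are illustrative, derived from database.server.name and table.include.list in the config above:

          dbzdfcsrvr.SCHEMA1.TABLE_NAME  ->  SCHEMA1.TABLE_NAME    (regex "dbzdfcsrvr.(.*)", replacement "$1")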
      

      What is the captured database version and mode of deployment?

      (E.g. on-premises, with a specific cloud provider, etc.)

      Oracle 12cR2

      What behaviour do you expect?

      The connector continues log mining.

      What behaviour do you see?

      The connector failed with a DDL ParsingException: the Oracle DDL parser rejects the CREATE TABLE statement at the column name 'enabled' (full stack trace below).

      Do you see the same behaviour using the latest released Debezium version?

      (Ideally, also verify with latest Alpha/Beta/CR version)

      Yes, we ran into this issue on 1.9.2.Final.

      Do you have the connector logs, ideally from start till finish?

      (You might be asked later to provide DEBUG/TRACE level log)

      Yes.

      org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.
          at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:50)
          at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:199)
          at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:59)
          at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:174)
          at io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:141)
          at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:109)
          at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
          at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
          at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
          at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
          at java.base/java.lang.Thread.run(Thread.java:834)
      Caused by: Multiple parsing errors
      io.debezium.text.ParsingException: DDL statement couldn't be parsed. Please open a Jira issue with the statement 'create table InversePickingAssignmentGroup ( name nvarchar2(20), areaId nvarchar2(10), whLocId nvarchar2(20), enabled number(1) default 0) tablespace WAMASDATA;'
      extraneous input 'enabled' expecting {'ABORT', 'ABS', 'ACCESS', 'ACCESSED', 'ACCOUNT', 'ACL', 'ACOS', 'ACTION', 'ACTIONS', 'ACTIVATE', ... [the parser's expected-token list runs to several hundred Oracle keywords and is truncated mid-list in the original log]}

      How to reproduce the issue using our tutorial deployment?

      Create a table as the schema owner while the connector is streaming (see the sketch below).
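
      A minimal reproduction sketch, using the exact statement from the ParsingException above (table, column, and tablespace names come from the reporter's environment):

          -- Run as the schema owner while the connector is streaming.
          -- The parser trips over the column name "enabled".
          create table InversePickingAssignmentGroup (
              name    nvarchar2(20),
              areaId  nvarchar2(10),
              whLocId nvarchar2(20),
              enabled number(1) default 0
          ) tablespace WAMASDATA;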

      Feature request or enhancement

      For feature requests or enhancements, provide this information, please:

      Which use case/requirement will be addressed by the proposed feature?

      Not applicable.

      Implementation ideas (optional)

      This new table is not included in the connector configuration (table.include.list). The connector should be able to ignore CREATE TABLE statements for tables that are not listed in the configuration; a possible workaround is sketched below.
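
      A possible configuration-level mitigation (a sketch, not verified against this setup): the schema history can be told to skip DDL the parser cannot handle and to store DDL only for captured tables. Both properties exist in Debezium 1.9, but whether they bypass this particular streaming-time failure is an assumption to verify:

          {
              "database.history.skip.unparseable.ddl": "true",
              "database.history.store.only.captured.tables.ddl": "true"
          }

      Note that skipping unparseable DDL can silently drop schema changes for captured tables, so it trades correctness for availability.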

      People

        Assignee: Chris Cranford (ccranfor@redhat.com)
        Reporter: Sri M (dbzusr_sri) (Inactive)
        Votes: 0
        Watchers: 4
