Debezium / DBZ-7609

Oracle LogMiner fails with error - FlushTable already exists


    • Type: Bug
    • Resolution: Won't Do
    • Priority: Major
    • Affects Version/s: 2.5.2.Final, 2.6.0.Beta1
    • Component/s: oracle-connector
    • Critical

      In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.

      Bug report

      For bug reports, provide this information, please:

      What Debezium connector do you use and what version?

      io.debezium.connector.oracle.OracleConnector 2.4

      What is the connector configuration?

      {
        "name": "oracle_amr_dev_source_poc_full_load_batch_cdc3",
        "config": {
          "connector.class": "io.debezium.connector.oracle.OracleConnector",
          "errors.log.include.messages": "true",
          "topic.creation.default.partitions": "1",
          "database.history.consumer.sasl.jaas.config": "***",
          "schema.history.internal.consumer.sasl.jaas.config": "***",
          "confluent.topic.ssl.endpoint.identification.algorithm": "https",
          "database.history.consumer.security.protocol": "SASL_SSL",
          "database.history.kafka.recovery.attempts": "10",
          "provide.transaction.metadata": "true",
          "tombstones.on.delete": "false",
          "topic.prefix": "oracle_amr_dev_ireach_batch_cdc",
          "decimal.handling.mode": "double",
          "schema.history.internal.kafka.topic": "dbhistory.oracle_amr_dev_ireach_batch_cdc3",
          "schema.history.internal.producer.security.protocol": "SASL_SSL",
          "topic.creation.default.replication.factor": "3",
          "errors.log.enable": "true",
          "schema.history.internal.producer.sasl.mechanism": "PLAIN",
          "database.history.producer.sasl.mechanism": "PLAIN",
          "database.history.producer.sasl.jaas.config": "***",
          "database.user": "lwkdlkshc",
          "database.dbname": "devcaw.com",
          "topic.creation.default.compression.type": "snappy",
          "confluent.topic.bootstrap.servers": "****",
          "topic.creation.default.cleanup.policy": "delete",
          "database.history.producer.security.protocol": "SASL_SSL",
          "log.mining.flush.table.name": "oracle_amr_dev_ireach_batch_cdc3",
          "database.history.kafka.bootstrap.servers": "***",
          "database.server.name": "oracle_amr_dev_ireach_batch_cdc3",
          "snapshot.isolation.mode": "read_committed",
          "schema.history.internal.kafka.bootstrap.servers": "***",
          "confluent.license.topic.replication.factor": "1",
          "database.port": "1521",
          "max.request.size": "2097164",
          "database.hostname": "",
          "database.password": "",
          "name": "oracle_amr_dev_source_poc_full_load_batch_cdc3",
          "schema.history.internal.consumer.sasl.mechanism": "PLAIN",
          "schema.history.internal.producer.sasl.jaas.config": "****",
          "table.include.list": "IREACH.FEATURE_EFFECTIVE_DATE,IREACH.FEEDER_SYSTEM_IDS,IREACH.FF_CARRIER_CODES",
          "database.history.consumer.sasl.mechanism": "PLAIN",
          "snapshot.mode": "schema_only",
          "confluent.topic.request.timeout.ms": "20000",
          "schema.history.internal.consumer.security.protocol": "SASL_SSL"
        }
      }
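
      The config above sets `log.mining.flush.table.name` explicitly. Each connector instance creates its own flush table, so two connectors pointing at the same Oracle database must not share a flush-table name. A minimal, illustrative sketch (not Debezium code; the sample configs below are hypothetical, and the assumed default name `LOG_MINING_FLUSH` should be checked against your connector version's documentation) of a pre-deployment sanity check:

```python
# Illustrative pre-deployment check: flag flush-table names shared by more
# than one connector config. Sample configs are hypothetical.
from collections import Counter

def duplicate_flush_tables(configs):
    """Return flush-table names (compared case-insensitively, as Oracle
    folds unquoted identifiers to upper case) used by more than one config."""
    names = [
        c.get("log.mining.flush.table.name", "LOG_MINING_FLUSH").upper()
        for c in configs
    ]
    return sorted(name for name, count in Counter(names).items() if count > 1)

configs = [
    {"log.mining.flush.table.name": "oracle_amr_dev_ireach_batch_cdc3"},
    {"log.mining.flush.table.name": "ORACLE_AMR_DEV_IREACH_BATCH_CDC3"},  # clash
    {},  # falls back to the assumed default flush-table name
]
print(duplicate_flush_tables(configs))  # → ['ORACLE_AMR_DEV_IREACH_BATCH_CDC3']
```

Running such a check before `POST`-ing configs to Kafka Connect would surface the name collision that Oracle otherwise reports only at runtime as ORA-00955.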

      What is the captured database version and mode of deployment?

      (E.g. on-premises, with a specific cloud provider, etc.)

      on-premises

      What behaviour do you expect?

      I expect the connector to stream data smoothly, without any issues.

      What behaviour do you see?

      When I deploy CDC connectors, the connector streams data for a while and then fails with the error shown below.

      Do you see the same behaviour using the latest released Debezium version?

      (Ideally, also verify with latest Alpha/Beta/CR version)

      Yes

      Do you have the connector logs, ideally from start till finish?

      (You might be asked later to provide DEBUG/TRACE level log)
      org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.
      	at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:67)
      	at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:262)
      	at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:62)
      	at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:272)
      	at io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:197)
      	at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:137)
      	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
      	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
      	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
      	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
      	at java.base/java.lang.Thread.run(Thread.java:829)
      Caused by: io.debezium.connector.oracle.logminer.parser.DmlParserException: DML statement couldn't be parsed. Please open a Jira issue with the statement '/* No SQL_REDO for temporary tables */'.
      	at io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.parseDmlStatement(AbstractLogMinerEventProcessor.java:1259)
      	at io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.lambda$handleDataEvent$9(AbstractLogMinerEventProcessor.java:1045)
      	at io.debezium.connector.oracle.logminer.processor.memory.MemoryLogMinerEventProcessor.addToTransaction(MemoryLogMinerEventProcessor.java:275)
      	at io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.handleDataEvent(AbstractLogMinerEventProcessor.java:1044)
      	at io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.processRow(AbstractLogMinerEventProcessor.java:366)
      	at io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.processResults(AbstractLogMinerEventProcessor.java:289)
      	at io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.process(AbstractLogMinerEventProcessor.java:219)
      	at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:241)
      	... 9 more
      Caused by: io.debezium.connector.oracle.logminer.parser.DmlParserException: Unknown supported SQL '/* No SQL_REDO for temporary tables */'
      	at io.debezium.connector.oracle.logminer.parser.LogMinerDmlParser.parse(LogMinerDmlParser.java:80)
      	at io.debezium.connector.oracle.logminer.processor.AbstractLogMinerEventProcessor.parseDmlStatement(AbstractLogMinerEventProcessor.java:1253)
      	... 16 more
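
      The failing "statement" here is not DML at all: `/* No SQL_REDO for temporary tables */` is a placeholder Oracle LogMiner emits for temporary-table operations. A minimal, illustrative sketch (not Debezium's actual parser) of why such text fails a parser that only routes on DML keywords:

```python
# Illustrative sketch (not Debezium code): a router that only recognizes
# INSERT/UPDATE/DELETE rejects LogMiner's temporary-table placeholder,
# which is what the DmlParserException above reflects.
def classify_redo(sql: str) -> str:
    """Return the DML keyword a redo statement starts with, or raise."""
    stmt = sql.lstrip().upper()
    for keyword in ("INSERT", "UPDATE", "DELETE"):
        if stmt.startswith(keyword):
            return keyword
    raise ValueError(f"Unknown supported SQL '{sql}'")

print(classify_redo("insert into T values (1)"))  # → INSERT
try:
    classify_redo("/* No SQL_REDO for temporary tables */")
except ValueError as e:
    print(e)  # the placeholder is a comment, not a DML statement
```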

      A similar configuration on another connector produces a different type of error:

      org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.
      	at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:67)
      	at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:262)
      	at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:62)
      	at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:272)
      	at io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:197)
      	at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:137)
      	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
      	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
      	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
      	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
      	at java.base/java.lang.Thread.run(Thread.java:829)
      Caused by: io.debezium.DebeziumException: Failed to create flush table
      	at io.debezium.connector.oracle.logminer.logwriter.CommitLogWriterFlushStrategy.createFlushTableIfNotExists(CommitLogWriterFlushStrategy.java:133)
      	at io.debezium.connector.oracle.logminer.logwriter.CommitLogWriterFlushStrategy.<init>(CommitLogWriterFlushStrategy.java:54)
      	at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.resolveFlushStrategy(LogMinerStreamingChangeEventSource.java:973)
      	at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:156)
      	... 9 more
      Caused by: java.sql.SQLSyntaxErrorException: ORA-00955: name is already used by an existing object
      https://docs.oracle.com/error-help/db/ora-00955/
      	at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:702)
      	at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:608)
      	at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1277)
      	at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:1102)
      	at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:456)
      	at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:482)
      	at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:170)
      	at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:1117)
      	at oracle.jdbc.driver.OracleStatement.executeSQLStatement(OracleStatement.java:1652)
      	at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1417)
      	at oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:2278)
      	at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:2227)
      	at oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:330)
      	at io.debezium.jdbc.JdbcConnection.executeWithoutCommitting(JdbcConnection.java:1449)
      	at io.debezium.connector.oracle.logminer.logwriter.CommitLogWriterFlushStrategy.createFlushTableIfNotExists(CommitLogWriterFlushStrategy.java:122)
      	... 12 more
      Caused by: Error : 955, Position : 13, SQL = CREATE TABLE oracle_amr_dev_ireach_batch_cdc4_1 (LAST_SCN NUMBER(19,0)), Original SQL = CREATE TABLE oracle_amr_dev_ireach_batch_cdc4_1 (LAST_SCN NUMBER(19,0)), Error Message = ORA-00955: name is already used by an existing object
      	at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:710)
      	... 26 more
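
      ORA-00955 means some object already occupies the name the connector tries to use for its flush table (`oracle_amr_dev_ireach_batch_cdc4_1` in the trace). One hedged way to identify the conflicting object is to query `ALL_OBJECTS`; the sketch below only builds that diagnostic SQL — actually running it requires an Oracle session (e.g. via the python-oracledb driver, not shown):

```python
# Illustrative sketch: build the diagnostic query for an ORA-00955 conflict.
# The table name comes straight from the error message above.
def conflict_query(table_name: str) -> str:
    """SQL listing any existing object that occupies the given name."""
    return (
        "SELECT owner, object_name, object_type "
        "FROM ALL_OBJECTS "
        f"WHERE object_name = UPPER('{table_name}')"
    )

print(conflict_query("oracle_amr_dev_ireach_batch_cdc4_1"))
```

The result shows which schema and object type (table, view, synonym, ...) holds the name, which tells you whether a stale flush table from an earlier connector, or an unrelated object, is in the way.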

      How to reproduce the issue using our tutorial deployment?

      <Your answer>


            Assignee: Unassigned
            Reporter: abhilash Reddy (abhilashreddy9676@gmail.com)
            Votes: 0
            Watchers: 2