Debezium / DBZ-7546

None of log files contains offset SCN Error


    • Critical

      In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.

      Bug report

      For bug reports, provide this information, please:

      What Debezium connector do you use and what version?

         
      {
          "class": "io.debezium.connector.oracle.OracleConnector",
          "type": "source",
          "version": "2.5.0.Final"
      }

      What is the connector configuration?

      {
          "name": "abc_source_prestgi9d",
          "config": {
              "connector.class" : "io.debezium.connector.oracle.OracleConnector",
              "tasks.max" : "1",
              "topic.prefix": "abc",
              "database.server.name" : "abc4pgi9d",
              "database.hostname" : "x.x.x.x",
              "database.port": "1521",
              "database.url": "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=x.x.x.x)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=abc.com)(SERVER=DEDICATED)))",
              "rac.nodes": "x.x.x.x,x.x.x.x,x.x.x.x,x.x.x.x",
              "database.user" : "c##dbzuser",
              "database.password" : "xxxxxx",
              "database.dbname" : "CDB01",
              "database.pdb.name":"PDB01",
              "database.connection.adapter": "logminer",
              "log.mining.archive.log.only.mode": "false",
              "schema.include.list": "SCHEMA1",
              "table.include.list": "SCHEMA1.TABLE1",
              "snapshot.lock.timeout.ms":"5000",
              "snapshot.mode":"initial",
              "skipped.operations" : "none",
              "snapshot.locking.mode": "none",
              "log.retention.hours": "48",
              "schema.history.internal.kafka.bootstrap.servers" : "abc.servicebus.windows.net:9093", 
              "schema.history.internal.kafka.topic": "abc-sc-prestg9c",
              "schema.history.internal.kafka.security.protocol":"SASL_SSL",
              "schema.history.internal.kafka.sasl.mechanism":"PLAIN",
              "schema.history.internal.kafka.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username='$ConnectionString' password='xxxx';",
              "schema.history.internal.producer.security.protocol": "SASL_SSL",
              "schema.history.internal.producer.sasl.mechanism": "PLAIN",
              "schema.history.internal.producer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username='$ConnectionString' password='xxxxx';",
              "schema.history.internal.consumer.security.protocol": "SASL_SSL",
              "schema.history.internal.consumer.sasl.mechanism": "PLAIN",
              "schema.history.internal.consumer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username='$ConnectionString' password='xxxxx';",
              "schema.history.producer.security.protocol": "SASL_SSL",
              "schema.history.producer.sasl.mechanism": "PLAIN",
              "schema.history.producer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username='$ConnectionString' password='xxxxx';",
              "schema.history.consumer.security.protocol": "SASL_SSL",
              "schema.history.consumer.sasl.mechanism": "PLAIN",
              "schema.history.consumer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username='$ConnectionString' password='xxxxx';",
              "schema.history.internal.skip.unparseable.ddl": "true",
              "schema.history.internal.store.only.captured.tables.ddl": "true", 
              "schema.history.internal.store.only.captured.databases.ddl": "true",
              "topic.creation.default.replication.factor": "1",
              "topic.creation.default.log.retention.hours": "48",
              "topic.creation.default.retentionDescription.retentionTimeInHours": "48",
              "topic.creation.default.partitions": "8",
              "transforms": "route, ChangeTopicCase",
              "transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
              "transforms.route.regex": "([^.]+)\\.([^.]+)\\.([^.]+)",
              "transforms.route.replacement": "$2.$3",
              "transforms.ChangeTopicCase.type": "com.github.jcustenborder.kafka.connect.transform.common.ChangeTopicCase",
              "transforms.ChangeTopicCase.from": "UPPER_UNDERSCORE",
              "transforms.ChangeTopicCase.to": "LOWER_UNDERSCORE",
              "time.precision.mode": "connect",
              "decimal.handling.mode": "double",
              "log.mining.transaction.retention.ms": "7200000",
              "lob.enabled": "true",
              "retries": "10",
              "errors.retry.timeout": "600000",
              "errors.retry.delay.max.ms": "30000",
              "errors.log.enable": "true",
              "errors.log.include.message": "true",
              "errors.tolerance": "all",
              "log.mining.archive.log.hours": "0",
              "log.mining.flush.table.name": "LOG_MINING_FLUSH_9A"

          }
      }
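
      The route transform in the configuration above drops the topic prefix and keeps only the schema and table parts of the topic name. As a minimal illustration of that behaviour (RegexRouter itself uses Java regex syntax; the pattern and topic name below are assumptions for the sketch, not taken from the connector logs):

      ```python
      import re

      # Python equivalent of the RegexRouter pattern ([^.]+)\.([^.]+)\.([^.]+)
      # with replacement $2.$3 (keep only groups 2 and 3).
      pattern = re.compile(r"([^.]+)\.([^.]+)\.([^.]+)")

      def route(topic: str) -> str:
          """Mimic the route transform: strip the prefix, keep schema.table."""
          m = pattern.fullmatch(topic)
          return f"{m.group(2)}.{m.group(3)}" if m else topic

      print(route("abc.SCHEMA1.TABLE1"))  # SCHEMA1.TABLE1
      ```

      Topics that do not match the three-part pattern are left unchanged, which matches RegexRouter's behaviour of only rewriting matching topic names.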

      What is the captured database version and mode of deployment?

      On-premises

      Oracle Exadata, Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production

      What behaviour do you expect?

      A new Oracle source connector is created to capture changes on a single table. The table name, schema, IP addresses, etc. are masked for security reasons in the configuration provided. Upon creation of the source connector, I would expect it to perform the initial snapshot and then start the CDC process using LogMiner.

      What behaviour do you see?

      The initial snapshot is successful; however, CDC using LogMiner fails with the following error. The required redo/archive logs are available at the time of the issue.

      FYI, the log trace is attached for reference. Querying gv$archived_log/gv$log confirms the presence of the needed redo/archive logs.

      Caused by: java.lang.IllegalStateException: None of log files contains offset SCN: 3611729250065, re-snapshot is required.
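
      The error indicates the connector could not find any redo or archive log whose SCN range covers the restart SCN from the offsets. A minimal sketch of that containment check follows; the log ranges are invented for illustration (the connector itself reads FIRST_CHANGE#/NEXT_CHANGE# from V$LOG and V$ARCHIVED_LOG):

      ```python
      # Each log covers the half-open SCN range [first_change, next_change).
      def scn_covered(offset_scn: int, log_ranges: list[tuple[int, int]]) -> bool:
          """Return True if some available log contains the offset SCN."""
          return any(first <= offset_scn < nxt for first, nxt in log_ranges)

      # Hypothetical log ranges with a gap around the offset SCN.
      logs = [(3611729250000, 3611729250050), (3611729250070, 3611729250120)]
      offset = 3611729250065

      print(scn_covered(offset, logs))  # False
      ```

      When this check fails for every available log, the offset SCN can no longer be mined and the connector raises the "None of log files contains offset SCN" error, requiring a re-snapshot.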

      Do you see the same behaviour using the latest released Debezium version?

      Yes

      Do you have the connector logs, ideally from start till finish?

      (You might be asked later to provide DEBUG/TRACE level log)

      Yes

      How to reproduce the issue using our tutorial deployment?

      Yes


            Assignee: Unassigned
            Reporter: KRISHNA SARABU (krishna.sarabu@gmail.com)