Debezium / DBZ-6869

Log mining starts from the oldest SCN on the first start of the Oracle connector when a transaction in V$TRANSACTION has a start_scn of 0


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Fix Version: 2.4.0.CR1
    • Affects Version: 2.3.0.Final
    • Component: oracle-connector
    • None
    • Important

      Bug report

      When internal.log.mining.transaction.snapshot.boundary.mode (LOG_MINING_TRANSACTION_SNAPSHOT_BOUNDARY_MODE) is configured with a non-skip value and V$TRANSACTION contains a transaction whose start_scn is 0, log mining starts from the oldest SCN when the Oracle connector is started for the first time. The mining queries then cover a very large SCN range, and no change data is dispatched for a long time.
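
      As a reference point, a diagnostic query along the following lines (an illustration only, not the query the connector itself runs) shows whether V$TRANSACTION currently reports an in-flight transaction with a start_scn of 0, which is the condition that triggers this behaviour:

      -- Diagnostic sketch: list in-flight transactions whose reported start SCN is 0.
      SELECT XID, STATUS, START_SCN, START_TIME
        FROM V$TRANSACTION
       WHERE START_SCN = 0;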

      What Debezium connector do you use and what version?

      debezium-connector-oracle-2.3.0.Final

      What is the connector configuration?

      connector.class = io.debezium.connector.oracle.OracleConnector
      snapshot.locking.mode = none
      log.mining.buffer.drop.on.stop = false
      max.queue.size = 81920
      schema.include.list = TEST_CDC
      internal.log.mining.read.only = false
      topic.heartbeat.prefix = __debezium_heartbeat
      log.mining.strategy = online_catalog
      include.schema.changes = true
      schema.history.internal.store.only.captured.tables.ddl = true
      schema.history.internal.file.filename = /data/debezium/cdc/td1/history/schema.dat
      tombstones.on.delete = false
      unavailable.value.placeholder = __debezium_value
      topic.prefix = td1
      offset.storage.file.filename = /data/debezium/cdc/td1/offsets.dat
      poll.interval.ms = 100
      lob.enabled = true
      errors.retry.delay.initial.ms = 300
      log.mining.archive.log.only.mode = false
      value.converter = org.apache.kafka.connect.json.JsonConverter
      key.converter = org.apache.kafka.connect.json.JsonConverter
      database.user = C##TEST
      database.dbname = ORCL
      custom.retriable.exception = .(ORA-00600|ORA-01289|ORA-01291|ORA-31603|ORA-26824|ORA-26876|Invalid value: null used for required field: "typeName")(.|\s)
      offset.storage = org.apache.kafka.connect.storage.FileOffsetBackingStore
      database.pdb.name = PDB1
      database.connection.adapter = logminer
      log.mining.buffer.type = memory
      internal.log.mining.transaction.snapshot.boundary.mode = all
      offset.flush.timeout.ms = 5000
      errors.retry.delay.max.ms = 10000
      event.processing.failure.handling.mode = fail
      schema.history.internal.skip.unparseable.ddl = true
      log.mining.restart.connection = true
      database.port = 1521
      offset.flush.interval.ms = 5000
      schema.history.internal = io.debezium.storage.file.history.FileSchemaHistory
      log.mining.session.max.ms = 1800000
      errors.max.retries = -1
      database.hostname = 10.10.93.27
      database.password = ********
      name = td1
      log.mining.batch.size.default = 100000
      max.batch.size = 20480
      skipped.operations = none
      table.include.list = TEST_CDC.AB01

      What is the captured database version and mode of deployment?

      ORACLE 19C

      What behaviour do you expect?

      On the first start of the connector, log mining should begin from the database's current SCN, or from the smallest valid start_scn among the active transactions.
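
      A minimal sketch of that expectation in SQL, assuming zero start_scn values are simply filtered out (this illustrates the intent only and is not the connector's actual query):

      -- Sketch: ignore start_scn values of 0 and fall back to the database's
      -- current SCN when no valid in-flight transaction exists.
      SELECT COALESCE(
               (SELECT MIN(START_SCN) FROM V$TRANSACTION WHERE START_SCN > 0),
               (SELECT CURRENT_SCN FROM V$DATABASE)
             ) AS mining_start_scn
        FROM DUAL;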

      What behaviour do you see?

      When an active transaction is returned with an illegal start_scn value (0), log mining starts from the oldest SCN.
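
      To gauge how large the resulting range is, a query like the one below (a diagnostic sketch, not part of Debezium) compares the oldest SCN still available in the archived redo logs with the database's current SCN:

      -- Diagnostic sketch: the span between the oldest available archived SCN
      -- and the current SCN approximates the range the mining session covers.
      SELECT (SELECT MIN(FIRST_CHANGE#) FROM V$ARCHIVED_LOG WHERE STATUS = 'A') AS oldest_archived_scn,
             (SELECT CURRENT_SCN FROM V$DATABASE) AS current_scn
        FROM DUAL;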

      Do you see the same behaviour using the latest released Debezium version?

      (Ideally, also verify with latest Alpha/Beta/CR version)

      <Your answer>

      Do you have the connector logs, ideally from start till finish?

      (You might be asked later to provide DEBUG/TRACE level log)

      <Your answer>

      How to reproduce the issue using our tutorial deployment?

      <Your answer>

      Feature request or enhancement

      For feature requests or enhancements, provide this information, please:

      Which use case/requirement will be addressed by the proposed feature?

      <Your answer>

      Implementation ideas (optional)

      <Your answer>

      Attachments:
        1. debezium-cdc.log (61 kB)
        2. image-2023-09-19-16-09-51-005.png (2.51 MB)
        3. image-2023-09-19-16-11-03-613.png (2.44 MB)
        4. offsets.png (67 kB)

              Assignee: Chris Cranford (ccranfor@redhat.com)
              Reporter: butioy 柳青 杨 (Inactive)
              Votes: 0
              Watchers: 5
