Debezium / DBZ-6256

Lock contention on LOG_MINING_FLUSH table when multiple connectors deployed


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Fix Version/s: 2.3.0.Alpha1
    • Affects Version/s: 1.9.6.Final, 1.9.7.Final, 2.1.3.Final, 2.2.0.Alpha3
    • Component/s: oracle-connector
    • Labels: None

      Bug report

      What Debezium connector do you use and what version?

      • 1.9.6.Final
      • 1.9.7.Final
      • 2.1.3.Final
      • 2.2.0.Alpha3

      What is the connector configuration?

      • 2.1.3.Final & 2.2.0.Alpha3
        {
          "connector.class": "io.debezium.connector.oracle.OracleConnector",
          "tasks.max": "1",
          "snapshot.mode": "schema_only",
          "schema.include.list": "TESTUSER",
          "table.include.list": "TESTUSER\\.(DBZ_TEST_TABLE)",
          "database.server.name": "ORA",
          "schema.history.internal.kafka.topic": "schema-changes.dbz_oracle",
          "schema.history.internal.kafka.bootstrap.servers": "kafka1:9092",
          "database.history.skip.unparseable.ddl": "true",
          "topic.prefix": "ORCLPDB1",
          "database.pdb.name": "ORCLPDB1",
          "database.hostname": "oracle",
          "database.port": "1521",
          "database.user": "c##dbzuser",
          "database.password": "dbz",
          "database.connection.adapter": "logminer",
          "heartbeat.interval.ms": "3000",
          "log.mining.strategy": "online_catalog",
          "log.mining.scn.gap.detection.gap.size.min": "500000",
          "log.mining.view.fetch.size": "5000",
          "log.mining.batch.size.min": "500",
          "log.mining.batch.size.default": "10000",
          "log.mining.scn.gap.detection.time.interval.max.ms": "10000",
          "time.precision.mode": "connect",
          "tombstones.on.delete": "false",
          "decimal.handling.mode": "double",
          "lob.enabled": "true",
          "max.queue.size": "5000",
          "poll.interval.ms": "500",
          "producer.override.linger.ms": "5",
          "producer.override.batch.size": "163840"
        }
        
      • 1.9.6.Final & 1.9.7.Final
        {
          "connector.class": "io.debezium.connector.oracle.OracleConnector",
          "tasks.max": "1",
          "snapshot.mode": "schema_only",
          "schema.include.list": "TESTUSER",
          "table.include.list": "TESTUSER\\.(DBZ_TEST_TABLE)",
          "database.server.name": "ORA",
          "database.history.kafka.topic": "schema-changes.dbz_oracle",
          "database.history.kafka.bootstrap.servers": "kafka1:9092",
          "database.history.skip.unparseable.ddl": "true",
          "database.dbname": "ORCLCDB ",
          "database.pdb.name": "ORCLPDB1",
          "database.hostname": "oracle",
          "database.port": "1521",
          "database.user": "c##dbzuser",
          "database.password": "dbz",
          "database.connection.adapter": "logminer",
          "heartbeat.interval.ms": "3000",
          "log.mining.strategy": "online_catalog",
          "log.mining.scn.gap.detection.gap.size.min": "500000",
          "log.mining.view.fetch.size": "5000",
          "log.mining.batch.size.min": "500",
          "log.mining.batch.size.default": "10000",
          "log.mining.scn.gap.detection.time.interval.max.ms": "10000",
          "time.precision.mode": "connect",
          "tombstones.on.delete": "false",
          "decimal.handling.mode": "double",
          "lob.enabled": "true",
          "max.queue.size": "5000",
          "poll.interval.ms": "500",
          "producer.override.linger.ms": "5",
          "producer.override.batch.size": "163840"
        }
        

      What is the captured database version and mode of deployment?

      (E.g. on-premises, with a specific cloud provider, etc.)
      Oracle 12c, on-premises, RAC, non-CDB

      What behaviour do you expect?

      Row lock contention does not occur on the LOG_MINING_FLUSH table.

      If a lock does occur on one connector, it does not propagate to the other connectors.

      What behaviour do you see?

      Occasionally, though very rarely, a row lock occurs on the LOG_MINING_FLUSH table.

      When that happens, every Oracle connector connected to the same Oracle database stalls.
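      The failure mode described above can be modeled in a few lines: because all connectors on the same database update the same single row of LOG_MINING_FLUSH, that row behaves like one shared lock, and a flush transaction that holds it blocks every other connector's flush. This is a minimal sketch (not Debezium code) using a `Semaphore` to stand in for the row lock; the class and method names are invented for illustration:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Minimal model of the contention: the single row of LOG_MINING_FLUSH
// acts like one shared lock for every connector on the same database.
public class FlushRowContention {
    // one shared "row" for all connectors (hypothetical stand-in)
    static final Semaphore flushRow = new Semaphore(1);

    // returns true if the flush could lock the row within the timeout
    static boolean tryFlush(long timeoutMs) throws InterruptedException {
        if (!flushRow.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS)) {
            return false; // blocked by another connector's open flush transaction
        }
        try {
            // here the real connector would run:
            //   UPDATE LOG_MINING_FLUSH SET LAST_SCN = ...
            // and commit, releasing the row lock
            return true;
        } finally {
            flushRow.release();
        }
    }

    public static void main(String[] args) throws Exception {
        // connector A holds the row, e.g. a hung flush transaction
        flushRow.acquire();
        System.out.println("connector B flushed: " + tryFlush(100)); // false
        flushRow.release();
        System.out.println("connector B flushed: " + tryFlush(100)); // true
    }
}
```

      In the real system the blocked connectors do not even time out; they simply wait on the row lock, which is why all of them appear to stop at once.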

      [2023-03-27 19:08:57,895] TRACE [dbz_oracle|task-0] executing 'UPDATE LOG_MINING_FLUSH SET LAST_SCN = 1990584' (io.debezium.jdbc.JdbcConnection:413)
      [2023-03-27 19:08:58,227] DEBUG [dbz_oracle|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:269)
      [2023-03-27 19:08:58,229] DEBUG [dbz_oracle|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:251)
      [2023-03-27 19:08:58,229] DEBUG [dbz_oracle|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:261)
      [2023-03-27 19:08:58,734] DEBUG [dbz_oracle|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:269)
      [2023-03-27 19:08:58,734] DEBUG [dbz_oracle|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:251)
      ...
      

      Do you see the same behaviour using the latest released Debezium version?

      (Ideally, also verify with latest Alpha/Beta/CR version)

      • 2.2.0.Alpha3 has the same issue
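      For anyone hitting this before upgrading: the fix version above (2.3.0.Alpha1) is understood to make the flush table name configurable, so each connector can flush to its own table instead of sharing the single contended row. A hedged sketch of such a per-connector setting (the property name `log.mining.flush.table.name` is taken from newer connector documentation; verify it against your version):

```json
{
  "log.mining.flush.table.name": "LOG_MINING_FLUSH_DBZ_TEST"
}
```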

      Do you have the connector logs, ideally from start till finish?

      Please see the attached logs.

              Assignee: Chris Cranford (ccranfor@redhat.com)
              Reporter: Hyojin Hwang (Inactive)
              Votes: 0
              Watchers: 6
