Debezium / DBZ-9690

Alternate log transition occurs without the log mining session for the cycle finishing, causing a "sequence not found" issue


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Labels: under-triaging
    • Affects Version: 3.1.1.Final
    • Component: oracle-connector
    • Fix Version: None

      It stopped the application from flowing normally; a restart was required to get it working again. The cause appears to be that the DB-side logs show a LogMiner transition for a session WITHOUT completion of the sequence cycle, giving rise to the same "sequence not found" issue as in DBZ-9445.

      In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.

      Bug report

      We hit the same issue as in DBZ-9445, where a required sequence number is not found. This time too, an alternate log was found and LogMiner transitioned to it without waiting for the sequence's mining session to complete, which appears to be the root of the problem.

      Connector log (GMT timezone):

      "@timestamp":"2025-11-13T07:31:53.184Z","ecs.version":"1.2.0","log.level":"ERROR","message":"Mining session stopped due to error.","process.thread.name":"debezium-oracleconnector-server1-scan4-location-change-event-source-coordinator","log.logger":"io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource","dbz.connectorContext":"streaming","dbz.connectorName":"server1-scan4-location","dbz.connectorType":"Oracle","dbz.databaseName":"MYDB","dbz.taskId":"0","error.type":"java.sql.SQLException","error.message":"ORA-00310: archived log contains sequence 57311; sequence 57309 required\nORA-00334: archived log: '+DATAC4/MYSERVER_TTC/ONLINELOG/group_3.306.1137059787'\n\nhttps://docs.oracle.com/error-help/db/ora-00310/","error.stack_trace":"java.sql.SQLException: ORA-00310: archived log contains sequence 57311; sequence 57309 required\nORA-00334: archived log: '+DATAC4/MYSERVER_TTC/ONLINELOG/group_3.306.1137059787'\n\nhttps://docs.oracle.com/error-help/db/ora-00310/\n\tat oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:709)\n\tat oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:609)\n\tat oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1347)\n\tat oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:1100)\n\tat oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:408)\n\tat

      In the DB logs we normally see an "End mining logfile" entry for each sequence before LogMiner moves on. This time, just before the "Alternate logfile found" message, there is no "End mining" entry for thread 2 sequence 57309.
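      As a quick way to spot this pattern, a small scan over the alert-log excerpt below can flag any "Alternate logfile found" transition whose thread/sequence was begun but never ended (a minimal sketch; the class name and regexes are ours, not Debezium's):

      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.util.HashSet;
      import java.util.List;
      import java.util.Set;
      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      public class AlertLogScan {
          private static final Pattern BEGIN = Pattern.compile("Begin mining logfile .*thread (\\d+) sequence (\\d+)");
          private static final Pattern END   = Pattern.compile("End\\s+mining logfile .*thread (\\d+) sequence (\\d+)");
          private static final Pattern ALT   = Pattern.compile("Alternate logfile found.*thread (\\d+) sequence (\\d+)");

          public static void main(String[] args) throws Exception {
              List<String> lines = Files.readAllLines(Path.of(args[0])); // path to an alert-log excerpt
              Set<String> open = new HashSet<>();                        // thread/sequence begun but not yet ended
              for (String line : lines) {
                  Matcher b = BEGIN.matcher(line);
                  Matcher e = END.matcher(line);
                  Matcher a = ALT.matcher(line);
                  if (b.find()) {
                      open.add(b.group(1) + "/" + b.group(2));
                  }
                  else if (e.find()) {
                      open.remove(e.group(1) + "/" + e.group(2));
                  }
                  else if (a.find() && open.contains(a.group(1) + "/" + a.group(2))) {
                      // transition happened while the mining session for this sequence was still open
                      System.out.println("Transition without End mining for thread/sequence " + a.group(1) + "/" + a.group(2));
                  }
              }
          }
      }

      Run against the excerpt below, this flags thread 2 sequence 57309: it is begun at 01:31:49 and the "Alternate logfile found" transition at 01:31:53 arrives with no "End mining" in between.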

      DB logs (in Central time zone)

      Note: these timestamps are in Central time, while the Java connector logs above are in GMT.


      LOGMINER: Region Queue size: 256

      LOGMINER: Redo Queue size: 4096

      LOGMINER: Memory Size = 15M, HWM 14M, LWM 13M, 90%

      LOGMINER: Memory Release Limit: 0M

      LOGMINER: Max Decomp Region Memory: 1M

      LOGMINER: Transaction Queue Size: 1024

      2025-11-13T01:31:25.614612-06:00

      LOGMINER: Begin mining logfile for session -2145046526 thread 1 sequence 86686, +RECOC4/Server1_TTC/ARCHIVELOG/2025_11_13/thread_1_seq_86686.704.1217035873

      2025-11-13T01:31:25.632789-06:00

      LOGMINER: Begin mining logfile for session -2145046526 thread 2 sequence 57308, +RECOC4/Server1_TTC/ARCHIVELOG/2025_11_13/thread_2_seq_57308.3758.1217035877

      2025-11-13T01:31:33.964657-06:00

      Thread 2 advanced to log sequence 57310 (LGWR switch),  current SCN: 15086845023

        Current log# 4 seq# 57310 mem# 0: +DATAC4/Server1_TTC/ONLINELOG/group_4.307.1137059787

        Current log# 4 seq# 57310 mem# 1: +RECOC4/Server1_TTC/ONLINELOG/group_4.308.1137059789

      2025-11-13T01:31:35.566786-06:00

      ARC8 (PID:50558): Archived Log entry 182712 added for B-1137059200.T-2.S-57309 LOS:0x00000003833e71ad NXS:0x00000003833efc5e NAB:2006064 ID 0x40742365 LAD:1

      2025-11-13T01:31:36.637039-06:00

      LOGMINER: End   mining logfile for session -2145046526 thread 1 sequence 86686, +RECOC4/Server1_TTC/ARCHIVELOG/2025_11_13/thread_1_seq_86686.704.1217035873

      2025-11-13T01:31:36.648282-06:00

      LOGMINER: Begin mining logfile for session -2145046526 thread 1 sequence 86687, +DATAC4/Server1_TTC/ONLINELOG/group_1.302.1137059201

      2025-11-13T01:31:49.647715-06:00

      LOGMINER: End   mining logfile for session -2145046526 thread 2 sequence 57308, +RECOC4/Server1_TTC/ARCHIVELOG/2025_11_13/thread_2_seq_57308.3758.1217035877

      2025-11-13T01:31:49.660137-06:00

      LOGMINER: Begin mining logfile for session -2145046526 thread 2 sequence 57309, +DATAC4/Server1_TTC/ONLINELOG/group_3.306.1137059787

      2025-11-13T01:31:50.167662-06:00

      Thread 2 advanced to log sequence 57311 (LGWR switch),  current SCN: 15086953664

        Current log# 3 seq# 57311 mem# 0: +DATAC4/Server1_TTC/ONLINELOG/group_3.306.1137059787

        Current log# 3 seq# 57311 mem# 1: +RECOC4/Server1_TTC/ONLINELOG/group_3.307.1137059787

      2025-11-13T01:31:51.744870-06:00

      ARC9 (PID:50560): Archived Log entry 182716 added for B-1137059200.T-2.S-57310 LOS:0x00000003833efc5e NXS:0x000000038340a4b7 NAB:1952721 ID 0x40742365 LAD:1

      2025-11-13T01:31:53.158163-06:00

      LOGMINER: Alternate logfile found.  Transition to mining archived logfile for session -2145046526 thread 2 sequence 57309, +DATAC4/Server1_TTC/ONLINELOG/group_3.306.1137059787

      2025-11-13T01:32:08.207844-06:00

      Thread 2 advanced to log sequence 57312 (LGWR switch),  current SCN: 15087003880

        Current log# 4 seq# 57312 mem# 0: +DATAC4/Server1_TTC/ONLINELOG/group_4.307.1137059787

        Current log# 4 seq# 57312 mem# 1: +RECOC4/Server1_TTC/ONLINELOG/group_4.308.1137059789

      2025-11-13T01:32:09.769468-06:00

      ARC0 (PID:50489): Archived Log entry 182718 added for B-1137059200.T-2.S-57311 LOS:0x000000038340a4b7 NXS:0x00000003834168ad NAB:1833564 ID 0x40742365 LAD:1

      2025-11-13T01:32:34.784231-06:00

      Thread 2 advanced to log sequence 57313 (LGWR switch),  current SCN: 15087111420

        Current log# 3 seq# 57313 mem# 0: +DATAC4/Server1_TTC/ONLINELOG/group_3.306.1137059787

        Current log# 3 seq# 57313 mem# 1: +RECOC4/Server1_TTC/ONLINELOG/group_3.307.1137059787

      2025-11-13T01:32:36.364367-06:00

      ARC1 (PID:50507): Archived Log entry 182720 added for B-1137059200.T-2.S-57312 LOS:0x00000003834168ad NXS:0x0000000383430cfc NAB:2008537 ID 0x40742365 LAD:1

      2025-11-13T01:33:12.845009-06:00
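      Correlating the two timezones (alert log in Central time, connector log in GMT) with java.time confirms the "Alternate logfile found" entry and the connector's "Mining session stopped" error describe the same instant. A minimal sketch, using timestamps copied from the logs above:

      import java.time.OffsetDateTime;
      import java.time.ZoneOffset;

      public class TimeCheck {
          public static void main(String[] args) {
              // "Alternate logfile found" entry from the DB alert log (Central time)
              OffsetDateTime alertLog = OffsetDateTime.parse("2025-11-13T01:31:53.158163-06:00");
              // Convert to UTC/GMT to compare with the connector's error timestamp
              System.out.println(alertLog.withOffsetSameInstant(ZoneOffset.UTC));
              // Prints 2025-11-13T07:31:53.158163Z, i.e. the same instant as the
              // connector's "Mining session stopped" error at 07:31:53.184Z
          }
      }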

      For other DB settings, please refer to DBZ-9445; the settings are the same.

      What Debezium connector do you use and what version?

      implementation 'io.debezium:debezium-api:3.1.1.Final'

      implementation 'io.debezium:debezium-embedded:3.1.1.Final'

      implementation 'org.apache.kafka:connect-runtime:3.8.0'

      implementation 'io.debezium:debezium-connector-oracle:3.1.1.Final' 

      implementation "com.oracle.database.jdbc:ojdbc11:23.5.0.24.07"

      implementation 'io.debezium:debezium-storage-kafka:3.1.1.Final'

      What is the connector configuration?

      decimalHandlingMode: string

      name: <<NAME>>

      class: io.debezium.connector.oracle.OracleConnector

      offsetTopic: <<OFFSETTOPICNAME>>

      offsetTopicNumberOfPartitions: 1

      offsetTopicReplicationFactor: 1

      heartbeatIntervalInMsecs: 100000

      heartbeatQuery: SELECT * FROM MYTABLE WHERE ROWNUM <= 1

      database.query.timeout.ms: 600000

      .with("name", connectorName)

      .with("connector.class", connectorClass)

      .with("offset.storage", "org.apache.kafka.connect.storage.KafkaOffsetBackingStore")

      .with("offset.storage.topic", debeziumConnectorOffsetTopic)

      .with(DistributedConfig.OFFSET_STORAGE_PARTITIONS_CONFIG, debeziumConnectorOffsetTopicPartitions)

      .with(DistributedConfig.OFFSET_STORAGE_REPLICATION_FACTOR_CONFIG, debeziumConnectorOffsetTopicReplicationFactor)

      .with("offset.flush.interval.ms", "60000")

      .with("database.hostname", wmsDbHost)

      .with("database.port", port)

      .with("database.user", username)

      .with("database.password", password)

      .with("database.dbname", dbName)

      .with("schema.include.list", schemaList)

      .with("table.include.list", tableList)

      .with("include.schema.changes", "false")

      .with("topic.prefix", topicPrefix)

      .with("database.server.name", dbserverName)

      .with("snapshot.mode", snapshotMode) // it is set as initial

      .with("converter.schemas.enable", "false")

      .with("decimal.handling.mode", decimalHandlingMode)

      .with("heartbeat.interval.ms", heartbeatInterval)

      .with("heartbeat.action.query", heartbeatActionQuery)

      .with("database.query.timeout", 600000) 

      .with("schema.history.internal.kafka.topic", schemaTopic)

      .with("schema.history.internal.kafka.bootstrap.servers", schemaBootstrapServers)

      .with("schema.history.internal.consumer.security.protocol", schemaSecurityProtocol)

      .with("schema.history.internal.consumer.ssl.keystore.type", schemaSslKeyStoreType)

      .with("schema.history.internal.consumer.ssl.keystore.location", schemaSslKeystoreLocation)

      .with("schema.history.internal.consumer.ssl.keystore.password", schemaSslKeystorePassword)

      .with("schema.history.internal.consumer.ssl.truststore.type", schemaSslTrustStoreType)

      .with("schema.history.internal.consumer.ssl.truststore.location", schemaSslTruststoreLocation)

      .with("schema.history.internal.consumer.ssl.truststore.password", schemaSslTruststorePassword)

      .with("schema.history.internal.consumer.ssl.endpoint.identification.algorithm", sslEndpointAlgorithm)

      .with("schema.history.internal.producer.security.protocol", schemaSecurityProtocol)

      .with("schema.history.internal.producer.ssl.keystore.type", schemaSslKeyStoreType)

      .with("schema.history.internal.producer.ssl.keystore.location", schemaSslKeystoreLocation)

      .with("schema.history.internal.producer.ssl.keystore.password", schemaSslKeystorePassword)

      .with("schema.history.internal.producer.ssl.truststore.type", schemaSslTrustStoreType)

      .with("schema.history.internal.producer.ssl.truststore.location", schemaSslTruststoreLocation)

      .with("schema.history.internal.producer.ssl.truststore.password", schemaSslTruststorePassword)

      .with("schema.history.internal.producer.ssl.endpoint.identification.algorithm", sslEndpointAlgorithm)

      .with("bootstrap.servers", schemaBootstrapServers)

      .with("security.protocol", schemaSecurityProtocol)

      .with("ssl.keystore.location", schemaSslKeystoreLocation)

      .with("ssl.keystore.password", schemaSslKeystorePassword)

      .with("ssl.truststore.location", schemaSslTruststoreLocation)

      .with("ssl.truststore.password", schemaSslTruststorePassword)

      .with("ssl.endpoint.identification.algorithm", sslEndpointAlgorithm)

      /**
       * In patching scenarios where one node is down and the other keeps running, we need to give
       * enough time for the redo threads of the downed node to be archived and for the data to sync
       * up with the surviving node. This is based on a suggestion from ccranfor@redhat.com.
       */
      .with(OracleConnectorConfig.LOG_MINING_LOG_QUERY_MAX_RETRIES.name(), logMiningLogQueryMaxRetries)
      .with(OracleConnectorConfig.LOG_MINING_LOG_BACKOFF_INITIAL_DELAY_MS.name(), logMiningBackOffInitialDelayInMs)
      .with(OracleConnectorConfig.LOG_MINING_LOG_BACKOFF_MAX_DELAY_MS.name(), logMiningBackOffMaxDelayInMs)

      .with(OracleConnectorConfig.ARCHIVE_DESTINATION_NAME, archiveDestinationName)
      .with(OracleConnectorConfig.LOG_MINING_QUERY_FILTER_MODE, OracleConnectorConfig.LogMiningQueryFilterMode.IN.name())
      .with(OracleConnectorConfig.LOG_MINING_STRATEGY, OracleConnectorConfig.LogMiningStrategy.ONLINE_CATALOG.name())
      .with(OracleConnectorConfig.LOG_MINING_BATCH_SIZE_DEFAULT, logMiningBatchSizeDefault)
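      For completeness, this Configuration is run through the Debezium embedded engine roughly as follows (a minimal sketch; the run method and consumer lambda stand in for our actual wiring):

      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;

      import io.debezium.config.Configuration;
      import io.debezium.engine.ChangeEvent;
      import io.debezium.engine.DebeziumEngine;
      import io.debezium.engine.format.Json;

      public class EngineRunner {
          // `config` is the Configuration produced by the builder chain above
          static void run(Configuration config) {
              DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine.create(Json.class)
                      .using(config.asProperties())
                      .notifying(record -> {
                          // hand each change event over to the application
                      })
                      .build();
              ExecutorService executor = Executors.newSingleThreadExecutor();
              executor.execute(engine);
          }
      }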

      What is the captured database version and mode of deployment?

      (E.g. on-premises, with a specific cloud provider, etc.)

      The database is Oracle 19c, running in a data center (on-premises).

      The dev database server runs on a 2-node RAC. The archived log files (which LogMiner reads) are on a path shared by both RAC nodes.

      What behavior do you expect?

      We expect the sequence number to be found, as the corresponding archived log has not been physically deleted from the database server. One way to verify this is shown in the sketch below.
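      A hedged sketch of that check, querying V$ARCHIVED_LOG for the sequence the error reports as required (the JDBC URL and credentials passed via args are placeholders):

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;

      public class ArchivedLogCheck {
          public static void main(String[] args) throws Exception {
              // args: jdbcUrl user password (placeholders; point at the mined database)
              try (Connection conn = DriverManager.getConnection(args[0], args[1], args[2]);
                   PreparedStatement ps = conn.prepareStatement(
                           "SELECT name, status FROM V$ARCHIVED_LOG WHERE thread# = ? AND sequence# = ?")) {
                  ps.setInt(1, 2);      // redo thread 2
                  ps.setInt(2, 57309);  // sequence that ORA-00310 reports as required
                  try (ResultSet rs = ps.executeQuery()) {
                      while (rs.next()) {
                          System.out.println(rs.getString("name") + "  status=" + rs.getString("status"));
                      }
                  }
              }
          }
      }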

      What behavior do you see?

      The connector fails with ORA-00310 because the required sequence cannot be found, as described above.

      Do you see the same behaviour using the latest released Debezium version?

      We have not tried any version other than 3.1.1.Final. This has happened only twice in the 2-3 months since we moved to 3.1.1.Final.

      Do you have the connector logs, ideally from start till finish?

      Only INFO-level logs are available.

      How to reproduce the issue using our tutorial deployment?

      N/A

      Feature request or enhancement

      For feature requests or enhancements, provide this information, please:

      Which use case/requirement will be addressed by the proposed feature?

      It will ensure that the continuous flow of data is not disrupted.

      Implementation ideas (optional)

      We first need to identify why this happened; hopefully the DB logs above will be helpful in that regard.

              Assignee: Unassigned
              Reporter: Kenny Kuriakose (kennyk2024)
              Votes: 0
              Watchers: 2