Debezium / DBZ-8954

Oracle-collector cannot pause as expected


    • Type: Bug
    • Resolution: Not a Bug
    • Priority: Major
    • Affects Version/s: 2.7.5.Final, 3.1.0.Final
    • Component/s: oracle-connector

      In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.

      Bug report

      For bug reports, provide this information, please:

      What Debezium connector do you use and what version?

      2.7.0.Final and 3.1.0.Final

      What is the connector configuration?

      {
        "snapshot.locking.mode": "none",
        "connector.class": "io.debezium.connector.oracle.OracleConnector",
        "max.queue.size": "1024",
        "tasks.max": "1",
        "database.history.kafka.topic": "relation_cdc_server_1914869802812375042_history.DACOO",
        "transforms": "Reroute",
        "log.mining.transaction.retention.ms": "3600000",
        "schema.include.list": "DACOO",
        "log.mining.strategy": "online_catalog",
        "schema.history.internal.store.only.captured.databases.ddl": "true",
        "topic.prefix": "relation_cdc_server_1914869802812375042",
        "transforms.Reroute.topic.replacement": "relation_cdc_server_1914869802812375042_1914857663359942658_all",
        "decimal.handling.mode": "string",
        "schema.history.internal.kafka.topic": "relation_cdc_server_1914869802812375042_history.DACOO",
        "archive.log.hours": "24",
        "snapshot.include.collection.list": "16686234ae264619b11ef9d61695bfce",
        "database.dbname": "ORCL",
        "transforms.Reroute.type": "io.debezium.transforms.ByLogicalTableRouter",
        "database.user": "**",
        "max.queue.size.in.bytes": "10485760",
        "database.history.kafka.bootstrap.servers": "10.106.253.24:9092",
        "database.url": "jdbc:oracle:thin:@${hostname}:${port}:${dbname}",
        "database.server.name": "relation_cdc_server_1914869802812375042",
        "heartbeat.interval.ms": "300000",
        "schema.history.internal.kafka.bootstrap.servers": "10.106.253.24:9092",
        "event.processing.failure.handling.mode": "warn",
        "transforms.Reroute.topic.regex": "(.*)",
        "schema.history.internal.skip.unparseable.ddl": "true",
        "database.port": "1521",
        "column.exclude.list": "",
        "errors.max.retries": "1",
        "log.mining.query.filter.mode": "in",
        "database.connectionTimeZone": "Asia/Shanghai",
        "database.hostname": "10.106.251.194",
        "database.password": "**",
        "name": "relation_cdc_server_1914869802812375042_1914857663359942658",
        "max.batch.size": "512",
        "table.include.list": "DACOO.local_test_1112_01",
        "snapshot.mode": "initial",
        "snowflakeId": "1914870638015741954"
      }
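
      For reference, a configuration like this is normally registered by POSTing it to the Kafka Connect REST API. The sketch below is illustrative only: the Connect worker address (localhost:8083) is a placeholder, not taken from this report, and the config is trimmed to a subset of the fields listed above.

      import java.net.URI;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;

      public class RegisterConnector {
          public static void main(String[] args) throws Exception {
              // Placeholder Connect worker address; replace with the real host:port.
              String connectUrl = "http://localhost:8083/connectors";

              // Subset of the configuration above, wrapped in the
              // {"name": ..., "config": {...}} envelope expected by the Connect REST API.
              String body = """
                  {
                    "name": "relation_cdc_server_1914869802812375042_1914857663359942658",
                    "config": {
                      "connector.class": "io.debezium.connector.oracle.OracleConnector",
                      "tasks.max": "1",
                      "database.hostname": "10.106.251.194",
                      "database.port": "1521",
                      "database.dbname": "ORCL",
                      "database.user": "**",
                      "database.password": "**",
                      "topic.prefix": "relation_cdc_server_1914869802812375042",
                      "table.include.list": "DACOO.local_test_1112_01",
                      "snapshot.mode": "initial",
                      "max.queue.size": "1024",
                      "max.batch.size": "512"
                    }
                  }
                  """;

              HttpRequest request = HttpRequest.newBuilder(URI.create(connectUrl))
                      .header("Content-Type", "application/json")
                      .POST(HttpRequest.BodyPublishers.ofString(body))
                      .build();

              HttpResponse<String> response =
                      HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
              System.out.println(response.statusCode() + " " + response.body());
          }
      }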

      What is the captured database version and mode of deployment?

      Oracle 11 and Oracle 19

      What behavior do you expect?

      The Oracle connector can be paused as expected, and the data written to the database during the pause period can be captured normally after the connector resumes, without losing data.

      What behavior do you see?

      1. Create a collection task and confirm that it captures data normally, then pause the task.
      2. The connector receives the pause request but does not actually stop the task.
      3. Querying the connector status shows the task as paused.
      4. During the pause, data changes in the database continue to be cached in the connector's in-memory queue, but they are not pushed to Kafka.
      5. Once the in-memory queue is full, the producing thread blocks and waits for space. If the connector is restarted at this point, the waiting thread is interrupted and the data that has not yet been written to the queue is not captured (see the sketch after this list).
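
      To make step 5 concrete, the following is a simplified, self-contained model of the described behaviour, not Debezium's actual ChangeEventQueue implementation: a bounded queue keeps being filled by a producer thread while nothing drains it, and the event the producer is trying to enqueue is lost when the blocked thread is interrupted.

      import java.util.concurrent.ArrayBlockingQueue;
      import java.util.concurrent.BlockingQueue;

      public class PausedQueueDemo {
          public static void main(String[] args) throws Exception {
              // Simplified stand-in for the connector's bounded in-memory event queue
              // (max.queue.size is 1024 in the real configuration; 4 keeps the demo short).
              BlockingQueue<String> queue = new ArrayBlockingQueue<>(4);

              // Stand-in for the streaming/mining thread: it keeps producing change
              // events even though nothing is draining the queue ("paused" consumer).
              Thread producer = new Thread(() -> {
                  int i = 0;
                  try {
                      while (true) {
                          String event = "change-event-" + i++;
                          queue.put(event); // blocks once the queue is full
                          System.out.println("enqueued " + event);
                      }
                  } catch (InterruptedException e) {
                      // If the task is restarted while put() is blocked, the event that
                      // was about to be enqueued is dropped unless it is re-read from
                      // the redo/archive logs after the restart.
                      System.out.println("producer interrupted while queue was full; in-flight event lost");
                  }
              });

              producer.start();
              Thread.sleep(500);    // queue fills up, producer blocks on put()
              producer.interrupt(); // simulate the restart interrupting the waiting thread
              producer.join();
              System.out.println("events left in queue: " + queue.size());
          }
      }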

      Do you see the same behaviour using the latest released Debezium version?

      Yes.

      Do you have the connector logs, ideally from start till finish?

      See the attachment list.

      How to reproduce the issue using our tutorial deployment?

      1. Create the connector and confirm that it captures data normally.
      2. Pause the connector for a period of time and write data to the database during that period; the amount written needs to exceed the maximum capacity of the in-memory queue (max.queue.size).
      3. Resume the connector: part of the data written in step 2 is not captured and sent to Kafka (see the REST API sketch after this list).
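
      Assuming the tutorial deployment exposes the standard Kafka Connect REST API on localhost:8083 (the worker address is an assumption; the connector name is taken from the configuration above), the pause and resume calls from steps 2 and 3 can be scripted roughly as follows.

      import java.net.URI;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;

      public class PauseResumeConnector {
          private static final HttpClient CLIENT = HttpClient.newHttpClient();
          // Placeholder Connect worker address; adjust to the actual deployment.
          private static final String BASE = "http://localhost:8083/connectors/"
                  + "relation_cdc_server_1914869802812375042_1914857663359942658";

          public static void main(String[] args) throws Exception {
              send("PUT", BASE + "/pause");   // step 2: pause the connector
              // ... write rows to DACOO.local_test_1112_01 here, enough to exceed max.queue.size ...
              send("PUT", BASE + "/resume");  // step 3: resume and compare what reaches Kafka
              send("GET", BASE + "/status");  // confirm the reported state
          }

          private static void send(String method, String url) throws Exception {
              HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                      .method(method, HttpRequest.BodyPublishers.noBody())
                      .build();
              HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
              System.out.println(method + " " + url + " -> " + response.statusCode() + " " + response.body());
          }
      }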


              Assignee: Unassigned
              Reporter: qiumin xiang
              Votes: 0
              Watchers: 3