Debezium / DBZ-3276

dbz-oracle-connector loses data on incremental changes when using LogMiner with Oracle 11g


    • Type: Bug
    • Resolution: Obsolete
    • Priority: Major
    • Affects Version: 1.4.2.Final
    • Component: oracle-connector

      Hi, we have been using the MySQL connector in production for more than one year and it works well. Thank you very much!

      Now I am working on a task to capture CDC data from Oracle in real time, and I am testing the LogMiner mode.

      But when I insert massive amounts of data into Oracle for the test, some of the data sent to Kafka through LogMiner is lost.

      Since there is no error, I can only show my environment, settings and config.

      ---------my test env--------------

      env: a standby server

              32 cores, more than 500 GB of memory

      ps: only one node, not a cluster

      ---------------Oracle-------------------

      Oracle 11g EE

      redo log file size: 12 GB

      6 groups, but I did not set up multiplexing

      I did not enable supplemental log data at the database level

      I set supplemental logging on specific tables like this:

      ALTER TABLE CUSER.t_order_detail_log ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS
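
      (For illustration only, a minimal JDBC sketch, not part of the original report, that checks whether ALL-column supplemental logging is actually present for one of the captured tables by querying ALL_LOG_GROUPS. The connection URL and credentials are placeholders, and the logminer user is assumed to have read access to this dictionary view.)

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;

      // Sketch: verify table-level supplemental logging (ALL columns) for a captured table.
      public class CheckSupplementalLogging {
          public static void main(String[] args) throws Exception {
              String url = "jdbc:oracle:thin:@//db-host:1539/ORCL"; // placeholder host/service
              try (Connection conn = DriverManager.getConnection(url, "logminer", "logminer");
                   PreparedStatement ps = conn.prepareStatement(
                       "SELECT log_group_type FROM all_log_groups WHERE owner = ? AND table_name = ?")) {
                  ps.setString(1, "CUSER");
                  ps.setString(2, "T_ORDER_DETAIL_LOG");
                  boolean allColumns = false;
                  try (ResultSet rs = ps.executeQuery()) {
                      while (rs.next()) {
                          // 'ALL COLUMN LOGGING' is the type created by the ALTER TABLE above
                          allColumns |= "ALL COLUMN LOGGING".equals(rs.getString(1));
                      }
                  }
                  System.out.println(allColumns
                      ? "ALL-column supplemental logging is enabled"
                      : "ALL-column supplemental logging is MISSING");
              }
          }
      }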

      ------------some config I set manually-----------------

      kafka: JVM max heap 6g

      kafka-connect: JVM max heap 6g

      ----kafka.properties

      log.retention.bytes=536870912000

      log.retention.hours=168

      ----kafka-connect.properties

      key.converter=io.confluent.connect.avro.AvroConverter
      value.converter=io.confluent.connect.avro.AvroConverter

      producer.buffer.memory=33554432

      producer.batch.size=327680

      producer.compression.type=lz4

      consumer.heartbeat.interval.ms=6000
      consumer.max.poll.records=10000
      consumer.max.partition.fetch.bytes=52428800
      consumer.partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
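
      (Side note, for context rather than from the report: worker-level entries prefixed with "producer." are stripped of the prefix by Kafka Connect and applied to the producers it creates for source connectors, and "consumer." entries likewise apply to sink-side consumers. The sketch below shows the same producer tuning expressed as standalone producer settings; the bootstrap server is a placeholder.)

      import java.util.Properties;
      import org.apache.kafka.clients.producer.KafkaProducer;
      import org.apache.kafka.common.serialization.ByteArraySerializer;

      // Illustration of the producer overrides listed above.
      public class ProducerTuningExample {
          public static void main(String[] args) {
              Properties props = new Properties();
              props.put("bootstrap.servers", "kafka-host:9092");            // placeholder
              props.put("key.serializer", ByteArraySerializer.class.getName());
              props.put("value.serializer", ByteArraySerializer.class.getName());
              props.put("buffer.memory", "33554432");   // 32 MB send buffer
              props.put("batch.size", "327680");        // 320 KB per-partition batches
              props.put("compression.type", "lz4");
              try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
                  // producer.send(...) would go here
              }
          }
      }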

      ----connector JSON

      "connector.class" : "io.debezium.connector.oracle.OracleConnector",
      "tasks.max" : "1",
      "database.server.name" : "**",
      "database.hostname" : "**",
      "database.port" : "1539",
      "database.user" : "logminer",
      "database.password" : "logminer",
      "database.dbname" : "**",
      "database.tablename.case.insensitive": "true",
      "database.oracle.version": "11",
      "database.history.kafka.bootstrap.servers" : "****",
      "database.history.kafka.topic": "schema-changes.cuser",
      "database.connection.adapter": "logminer",
      "schema.include.list":"FOPTRADE,CUSER",
      "table.include.list":"CUSER.T_CLOSING_ACCOUNT_POS,CUSER.T_ORDER....."

      ----- my Data insertion simulation---

      ------base insert count per table---------
      table.CUSER.T_CLOSING_ACCOUNT_POS=100000 
      table.CUSER.T_ORDER_DETAIL_LOG=17000
      table.CUSER.T_CLOSING_ACCOUNT_GREEKS=10000
      table.CUSER.T_CLOSING_ACCOUNT_SELL_OP=10000 
      table.CUSER.B_CLOSING_ACCOUNT=6500 
      table.CUSER.T_GUARANTEE_HIST=6500 
      table.CUSER.T_GUARANTEE_LOG=4000 

        1. data insert run time, in minutes
          generator.runtime=1
        2. data insert interval, in seconds
          generator.interval=20
        3. data insert threads
          generator.thread_num=3
          ## base_row_factor supports decimal values
          generator.base_row_factor=5  # each table's base insert count is multiplied by this factor
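
      (A rough sketch, assumptions only, of the kind of load generator these parameters describe: several threads batch-inserting rows until the configured runtime or row target is reached. The real generator is not part of this report; the connection URL, password, column names and key scheme are hypothetical.)

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;

      // Sketch of a multi-threaded JDBC insert generator.
      public class InsertLoadGenerator {
          static final String URL = "jdbc:oracle:thin:@//db-host:1539/ORCL"; // placeholder
          static final int THREADS = 3;                 // generator.thread_num
          static final long RUNTIME_MS = 60_000L;       // generator.runtime = 1 minute
          static final long TARGET_ROWS = 100_000L * 5; // base count * base_row_factor

          public static void main(String[] args) throws Exception {
              ExecutorService pool = Executors.newFixedThreadPool(THREADS);
              long deadline = System.currentTimeMillis() + RUNTIME_MS;
              for (int t = 0; t < THREADS; t++) {
                  pool.submit(() -> insertRows(deadline));
              }
              pool.shutdown();
              pool.awaitTermination(RUNTIME_MS + 60_000L, TimeUnit.MILLISECONDS);
          }

          static void insertRows(long deadline) {
              long inserted = 0;
              try (Connection conn = DriverManager.getConnection(URL, "CUSER", "***");
                   PreparedStatement ps = conn.prepareStatement(
                       "INSERT INTO CUSER.T_ORDER_DETAIL_LOG (ID, PAYLOAD) VALUES (?, ?)")) {
                  conn.setAutoCommit(false);
                  while (System.currentTimeMillis() < deadline && inserted < TARGET_ROWS) {
                      for (int i = 0; i < 1_000; i++) {  // commit in batches of 1000 rows
                          ps.setLong(1, inserted);       // a real generator needs unique keys per thread
                          ps.setString(2, "row-" + inserted);
                          ps.addBatch();
                          inserted++;
                      }
                      ps.executeBatch();
                      conn.commit();
                  }
              } catch (Exception e) {
                  e.printStackTrace();
              }
          }
      }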

      -------------------------------------------------------

      In my test, the data generator inserts 3,830,000 rows into Oracle in 1 minute.

      I have tested runs of 1, 3, 4 and 7 minutes,

      so roughly 3,830,000 up to about 20,000,000 rows are inserted in a short time.

      I have run the test more than 10 times;

      in 80% or more of the runs data is lost.

      The number of lost rows is not large, for example:

      Oracle        Kafka
      18865000      18863767
      3430000       3429970
      3430000       3429978
      3430000       3429892
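
      (For context, one way such Kafka-side counts can be obtained, a hedged sketch rather than the reporter's actual method: sum end minus beginning offsets over all partitions of a change topic. Bootstrap servers and the topic name, which follows Debezium's <server.name>.<schema>.<table> pattern with a placeholder server name, are assumptions; the count only matches real records if the topic is not compacted and no segments were deleted.)

      import java.util.List;
      import java.util.Map;
      import java.util.Properties;
      import java.util.stream.Collectors;
      import org.apache.kafka.clients.consumer.KafkaConsumer;
      import org.apache.kafka.common.TopicPartition;
      import org.apache.kafka.common.serialization.ByteArrayDeserializer;

      // Sketch: count records in a topic from partition offsets.
      public class TopicRecordCount {
          public static void main(String[] args) {
              Properties props = new Properties();
              props.put("bootstrap.servers", "kafka-host:9092");   // placeholder
              props.put("key.deserializer", ByteArrayDeserializer.class.getName());
              props.put("value.deserializer", ByteArrayDeserializer.class.getName());

              String topic = "myserver.CUSER.T_ORDER_DETAIL_LOG";  // assumed topic name
              try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                  List<TopicPartition> partitions = consumer.partitionsFor(topic).stream()
                      .map(p -> new TopicPartition(topic, p.partition()))
                      .collect(Collectors.toList());
                  Map<TopicPartition, Long> begin = consumer.beginningOffsets(partitions);
                  Map<TopicPartition, Long> end = consumer.endOffsets(partitions);
                  long total = partitions.stream()
                      .mapToLong(tp -> end.get(tp) - begin.get(tp))
                      .sum();
                  System.out.println("records in " + topic + ": " + total);
              }
          }
      }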

      When I debug the dbz-oracle-connector with a small amount of data, it works well without loss;

      with a massive amount of data it is hard to trace.

      ----------------

      Is there any parameter or config I need to adjust?

      Can anyone give me some guidance on where to investigate?

              Assignee: Unassigned
              Reporter: cui yi (cyhome110@163.com) (Inactive)
              Votes: 0
              Watchers: 3