Debezium / DBZ-8308

History Topic Is Getting Filled Too Soon, Consuming a Lot of Storage


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Labels: under-triaging
    • Affects Version: 3.0.0.Final
    • Component: db2-connector

      In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.

      Bug report

      For bug reports, please provide this information:

      What Debezium connector do you use and what version?

      Debezium 3.0, i.e. 3.0.0.Final

      What is the connector configuration?

      curl -X POST \
        -H "Content-Type: application/json" \
        --data '{
          "name": "<name of connector>",
          "config": {
            "connector.class": "io.debezium.connector.db2.Db2Connector",
            "database.hostname": "<DB URL>",
            "database.port": "<port>",
            "database.user": "<username>",
            "database.password": "<Password>",
            "database.dbname": "VODSP0:sslConnection=true;sslCertLocation=/kafka/connect/debezium-connector-db2/db2_ssl_keydb.arm;",
            "topic.prefix": "<topic_prefix>",
            "table.include.list": "<schema>.<tablename>",
            "schema.history.internal.kafka.bootstrap.servers": "<Broker>",
            "schema.history.internal.kafka.topic": "test-history-topic",
            "schema.history.internal.producer.security.protocol": "SASL_SSL",
            "schema.history.internal.producer.ssl.protocol": "TLSv1.2",
            "schema.history.internal.producer.ssl.enabled.protocols": "TLSv1.2",
            "schema.history.internal.producer.ssl.endpoint.identification.algorithm": "HTTPS",
            "schema.history.internal.consumer.security.protocol": "SASL_SSL",
            "schema.history.internal.consumer.ssl.protocol": "TLSv1.2",
            "schema.history.internal.consumer.ssl.enabled.protocols": "TLSv1.2",
            "schema.history.internal.consumer.ssl.endpoint.identification.algorithm": "HTTPS",
            "schema.history.internal.producer.sasl.mechanism": "PLAIN",
            "schema.history.internal.consumer.sasl.mechanism": "PLAIN",
            "schema.history.internal.kafka.recovery.poll.interval.ms": 1000,
            "tombstones.on.delete": "FALSE",
            "snapshot.mode": "schema_only",
            "snapshot.isolation.mode": "read_committed",
            "key.converter": "io.confluent.connect.avro.AvroConverter",
            "key.converter.schema.registry.url": "http://schema-rreg-service:8087",
            "value.converter": "io.confluent.connect.avro.AvroConverter",
            "value.converter.schema.registry.url": "http://schema-reg-service:8087",
            "schema.history.internal.producer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"token\" password=\"<Password>\";",
            "schema.history.internal.consumer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"token\" password=\"<Password>\";",
            "time.precision.mode": "connect"
          }
        }' http://kafka-connect-srvc:8081/connectors
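      To quantify how fast the topic grows, the record count of the schema history topic can be checked with the standard Kafka CLI tools. This is only a sketch: the broker placeholder and topic name are taken from the configuration above, the script path assumes a stock Kafka installation, and `client-sasl.properties` is an assumed file holding the same SASL_SSL client settings the connector uses.

      ```shell
      # Sum of the end offsets across all partitions = total number of records
      # written to the schema history topic. Run this repeatedly: if the total
      # keeps climbing while no DDL is happening, the topic is growing abnormally.
      # <Broker> matches the connector config above; --command-config points at
      # a client properties file with matching SASL_SSL settings (assumed name).
      bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
        --broker-list "<Broker>" \
        --topic test-history-topic \
        --time -1 \
        --command-config client-sasl.properties
      ```

      Comparing two such snapshots taken a few minutes apart gives the write rate into the history topic.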

      What is the captured database version and mode of deployment?

      (E.g. on-premises, with a specific cloud provider, etc.)

      DB2 13

      What behavior do you expect?

      The connector is fetching the data, but at the same time the history topic is also receiving records at a very fast pace, which is consuming storage. We expect only one record per table in the history topic.

      What behavior do you see?

      The history topic is consuming a lot of space.

      Do you see the same behaviour using the latest released Debezium version?

      Yes

      Do you have the connector logs, ideally from start till finish?

      No, I can only see the history topic filling up with data.

      How to reproduce the issue using our tutorial deployment?

      Create a connector and watch the records accumulating in the schema history topic.
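      One way to observe the reported behavior in the tutorial deployment is to tail the history topic while the connector runs; after the initial schema snapshot it should stay almost silent unless DDL is executed. Broker placeholder and topic name are taken from the configuration above; adjust the script path to your Kafka installation.

      ```shell
      # Tail the schema history topic with record timestamps. In the expected
      # case only one schema-change record per captured table appears after the
      # snapshot; a continuous stream of records reproduces this issue.
      bin/kafka-console-consumer.sh \
        --bootstrap-server "<Broker>" \
        --topic test-history-topic \
        --from-beginning \
        --property print.timestamp=true
      ```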

              Assignee: Jiri Pechanec (jpechane)
              Reporter: Harina Pujar (harinapk11@gmail.com) (Inactive)
              Votes: 0
              Watchers: 2