Type: Bug
Resolution: Unresolved
Priority: Major
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
For bug reports, please provide the following information:
What Debezium connector do you use and what version?
We are using the Debezium DB2 connector, version 2.1.2.Final.
What is the connector configuration?
There are 5 connectors for 5 DB2 databases, and all 5 connectors use the same schema history topic. Within about 2-3 months the schema history topic has grown to around 1.5 GB. The configuration for one of the connectors is below:
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  labels:
    strimzi.io/cluster: kafka-connect-cluster
  name: db2-connector-testdb
  namespace: kafka-connect
spec:
  class: io.debezium.connector.db2.Db2Connector
  config:
    database.dbname: TESTDB
    database.history.consumer.security.protocol: SASL_SSL
    database.hostname: ******
    database.password: ********
    database.port: '50000'
    database.server.name: dbserver1
    database.user: dbserver1u1
    decimal.handling.mode: string
    group.initial.rebalance.delay.ms: 3
    include.schema.changes: 'false'
    key.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: 'false'
    offsets.topic.replication.factor: 3
    schema.history.internal.consumer.sasl.jaas.config: *********
    schema.history.internal.consumer.sasl.mechanism: PLAIN
    schema.history.internal.consumer.security.protocol: SASL_SSL
    schema.history.internal.consumer.ssl.endpoint.identification.algorithm: https
    schema.history.internal.kafka.bootstrap.servers: *************:9092
    schema.history.internal.kafka.recovery.poll.interval.ms: 300000
    schema.history.internal.kafka.topic: test.ingestion.cdc.schemahistory.sipsd
    schema.history.internal.producer.sasl.jaas.config: **********
    schema.history.internal.producer.sasl.mechanism: PLAIN
    schema.history.internal.producer.security.protocol: SASL_SSL
    schema.history.internal.producer.ssl.endpoint.identification.algorithm: https
    snapshot.mode: initial
    table.include.list: TESTDB.TABLE_ONE, TESTDB.TABLE_TWO, TESTDB.TABLE_THREE,.....TESTDB.TABLE_TEN,
    time.precision.mode: connect
    tombstones.on.delete: 'false'
    topic.creation.default.partitions: 3
    topic.creation.default.replication.factor: 3
    topic.prefix: test.ingestion.cdc
    value.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter.schemas.enable: 'false'
  tasksMax: 1
```
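The other four connectors are set up the same way against their own databases. As a minimal sketch to illustrate the shared topic (the connector and database names below are placeholders, not our real ones), only the database connection details differ while `schema.history.internal.kafka.topic` points at the same topic:

```yaml
# Sketch of a second connector (names are placeholders) showing that all
# five connectors write to and recover from the same schema history topic.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  labels:
    strimzi.io/cluster: kafka-connect-cluster
  name: db2-connector-otherdb
  namespace: kafka-connect
spec:
  class: io.debezium.connector.db2.Db2Connector
  config:
    database.dbname: OTHERDB
    # ...same converter, SASL, and topic.creation settings as above...
    schema.history.internal.kafka.topic: test.ingestion.cdc.schemahistory.sipsd  # shared by all 5 connectors
  tasksMax: 1
```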
What is the captured database version and mode of deployment?
DB2 v11.5, deployed on an EC2 instance.
What behavior do you expect?
The schema history topic should not grow that much, and the connector should start up successfully when it is restarted.
What behavior do you see?
`java.lang.IllegalStateException: The database history couldn't be recovered. Consider to increase the value for database.history.kafka.recovery.poll.interval.ms`
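Note that the exception names the legacy `database.history.kafka.recovery.poll.interval.ms` property; on 2.x the corresponding setting appears to be `schema.history.internal.kafka.recovery.poll.interval.ms`, which we already have at 300000 ms (see the config above). Increasing it further would be a change along these lines (the 600000 ms value is only an example, not a recommendation):

```yaml
# Sketch only: raising the schema history recovery poll interval further,
# as the exception suggests; 600000 ms is an example value, not a tested one.
spec:
  config:
    schema.history.internal.kafka.recovery.poll.interval.ms: 600000
```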
Do you see the same behaviour using the latest released Debezium version?
We have not tried the latest version yet.
Do you have the connector logs, ideally from start till finish?
(You might be asked later to provide DEBUG/TRACE level log)
<Your answer>
How to reproduce the issue using our tutorial deployment?
<Your answer>
Feature request or enhancement
For feature requests or enhancements, please provide the following information:
Which use case/requirement will be addressed by the proposed feature?
<Your answer>
Implementation ideas (optional)
<Your answer>