Debezium / DBZ-7274

The database schema history couldn't be recovered. Consider to increase the value for schema.history.internal.kafka.recovery.poll.interval.ms


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Labels: under-triaging
    • Affects Version/s: 2.1.2.Final
    • Component/s: db2-connector
    • Severity: Moderate

      In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.

      Bug report

      For bug reports, provide this information, please:

      What Debezium connector do you use and what version?

      Debezium DB2

      What is the connector configuration?

      connector.class: ""
      database.hostname: ""
      database.port: ""
      database.dbname: ""
      database.user: ""
      database.password: 
      database.server.name: ""
      table.include.list: ""
      schema.history.internal.kafka.topic: "schemahistory.test"
      schema.history.internal.kafka.bootstrap.servers: 
      topic.creation.default.replication.factor: 3
      offsets.topic.replication.factor: 3
      group.initial.rebalance.delay.ms: 3
      key.converter: "org.apache.kafka.connect.json.JsonConverter"
      key.converter.schemas.enable: "false"
      topic.prefix: 
      snapshot.mode: "initial"
      time.precision.mode: "connect"
      include.schema.changes: "false"
      topic.creation.default.partitions: 3
      value.converter: "org.apache.kafka.connect.json.JsonConverter"
      value.converter.schemas.enable: "false"
      decimal.handling.mode: "string"
      tombstones.on.delete: "false"
      database.history.consumer.security.protocol: "SASL_SSL"
      schema.history.internal.consumer.security.protocol: "SASL_SSL"
      schema.history.internal.consumer.ssl.endpoint.identification.algorithm: "https"
      schema.history.internal.consumer.sasl.mechanism: "PLAIN"
      schema.history.internal.kafka.recovery.poll.interval.ms: 5000
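      For reference, the two Debezium settings that bound history recovery are the poll interval named in the error and the number of tolerated empty polls. A sketch with illustrative values only (not a recommendation; tune to your environment):

      ```
      # How long each poll of the schema history topic waits for records.
      schema.history.internal.kafka.recovery.poll.interval.ms: 30000
      # How many consecutive unsuccessful polls are tolerated before recovery fails.
      schema.history.internal.kafka.recovery.attempts: 50
      ```

      Raising either value gives a slow or briefly unreachable Kafka cluster more time before the connector gives up.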

      What is the captured database version and mode of deployment?

      (E.g. on-premises, with a specific cloud provider, etc.)

      DB2 on Microsoft Windows

      What behaviour do you expect?

      We are getting this error randomly:

      The database schema history couldn't be recovered. Consider to increase the value for schema.history.internal.kafka.recovery.poll.interval.ms

       

      What behaviour do you see?

      Because of the timeout, the connector goes down.

      Do you see the same behaviour using the latest released Debezium version?

      (Ideally, also verify with latest Alpha/Beta/CR version)

      We are on version 2.1.2.

      Do you have the connector logs, ideally from start till finish?

      (You might be asked later to provide DEBUG/TRACE level log)

      java.lang.IllegalStateException: The database schema history couldn't be recovered. Consider to increase the value for schema.history.internal.kafka.recovery.poll.interval.ms
          at io.debezium.storage.kafka.history.KafkaSchemaHistory.recoverRecords(KafkaSchemaHistory.java:313)
          at io.debezium.relational.history.AbstractSchemaHistory.recover(AbstractSchemaHistory.java:134)
          at io.debezium.relational.history.SchemaHistory.recover(SchemaHistory.java:152)
          at io.debezium.relational.HistorizedRelationalDatabaseSchema.recover(HistorizedRelationalDatabaseSchema.java:62)
          at io.debezium.schema.HistorizedDatabaseSchema.recover(HistorizedDatabaseSchema.java:39)
          at io.debezium.connector.db2.Db2ConnectorTask.start(Db2ConnectorTask.java:82)
          at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:136)
          at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.initializeAndStart(AbstractWorkerSourceTask.java:274)
          at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:202)
          at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
          at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:75)
          at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
          at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
          at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
          at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
          at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
          at java.base/java.lang.Thread.run(Thread.java:833)
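      To illustrate why the error appears on restart: the stack trace shows the task failing inside `KafkaSchemaHistory.recoverRecords`, which re-reads the schema history topic with a bounded number of polls. The following is a hypothetical, much-simplified sketch of that bounded-polling pattern, not Debezium's actual code; the `Supplier` stands in for a Kafka consumer and `"EOT"` is an invented end-of-topic marker.

      ```java
      import java.util.ArrayList;
      import java.util.List;
      import java.util.function.Supplier;

      // Sketch of bounded-poll recovery: each empty poll spends one attempt,
      // a non-empty poll resets the budget, and exhausting the budget before
      // the end of the topic raises the IllegalStateException seen in the log.
      public class SchemaHistoryRecoverySketch {

          public static List<String> recover(Supplier<List<String>> poll,
                                             int maxAttempts, long pollIntervalMs)
                  throws InterruptedException {
              List<String> recovered = new ArrayList<>();
              int remaining = maxAttempts;                   // recovery.attempts
              while (remaining > 0) {
                  List<String> batch = poll.get();           // stands in for consumer.poll(...)
                  if (batch.isEmpty()) {
                      remaining--;                           // no progress: spend one attempt
                      Thread.sleep(pollIntervalMs);          // recovery.poll.interval.ms
                      continue;
                  }
                  remaining = maxAttempts;                   // progress resets the budget
                  for (String record : batch) {
                      if ("EOT".equals(record)) {            // hypothetical end-of-topic marker
                          return recovered;                  // history fully replayed
                      }
                      recovered.add(record);
                  }
              }
              throw new IllegalStateException(
                  "The database schema history couldn't be recovered. Consider to increase "
                  + "the value for schema.history.internal.kafka.recovery.poll.interval.ms");
          }
      }
      ```

      Under this model, restarting many connectors at once makes each one's polls slower (broker load, consumer-group rebalances), so more polls come back empty and the attempt budget can run out even though the history topic is intact, which would match the random failures described below.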

      How to reproduce the issue using our tutorial deployment?

      If we restart the pod, or restart multiple connectors together, we see this error. We have even seen it happen multiple times at random, when there was no deployment.


            Assignee: Unassigned
            Reporter: Ratnesh Sahu (ratneshsahu14)