Debezium / DBZ-2289

Error in the connector after adding/removing the "autoReconnect": "true" parameter


Details

    • Type: Bug
    • Resolution: Cannot Reproduce
    • Priority: Major
    • Fix Version/s: None
    • Affects Version/s: 1.2.0.Final
    • Component/s: mysql-connector
    • Labels: None

    Description

      Hello,

      We added the parameter "autoReconnect": "true" to a connector because of another issue for which Debezium's error message recommended adding this parameter. After that, the connector gave us the following error:

      Caused by: org.apache.kafka.connect.errors.ConnectException: Encountered change event for table fca.users whose schema isn't known to this connector

      After removing the parameter, the connector did not recover.

      The connector configuration is:

      {
        "name": "fca_prd",
        "config": {
          "connector.class": "io.debezium.connector.mysql.MySqlConnector",
          "snapshot.locking.mode": "none",
          "database.history.kafka.topic": "fca_prd.history",
          "drop.deletes": "false",
          "include.schema.changes": "true",
          "table.whitelist": "fca.conf_pay_elements,fca.conf_wage_type_category_pay_elements,fca.employments,fca.enterprise_store_communications,fca.enterprise_store_cost_assignments,fca.enterprise_store_deployments,fca.enterprise_store_personal_data,fca.enterprise_store_profiles,fca.fcs_countries,fca.fcs_exchange_rates,fca.gccs,fca.lccs,fca.pay_calendars,fca.pay_groups,fca.pay_periods,fca.wage_type_categories,fca.wage_type_report_sources,fca.users,fca.lccs_users,fca.glrep_file2,fca.glrep_rows_staging2",
          "decimal.handling.mode": "string",
          "_comment": "Kafka converter settings",
          "snapshot.new.tables": "parallel",
          "poll.interval.ms": "5000",
          "database.history.skip.unparseable.ddl": "true",
          "value.converter": "io.confluent.connect.avro.AvroConverter",
          "database.whitelist": "fca",
          "key.converter": "io.confluent.connect.avro.AvroConverter",
          "database.user": "debezium",
          "database.server.id": "17094",
          "database.history.kafka.bootstrap.servers": "hrxkfpdc01.hrx.erp:9092",
          "database.server.name": "fca_prd",
          "database.port": "3306",
          "value.converter.schema.registry.url": "http://hrxkfpdc01.hrx.erp:8081",
          "internal.key.converter": "org.apache.kafka.connect.json.JsonConverter",
          "database.serverTimezone": "Europe/Brussels",
          "database.hostname": "pyxmypdb01.pyx.erp",
          "ddl.parser.mode": "antlr",
          "database.password": "*******",
          "internal.value.converter": "org.apache.kafka.connect.json.JsonConverter",
          "name": "fca_prd",
          "connect.keep.alive": "true",
          "key.converter.schema.registry.url": "http://hrxkfpdc01.hrx.erp:8081",
          "drop.tombstones": "false",
          "snapshot.mode": "when_needed"
        }
      }
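
      For reference, the Debezium MySQL connector passes "database."-prefixed properties through to the JDBC driver (the existing "database.serverTimezone" entry above is one such property), so an option like autoReconnect would normally be supplied that way. The report does not show the exact key that was added, so the fragment below is only an assumed sketch of where such a setting would sit in this configuration:

      {
        "_comment": "Assumed sketch only; the exact key used by the reporter is not shown",
        "database.hostname": "pyxmypdb01.pyx.erp",
        "database.port": "3306",
        "database.serverTimezone": "Europe/Brussels",
        "database.autoReconnect": "true"
      }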

       

      We had the Kafka services stopped for 8 days last week due to another issue (it has since been resolved).

       

      Thank you in advance.

          People

            Assignee: Unassigned
            Reporter: albertorod (Roberto José Montero Segovia, Inactive)
            Votes: 0
            Watchers: 3
