Debezium / DBZ-7563

Debezium mysql disconnected from binlog but task still running


Details

    • Important

    Description

      Bug report

      What Debezium connector do you use and what version?

      MySQL Source connector 2.4.2

      What is the connector configuration?

       

      {
          "connector.class": "io.debezium.connector.mysql.MySqlConnector",
          "snapshot.locking.mode": "minimal",
          "topic.creation.default.partitions": "-1",
          "schema.history.internal.consumer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"${file:/kafka/config/secrets.properties:kafka.key}\" password=\"${file:/kafka/config/secrets.properties:kafka.secret}\";",
          "transforms": "reroute",
          "include.schema.changes": "true",
          "schema.history.internal.producer.security.protocol": "SASL_SSL",
          "topic.creation.default.replication.factor": "-1",
          "key.converter": "org.apache.kafka.connect.storage.StringConverter",
          "schema.history.internal.producer.sasl.mechanism": "PLAIN",
          "database.user": "${file:/kafka/config/secrets.properties:db.user}",
          "transforms.reroute.topic.replacement": "el8\\.cdc\\.hippo\\.table\\.$1",
          "schema.history.internal.consumer.ssl.endpoint.identification.algorithm": "https",
          "schema.history.internal.kafka.bootstrap.servers": "<redacted>",
          "schema.history.internal.skip.unparseable.ddl": "true",
          "topic.creation.enable": "false",
          "key.converter.schemas.enable": "false",
          "schema.history.internal.producer.ssl.endpoint.identification.algorithm": "https",
          "producer.override.max.request.size": "2097152",
          "errors.max.retries": "5",
          "database.password": "${file:/kafka/config/secrets.properties:db.password}",
          "value.converter.schemas.enable": "false",
          "name": "hippo-connector",
          "schema.history.internal.producer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"${file:/kafka/config/secrets.properties:kafka.key}\" password=\"${file:/kafka/config/secrets.properties:kafka.secret}\";",
          "schema.history.internal.consumer.sasl.mechanism": "PLAIN",
          "errors.tolerance": "none",
          "max.batch.size": "4096",
          "snapshot.mode": "schema_only",
          "schema.history.internal.consumer.security.protocol": "SASL_SSL",
          "max.queue.size": "16384",
          "tasks.max": "1",
          "transforms.reroute.type": "io.debezium.transforms.ByLogicalTableRouter",
          "schema.history.internal.store.only.captured.tables.ddl": "false",
          "provide.transaction.metadata": "true",
          "tombstones.on.delete": "false",
          "topic.prefix": "production-datawarehouse-streaming-hippo-db",
          "decimal.handling.mode": "string",
          "schema.history.internal.kafka.topic": "el8.cdc.hippo-datawarehouse-streaming.ddl-changes",
          "transforms.reroute.topic.regex": "production-datawarehouse-streaming-hippo-db\\.el8_app_1\\.(.*)",
          "value.converter": "org.apache.kafka.connect.json.JsonConverter",
          "snapshot.include.collection.list": "el8_app_1\\..*",
          "database.server.id": "203907778",
          "time.precision.mode": "connect",
          "database.server.name": "production-datawarehouse-streaming-hippo-db",
          "offset.flush.timeout.ms": "60000",
          "database.port": "3306",
          "offset.flush.interval.ms": "10000",
          "table.field.event.type": "name",
          "database.hostname": "[redacted]",
          "table.include.list": "el8_app_1\\.[redacted],el8_app_1\\.[redacted],[...700 tables]",
          "database.include.list": "el8_app_1"
      }
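For context on the `reroute` transform in the config above: the `ByLogicalTableRouter` SMT rewrites per-table topic names via the configured regex and replacement. A minimal sketch of the rewrite it performs (Debezium uses Java regex syntax, where the replacement back-reference is `$1`; Python's equivalent is `\1`; the table name `orders` is made up for illustration):

```python
import re

# Regex and replacement copied from the connector config above;
# "$1" in the Java-style replacement becomes "\1" in Python.
TOPIC_REGEX = r"production-datawarehouse-streaming-hippo-db\.el8_app_1\.(.*)"
TOPIC_REPLACEMENT = r"el8.cdc.hippo.table.\1"

def reroute(topic: str) -> str:
    """Return the rerouted topic name, or the original if the regex does not match."""
    return re.sub(TOPIC_REGEX, TOPIC_REPLACEMENT, topic)

# "orders" is a hypothetical table name, not one from the (redacted) include list.
print(reroute("production-datawarehouse-streaming-hippo-db.el8_app_1.orders"))
# -> el8.cdc.hippo.table.orders
```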

       

      What is the captured database version and mode of deployment?

      Aurora 2 MySQL 5.7

      What behaviour do you expect?

      When the task is disconnected from the binlog, it should retry to reconnect. If reconnection succeeds, streaming resumes; if it fails, the task should surface the error and stop.

      What behaviour do you see?

      The task disconnected from the binlog and retried to reconnect, but no error was raised. All nodes show as disconnected and there is no activity, yet the task is still reported as RUNNING.
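Because the Kafka Connect REST API keeps reporting the task as RUNNING in this state, the REST status alone cannot catch the problem. One workaround is to cross-check it against the streaming metrics Debezium exposes over JMX, such as `Connected` and `MilliSecondsSinceLastEvent`. A hedged sketch of such a liveness check (parameter names are illustrative; the stall threshold is an assumption, not a Debezium default):

```python
def is_silently_stalled(rest_state: str,
                        binlog_connected: bool,
                        ms_since_last_event: int,
                        stall_threshold_ms: int = 15 * 60 * 1000) -> bool:
    """True when Connect claims RUNNING but the binlog client looks dead."""
    if rest_state != "RUNNING":
        # FAILED/PAUSED tasks are already visible via the REST status alone.
        return False
    # Stalled if the binlog client reports disconnected, or no event has
    # arrived within the (assumed) threshold.
    return (not binlog_connected) or ms_since_last_event > stall_threshold_ms

# The state described in this report: RUNNING per REST, binlog disconnected,
# no activity for about an hour.
print(is_silently_stalled("RUNNING", False, 3_600_000))  # -> True
```

Feeding such a check from the JMX metrics would let an external monitor alert on (or restart) the pod without waiting for a human to notice the silence.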

      Do you see the same behaviour using the latest released Debezium version?

      (Ideally, also verify with latest Alpha/Beta/CR version)

      Haven't had a chance to try yet.

      Do you have the connector logs, ideally from start till finish?

      (You might be asked later to provide DEBUG/TRACE level log)

      Attached

      How to reproduce the issue using our tutorial deployment?

      The issue seems intermittent. The connector had been running for many hours, then stalled around 12 AM and would not process any new records until the pod was restarted. No issues have been observed since.

      This might be related to an underlying issue in the mysql-binlog-connector-java library.
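If the root cause does sit in the binlog client, the connector's keep-alive settings (absent from the config above, so the defaults apply) may be relevant while investigating. A hedged fragment, with property names taken from the Debezium MySQL connector documentation and values that are illustrative, not recommendations:

```json
{
    "connect.keep.alive": "true",
    "connect.keep.alive.interval.ms": "60000",
    "connect.timeout.ms": "30000"
}
```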

       

      Attachments


          People

            Assignee: Unassigned
            Reporter: passuied Patrick Assuied