Debezium / DBZ-8291

MySQL Connector Does Not Act On `CREATE DATABASE` Records In The Binlog

    • Priority: Critical

      In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.

      Bug report

      For bug reports, provide this information, please:

      What Debezium connector do you use and what version?

      debezium-connector-mysql-2.7.3.Final

      What is the connector configuration?

       

      {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "REDACTED",
        "database.include.list": "prefix_*",
        "database.password": "REDACTED",
        "database.port": "REDACTED",
        "database.server.id": "REDACTED",
        "database.user": "REDACTED",
        "errors.log.enable": "true",
        "errors.tolerance": "all",
        "include.schema.changes": "false",
        "key.converter": "org.apache.kafka.connect.json.JsonConverter",
        "key.converter.schemas.enable": "false",
        "name": "REDACTED",
        "schema.history.internal.consumer.sasl.jaas.config": "org.apache.kafka.common.security.scram.ScramLoginModule REDACTED;",
        "schema.history.internal.consumer.sasl.mechanism": "SCRAM-SHA-512",
        "schema.history.internal.consumer.security.protocol": "SASL_SSL",
        "schema.history.internal.consumer.ssl.endpoint.identification.algorithm": "https",
        "schema.history.internal.kafka.bootstrap.servers": "REDACTED",
        "schema.history.internal.kafka.topic": "REDACTED",
        "schema.history.internal.producer.sasl.jaas.config": "org.apache.kafka.common.security.scram.ScramLoginModule REDACTED;",
        "schema.history.internal.producer.sasl.mechanism": "SCRAM-SHA-512",
        "schema.history.internal.producer.security.protocol": "SASL_SSL",
        "schema.history.internal.producer.ssl.endpoint.identification.algorithm": "https",
        "schema.history.internal.store.only.captured.tables.ddl": "true",
        "snapshot.include.collection.list": "prefix_*",
        "snapshot.mode": "when_needed",
        "table.include.list": "REDACTED",
        "topic.creation.default.partitions": "3",
        "topic.creation.default.replication.factor": "-1",
        "topic.prefix": "REDACTED",
        "transforms": "Reroute, CustomAttributeValuesPartitionRouting, CustomAttributeValueOptionsPartitionRouting",
        "transforms.CustomAttributeValueOptionsPartitionRouting.partition.payload.fields": "change.custom_attribute_value_id",
        "transforms.CustomAttributeValueOptionsPartitionRouting.partition.topic.num": "3",
        "transforms.CustomAttributeValueOptionsPartitionRouting.type": "io.debezium.transforms.partitions.PartitionRouting",
        "transforms.CustomAttributeValuesPartitionRouting.partition.payload.fields": "change.attributable_id, change.attributable_type",
        "transforms.CustomAttributeValuesPartitionRouting.partition.topic.num": "3",
        "transforms.CustomAttributeValuesPartitionRouting.type": "io.debezium.transforms.partitions.PartitionRouting",
        "transforms.Reroute.topic.regex": ".*",
        "transforms.Reroute.topic.replacement": "REDACTED",
        "transforms.Reroute.type": "io.debezium.transforms.ByLogicalTableRouter",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "value.converter.schemas.enable": "false"
      }
      

       

      What is the captured database version and mode of deployment?

      (E.g. on-premises, with a specific cloud provider, etc.)

      RDS MySQL 8.0.35

      What behavior do you expect?

      When a new database matching the `database.include.list` regex is created, the `CREATE DATABASE` statement and the subsequent `CREATE TABLE` statements in the binlog should populate the `schema.history.internal.kafka.topic` for those tables. Data in the new database could then be captured without switching `snapshot.mode` to `recovery`, and without the data loss that currently results from the connector skipping inserts into tables in the new database between database creation and the connector restart.
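
      For illustration, here is a minimal sketch of the sequence we expect the connector to handle; the database, table, and column names are made up for this example and the `mysql` invocation mirrors the reproduction script below:

      # Hypothetical example: prefix_newdb and prefix_newdb.orders are illustrative names.
      # The two DDL statements are what we expect to be recorded in the schema history
      # topic; the INSERT is what we expect to be captured as a change event.
      mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "CREATE DATABASE prefix_newdb;"
      mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "CREATE TABLE prefix_newdb.orders (id INT PRIMARY KEY, total DECIMAL(10,2));"
      mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "INSERT INTO prefix_newdb.orders VALUES (1, 9.99);"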

      What behavior do you see?

      No DDL statements are emitted to the `schema.history.internal.kafka.topic`, so the connector crashes the next time data is written to the new database after the connector is restarted.

      Do you see the same behaviour using the latest released Debezium version?

      (Ideally, also verify with latest Alpha/Beta/CR version)

      2.7.3.Final is the latest 2.x version to my knowledge

      Do you have the connector logs, ideally from start till finish?

      (You might be asked later to provide DEBUG/TRACE level log)

      We can produce them as needed.

      How to reproduce the issue using our tutorial deployment?

       

      # mysql-connector.json
      
      {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql",
        "database.port": "3306",
        "database.user": "mysqluser",
        "database.password": "mysqlpw",
        "database.server.id": "184054",
        "database.server.name": "mysql-server",
        "database.include.list": "prefix_*",
        "include.schema.changes": "false",
        "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
        "schema.history.internal.kafka.topic": "schema-changes",
        "snapshot.mode": "when_needed",
        "table.include.list": "*",
        "topic.prefix": "mysql_connector"
      }
      # docker-compose.yaml
      
      services:
        zookeeper:
          image: quay.io/debezium/zookeeper:${DEBEZIUM_VERSION}
          ports:
           - 2181:2181
           - 2888:2888
           - 3888:3888
        kafka:
          image: quay.io/debezium/kafka:${DEBEZIUM_VERSION}
          ports:
           - 9092:9092
          links:
           - zookeeper
          environment:
           - ZOOKEEPER_CONNECT=zookeeper:2181
        mysql:
          image: quay.io/debezium/example-mysql:${DEBEZIUM_VERSION}
          ports:
           - 3306:3306
          environment:
           - MYSQL_ROOT_PASSWORD=debezium
           - MYSQL_USER=mysqluser
           - MYSQL_PASSWORD=mysqlpw
        connect:
          image: quay.io/debezium/connect:${DEBEZIUM_VERSION}
          ports:
           - 8083:8083
          links:
           - kafka
           - mysql
          environment:
           - BOOTSTRAP_SERVERS=kafka:9092
           - GROUP_ID=1
           - CONFIG_STORAGE_TOPIC=my_connect_configs
           - OFFSET_STORAGE_TOPIC=my_connect_offsets
           - STATUS_STORAGE_TOPIC=my_connect_statuses
           - MYSQL_USER=mysqluser
           - MYSQL_PASSWORD=mysqlpw
      # test.sh
      
      #!/usr/bin/env bash
      # Set Debezium version
      export DEBEZIUM_VERSION=2.7
      # Start the environment
      docker-compose -f docker-compose.yaml up -d
      # Give services time to start up
      sleep 60
      # Register the Debezium MySQL connector
      curl -i -X PUT -H "Accept:application/json" -H  "Content-Type:application/json" http://localhost:8083/connectors/mysql-connector/config -d @mysql-connector.json
      # Create a new database in MySQL
      docker-compose -f docker-compose.yaml exec mysql bash -c 'mysql -uroot -p$MYSQL_ROOT_PASSWORD -e "CREATE DATABASE prefix_newdb;"'
      sleep 5
      # Verify schema changes in Kafka (e.g., using kafka-console-consumer)
      docker-compose -f docker-compose.yaml exec kafka /kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --from-beginning --timeout-ms 5000 --topic schema-changes
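
      The script above stops at checking the schema history topic. To also reproduce the crash described under "What behavior do you see?", a possible continuation is sketched below; it assumes the connector is registered as mysql-connector with the Connect REST API on localhost:8083 (as in the registration step above), and the table name prefix_newdb.t1 is made up for this example.

      # Continuation sketch (hypothetical table name; connector name and REST endpoint
      # taken from the registration step above).
      # Restart the connector so it reloads the schema history, which is missing the
      # DDL for prefix_newdb:
      curl -i -X POST http://localhost:8083/connectors/mysql-connector/restart
      sleep 10
      # Create a table and write a row in the new database; per the behavior described
      # above, this is where we expect the connector to fail:
      docker-compose -f docker-compose.yaml exec mysql bash -c 'mysql -uroot -p$MYSQL_ROOT_PASSWORD -e "CREATE TABLE prefix_newdb.t1 (id INT PRIMARY KEY); INSERT INTO prefix_newdb.t1 VALUES (1);"'
      sleep 5
      # Check the connector and task state:
      curl -s http://localhost:8083/connectors/mysql-connector/status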
      

        Assignee: Unassigned
        Reporter: Anthony Davis St. Aubin