Debezium / DBZ-615

Decimal datatype DDL issues


Details

    • Type: Bug
    • Resolution: Done
    • Priority: Minor
    • Fix Version/s: 0.7.4
    • Affects Version/s: 0.7.2
    • Component/s: mysql-connector
    • Labels: None

      Two cases (one failure, one correct), both using the Debezium MySQL connector:

      (1) Correct example:
      Create the table with "CREATE TABLE a (x decimal);" (NOTE: no precision/scale is given for the decimal column)
      Start the mysql-connector
      Insert data into the table
      -> Observed behaviour: the data is correctly written to Kafka and the Avro schema is correctly registered in the Schema Registry

      (2) Failure case:
      Start the mysql-connector
      Create the table with "CREATE TABLE a (x decimal);" (NOTE: no precision/scale is given for the decimal column; this also fails when only the precision is given, e.g. DECIMAL(20))
      Insert data into the table
      -> Observed behaviour: a DataException is thrown in org.apache.kafka.connect.data.Decimal.fromLogical() because the scale of the value is "0" while the scale of the schema is "-1"
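
      To illustrate, here is a minimal, self-contained sketch (not Debezium code; the class name is made up) that reproduces the reported error with plain Kafka Connect classes, assuming the connector registered a Decimal schema with scale -1 while the row value carries MySQL's default scale of 0:

      import java.math.BigDecimal;

      import org.apache.kafka.connect.data.Decimal;
      import org.apache.kafka.connect.data.Schema;

      public class DecimalScaleMismatchRepro {
          public static void main(String[] args) {
              // Schema as apparently derived from the binlog DDL: scale -1
              Schema binlogSchema = Decimal.schema(-1);

              // Value as read from the row: MySQL defaulted the column to DECIMAL(10, 0), so scale 0
              BigDecimal value = new BigDecimal("42"); // value.scale() == 0

              // Throws org.apache.kafka.connect.errors.DataException because the scales differ (0 vs. -1)
              Decimal.fromLogical(binlogSchema, value);
          }
      }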


    Description

      Hey everyone,

      I just stumbled upon a weird quirk that caused issues with the Schema Registry and Avro converter when a DECIMAL datatype is encountered on MySQL. I'm not sure if this is a bug or intended behaviour, but it seems very strange to me, so I've decided to at least inform you.

      It seems like there is a difference in behaviour when creating schemata for tables, depending on whether the connector "snapshots" the table or reads the DDL from the binlog.

      When creating a new table in MySQL that contains a DECIMAL column with no specified precision and scale, MySQL will default to DECIMAL(10, 0). However, if the Debezium connector is already running and recording this event, it will create a schema with a scale of "-1". This will cause an error later on when serializing the value as Avro in org.apache.kafka.connect.data.Decimal.fromLogical().
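
      For reference, here is a small, hedged illustration (not taken from the connector's sources) of why that matters: Kafka Connect's Decimal logical type stores the scale as a schema parameter, so a "-1" picked up from the DDL is carried into the schema that ends up in the registry:

      import org.apache.kafka.connect.data.Decimal;
      import org.apache.kafka.connect.data.Schema;

      public class DecimalScaleParameter {
          public static void main(String[] args) {
              // The scale is baked into the Connect schema as a string parameter
              Schema schemaFromDdl = Decimal.builder(-1).optional().build();
              System.out.println(schemaFromDdl.parameters()); // prints {scale=-1}
          }
      }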

      This does not happen if Debezium reads the table definition during a snapshot, since it then correctly reads the column as having a precision/scale of (10, 0).

      Now, this issue can easily be circumvented by always specifying precision and scale, or by forcing a new snapshot after creating the table. However, it seems weird that Debezium would use a different scale (-1) than the MySQL default (0) if no scale is explicitly given.
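
      A minimal sketch of the behaviour I would have expected (hypothetical helper names, not the actual DDL parser code): when the DDL omits precision/scale, fall back to MySQL's documented default of DECIMAL(10, 0) instead of carrying -1 forward:

      // Hypothetical helper, for illustration only
      final class MySqlDecimalDefaults {
          static final int DEFAULT_PRECISION = 10;
          static final int DEFAULT_SCALE = 0;

          // A null argument means the DDL did not specify the value
          static int effectivePrecision(Integer declaredPrecision) {
              return declaredPrecision != null ? declaredPrecision : DEFAULT_PRECISION;
          }

          static int effectiveScale(Integer declaredScale) {
              return declaredScale != null ? declaredScale : DEFAULT_SCALE;
          }
      }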

      Cheers
      Felix

          People

            Assignee: Jiri Pechanec (jpechane)
            Reporter: Felix Eggert (mrtrustworthy) (Inactive)
            Votes: 0
            Watchers: 3
