Type: Bug
Resolution: Done
Priority: Minor
Affects Version: 0.7.2
Labels: None

Hey everyone,
I just stumbled upon a weird quirk that caused issues with the Schema Registry and Avro converter when a DECIMAL data type is encountered on MySQL. I'm not sure whether this is a bug or intended behaviour, but it seems very strange to me, so I've decided to at least inform you.
It seems like there is a difference in behaviour when creating schemata for tables, depending on whether the connector "snapshots" the table or reads the DDL from the binlog.
When creating a new table in MySQL that contains a DECIMAL column with no specified precision and scale, MySQL defaults to DECIMAL(10, 0). However, if the Debezium connector is already running and records this event from the binlog, it creates a schema with a scale of -1. This later causes an error when the value is serialized as Avro, in org.apache.kafka.connect.data.Decimal.fromLogical().
This does not happen if Debezium reads the table definition during a snapshot, since it then correctly reads the column as having a precision/scale of (10, 0).
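
To make the failure easy to reproduce outside the connector, here is a minimal sketch (not the connector's actual code path; the class name and sample value are made up) that builds a Connect Decimal schema with scale -1, as the binlog-derived schema apparently does, next to one with scale 0, as a snapshot produces:

import java.math.BigDecimal;

import org.apache.kafka.connect.data.Decimal;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.errors.DataException;

public class DecimalScaleRepro {
    public static void main(String[] args) {
        // Schema as apparently derived from the binlog DDL: scale -1.
        Schema binlogSchema = Decimal.schema(-1);
        // Schema as derived from a snapshot, matching MySQL's DECIMAL(10, 0) default.
        Schema snapshotSchema = Decimal.schema(0);

        // A value as MySQL would store it for DECIMAL(10, 0): scale 0.
        BigDecimal value = new BigDecimal("42");

        // Works: the value's scale (0) matches the snapshot schema's scale parameter.
        byte[] ok = Decimal.fromLogical(snapshotSchema, value);
        System.out.println("snapshot schema: serialized " + ok.length + " byte(s)");

        // Fails: the value's scale (0) does not match the binlog schema's scale (-1).
        try {
            Decimal.fromLogical(binlogSchema, value);
        } catch (DataException e) {
            System.out.println("binlog schema: " + e.getMessage());
        }
    }
}

With the scale-0 schema the serialization succeeds, while the scale of -1 triggers the DataException about a mismatching scale.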
Now, this issue can easily be circumvented by always specifying precision and scale, or by forcing a new snapshot after creating the table. However, it seems weird that Debezium would use a different scale (-1) than the MySQL default (0) if no scale is explicitly given.
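
For completeness, the DDL workaround sketched as a small JDBC snippet (table, column, and connection details are hypothetical; plain SQL from any client does the same):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ExplicitDecimalDdl {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; adjust to your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/inventory", "user", "password");
             Statement stmt = conn.createStatement()) {
            // Spelling out precision and scale keeps the binlog-derived schema
            // consistent with what a snapshot would produce.
            stmt.executeUpdate("CREATE TABLE orders (total DECIMAL(10, 0) NOT NULL)");
        }
    }
}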
Cheers
Felix