Type: Bug
Resolution: Done
Priority: Major
Affects Version: 0.8.3.Final
Fix Version: None
Seems we have run into an issue after changing the default value of a column in our source MySQL table to a value that is not valid for Avro. The exact DDL statement issued is recorded in the dbhistory topic:
dbhistory topic content:
{ "source" : { "server" : "test" }, "position" : { "ts_sec" : 1542210683, "file" : "mysql-bin.000289", "pos" : 1017928405, "server_id" : 3401, "event" : 1 }, "databaseName" : "test", "ddl" : "alter table joel2 modify d decimal(10,5) default 0.0 not null" }
However, the recorded DDL should match what the MySQL server implicitly converts it to; the server stores the default of `d` as '0.00000' (scale 5), not 0.0:
mysql- [test]> show create table joel2;

CREATE TABLE `joel2` (
  `a` int(11) NOT NULL,
  `b` text,
  `c` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `d` decimal(10,5) NOT NULL DEFAULT '0.00000',
  PRIMARY KEY (`a`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

1 row in set (0.00 sec)
Error (from the Confluent Avro converter, per the stack trace) when Debezium tries to evolve the schema:
{ "name": "testSchemaChanges-source3", "connector": { "state": "RUNNING", "worker_id": "172.16.230.69:8083" }, "tasks": [ { "state": "FAILED", "trace": "org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:269)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:228)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\tat java.lang.Thread.run(Thread.java:745)\nCaused by: org.apache.kafka.connect.errors.DataException: BigDecimal has mismatching scale value for given Decimal schema\n\tat org.apache.kafka.connect.data.Decimal.fromLogical(Decimal.java:68)\n\tat io.confluent.connect.avro.AvroData$5.convert(AvroData.java:261)\n\tat io.confluent.connect.avro.AvroData.defaultValueFromConnect(AvroData.java:951)\n\tat io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:856)\n\tat io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:685)\n\tat io.confluent.connect.avro.AvroData.addAvroRecordField(AvroData.java:925)\n\tat io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:832)\n\tat io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:820)\n\tat io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:685)\n\tat io.confluent.connect.avro.AvroData.addAvroRecordField(AvroData.java:925)\n\tat io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:832)\n\tat io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:685)\n\tat io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:680)\n\tat io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:325)\n\tat io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:75)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$2(WorkerSourceTask.java:269)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)\n\t... 11 more\n", "id": 0, "worker_id": "172.16.230.67:8083" } ], "type": "source" }
Very easy to reproduce:
- create table t1 (a int, b decimal);
- insert into t1 values (1,1.1);
- start the Debezium connector
- alter table t1 modify b decimal(10,5) default 0.0 not null;
- insert into t1 (a) values (2);
- the connector should now be in a FAILED state (see the normalization sketch after these steps)
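For context, a hedged sketch of the kind of fix one would expect (not Debezium's actual patch): rescale the parsed DDL default to the column's declared scale before handing it to the Connect Decimal schema, mirroring MySQL's own implicit conversion. The normalize helper and its parameters are illustrative only:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class DefaultValueNormalizer {
    // Hypothetical helper (not Debezium code): align a parsed DDL default
    // with the column's declared scale, as MySQL does when it stores the
    // table definition (0.0 becomes 0.00000 for decimal(10,5)).
    static BigDecimal normalize(BigDecimal parsedDefault, int columnScale) {
        return parsedDefault.setScale(columnScale, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        System.out.println(normalize(new BigDecimal("0.0"), 5)); // prints 0.00000
    }
}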
Is cloned by:
DBZ-2668: BigDecimal has mismatching scale value for given Decimal schema error due to permissive sqlserver ddl (Resolved)