
DBZ-2767: MS SQL Decimal with default value not matching the scale of the column definition causes an exception

Steps to Reproduce

      1. Create a table with a row

      -- Enable MS SQL CDC
      exec sys.sp_cdc_enable_db
      go
      
      -- Create the table
      create table TEST
      ( pk int not null constraint TEST_pk primary key nonclustered, 
        value decimal(18,4) default 0)
      go
      
      -- Insert a record
      insert into TEST values (2, 2);
      go

      2. Configure the connector

      name=cg-test-connector
      connector.class=io.debezium.connector.sqlserver.SqlServerConnector
      database.hostname=127.0.0.1
      database.port=1433
      database.user=<<dbuser>>
      database.password=<<password>>
      database.dbname=MyTestDB
      database.server.name=cgTest
      table.include.list=dbo.TEST
      database.history.kafka.bootstrap.servers=localhost:9092
      database.history.kafka.topic=cgTest
      

      3. Run Debezium and get the following exception:

      Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:187)
      org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
              at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:196)
              at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:122)
              at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:314)
              at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:340)
              at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:264)
              at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
              at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
              at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
              at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
              at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
              at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
              at java.base/java.lang.Thread.run(Thread.java:834)
      Caused by: org.apache.kafka.connect.errors.DataException: BigDecimal has mismatching scale value for given Decimal schema
              at org.apache.kafka.connect.data.Decimal.fromLogical(Decimal.java:68)
              at org.apache.kafka.connect.json.JsonConverter$13.toJson(JsonConverter.java:209)
              at org.apache.kafka.connect.json.JsonConverter.convertToJson(JsonConverter.java:617)
              at org.apache.kafka.connect.json.JsonConverter.asJsonSchema(JsonConverter.java:463)
              at org.apache.kafka.connect.json.JsonConverter.asJsonSchema(JsonConverter.java:439)
              at org.apache.kafka.connect.json.JsonConverter.asJsonSchema(JsonConverter.java:439)
              at org.apache.kafka.connect.json.JsonConverter.convertToJsonWithEnvelope(JsonConverter.java:592)
              at org.apache.kafka.connect.json.JsonConverter.fromConnectData(JsonConverter.java:346)
              at org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:63)
              at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$2(WorkerSourceTask.java:314)
              at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:146)
              at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:180)
      


    Description

      When replicating a table that has a decimal(18,4) column with a default value of 0, the initial table extraction raises the exception "BigDecimal has mismatching scale value for given Decimal schema".

      I suspect the issue lies in the default value of 0: it is parsed as a BigDecimal with a scale of 0, while the schema defines the column with a scale of 4.
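
      A minimal sketch of the suspected mismatch, calling Kafka Connect's org.apache.kafka.connect.data.Decimal directly (the rescaling at the end only illustrates a possible fix and is not Debezium's actual code):

      import java.math.BigDecimal;
      import java.math.RoundingMode;

      import org.apache.kafka.connect.data.Decimal;
      import org.apache.kafka.connect.data.Schema;
      import org.apache.kafka.connect.errors.DataException;

      public class ScaleMismatchSketch {
          public static void main(String[] args) {
              // Connect schema for decimal(18,4): the Decimal logical type carries scale 4.
              Schema schema = Decimal.schema(4);

              // The column default "0" parses to a BigDecimal with scale 0, not 4.
              BigDecimal columnDefault = new BigDecimal("0");

              try {
                  Decimal.fromLogical(schema, columnDefault);
              } catch (DataException e) {
                  // Prints: BigDecimal has mismatching scale value for given Decimal schema
                  System.out.println(e.getMessage());
              }

              // Rescaling the default to the schema's scale avoids the exception.
              BigDecimal rescaled = columnDefault.setScale(4, RoundingMode.UNNECESSARY);
              Decimal.fromLogical(schema, rescaled); // succeeds
          }
      }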

      Debugging org.apache.kafka.connect.data.Decimal just before the error occurs shows the state captured in the attached screenshot Selection_126.png.
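
      As a possible workaround (untested against this particular case), the connector's standard decimal.handling.mode option can be set so that decimal columns are not mapped to the Connect Decimal logical type at all, which sidesteps the scale check:

      # Assumption: representing decimals as strings avoids Decimal.fromLogical
      # entirely; note this changes the wire representation for consumers.
      decimal.handling.mode=string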

      Attachments

        Selection_126.png

          People

            Assignee: Unassigned
            Reporter: Thierry De Leeuw (thierry@avanco.be)
