Type: Bug
Resolution: Done
Priority: Major
Version: 2.4.0.Beta2
Error 1 occurs with the following transform configuration:
"transforms": "tz",
"transforms.tz.type": "io.debezium.transforms.TimezoneConverter",
"transforms.tz.converted.timezone": "+08:00",
"transforms.tz.include.fields": "source:time_field_test:datetime_column,source:time_field_test:timestamp_column,source:time_field_test:created_at,source:time_field_test:updated_at"
The complete error message is as follows:
ERROR [mysql-dw-field-type-test|task-0] WorkerSourceTask{id=mysql-dw-field-type-test-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:208)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:237)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:159)
at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.convertTransformedRecord(AbstractWorkerSourceTask.java:502)
at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.sendRecords(AbstractWorkerSourceTask.java:397)
at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.execute(AbstractWorkerSourceTask.java:360)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:256)
at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:76)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.connect.errors.DataException: Conversion error: null value for field that is required and has no default value
at org.apache.kafka.connect.json.JsonConverter.convertToJson(JsonConverter.java:569)
at org.apache.kafka.connect.json.JsonConverter.convertToJson(JsonConverter.java:662)
at org.apache.kafka.connect.json.JsonConverter.convertToJsonWithEnvelope(JsonConverter.java:550)
at org.apache.kafka.connect.json.JsonConverter.fromConnectData(JsonConverter.java:304)
at org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:64)
at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.lambda$convertTransformedRecord$9(AbstractWorkerSourceTask.java:502)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:183)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:217)
... 12 more
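The root cause, a DataException raised by JsonConverter for a required field whose value has become null, can be reproduced in isolation. A minimal sketch assuming only the Kafka Connect JSON converter on the classpath (the class name and field are illustrative; a required field like this is what the connector builds for a NOT NULL column without a usable default):

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.json.JsonConverter;

import java.util.Map;

public class RequiredFieldNullSketch {
    public static void main(String[] args) {
        // A required (non-optional) field with no default value.
        Schema schema = SchemaBuilder.struct()
                .field("created_at", Schema.STRING_SCHEMA)
                .build();

        // Simulate the SMT leaving the field null after transformation.
        Struct value = new Struct(schema); // created_at is never set

        JsonConverter converter = new JsonConverter();
        converter.configure(Map.of("schemas.enable", "true"), false);

        // Throws: DataException: Conversion error: null value for field
        // that is required and has no default value
        converter.fromConnectData("time_field_test", schema, value);
    }
}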
Error 2 occurs when the include list is narrowed to the DATE column alone:
"transforms.tz.include.fields": "source:time_field_test:date_column"
The task then fails with an identical stack trace: ConnectException: Tolerance exceeded in error handler, caused by DataException: Conversion error: null value for field that is required and has no default value.
Table creation and insert statements to reproduce the issue:
CREATE TABLE time_field_test (
    id INT(11) NOT NULL AUTO_INCREMENT,
    date_column DATE DEFAULT '2020-01-01',
    year_column YEAR DEFAULT 2020,
    time_column TIME DEFAULT '00:00:00',
    datetime_column DATETIME DEFAULT CURRENT_TIMESTAMP,
    timestamp_column TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO time_field_test
    (date_column, year_column, time_column, datetime_column, timestamp_column, created_at, updated_at)
VALUES ('2023-01-01', '2023', '15:00:00', CURRENT_TIMESTAMP, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP);
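For context on why these columns are affected: with the MySQL connector's default time.precision.mode, Debezium typically emits DATE as io.debezium.time.Date (INT32), YEAR as io.debezium.time.Year (INT32), TIME as io.debezium.time.MicroTime (INT64), DATETIME as io.debezium.time.Timestamp (INT64 epoch millis), and TIMESTAMP as io.debezium.time.ZonedTimestamp (STRING). An approximate sketch of the "after" block schema the connector would build for this table (the schema name is illustrative):

import io.debezium.time.Date;
import io.debezium.time.MicroTime;
import io.debezium.time.Timestamp;
import io.debezium.time.Year;
import io.debezium.time.ZonedTimestamp;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;

public class TimeFieldTestSchemaSketch {
    public static void main(String[] args) {
        Schema after = SchemaBuilder.struct()
                .name("dbserver1.inventory.time_field_test.Value") // illustrative
                .field("id", Schema.INT32_SCHEMA)
                .field("date_column", Date.builder().optional().build())
                .field("year_column", Year.builder().optional().build())
                .field("time_column", MicroTime.builder().optional().build())
                .field("datetime_column", Timestamp.builder().optional().build())
                .field("timestamp_column", ZonedTimestamp.builder().optional().build())
                .field("created_at", ZonedTimestamp.builder().optional().build())
                .field("updated_at", ZonedTimestamp.builder().optional().build())
                .build();

        // Print each field's Connect type and semantic (logical) type name.
        after.fields().forEach(f -> System.out.println(
                f.name() + " -> " + f.schema().type() + " (" + f.schema().name() + ")"));
    }
}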
For more details, see the discussion in the chat room.
Links to:
- RHEA-2024:129636, Red Hat build of Debezium 2.5.4 release