Issue Type: Feature Request
Resolution: Unresolved
Priority: Major
Affects Version: 2.7.0.Alpha2
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
For bug reports, provide this information, please:
What Debezium connector do you use and what version?
Debezium JDBC Sink connector
{ "class": "io.debezium.connector.jdbc.JdbcSinkConnector", "type": "sink", "version": "2.7.0.Alpha2" }
What is the connector configuration?
{ "name": "sink-local-cdc-with-test-database", "config": { "connector.class": "io.debezium.connector.jdbc.JdbcSinkConnector", "tasks.max": "3", "connection.url": "jdbc:postgresql://postgres/postgres", "connection.username": "postgres", "connection.password": "postgres", "insert.mode": "upsert", "delete.enabled": "false", "primary.key.mode": "record_value", "database.time_zone": "UTC", "primary.key.fields": "web_id", "topics": "cdc.public.a_table" , "table.name.format": "public.b_table", "schema.evolution": "none", "errors.tolerance": "all", "errors.deadletterqueue.topic.name": "dlq-topic", "errors.retry.timeout": "150000", "errors.retry.delay.max.ms": "30000" } }
What is the captured database version and mode of deployment?
RDS: Postgres 13.13
What behaviour do you expect?
Bad sink records that hit a SQL exception should be routed to the DLQ.
What behaviour do you see?
Tasks get killed and will not recover until manually restarted.
Do you see the same behaviour using the latest released Debezium version?
Yes. 2.7.0.Alpha2
Do you have the connector logs, ideally from start till finish?
(You might be asked later to provide DEBUG/TRACE level log)
Yes.
2024-05-16 21:22:22,979 ERROR || WorkerSinkTask{id=sink-local-cdc-with-test-database-2} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted [org.apache.kafka.connect.runtime.WorkerTask]
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:632)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:350)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:250)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:219)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:236)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.ConnectException: JDBC sink connector failure
    at io.debezium.connector.jdbc.JdbcSinkConnectorTask.put(JdbcSinkConnectorTask.java:96)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:601)
    ... 11 more
Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to process a sink record
    at io.debezium.connector.jdbc.JdbcChangeEventSink.flushBuffer(JdbcChangeEventSink.java:217)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.lambda$flushBuffers$2(JdbcChangeEventSink.java:195)
    at java.base/java.util.HashMap.forEach(HashMap.java:1337)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.flushBuffers(JdbcChangeEventSink.java:195)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.execute(JdbcChangeEventSink.java:156)
    at io.debezium.connector.jdbc.JdbcSinkConnectorTask.put(JdbcSinkConnectorTask.java:103)
How to reproduce the issue using our tutorial deployment?
For testing purposes, create two tables.
Source table:
CREATE TABLE IF NOT EXISTS public.a_table (
    web_id bigint NOT NULL,
    version integer NOT NULL DEFAULT 0,
    date_updated timestamp with time zone NOT NULL DEFAULT now(),
    date_created timestamp with time zone NOT NULL DEFAULT now(),
    item character varying(40) COLLATE pg_catalog."default",
    CONSTRAINT pk_a_table PRIMARY KEY (web_id)
);
Destination table (contains a unique_version constraint):
CREATE TABLE IF NOT EXISTS public.b_table (
    web_id bigint NOT NULL,
    version integer NOT NULL DEFAULT 0,
    date_updated timestamp with time zone NOT NULL DEFAULT now(),
    date_created timestamp with time zone NOT NULL DEFAULT now(),
    item character varying(40) COLLATE pg_catalog."default",
    CONSTRAINT pk_b_table PRIMARY KEY (web_id),
    CONSTRAINT unique_version UNIQUE (version)
);
Source Connector
{ "name": "local-cdc-with-test-database", "config": { "connector.class": "io.debezium.connector.postgresql.PostgresConnector", "slot.name": "dbz_full_publication", "slot.drop.on.stop": "false", "publication.name": "dbz_full_publication", "plugin.name": "pgoutput", "database.server.name": "postgres", "database.dbname": "postgres", "database.hostname": "postgres", "database.port": "5432", "database.user": "postgres", "database.password": "postgres", "table.whitelist": "public.a_table", "topic.prefix": "cdc", "include.schema.changes": "false", "snapshot.mode": "never", "value.converter": "org.apache.kafka.connect.json.JsonConverter", "key.converter": "org.apache.kafka.connect.json.JsonConverter", "key.converter.schemas.enable": "true", "value.converter.schemas.enable": "true", "message.key.columns": "test.public.a_table:web_id", "hstore.handling.mode": "json", "tombstones.on.delete": "false", "transforms": "unwrap", "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState" }}
Feature request or enhancement
For feature requests or enhancements, provide this information, please:
Which use case/requirement will be addressed by the proposed feature?
Sinking records to the database can fail for many reasons. Having retries with backoff and a dead-letter queue would avoid task failures.
Implementation ideas (optional)
The Confluent JDBC sink connector implements retries.
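For reference, the Confluent JDBC sink exposes retry behaviour through connector properties along these lines (the names and values below come from that connector's configuration, not from the Debezium JDBC sink, which does not currently offer them):

{
  "max.retries": "10",
  "retry.backoff.ms": "3000"
}

Something analogous in the Debezium JDBC sink, combined with sending the failing record to the configured dead-letter queue once retries are exhausted, would cover this request.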