Type: Bug
Resolution: Done
Priority: Major
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
For bug reports, please provide the following information:
What Debezium connector do you use and what version?
quay.io/debezium/connect:2.5.2.Final
What is the connector configuration?
{ "name": "inventory-connector", "config": { "connector.class": "io.debezium.connector.postgresql.PostgresConnector", "plugin.name": "pgoutput", "tasks.max": "1", "database.hostname": "db-pg", "database.port": "5432", "database.user": "postgres", "database.password": "postgres", "database.dbname" : "postgres", "topic.prefix": "inventory", "topic.creation.enable": "true", "topic.creation.default.replication.factor": "1", "topic.creation.default.partitions": "1", "schema.include.list": "inventory", "table.include.list": "inventory.customers,inventory.products", "snapshot.max.threads": "1", "publication.name":"debezium", "slot.name":"debezium", "value.converter": "io.debezium.converters.CloudEventsConverter", "value.converter.serializer.type" : "json", "value.converter.data.serializer.type" : "json", "signal.data.collection": "inventory.debezium_signal" } }
What is the captured database version and mode of deployment?
PostgreSQL 14.10 in Docker (the tutorial inventory image), and also Azure Database for PostgreSQL Flexible Server, version 15
What behaviour do you expect?
To execute an incremental snapshot with the CloudEvents converter.
What behaviour do you see?
When running an incremental snapshot with the CloudEvents converter, the task stops with the following stack trace:
2024-03-05 17:07:40,001 ERROR || WorkerSourceTask{id=inventory-connector-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted [org.apache.kafka.connect.runtime.WorkerTask]
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:230)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:156)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.convertTransformedRecord(AbstractWorkerSourceTask.java:494)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.sendRecords(AbstractWorkerSourceTask.java:402)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.execute(AbstractWorkerSourceTask.java:367)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:77)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:236)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.NullPointerException
    at io.debezium.connector.postgresql.converters.PostgresCloudEventsMaker.ceId(PostgresCloudEventsMaker.java:27)
    at io.debezium.converters.CloudEventsConverter.convertToCloudEventsFormat(CloudEventsConverter.java:509)
    at io.debezium.converters.CloudEventsConverter.fromConnectData(CloudEventsConverter.java:298)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.lambda$convertTransformedRecord$6(AbstractWorkerSourceTask.java:494)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:180)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:214)
    ... 13 more
If I run the same configuration without the CloudEvents converter, the snapshot executes without problems.
The problem appears to be a NullPointerException thrown at https://github.com/debezium/debezium/blob/d0f0761ca72c4d0d31f14b4c868f95ac163e78b9/debezium-connector-postgres/src/main/java/io/debezium/connector/postgresql/converters/PostgresCloudEventsMaker.java#L27
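To illustrate the suspected failure mode: the CloudEvents id seems to be assembled from the record's source metadata, and for incremental snapshot records some of those fields can be null. The snippet below is only an illustrative sketch of that pattern, not the actual Debezium source; the metadata keys used here are assumptions:

// Illustrative sketch only: shows how composing a CloudEvents id from source
// metadata throws when a field such as lsn or txId is missing, as it may be
// for incremental snapshot records. Not the actual Debezium implementation.
import java.util.HashMap;
import java.util.Map;

public class CeIdNpeSketch {
    static String ceId(Map<String, Object> source) {
        // Calling toString() on a missing/null metadata value throws NullPointerException.
        return "name:" + source.get("name")
                + ";lsn:" + source.get("lsn").toString()
                + ";txId:" + source.get("txId").toString();
    }

    public static void main(String[] args) {
        Map<String, Object> source = new HashMap<>();
        source.put("name", "inventory");
        // lsn and txId intentionally absent, mimicking an incremental snapshot record.
        System.out.println(ceId(source)); // throws NullPointerException here
    }
}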
Do you see the same behaviour using the latest released Debezium version?
(Ideally, also verify with latest Alpha/Beta/CR version)
I didn't have the opportunity to test with other versions, but it affects every 2.5 version.
Do you have the connector logs, ideally from start till finish?
(You might be asked later to provide DEBUG/TRACE level log)
<Your answer>
How to reproduce the issue using our tutorial deployment?
- Include the CloudEvents converter in the connector configuration
- Create the Debezium signaling table
- Run the initial snapshot with a single table
- Update the connector configuration to include an additional table
- Trigger an incremental snapshot for the added table (see the JDBC sketch after this list)
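For reference, the incremental snapshot in the last step is triggered by inserting an execute-snapshot signal into the table configured via signal.data.collection. A minimal JDBC sketch, assuming the conventional id/type/data signal-table layout and the tutorial's local connection settings; the hostname, credentials, and target table here are assumptions:

// Sketch: trigger an incremental snapshot by writing an execute-snapshot
// signal row into inventory.debezium_signal. Requires the PostgreSQL JDBC
// driver on the classpath; adjust connection details to your setup.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.UUID;

public class TriggerIncrementalSnapshot {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/postgres"; // assumed local tutorial DB
        try (Connection conn = DriverManager.getConnection(url, "postgres", "postgres");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO inventory.debezium_signal (id, type, data) VALUES (?, ?, ?)")) {
            ps.setString(1, UUID.randomUUID().toString());
            ps.setString(2, "execute-snapshot");
            // data-collections lists the newly added table to snapshot incrementally.
            ps.setString(3, "{\"data-collections\": [\"inventory.products\"], \"type\": \"incremental\"}");
            ps.executeUpdate();
        }
    }
}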
Links to: RHEA-2024:139598 Red Hat build of Debezium 2.5.4 release