Type: Bug
Resolution: Unresolved
Priority: Major
What Debezium connector do you use and what version?
Postgres Connector provided with debezium/connect:2.6.0.Final
What is the connector configuration?
{
  "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
  "database.user": "debezium",
  "database.dbname": "app",
  "slot.name": "debezium_slot",
  "publication.name": "cdc_publication",
  "database.server.name": "<server>",
  "plugin.name": "pgoutput",
  "database.port": "6432",
  "topic.prefix": "<server>",
  "database.hostname": "<master fqdn>",
  "database.password": "<password>",
  "name": "<server>",
  "incremental.snapshot.chunk.size": "65000",
  "batch.size": "16777216",
  "max.batch.size": "15728640",
  "max.queue.size": "1073741824",
  "max.queue.size.in.bytes": "4294967296",
  "linger.ms": "5000",
  "buffer.memory": "2147483648",
  "producer.override.max.request.size": "20971520",
  "snapshot.mode": "never",
  "signal.data.collection": "cdc.debezium_signal",
  "signal.enabled.channels": "source,kafka",
  "signal.kafka.bootstrap.servers": "cdc-debezium-kafka-brokers.infra-cdc-debezium:9092",
  "signal.kafka.topic": "debezium.signals",
  "signal.consumer.sasl.jaas.config": "<sasl_jaas_conf>",
  "signal.consumer.sasl.mechanism": "SCRAM-SHA-512",
  "signal.consumer.security.protocol": "SASL_PLAINTEXT",
  "heartbeat.interval.ms": "60000",
  "topic.creation.default.partitions": "6",
  "topic.creation.default.replication.factor": "3",
  "heartbeat.action.query": "update cdc.debezium_heartbeat set last_heartbeat_ts = NOW() where 1=1;",
  "database.initial.statements": "update cdc.debezium_heartbeat set last_heartbeat_ts = NOW() where 1=1;",
  "transforms": "unwrap",
  "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
  "transforms.unwrap.delete.handling.mode": "rewrite",
  "transforms.unwrap.add.fields": "op,table,source.ts_ms",
  "event.processing.failure.handling.mode": "warn"
}
What is the captured database version and mode of deployment?
Yandex Cloud PostgreSQL 14; the mode of deployment does not matter.
What behaviour do you expect?
The connector processes my stop-snapshot signal from Kafka and does not resume the incremental snapshot after restart.
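For reference, this is the kind of stop-snapshot message that was produced to the configured signal topic (`debezium.signals`). A minimal sketch following the documented Debezium signal envelope; the record key must equal the connector's `topic.prefix`, the signal `id` is an arbitrary unique string, and the table name is taken from the exception below:

```python
import json

# Key of the Kafka record: must match "topic.prefix" from the connector config.
signal_key = "<server>"

# Value of the Kafka record: documented stop-snapshot signal payload.
signal_value = {
    "id": "stop-snapshot-1",  # arbitrary unique signal id (illustrative)
    "type": "stop-snapshot",
    "data": {
        "type": "incremental",
        "data-collections": ["public.changelogs"],
    },
}

# Serialized form, ready to produce to the "debezium.signals" topic.
record = json.dumps(signal_value)
print(record)
```

The connector logged that it processed this signal, yet the snapshot still resumed after restart.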
What behaviour do you see?
The snapshot resumes right after startup and the connector immediately crashes with an `io.debezium.DebeziumException` after executing the snapshot query.
Do you see the same behaviour using the latest released Debezium version?
Yes, 2.6.0.Final
Do you have the connector logs, ideally from start till finish?
- Connector starts
- Snapshot continues
- Connector fails, then restarts
Caused by: io.debezium.DebeziumException: Database error while executing incremental snapshot for table 'DataCollection{id=public.changelogs, additionalCondition=, surrogateKey=}'
How to reproduce the issue using our tutorial deployment?
Force the execution of the incremental-snapshot prepared statement to fail. For example, modify the connector's entry in the connect-offsets topic and pass a wrong primary-key type.
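A sketch of that corruption step. The field names here are an assumption about how the connector serializes in-progress incremental-snapshot state into its offsets; the real key/value layout should be checked against the actual connect-offsets topic before trying this:

```python
import json

# Hypothetical offset value read from the connect-offsets topic while an
# incremental snapshot is in progress (field names are illustrative only).
offset_value = {
    "lsn": 123456789,
    "incremental_snapshot_collections": ["public.changelogs"],
    # Last processed primary key, used to bind the next chunk's
    # prepared statement (conceptually: SELECT ... WHERE id > ?).
    "incremental_snapshot_primary_key": "42",
}

# Replace the key with a value of the wrong type. On restart the connector
# binds it into the prepared statement, execution fails, and the
# DebeziumException above is reproduced.
corrupted = dict(offset_value, incremental_snapshot_primary_key="not-a-number")
print(json.dumps(corrupted))
```

The corrupted value is then produced back to connect-offsets under the connector's original offset key, after which the connector is restarted.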