Details
Type: Bug
Resolution: Unresolved
Priority: Major
Description
Bug report
What Debezium connector do you use and what version?
io.debezium.connector.postgresql.PostgresConnector
debezium/connect 2.2 (via Docker debezium/connect:latest)
What is the connector configuration?
```json
{
  "name": "debezium-source",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "tasks.max": 1,
    "topic.prefix": "cdc",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "that-default-postgres-password",
    "database.dbname": "capture_me",
    "database.server.name": "capture_me",
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "transforms.unwrap.add.fields": "op,table,lsn,source.ts_ms,source.sequence,source.schema,source.txId,source.xmin,db,transaction.id,transaction.total_order,transaction.data_collection_order",
    "transforms.unwrap.add.headers": "db",
    "transforms.unwrap.delete.handling.mode": "rewrite",
    "transforms.unwrap.drop.tombstones": "true",
    "provide.transaction.metadata": "true",
    "offset.flush.interval.ms": "0",
    "max.batch.size": "4096",
    "max.queue.size": "16384"
  }
}
```
What is the captured database version and mode of deployment?
Docker compose deployment with `debezium/connect` and `debezium/postgres:11`
What behaviour do you expect?
Note the configuration of: `"provide.transaction.metadata": "true"` and `"offset.flush.interval.ms": "0"`
We expect to see `BEGIN` and `END` events when transactions are made in Postgres.
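With `provide.transaction.metadata` enabled, Debezium should write records like the following to the transaction metadata topic (named `<topic.prefix>.transaction`, so `cdc.transaction` here). This is an illustrative sketch based on the documented transaction-metadata format; the transaction id and counts will differ:

```json
{ "status": "BEGIN", "id": "571", "event_count": null, "data_collections": null }
{ "status": "END", "id": "571", "event_count": 1, "data_collections": [ { "data_collection": "public.mine", "event_count": 1 } ] }
```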
What behaviour do you see?
When a single insert, wrapped in a transaction, is made, only the bare change event is received; no `BEGIN` or `END` events appear.
After changing the above configuration to `"offset.flush.interval.ms": "100"`, the `BEGIN` and `END` transaction events are correctly received. If the delivery of transaction metadata events is necessarily coupled to `offset.flush.interval.ms`, it would be helpful to document this.
Do you see the same behaviour using the latest release Debezium version?
Yes
Do you have the connector logs, ideally from start till finish?
Can be easily obtained
How to reproduce the issue using our tutorial deployment?
Tail the topics with a Kafka consumer, then create a table in the Postgres DB and run a transaction to demonstrate:
```sql
CREATE TABLE mine (
  id SERIAL PRIMARY KEY,
  count INTEGER,
  created_at TIMESTAMP DEFAULT NOW()
);

BEGIN;
INSERT INTO mine(count) VALUES(777);
COMMIT;
```
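For the consumer side, a sketch of the commands we use (service and topic names assume the tutorial's docker-compose setup and `topic.prefix=cdc`; adjust to your deployment):

```shell
# Watch the transaction metadata topic for BEGIN/END events
docker compose exec kafka /kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server kafka:9092 \
  --topic cdc.transaction \
  --from-beginning

# Watch the data topic for the change event itself
docker compose exec kafka /kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server kafka:9092 \
  --topic cdc.public.mine \
  --from-beginning
```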