- Bug
- Resolution: Done
- Major
- 3.3.0.Alpha1
- None
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
For bug reports, provide this information, please:
What Debezium connector do you use and what version?
Postgres Connector on Debezium Server 3.3.0.Alpha1
What is the connector configuration?
```yaml
debezium:
  source:
    connector:
      class: io.debezium.connector.postgresql.PostgresConnector
    offset:
      storage:
        ~: io.debezium.storage.jdbc.offset.JdbcOffsetBackingStore
        jdbc:
          table:
            name: debezium_offset_storage
          # connection:
          #   url: <FROM_ENV>
    schema:
      history:
        internal:
          ~: io.debezium.storage.jdbc.history.JdbcSchemaHistory
          jdbc:
            table:
              name: debezium_schema_history
            # connection:
            #   url: '<FROM_ENV>'
    key:
      converter:
        schemas:
          enable: false
    value:
      converter:
        schemas:
          enable: false
    database:
      sslmode: require
      # hostname: <FROM_ENV>
      # port: <FROM_ENV>
      # user: <FROM_ENV>
      # password: <FROM_ENV>
      # dbname: <FROM_ENV>
    plugin:
      name: pgoutput
    snapshot:
      mode: no_data
    publication:
      name: publication
      autocreate:
        mode: disabled
    table:
      include:
        list: |
          public.book,
          public.users,
          public.debezium_heartbeat
    slot:
      name: slot
    topic:
      prefix: postgres-connector
    heartbeat:
      interval:
        ms: 60000
      action:
        query: "UPDATE debezium_heartbeat SET beat=RANDOM() WHERE id=1"
    # Transformations
    transforms:
      ~: reroute
      reroute:
        type: org.apache.kafka.connect.transforms.RegexRouter
        regex: .*
        replacement: evh-debezium-cdc
  sink:
    type: kafka
    kafka:
      producer:
        security:
          protocol: SASL_SSL
        sasl:
          mechanism: PLAIN
        # jaas:
        #   config: <FROM_ENV>
        group:
          id: debezium-connect-cluster
        session:
          timeout:
            ms: 60000
        key:
          serializer: org.apache.kafka.common.serialization.StringSerializer
        value:
          serializer: org.apache.kafka.common.serialization.StringSerializer
        # bootstrap:
        #   servers: <FROM_ENV>
quarkus:
  http:
    port: 8080
  log:
    # level: <FROM_ENV>
    console:
      json: true
```
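For reference, the heartbeat-related portion of this configuration corresponds to the following flat Debezium Server properties (a sketch assuming the standard `debezium.source.` key prefix; the values are copied from the YAML above):

```properties
# Heartbeat every 60 s, running an UPDATE against the heartbeat table
debezium.source.heartbeat.interval.ms=60000
debezium.source.heartbeat.action.query=UPDATE debezium_heartbeat SET beat=RANDOM() WHERE id=1
# The heartbeat table is itself part of the capture list
debezium.source.table.include.list=public.book,public.users,public.debezium_heartbeat
```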
What is the captured database version and mode of deployment?
We are capturing CDC events from a PostgreSQL 16 database using Debezium Server and sending the events to a Kafka sink.
What behavior do you expect?
We expected a heartbeat to be emitted every heartbeat.interval.ms, which in our case is 60 seconds. After emitting a heartbeat event, the connector should then run heartbeat.action.query. In our case the heartbeat table is itself configured for capturing, so we would also see its change event in our consumers.
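The heartbeat.action.query above implies a heartbeat table with roughly the following shape (a hypothetical sketch; only the id and beat columns are implied by the query, the types are assumed):

```sql
-- Hypothetical DDL matching the heartbeat.action.query; column names
-- come from the query itself, column types are assumptions.
CREATE TABLE public.debezium_heartbeat (
    id   integer PRIMARY KEY,
    beat double precision
);
INSERT INTO public.debezium_heartbeat (id, beat) VALUES (1, 0);
```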
What behavior do you see?
Debezium runs the heartbeat.action.query constantly, to the point that it sent 32 records in a single batch:
"32 records sent during previous 00:00:10.674, last recorded offset of {server=postgres-database} partition is {lsn_proc=8019517304, messageType=UPDATE, lsn_commit=8019517304, lsn=8019517304, txId=40542, ts_usec=1754656514822948}"
We couldn't find as many plain heartbeat messages as captured events from the heartbeat table, but that could be because Debezium floods our consumer with events.
If I remove the heartbeat configuration entirely, the connector works as expected and regular CDC events are no longer captured endlessly, so it seems that the heartbeat.action.query is running in a loop.
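Because public.debezium_heartbeat is itself in table.include.list, every execution of heartbeat.action.query is captured as a CDC event. A toy sketch (plain Python, all names hypothetical, not Debezium code) of why firing the action query per emitted event, rather than once per interval, would produce the endless stream described above:

```python
def run_heartbeat_cycle(fire_per_event: bool, intervals: int) -> int:
    """Count change events produced by a captured heartbeat table.

    Each heartbeat runs an UPDATE on the heartbeat table; because that
    table is in table.include.list, the UPDATE itself becomes a CDC
    event. If the action query were (incorrectly) fired once per emitted
    event instead of once per interval, events would keep triggering
    further events until the batch fills up.
    """
    events = 0
    for _ in range(intervals):
        pending = 1        # one heartbeat due at the start of each interval
        budget = 32        # cap, mirroring the 32-record batch from the logs
        while pending and budget:
            events += 1    # the UPDATE is captured as a CDC event
            budget -= 1
            pending = 1 if fire_per_event else 0  # buggy path re-triggers
    return events

# Expected behaviour: one heartbeat-table event per interval.
# Observed behaviour resembles the per-event path, which fills a batch.
print(run_heartbeat_cycle(fire_per_event=False, intervals=3))  # 3
print(run_heartbeat_cycle(fire_per_event=True, intervals=1))   # 32
```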
Do you see the same behaviour using the latest released Debezium version?
No, the same configuration works as intended on Debezium 3.2.0.Final.
Do you have the connector logs, ideally from start till finish?
Yes, I will attach them as a separate file. Logs were collected with INFO level.
How to reproduce the issue using our tutorial deployment?
The tutorial deployment uses Kafka Connect, while I am using Debezium Server. What I did was use the debezium-server Docker image and start a connector with the configuration I've sent.
Additional context
I encountered this by accident while testing DBZ-9304 and was advised to open a new issue. There is some information about our setup there, including the Dockerfile we are using.
I don't know if this only happens when using Postgres Connector or with other connectors as well.
While this looks similar to DBZ-8551, I believe it is a different situation.
- impacts account
- DBZ-9304 Debezium Server Azure Event Hubs sink duplicates all previous events (Closed)