Bug
Resolution: Obsolete
Blocker
2.5.0.Final
Critical
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
For bug reports, provide this information, please:
What Debezium connector do you use and what version?
Debezium Oracle connector, version 2.5
What is the connector configuration?
Here are the table DDL, archive log, and the Kafka message for reference:
client_id_0_archive_logs_scn_3235181181.csv
client_id_0_kafka_msg_after.json
client_id_0_kafka_vs_redo_vs_coldef.csv
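The actual configuration is in the attachments above; for context, a representative Debezium Oracle connector configuration (all hostnames, credentials, and table names below are hypothetical placeholders, not taken from this report) typically looks like:

```json
{
  "name": "oracle-connector",
  "config": {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "database.hostname": "oracle-host.example.com",
    "database.port": "1521",
    "database.user": "c##dbzuser",
    "database.password": "********",
    "database.dbname": "ORCLCDB",
    "topic.prefix": "server1",
    "table.include.list": "MYSCHEMA.MYTABLE",
    "log.mining.strategy": "online_catalog",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
    "schema.history.internal.kafka.topic": "schema-changes.oracle"
  }
}
```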
What is the captured database version and mode of deployment?
(E.g. on-premises, with a specific cloud provider, etc.)
Oracle v19.0 hosted on AWS EC2
What behavior do you expect?
The archive log should be parsed properly with values for every column
What behavior do you see?
Some of the columns are set to null. All column values up to a specific point are parsed and populated correctly; roughly the last half of the columns, in insert order, are set to null.
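To make the symptom concrete, here is a minimal sketch of a check one could run over the `after` payload of consumed Kafka messages to flag rows exhibiting it. This helper is purely illustrative and not part of Debezium; it assumes the payload is a dict whose keys preserve column insert order.

```python
def trailing_null_run(row, threshold=0.5):
    """Return True if a contiguous run of trailing columns (in insert
    order) is null and covers at least `threshold` of the row -- i.e.
    values are populated up to some point, then all null after it."""
    values = list(row.values())  # dicts preserve insertion order (Python 3.7+)
    n = len(values)
    run = 0
    for v in reversed(values):
        if v is not None:
            break
        run += 1
    return n > 0 and run / n >= threshold

# Example: last two of three columns null -> flagged
# trailing_null_run({"c1": 10, "c2": None, "c3": None})  -> True
# trailing_null_run({"c1": 10, "c2": 20, "c3": None})    -> False
```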
Community thread for reference: community link
Do you see the same behaviour using the latest released Debezium version?
Not verified yet; we are not able to reproduce this issue. It happens very rarely (roughly one in 100 million events) and only on one specific table, which has around 300 columns. We created a new connector to process from that SCN, and this time the connector parsed all the column values and produced a valid Kafka message.
Do you have the connector logs, ideally from start till finish?
We do not have TRACE logging enabled in production, and because the issue occurs so rarely we cannot predict a time window in which to capture logs. We will still try to obtain the logs and attach them here once we have them.
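For reference, TRACE logging can usually be scoped to just the Oracle connector in Kafka Connect's Log4j configuration (commonly `connect-log4j.properties`; the exact file and location depend on the deployment), which keeps the log volume manageable in production:

```properties
# Raise verbosity only for the Debezium Oracle connector classes
log4j.logger.io.debezium.connector.oracle=TRACE
```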
How to reproduce the issue using our tutorial deployment?
Unable to reproduce
Is incorporated by: DBZ-8747 A transaction mined across two queries can randomly cause unsupported operations (Closed)