- Type: Bug
- Resolution: Done
- Priority: Major
- Fix Version: 0.9.2.Final
- None
I think the outcome of this is that if you have a table without a primary key, the snapshot will silently fail to write data to Kafka; streaming, however, will pick up new data. That was the behavior I observed, anyway.
In my case the missing PK was a PEBKAC error (I had intended to define a PK on the table), so I did not investigate further. I can say, however, that removing the check for key == null in the snapshot producer did result in data reaching Kafka.
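To make the reported behavior concrete, here is a minimal sketch (not actual Debezium source; class and method names are invented for illustration) of the kind of key == null guard described above. A guard like this would silently drop every snapshot row from a table without a primary key, since no record key can be derived for such rows:

```java
import java.util.ArrayList;
import java.util.List;

public class SnapshotKeyGuardSketch {
    // Simplified stand-in for a snapshot row: table name, record key, record value.
    record Row(String table, Object key, Object value) {}

    // Hypothetical snapshot emit loop: rows without a key are skipped
    // without any error, producing the "silent" snapshot failure.
    static List<Row> emitSnapshot(List<Row> rows) {
        List<Row> emitted = new ArrayList<>();
        for (Row row : rows) {
            if (row.key() == null) {
                continue; // row from a PK-less table: dropped silently
            }
            emitted.add(row);
        }
        return emitted;
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row("with_pk", 1, "a"),
            new Row("no_pk", null, "b"));
        // Only the keyed row survives the snapshot.
        System.out.println(emitSnapshot(rows).size());
    }
}
```

Removing (or relaxing) the null check, as described in the comment, would let the unkeyed rows through instead of discarding them.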
- is related to: DBZ-1225 Handle tables without PK consistently across all relational connectors (Closed)