-
Bug
-
Resolution: Obsolete
-
Major
-
None
-
None
-
None
-
None
We have Kafka compaction enabled for all the Debezium topics, so we don't lose any related data.
We stumbled upon an issue: when we enable compaction for the history topic, Debezium fails to insert records into the history topic and fails with the following error.
==========================================================================
Caused by: org.apache.kafka.connect.errors.ConnectException: Error recording the DDL statement(s) in the database history Kakfa topic debizium.db.apilogger-history:0 using brokers at null: CREATE TABLE `YYYYY` (
  `AAAA` bigint(20) NOT NULL AUTO_INCREMENT,
  `VVV` varchar(255) NOT NULL,
  `CCC` int(11) NOT NULL,
  `FFF` varchar(255) DEFAULT NULL,
  `GGG` varchar(255) DEFAULT NULL,
  `OOOO` varchar(255) NOT NULL,
  `CCC` bigint(20) NOT NULL,
  `UUUU` mediumtext,
  `UUUU` int(11) DEFAULT '0',
  `HHHH` text,
  `NNNN` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`pk_id`),
  KEY `ADCHA` (`app_id`),
  KEY `CSD` (`ipn_id`),
  KEY `ADAD` (`ipn_type`),
  KEY `AAD` (`status`),
  KEY `AAD` (`ipn_event`),
  KEY `AFAFD` (`event_time`),
  KEY `ADASD` (`state`)
) ENGINE=InnoDB AUTO_INCREMENT=83635728 DEFAULT CHARSET=utf8
    at io.debezium.connector.mysql.MySqlSchema.applyDdl(MySqlSchema.java:387)
    at io.debezium.connector.mysql.BinlogReader.handleQueryEvent(BinlogReader.java:447)
    at io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:313)
    ... 5 more
Caused by: io.debezium.relational.history.DatabaseHistoryException: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.CorruptRecordException: This message has failed its CRC checksum, exceeds the valid size, or is otherwise corrupt.
    at io.debezium.relational.history.KafkaDatabaseHistory.storeRecord(KafkaDatabaseHistory.java:175)
    at io.debezium.relational.history.AbstractDatabaseHistory.record(AbstractDatabaseHistory.java:45)
    at io.debezium.connector.mysql.MySqlSchema.applyDdl(MySqlSchema.java:385)
    ... 7 more
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.CorruptRecordException: This message has failed its CRC checksum, exceeds the valid size, or is otherwise corrupt.
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:70)
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:57)
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25)
    at io.debezium.relational.history.KafkaDatabaseHistory.storeRecord(KafkaDatabaseHistory.java:165)
    ... 9 more
Caused by: org.apache.kafka.common.errors.CorruptRecordException: This message has failed its CRC checksum, exceeds the valid size, or is otherwise corrupt.
===========================================================
On a further note, it sounds like it's the issue mentioned here:
https://issues.apache.org/jira/browse/KAFKA-4370
As a workaround, disabling compaction for the history topic and re-submitting the connector works.
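The workaround above can be applied with the standard Kafka tooling; a minimal sketch, assuming the history topic name from the stack trace and a broker reachable at localhost:9092 (both are assumptions for illustration):

```shell
# Switch the history topic from log compaction to time-based retention,
# and keep retention unlimited so no history records are ever deleted.
# Topic name is taken from the error above; the bootstrap server is assumed.
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name debizium.db.apilogger-history \
  --alter --add-config cleanup.policy=delete,retention.ms=-1
```

Debezium expects the database history topic to be non-compacted with unlimited retention, since every DDL record must be preserved for schema recovery.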
Our Debezium setup is similar to the one described here:
https://wecode.wepay.com/posts/streaming-databases-in-realtime-with-mysql-debezium-kafka
When a table is created in one database, Debezium halts for all databases. It looks like the history topic is not making use of gtid.source.includes.
- is related to
-
DBZ-241 Topic configuration requirements are not clearly documented
- Closed