Details
- Bug
- Resolution: Unresolved
- Major
- Important
Description
Hello Red Hat,
After upgrading to Debezium 2.0 we keep receiving the max_allowed_packet error. The connectors keep throwing errors like the following:
log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master; the first event 'mysql-bin.034401' at 46164857, the last event read from '/var/lib/mysql/logbin/mysql-bin.034401' at 79132302, the last byte read from '/var/lib/mysql/logbin/mysql-bin.034401' at 79132321. Error code: 1236; SQLSTATE: HY000.
log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master; the first event 'mysql-bin.035512' at 4830044, the last event read from '/var/lib/mysql/logbin/mysql-bin.035513' at 85327549, the last byte read from '/var/lib/mysql/logbin/mysql-bin.035513' at 85327568. Error code: 1236; SQLSTATE: HY000.
The max_allowed_packet setting on the MySQL server is already at its maximum value, so it does not make sense for Debezium to keep failing for this reason.
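For reference, this is how we verified the limit on the server (a sketch; `max_allowed_packet` and its replication-side counterpart `slave_max_allowed_packet` are standard MySQL variables, and 1073741824 bytes is the documented maximum):

```sql
-- Check the effective limits on the server.
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
SHOW GLOBAL VARIABLES LIKE 'slave_max_allowed_packet';

-- Raise max_allowed_packet to its documented maximum (1 GB).
SET GLOBAL max_allowed_packet = 1073741824;
```

Error 1236 is raised on the replication protocol, so `slave_max_allowed_packet` is the variable most likely to matter for a binlog reader such as Debezium.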
We are attaching the failing binlogs in a password-protected zip to help the investigation. Please let us know a secure way to share the password.
On the other hand, the connectors now return this error and stop consuming data, yet both connector processes remain running. Is this new behavior? Is there any mechanism to make a connector fail if it has not consumed data for a long time, or how are we supposed to detect that a connector is not working correctly?
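Absent a built-in failure signal, the workaround we are using is to poll the Kafka Connect REST API (the `GET /connectors/<name>/status` endpoint is standard Kafka Connect). A minimal sketch, where the base URL and connector name are placeholders for our deployment:

```python
import json
from urllib.request import urlopen


def unhealthy_tasks(status: dict) -> list:
    """Return the ids of tasks whose state is not RUNNING.

    `status` is the JSON body returned by Kafka Connect's
    GET /connectors/<name>/status endpoint.
    """
    return [t["id"] for t in status.get("tasks", [])
            if t.get("state") != "RUNNING"]


def check_connector(base_url: str, name: str) -> list:
    # Fetch the connector's status document and report any
    # task that is not in the RUNNING state.
    with urlopen(f"{base_url}/connectors/{name}/status") as resp:
        return unhealthy_tasks(json.load(resp))
```

As described above, a stalled connector may still report RUNNING, so this check alone is not sufficient; combining it with Debezium's streaming metrics (for example the `MilliSecondsBehindSource` JMX attribute) or topic lag monitoring would give a stronger signal.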
Kind regards.