- Type: Enhancement
- Resolution: Obsolete
- Priority: Major
When a connector running in schema_only mode fails for some reason and stays down long enough for MySQL to purge the binlog, the connector cannot be restarted and keeps seeing the following exception:
org.apache.kafka.connect.errors.ConnectException: The connector is trying to read binlog starting at binlog file 'mysql-bin-changelog.000544', pos=128492709, skipping 2 events plus 1 rows, but this is no longer available on the server. Reconfigure the connector to use a snapshot when needed.
    at io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:102)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:141)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:139)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:182)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
The only workaround is to start the connector under a new name. As part of this enhancement, it should be possible to restart a connector running in schema_only mode even after the binlog position it recorded has been purged (see the sketch below).
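For reference, here is a minimal sketch of the fallback behaviour this enhancement asks for, written against plain JDBC. The class and method names (BinlogAvailabilityCheck, isBinlogAvailable) and the connection details are illustrative assumptions, not Debezium's actual API: the idea is simply to check via SHOW BINARY LOGS whether the binlog file recorded in the stored offset still exists on the server and, if it was purged, take a new schema_only snapshot from the current position instead of throwing the ConnectException above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/**
 * Hypothetical sketch (not Debezium's real implementation): before resuming from a
 * stored offset, verify that the recorded binlog file still exists on the MySQL
 * server; if it has been purged, fall back to a fresh schema_only snapshot instead
 * of failing the task.
 */
public class BinlogAvailabilityCheck {

    /** Returns true if the given binlog file is still listed by SHOW BINARY LOGS. */
    static boolean isBinlogAvailable(Connection conn, String binlogFile) throws SQLException {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW BINARY LOGS")) {
            while (rs.next()) {
                if (binlogFile.equals(rs.getString(1))) { // column 1 is Log_name
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) throws SQLException {
        // Placeholder connection details and offset; substitute real values.
        String url = "jdbc:mysql://localhost:3306/?useSSL=false";
        String storedBinlogFile = "mysql-bin-changelog.000544";

        try (Connection conn = DriverManager.getConnection(url, "debezium", "dbz-password")) {
            if (isBinlogAvailable(conn, storedBinlogFile)) {
                System.out.println("Resume streaming from " + storedBinlogFile);
            } else {
                // Desired behaviour for schema_only mode: re-snapshot the schema and
                // continue streaming from the server's current binlog position.
                System.out.println("Binlog purged; take a new schema_only snapshot and restart from the current position");
            }
        }
    }
}

Whether such a fallback should happen automatically on restart or be gated behind an explicit configuration option is left open here.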
- is related to: DBZ-220 Force DBZ to commit regularly (Closed)