- Bug
- Resolution: Obsolete
- Major
- None
- 1.8.0.Final
- False
- None
- False
- Moderate
Bug report
What Debezium connector do you use and what version?
MySQL Connector 1.8.0.Final
What is the connector configuration?
{
  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
  "tasks.max": "1",
  "poll.interval.ms": "1000",
  "max.batch.size": "100000",
  "max.queue.size": "400000",
  "database.hostname": <mysql_server_name>,
  "database.port": <mysql_server_port>,
  "database.user": <debezium_user>,
  "database.password": <debezium_password>,
  "database.server.id": "112233",
  "database.server.name": "mysql_test",
  "table.include.list": "test.orders_test",
  "snapshot.mode": "schema_only",
  "time.precision.mode": "connect",
  "include.query": "false",
  "include.schema.changes": "false",
  "tombstones.on.delete": "false",
  "transforms": "unwrap",
  "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
  "transforms.unwrap.drop.tombstones": "true",
  "transforms.unwrap.delete.handling.mode": "rewrite",
  "transforms.unwrap.add.fields": "op,source.ts_ms",
  "database.history.kafka.bootstrap.servers": <kafka_hosts>,
  "database.history.kafka.topic": "dbz-mysql_test-history",
  "database.history.skip.unparseable.ddl": "true",
  "key.converter": "io.confluent.connect.avro.AvroConverter",
  "key.converter.schema.registry.url": <schema_registry_url>,
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter.schema.registry.url": <schema_registry_url>
}
What is the captured database version and mode of deployment?
5.7.36-39-log, on-premises
What behaviour do you expect?
Statement should be parsed successfully without errors.
What behaviour do you see?
Ignoring unparseable statements 'ALTER TABLE orders_test ADD COLUMN row_number BIGINT NOT NULL AUTO_INCREMENT, MODIFY COLUMN end_date datetime DEFAULT null, DROP PRIMARY KEY, ADD PRIMARY KEY (row_number), ALGORITHM=COPY, LOCK=SHARED' stored in database history: {} [io.debezium.relational.history.KafkaDatabaseHistory]
io.debezium.text.ParsingException: DDL statement couldn't be parsed. Please open a Jira issue with the statement 'ALTER TABLE orders_test ADD COLUMN row_number BIGINT NOT NULL AUTO_INCREMENT, MODIFY COLUMN end_date datetime DEFAULT null, DROP PRIMARY KEY, ADD PRIMARY KEY (row_number), ALGORITHM=COPY, LOCK=SHARED'
no viable alternative at input 'ALTER TABLE orders_test\r\nADD COLUMN row_number'
    at io.debezium.antlr.ParsingErrorListener.syntaxError(ParsingErrorListener.java:43)
    at org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:41)
    at org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:544)
    at org.antlr.v4.runtime.DefaultErrorStrategy.reportNoViableAlternative(DefaultErrorStrategy.java:310)
    at org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:136)
    at io.debezium.ddl.parser.mysql.generated.MySqlParser.sqlStatements(MySqlParser.java:1219)
    at io.debezium.ddl.parser.mysql.generated.MySqlParser.root(MySqlParser.java:941)
    at io.debezium.connector.mysql.antlr.MySqlAntlrDdlParser.parseTree(MySqlAntlrDdlParser.java:73)
    at io.debezium.connector.mysql.antlr.MySqlAntlrDdlParser.parseTree(MySqlAntlrDdlParser.java:45)
    at io.debezium.antlr.AntlrDdlParser.parse(AntlrDdlParser.java:82)
    at io.debezium.relational.history.AbstractDatabaseHistory.lambda$recover$2(AbstractDatabaseHistory.java:146)
    at io.debezium.relational.history.KafkaDatabaseHistory.recoverRecords(KafkaDatabaseHistory.java:311)
    at io.debezium.relational.history.AbstractDatabaseHistory.recover(AbstractDatabaseHistory.java:112)
    at io.debezium.relational.history.DatabaseHistory.recover(DatabaseHistory.java:158)
    at io.debezium.relational.HistorizedRelationalDatabaseSchema.recover(HistorizedRelationalDatabaseSchema.java:62)
    at io.debezium.schema.HistorizedDatabaseSchema.recover(HistorizedDatabaseSchema.java:38)
    at io.debezium.connector.mysql.MySqlConnectorTask.validateAndLoadDatabaseHistory(MySqlConnectorTask.java:369)
    at io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:108)
    at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:130)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:232)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:186)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:241)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.antlr.v4.runtime.NoViableAltException
    at org.antlr.v4.runtime.atn.ParserATNSimulator.noViableAlt(ParserATNSimulator.java:2026)
    at org.antlr.v4.runtime.atn.ParserATNSimulator.execATN(ParserATNSimulator.java:467)
    at org.antlr.v4.runtime.atn.ParserATNSimulator.adaptivePredict(ParserATNSimulator.java:393)
    at io.debezium.ddl.parser.mysql.generated.MySqlParser.sqlStatements(MySqlParser.java:1017)
    ... 21 more
I believe the exception happened because the column name "row_number" is also a MySQL keyword (ROW_NUMBER), which the DDL parser apparently does not accept as an identifier.
We saw a similar exception after creating a new table with a "row_number" column as well.
So this happens for both CREATE TABLE and ALTER TABLE statements.
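For reference, a minimal reproduction sketch of the failing statements, together with a possible workaround. The workaround is an assumption on my part and not verified against the Debezium parser: backtick-quoting an identifier normally makes MySQL treat a keyword as a plain column name.

```sql
-- Failing statement as captured from the binlog: the unquoted column name
-- "row_number" clashes with the ROW_NUMBER keyword in the parser grammar
ALTER TABLE orders_test
  ADD COLUMN row_number BIGINT NOT NULL AUTO_INCREMENT,
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (row_number);

-- Possible workaround (untested): quote the identifier with backticks so it
-- is parsed as a plain column name rather than a keyword
ALTER TABLE orders_test
  ADD COLUMN `row_number` BIGINT NOT NULL AUTO_INCREMENT,
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (`row_number`);
```

Even with the workaround, the parser arguably should accept the unquoted form, since MySQL itself executed the original statement successfully.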
Do you see the same behaviour using the latest released Debezium version?
Didn't try
Do you have the connector logs, ideally from start till finish?
Please see the log message above.
How to reproduce the issue using our tutorial deployment?
N/A