While taking a MySQL snapshot of a large table, we believe we ran into an out-of-memory issue introduced by the change described below.
Prior to merging PR #2084, the MySQL connector passed the `DEFAULT_SNAPSHOT_FETCH_SIZE` value (hard-coded to `Integer.MIN_VALUE`) to the parent config: https://github.com/debezium/debezium/pull/2084/files#diff-c7bb1d95805a401c3f2657f6086cc953e6209cfcaaa3303ad155f022df8d8c84L980
Per the comments in the `createStatementWithLargeResultSet()` method and the related MySQL driver docs, a fetch size of exactly `Integer.MIN_VALUE` is required to switch the driver from loading all results into memory to streaming them row by row. Any other value appears to be ignored and results in loading all records: https://github.com/debezium/debezium/blob/cb2b2fc07a07cdf3c62422f3b74ab34d156fd298/debezium-connector-mysql/src/main/java/io/debezium/connector/mysql/MySqlSnapshotChangeEventSource.java#L577-L600
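For reference, MySQL Connector/J only enables streaming (row-by-row) result sets when three conditions hold on the statement. A minimal JDBC sketch (class and method names here are ours, not Debezium's):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class StreamingSnapshotSketch {
    // Per the Connector/J docs, streaming mode is only enabled when all three
    // conditions below hold; any other fetch size silently falls back to
    // reading the entire result set into memory.
    public static final int STREAMING_FETCH_SIZE = Integer.MIN_VALUE;

    static Statement createStreamingStatement(Connection connection) throws SQLException {
        Statement stmt = connection.createStatement(
                ResultSet.TYPE_FORWARD_ONLY,     // condition 1: forward-only cursor
                ResultSet.CONCUR_READ_ONLY);     // condition 2: read-only result set
        stmt.setFetchSize(STREAMING_FETCH_SIZE); // condition 3: fetch size == Integer.MIN_VALUE
        return stmt;
    }
}
```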
To avoid the OOM condition, the code needs to set the fetch size to `Integer.MIN_VALUE` again.
Here are a couple of approaches to fix this:
- Update the code to pass `MySqlConnectorConfig.DEFAULT_SNAPSHOT_FETCH_SIZE` down the config inheritance chain (as was done prior to the rewrite).
- In `createStatementWithLargeResultSet()`, hardcode `stmt.setFetchSize` to `Integer.MIN_VALUE`, since any other value is effectively invalid.
- Relatedly, perhaps the `snapshot.fetch.size` config option for MySQL should be removed entirely, since it is misleading at best, or documented to indicate that it should not be used unless you actually want to load all records into memory.
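The second option above could be sketched as follows; the method shape mirrors `createStatementWithLargeResultSet()` but is a hypothetical illustration, not a patch against the Debezium source:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LargeResultSetFix {
    // Hypothetical helper: the configured snapshot.fetch.size is deliberately
    // ignored, because MySQL Connector/J only streams when the fetch size is
    // exactly Integer.MIN_VALUE; any positive value would load the full table
    // into memory and reintroduce the OOM.
    static int effectiveFetchSize(int configuredFetchSize) {
        return Integer.MIN_VALUE;
    }

    static Statement createStatementWithLargeResultSet(Connection connection,
                                                       int configuredFetchSize)
            throws SQLException {
        Statement stmt = connection.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(effectiveFetchSize(configuredFetchSize));
        return stmt;
    }
}
```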