- Bug
- Resolution: Done
- Major
- None
- None
- False
- False
- Undefined
Hi there.
We are testing Debezium internally against an existing Oracle 19 database, using the Debezium Oracle connector with LogMiner.
Somehow, last week we got into a state where we would always hit a "missing log file" error immediately after installing the connector. The error message is very similar to the one reported in DBZ-2759, but the connector never even seemed to get the chance to start. Here is the stack trace for reference:
[2020-12-14 21:06:19,488] ERROR Mining session stopped due to the {} (io.debezium.connector.oracle.logminer.LogMinerHelper:515)
java.sql.SQLException: ORA-01291: missing log file
ORA-06512: at "SYS.DBMS_LOGMNR", line 72
ORA-06512: at line 1
	at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:509)
	at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:461)
	at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1104)
	at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:553)
	at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:269)
	at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:655)
	at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:265)
	at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:86)
	at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:965)
	at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1205)
	at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3666)
	at oracle.jdbc.driver.T4CCallableStatement.executeInternal(T4CCallableStatement.java:1358)
	at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3778)
	at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4251)
	at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1081)
	at io.debezium.connector.oracle.logminer.LogMinerHelper.executeCallableStatement(LogMinerHelper.java:571)
	at io.debezium.connector.oracle.logminer.LogMinerHelper.startLogMining(LogMinerHelper.java:251)
	at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:180)
After forking the connector, digging into the source, and adding some extra trace logging, we finally found what was going on. Essentially, at startup the list of log files to mine is built from the last SCN offset, but that list does not necessarily match the current redo log files if your DB team has made changes in the meantime. Since the current redo log files do not change either, Debezium never detects a log file switch and never triggers a reload. The current redo log files are therefore never added to the LogMiner session, and as soon as the connector reaches an SCN that lives inside one of them, it fails with the "missing log file" exception.
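For context, getCurrentRedoLogFiles essentially asks the data dictionary which redo log members belong to the group Oracle is currently writing to. This is only a hypothetical sketch of that lookup (the exact SQL and method signature in LogMinerHelper may differ); it joins V$LOG with V$LOGFILE and keeps the CURRENT group:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;

    public class CurrentRedoLogs {

        // Sketch only: resolve the file names of the redo log group that is
        // currently being written to. The real query in LogMinerHelper may differ.
        public static List<String> currentRedoLogFiles(Connection connection) throws SQLException {
            List<String> fileNames = new ArrayList<>();
            String sql = "SELECT F.MEMBER FROM V$LOG L, V$LOGFILE F"
                    + " WHERE L.GROUP# = F.GROUP# AND L.STATUS = 'CURRENT'";
            try (Statement statement = connection.createStatement();
                 ResultSet rs = statement.executeQuery(sql)) {
                while (rs.next()) {
                    fileNames.add(rs.getString(1)); // full path of a current redo log member
                }
            }
            return fileNames;
        }
    }

If that result set never makes it into the file list handed to DBMS_LOGMNR, the session simply has no way to resolve SCNs that live in the current redo logs, which matches the failure we saw.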
We verified that adding the following code to LogMinerHelper seems to fix the issue, since it also considers the current redo log files when reloading. I have the feeling this may not cover all possible edge cases, though, so rather than sending a PR I decided to open this bug and get some feedback first. Also, passing null for the metrics parameter is kind of ugly, but this was only meant to quickly prove the solution.
    // Also consider the CURRENT redo log files when reloading, not only the
    // files derived from the last SCN offset. The null metrics parameter is
    // just a shortcut to prove the approach.
    List<String> currentFileNames = getCurrentRedoLogFiles(connection, null).stream()
            .filter(e -> !logFilesNames.contains(e))
            .collect(Collectors.toList());
    if (!currentFileNames.isEmpty()) {
        logFilesNames.addAll(currentFileNames);
    }
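The idea behind the patch is that the reload path should treat the CURRENT redo logs as always minable: even when no log switch has been observed, they get appended to the LogMiner session, so SCNs that fall inside them can still be resolved. A cleaner version would presumably pass the real metrics object instead of null; this is just the minimal change that made the error go away in our setup.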
Cheers,
Martin
is related to:
- DBZ-2994 Handle large SCN values in Offsets and SourceInfo block (Closed)