Debezium · DBZ-5256

Oracle LogMiner may fail with an in-progress transaction in an archive log that has been deleted


      During the Oracle connector's start-up, we examine the V$TRANSACTION table and optionally the archive logs to determine where streaming should begin when in-progress transactions are detected.

      This process can be problematic if the database uses an aggressive retention policy and the DBA does not properly clean up archive logs using RMAN. For example, if the DBA removes the files with the file system's rm command but never reconciles the catalog with Oracle (e.g., RMAN's CROSSCHECK ARCHIVELOG ALL), the entries in V$ARCHIVED_LOG will still report that the files exist when they do not, causing the connector to attempt to mine farther back than it should.
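      The mismatch described above can be sketched as a simple cross-check between what the catalog reports and what is actually on disk. This is a minimal illustration, not connector code: the class and method names are hypothetical, and java.nio file checks stand in for whatever mechanism (e.g., a database-side probe) would be used in practice.

      ```java
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.util.List;
      import java.util.stream.Collectors;

      public class ArchiveLogCheck {
          // Given file paths reported by V$ARCHIVED_LOG, return the entries that
          // no longer exist on disk (e.g., removed with rm and never crosschecked
          // by RMAN). Mining any of these would fail.
          static List<Path> staleEntries(List<Path> reportedLogs) {
              return reportedLogs.stream()
                      .filter(p -> !Files.exists(p))
                      .collect(Collectors.toList());
          }
      }
      ```

      A connector-side check like this could only detect the inconsistency; repairing V$ARCHIVED_LOG itself remains an RMAN/DBA operation.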

      Obviously, the best solution is for the DBA to always remove logs through the proper RMAN operations, but the real question is whether there is any way we can mitigate this ourselves.

      One option might be to leverage the UTL_FILE.FGETATTR procedure to check whether a log file actually exists on disk and, if it does not, handle the situation gracefully. For example, during the snapshot-to-streaming transition, if the log containing an in-progress transaction's start is no longer available, we could gracefully discard that transaction. However, this would only apply to the small window of dealing with in-progress transactions at the snapshot boundary, and it would not cover the situation where logs are improperly removed after the connector has moved to streaming mode.
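      The graceful-discard idea at the snapshot boundary might look roughly like the following. This is a sketch under assumptions: the record types, SCN ranges, and the existsOnDisk flag (which would come from a UTL_FILE.FGETATTR-style probe) are all hypothetical stand-ins, not the connector's actual model.

      ```java
      import java.util.ArrayList;
      import java.util.List;

      public class InProgressTxFilter {
          // Hypothetical model: an archive log covers an SCN range and may or
          // may not still be present on disk.
          record ArchiveLog(long firstScn, long nextScn, boolean existsOnDisk) {
              boolean covers(long scn) { return scn >= firstScn && scn < nextScn; }
          }
          record Transaction(String xid, long startScn) {}

          // Keep only in-progress transactions whose starting SCN lies in a log
          // that is physically available; gracefully discard the rest instead of
          // failing when LogMiner cannot read a deleted log.
          static List<Transaction> retainMineable(List<Transaction> inProgress,
                                                  List<ArchiveLog> logs) {
              List<Transaction> kept = new ArrayList<>();
              for (Transaction tx : inProgress) {
                  boolean available = logs.stream()
                          .anyMatch(l -> l.existsOnDisk() && l.covers(tx.startScn()));
                  if (available) {
                      kept.add(tx);
                  }
              }
              return kept;
          }
      }
      ```

      A discard here trades completeness for availability: the connector would skip events from the affected transaction rather than fail at start-up, so it would likely need to log a prominent warning.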

            ccranfor@redhat.com Chris Cranford