[2021-08-09 13:12:26,080] INFO WorkerSourceTask{id=GE_prod-0} flushing 22792 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:441)
[2021-08-09 13:12:29,312] ERROR Mining session stopped due to the {} (io.debezium.connector.oracle.logminer.LogMinerHelper:86)
java.lang.IllegalStateException: None of log files contains offset SCN: 911363146383, re-snapshot is required.
	at io.debezium.connector.oracle.logminer.LogMinerHelper.setLogFilesForMining(LogMinerHelper.java:67)
	at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.initializeRedoLogsForMining(LogMinerStreamingChangeEventSource.java:222)
	at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:133)
	at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:54)
	at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:172)
	at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:134)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[2021-08-09 13:12:29,312] ERROR Producer failure (io.debezium.pipeline.ErrorHandler:31)
java.lang.IllegalStateException: None of log files contains offset SCN: 911363146383, re-snapshot is required.
	at io.debezium.connector.oracle.logminer.LogMinerHelper.setLogFilesForMining(LogMinerHelper.java:67)
	at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.initializeRedoLogsForMining(LogMinerStreamingChangeEventSource.java:222)
	at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:133)
	at io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource.execute(LogMinerStreamingChangeEventSource.java:54)
	at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:172)
	at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:134)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[2021-08-09 13:12:29,312] INFO startScn=911363146383, endScn=911363166383 (io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource:160)
[2021-08-09 13:12:29,313] INFO Streaming metrics dump: OracleStreamingChangeEventSourceMetrics{currentScn=912162649907, oldestScn=911363143558, committedScn=911363146380, offsetScn=911363126383, logMinerQueryCount=2, totalProcessedRows=5980052, totalCapturedDmlCount=5968875, totalDurationOfFetchingQuery=PT2H53M34.165S, lastCapturedDmlCount=5968875, lastDurationOfFetchingQuery=PT2H53M32.612S, maxCapturedDmlCount=5968875, maxDurationOfFetchingQuery=PT2H53M32.612S, totalBatchProcessingDuration=PT0S, lastBatchProcessingDuration=PT0S, maxBatchProcessingDuration=PT0S, maxBatchProcessingThroughput=0, currentLogFileName=[Ljava.lang.String;@54e481f3, minLogFilesMined=0, maxLogFilesMined=2, redoLogStatus=[Ljava.lang.String;@7f8a1600, switchCounter=548, batchSize=21000, millisecondToSleepBetweenMiningQuery=600, hoursToKeepTransaction=1, networkConnectionProblemsCounter=0, batchSizeDefault=20000, batchSizeMin=1000, batchSizeMax=100000, sleepTimeDefault=1000, sleepTimeMin=0, sleepTimeMax=3000, sleepTimeIncrement=200, totalParseTime=PT2M5.775S, totalStartLogMiningSessionDuration=PT32.791S, lastStartLogMiningSessionDuration=PT32.791S, maxStartLogMiningSessionDuration=PT32.791S, totalProcessTime=PT2H54M12.107S, minBatchProcessTime=PT0S, maxBatchProcessTime=PT0S, totalResultSetNextTime=PT7M25.978S, lagFromTheSourceDuration=PT2H54M38.438S, maxLagFromTheSourceDuration=PT2H54M38.438S, minLagFromTheSourceDuration=PT0S, lastCommitDuration=PT0.005S, maxCommitDuration=PT24M5.023S, activeTransactions=7, rolledBackTransactions=3, committedTransactions=5582, abandonedTransactionIds=[], rolledbackTransactionIds=[10000a0043881a00, 3a0015004af92100, 34001d007a1b1b00], registeredDmlCount=5968864, committedDmlCount=5298679, errorCount=1, warningCount=0, scnFreezeCount=0, unparsableDdlCount=0, miningSessionUserGlobalAreaMemory=62483880, miningSessionUserGlobalAreaMaxMemory=96041256, miningSessionProcessGlobalAreaMemory=143466408, miningSessionProcessGlobalAreaMaxMemory=143466408} (io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource:161)
[2021-08-09 13:12:29,313] INFO Offsets: OracleOffsetContext [scn=911363126383] (io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource:162)
[2021-08-09 13:12:29,313] INFO Finished streaming (io.debezium.pipeline.ChangeEventSourceCoordinator:173)
[2021-08-09 13:12:29,313] INFO Connected metrics set to 'false' (io.debezium.pipeline.metrics.StreamingChangeEventSourceMetrics:70)
[2021-08-09 13:12:29,419] INFO WorkerSourceTask{id=GE_prod-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:424)
[2021-08-09 13:12:29,419] ERROR Invalid call to OffsetStorageWriter flush() while already flushing, the framework should not allow this (org.apache.kafka.connect.storage.OffsetStorageWriter:109)
[2021-08-09 13:12:29,419] ERROR WorkerSourceTask{id=GE_prod-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:186)
org.apache.kafka.connect.errors.ConnectException: OffsetStorageWriter is already flushing
	at org.apache.kafka.connect.storage.OffsetStorageWriter.beginFlush(OffsetStorageWriter.java:111)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:436)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:255)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[2021-08-09 13:12:29,419] ERROR WorkerSourceTask{id=GE_prod-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:187)
[2021-08-09 13:12:29,419] INFO Stopping down connector (io.debezium.connector.common.BaseSourceTask:240)
[2021-08-09 13:12:29,449] INFO Connection gracefully closed (io.debezium.jdbc.JdbcConnection:964)
[2021-08-09 13:12:29,450] INFO [Producer clientId=dbz_GE_prod-dbhistory] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1182)
[2021-08-09 13:12:29,452] INFO [Producer clientId=connector-producer-GE_prod-0] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1182)
[2021-08-09 13:12:31,081] ERROR WorkerSourceTask{id=GE_prod-0} Failed to flush, timed out while waiting for producer to flush outstanding 20091 messages (org.apache.kafka.connect.runtime.WorkerSourceTask:446)
[2021-08-09 13:12:31,081] ERROR WorkerSourceTask{id=GE_prod-0} Failed to commit offsets (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:116)
[2021-08-09 13:12:59,452] INFO [Producer clientId=connector-producer-GE_prod-0] Proceeding to force close the producer since pending requests could not be completed within timeout 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1208)
[2021-08-09 13:12:59,453] ERROR WorkerSourceTask{id=GE_prod-0} failed to send record to dbz_GE_prod.IBS.Z_RECORDS: (org.apache.kafka.connect.runtime.WorkerSourceTask:352)
org.apache.kafka.common.KafkaException: Producer is closed forcefully.
	at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:766)
	at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:753)
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:279)
	at java.lang.Thread.run(Thread.java:748)
[2021-08-09 13:12:59,453] ERROR WorkerSourceTask{id=GE_prod-0} failed to send record to dbz_GE_prod.IBS.Z_RECORDS: (org.apache.kafka.connect.runtime.WorkerSourceTask:352)
org.apache.kafka.common.KafkaException: Producer is closed forcefully.
	at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:766)
	at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:753)
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:279)
	at java.lang.Thread.run(Thread.java:748)