2023-09-13 21:48:27,146 INFO Oracle|vk_nau56|streaming LogMiner session has exceeded maximum session time of 'Optional[PT2M]', forcing restart. [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:48:34,747 INFO || [Producer clientId=connector-producer-vk_nau56_src-0] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2023-09-13 21:48:40,745 INFO || [AdminClient clientId=naument--shared-admin] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2023-09-13 21:50:29,071 INFO Oracle|vk_nau56|streaming LogMiner session has exceeded maximum session time of 'Optional[PT2M]', forcing restart. [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:50:46,348 INFO || [Producer clientId=naument--statuses] Node 2 disconnected. [org.apache.kafka.clients.NetworkClient]
2023-09-13 21:50:59,172 INFO || [AdminClient clientId=connector-adminclient-vk_nau56_src-0] Node 3 disconnected. [org.apache.kafka.clients.NetworkClient]
2023-09-13 21:52:33,642 INFO Oracle|vk_nau56|streaming LogMiner session has exceeded maximum session time of 'Optional[PT2M]', forcing restart. [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:53:40,840 INFO || [AdminClient clientId=naument--shared-admin] Node 1 disconnected. [org.apache.kafka.clients.NetworkClient]
2023-09-13 21:54:35,574 INFO Oracle|vk_nau56|streaming LogMiner session has exceeded maximum session time of 'Optional[PT2M]', forcing restart. [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:55:09,486 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:09 +0000] "GET /connectors HTTP/1.1" 200 32 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:09,490 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:09 +0000] "GET /connectors/vk_nau56_src HTTP/1.1" 200 1528 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:09,490 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:09 +0000] "GET /connectors/vk_nau56_sink HTTP/1.1" 200 906 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:09,492 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:09 +0000] "GET /connectors/vk_nau56_sink/status HTTP/1.1" 200 2228 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:09,493 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:09 +0000] "GET /connectors/vk_nau56_src/status HTTP/1.1" 200 168 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:09,495 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:09 +0000] "GET /connectors/vk_nau56_src/config HTTP/1.1" 200 1431 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:09,495 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:09 +0000] "GET /connectors/vk_nau56_sink/config HTTP/1.1" 200 809 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:09,497 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:09 +0000] "GET /connectors/vk_nau56_src/tasks HTTP/1.1" 200 1551 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:09,498 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:09 +0000] "GET /connectors/vk_nau56_sink/tasks HTTP/1.1" 200 930 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:09,499 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:09 +0000] "GET /connectors/vk_nau56_src/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:09,500 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:09 +0000] "GET /connectors/vk_nau56_sink/tasks/0/status HTTP/1.1" 200 2117 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:09,501 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:09 +0000] "GET /connectors/vk_nau56_src/topics HTTP/1.1" 200 85 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:09,501 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:09 +0000] "GET /connectors/vk_nau56_sink/topics HTTP/1.1" 200 75 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:15,975 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:15 +0000] "GET /connectors/vk_nau56_sink HTTP/1.1" 200 906 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:15,975 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:15 +0000] "GET /connectors/vk_nau56_sink/tasks HTTP/1.1" 200 930 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:15,977 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:15 +0000] "GET /connectors/vk_nau56_sink/status HTTP/1.1" 200 2228 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:15,978 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:15 +0000] "GET /connectors/vk_nau56_sink/tasks/0/status HTTP/1.1" 200 2117 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:20,496 INFO || Successfully processed removal of connector 'vk_nau56_sink' [org.apache.kafka.connect.storage.KafkaConfigBackingStore]
2023-09-13 21:55:20,496 INFO || [Worker clientId=connect-1, groupId=naument] Connector vk_nau56_sink config removed [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:20,498 INFO || [Worker clientId=connect-1, groupId=naument] Handling connector-only config update by stopping connector vk_nau56_sink [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:20,498 INFO || Stopping connector vk_nau56_sink [org.apache.kafka.connect.runtime.Worker]
2023-09-13 21:55:20,498 INFO || Scheduled shutdown for WorkerConnector{id=vk_nau56_sink} [org.apache.kafka.connect.runtime.WorkerConnector]
2023-09-13 21:55:20,499 INFO || Completed shutdown for WorkerConnector{id=vk_nau56_sink} [org.apache.kafka.connect.runtime.WorkerConnector]
2023-09-13 21:55:20,499 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:20 +0000] "DELETE /connectors/vk_nau56_sink HTTP/1.1" 204 0 "-" "ReactorNetty/1.1.6" 6 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:20,500 INFO || [Worker clientId=connect-1, groupId=naument] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:20,500 INFO || [Worker clientId=connect-1, groupId=naument] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:20,501 INFO || [Worker clientId=connect-1, groupId=naument] Successfully joined group with generation Generation{generationId=153, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:20,503 INFO || [Worker clientId=connect-1, groupId=naument] Successfully synced group in generation Generation{generationId=153, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:20,503 INFO || Stopping connector vk_nau56_sink [org.apache.kafka.connect.runtime.Worker]
2023-09-13 21:55:20,503 INFO || Stopping task vk_nau56_sink-0 [org.apache.kafka.connect.runtime.Worker]
2023-09-13 21:55:20,503 WARN || Ignoring stop request for unowned connector vk_nau56_sink [org.apache.kafka.connect.runtime.Worker]
2023-09-13 21:55:20,503 WARN || Ignoring await stop request for non-present connector vk_nau56_sink [org.apache.kafka.connect.runtime.Worker]
2023-09-13 21:55:20,505 INFO || [Worker clientId=connect-1, groupId=naument] Finished stopping tasks in preparation for rebalance [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:20,505 INFO || [Worker clientId=connect-1, groupId=naument] Finished flushing status backing store in preparation for rebalance [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:20,505 INFO || [Worker clientId=connect-1, groupId=naument] Joined group at generation 153 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', leaderUrl='http://172.18.0.6:8083/', offset=2996, connectorIds=[vk_nau56_src], taskIds=[vk_nau56_src-0], revokedConnectorIds=[vk_nau56_sink], revokedTaskIds=[vk_nau56_sink-0], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:20,506 INFO || [Worker clientId=connect-1, groupId=naument] Starting connectors and tasks using config offset 2996 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:20,506 INFO || [Worker clientId=connect-1, groupId=naument] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:20,506 INFO || [Worker clientId=connect-1, groupId=naument] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:20,506 INFO || [Worker clientId=connect-1, groupId=naument] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:20,507 INFO || [Worker clientId=connect-1, groupId=naument] Successfully joined group with generation Generation{generationId=154, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:20,508 INFO || [Worker clientId=connect-1, groupId=naument] Successfully synced group in generation Generation{generationId=154, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:20,508 INFO || [Worker clientId=connect-1, groupId=naument] Joined group at generation 154 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', leaderUrl='http://172.18.0.6:8083/', offset=2996, connectorIds=[vk_nau56_src], taskIds=[vk_nau56_src-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:20,509 INFO || [Worker clientId=connect-1, groupId=naument] Starting connectors and tasks using config offset 2996 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:20,509 INFO || [Worker clientId=connect-1, groupId=naument] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:20,519 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:20 +0000] "GET /connectors HTTP/1.1" 200 16 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:20,521 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:20 +0000] "GET /connectors/vk_nau56_src HTTP/1.1" 200 1528 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:20,523 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:20 +0000] "GET /connectors/vk_nau56_src/status HTTP/1.1" 200 168 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:20,525 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:20 +0000] "GET /connectors/vk_nau56_src/config HTTP/1.1" 200 1431 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:20,527 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:20 +0000] "GET /connectors/vk_nau56_src/tasks HTTP/1.1" 200 1551 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:20,528 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:20 +0000] "GET /connectors/vk_nau56_src/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 0 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:20,530 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:20 +0000] "GET /connectors/vk_nau56_src/topics HTTP/1.1" 200 85 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:22,487 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:22 +0000] "GET /connectors/vk_nau56_src/tasks HTTP/1.1" 200 1551 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:22,487 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:22 +0000] "GET /connectors/vk_nau56_src HTTP/1.1" 200 1528 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:22,489 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:22 +0000] "GET /connectors/vk_nau56_src/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:22,490 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:22 +0000] "GET /connectors/vk_nau56_src/status HTTP/1.1" 200 168 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:26,026 INFO || Successfully processed removal of connector 'vk_nau56_src' [org.apache.kafka.connect.storage.KafkaConfigBackingStore]
2023-09-13 21:55:26,026 INFO || [Worker clientId=connect-1, groupId=naument] Connector vk_nau56_src config removed [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:26,027 INFO || [Worker clientId=connect-1, groupId=naument] Handling connector-only config update by stopping connector vk_nau56_src [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:26,027 INFO || Stopping connector vk_nau56_src [org.apache.kafka.connect.runtime.Worker]
2023-09-13 21:55:26,027 INFO || Scheduled shutdown for WorkerConnector{id=vk_nau56_src} [org.apache.kafka.connect.runtime.WorkerConnector]
2023-09-13 21:55:26,027 INFO || Completed shutdown for WorkerConnector{id=vk_nau56_src} [org.apache.kafka.connect.runtime.WorkerConnector]
2023-09-13 21:55:26,027 INFO || [Worker clientId=connect-1, groupId=naument] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:26,027 INFO || [Worker clientId=connect-1, groupId=naument] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:26,028 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:26 +0000] "DELETE /connectors/vk_nau56_src HTTP/1.1" 204 0 "-" "ReactorNetty/1.1.6" 5 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:26,029 INFO || [Worker clientId=connect-1, groupId=naument] Successfully joined group with generation Generation{generationId=155, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:26,031 INFO || [Worker clientId=connect-1, groupId=naument] Successfully synced group in generation Generation{generationId=155, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:26,031 INFO || Stopping connector vk_nau56_src [org.apache.kafka.connect.runtime.Worker]
2023-09-13 21:55:26,031 INFO || Stopping task vk_nau56_src-0 [org.apache.kafka.connect.runtime.Worker]
2023-09-13 21:55:26,031 WARN || Ignoring stop request for unowned connector vk_nau56_src [org.apache.kafka.connect.runtime.Worker]
2023-09-13 21:55:26,031 WARN || Ignoring await stop request for non-present connector vk_nau56_src [org.apache.kafka.connect.runtime.Worker]
2023-09-13 21:55:26,044 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:26 +0000] "GET /connectors HTTP/1.1" 200 2 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:26,250 INFO Oracle|vk_nau56|snapshot Stopping down connector [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:27,886 INFO Oracle|vk_nau56|streaming startScn=290261519981, endScn=290261519989 [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:55:27,887 INFO Oracle|vk_nau56|streaming Streaming metrics dump: OracleStreamingChangeEventSourceMetrics{currentScn=290261519989, oldestScn=-1, committedScn=290261519981, offsetScn=290261519937, oldestScnChangeTime=null, logMinerQueryCount=305, totalProcessedRows=156290, totalCapturedDmlCount=3, totalDurationOfFetchingQuery=PT2M5.681009S, lastCapturedDmlCount=0, lastDurationOfFetchingQuery=PT0.122377S, maxCapturedDmlCount=1, maxDurationOfFetchingQuery=PT8.735468S, totalBatchProcessingDuration=PT4M43.223617S, lastBatchProcessingDuration=PT0.139754S, maxBatchProcessingThroughput=8, currentLogFileName=[/u02/oradata/naument1/redo_01a.log, /u04/oradata/naument1/redo_01b.log], minLogFilesMined=2, maxLogFilesMined=2, redoLogStatus=[/u02/oradata/naument1/redo_07a.log | ACTIVE, /u04/oradata/naument1/redo_07b.log | ACTIVE, /u02/oradata/naument1/redo_06a.log | ACTIVE, /u04/oradata/naument1/redo_06b.log | ACTIVE, /u02/oradata/naument1/redo_05a.log | ACTIVE, /u04/oradata/naument1/redo_05b.log | ACTIVE, /u02/oradata/naument1/redo_04a.log | ACTIVE, /u04/oradata/naument1/redo_02b.log | ACTIVE, /u04/oradata/naument1/redo_04b.log | ACTIVE, /u02/oradata/naument1/redo_03a.log | ACTIVE, /u04/oradata/naument1/redo_03b.log | ACTIVE, /u02/oradata/naument1/redo_02a.log | ACTIVE, /u02/oradata/naument1/redo_01a.log | CURRENT, /u04/oradata/naument1/redo_01b.log | CURRENT], switchCounter=0, batchSize=20000, millisecondToSleepBetweenMiningQuery=2800, keepTransactionsDuration=PT0S, networkConnectionProblemsCounter=0, batchSizeDefault=20000, batchSizeMin=1000, batchSizeMax=100000, sleepTimeDefault=1000, sleepTimeMin=0, sleepTimeMax=3000, sleepTimeIncrement=200, totalParseTime=PT0.000204S, totalStartLogMiningSessionDuration=PT2M7.440667S, lastStartLogMiningSessionDuration=PT0.007598S, maxStartLogMiningSessionDuration=PT13.593406S, totalProcessTime=PT4M43.223617S, minBatchProcessTime=PT0.089409S, maxBatchProcessTime=PT17.808166S, totalResultSetNextTime=PT8.658853S, lagFromTheSourceDuration=PT2.061775S, maxLagFromTheSourceDuration=PT27.877737S, minLagFromTheSourceDuration=PT0.139752S, lastCommitDuration=PT0.000001S, maxCommitDuration=PT0.001278S, activeTransactions=0, rolledBackTransactions=722, oversizedTransactions=0, committedTransactions=74955, abandonedTransactionIds={}, rolledbackTransactionIds={08001e0062a73200=08001e0062a73200, 0300160093f92c00=0300160093f92c00, 0500070051043000=0500070051043000, 01000e0011cc2800=01000e0011cc2800, 02000d006d142c00=02000d006d142c00, 0a00140025b83500=0a00140025b83500, 08001300269e3200=08001300269e3200, 020009002c172c00=020009002c172c00, 01000a004bcf2800=01000a004bcf2800, 06001100a2172d00=06001100a2172d00}, registeredDmlCount=3, committedDmlCount=3, errorCount=0, warningCount=0, scnFreezeCount=0, unparsableDdlCount=0, miningSessionUserGlobalAreaMemory=22842328, miningSessionUserGlobalAreaMaxMemory=36522960, miningSessionProcessGlobalAreaMemory=95861576, miningSessionProcessGlobalAreaMaxMemory=107133768} [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:55:27,887 INFO Oracle|vk_nau56|streaming Offsets: OracleOffsetContext [scn=290261519981, commit_scn=["290261519981:1:05000b0099043000"]] [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:55:27,887 INFO Oracle|vk_nau56|streaming Finished streaming [io.debezium.pipeline.ChangeEventSourceCoordinator]
2023-09-13 21:55:27,887 INFO Oracle|vk_nau56|streaming Connected metrics set to 'false' [io.debezium.pipeline.ChangeEventSourceCoordinator]
2023-09-13 21:55:27,887 INFO Oracle|vk_nau56|snapshot SignalProcessor stopped [io.debezium.pipeline.signal.SignalProcessor]
2023-09-13 21:55:27,903 INFO Oracle|vk_nau56|snapshot Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
2023-09-13 21:55:27,903 INFO Oracle|vk_nau56|snapshot [Producer clientId=vk_nau56-schemahistory] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer]
2023-09-13 21:55:27,904 INFO Oracle|vk_nau56|snapshot Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2023-09-13 21:55:27,904 INFO Oracle|vk_nau56|snapshot Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2023-09-13 21:55:27,904 INFO Oracle|vk_nau56|snapshot Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2023-09-13 21:55:27,904 INFO Oracle|vk_nau56|snapshot App info kafka.producer for vk_nau56-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:55:27,904 INFO Oracle|vk_nau56|snapshot [Producer clientId=connector-producer-vk_nau56_src-0] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer]
2023-09-13 21:55:27,906 INFO Oracle|vk_nau56|snapshot Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2023-09-13 21:55:27,906 INFO Oracle|vk_nau56|snapshot Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2023-09-13 21:55:27,906 INFO Oracle|vk_nau56|snapshot Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2023-09-13 21:55:27,906 INFO Oracle|vk_nau56|snapshot App info kafka.producer for connector-producer-vk_nau56_src-0 unregistered [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:55:27,906 INFO || App info kafka.admin.client for connector-adminclient-vk_nau56_src-0 unregistered [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:55:27,907 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2023-09-13 21:55:27,907 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2023-09-13 21:55:27,907 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2023-09-13 21:55:27,908 INFO || [Worker clientId=connect-1, groupId=naument] Finished stopping tasks in preparation for rebalance [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:27,909 INFO || [Worker clientId=connect-1, groupId=naument] Finished flushing status backing store in preparation for rebalance [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:27,909 INFO || [Worker clientId=connect-1, groupId=naument] Joined group at generation 155 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', leaderUrl='http://172.18.0.6:8083/', offset=2998, connectorIds=[], taskIds=[], revokedConnectorIds=[vk_nau56_src], revokedTaskIds=[vk_nau56_src-0], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:27,909 INFO || [Worker clientId=connect-1, groupId=naument] Starting connectors and tasks using config offset 2998 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:27,909 INFO || [Worker clientId=connect-1, groupId=naument] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:27,910 INFO || [Worker clientId=connect-1, groupId=naument] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:27,910 INFO || [Worker clientId=connect-1, groupId=naument] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:27,911 INFO || [Worker clientId=connect-1, groupId=naument] Successfully joined group with generation Generation{generationId=156, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:27,912 INFO || [Worker clientId=connect-1, groupId=naument] Successfully synced group in generation Generation{generationId=156, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:27,912 INFO || [Worker clientId=connect-1, groupId=naument] Joined group at generation 156 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', leaderUrl='http://172.18.0.6:8083/', offset=2998, connectorIds=[], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:27,912 INFO || [Worker clientId=connect-1, groupId=naument] Starting connectors and tasks using config offset 2998 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:27,912 INFO || [Worker clientId=connect-1, groupId=naument] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:34,870 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:34 +0000] "GET /connectors HTTP/1.1" 200 2 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:34,874 INFO || Loading the custom source info struct maker plugin: io.debezium.connector.oracle.OracleSourceInfoStructMaker [io.debezium.config.CommonConnectorConfig]
2023-09-13 21:55:38,943 INFO || Database Version: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production [io.debezium.connector.oracle.OracleConnection]
2023-09-13 21:55:38,946 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
2023-09-13 21:55:38,946 INFO || AbstractConfig values: [org.apache.kafka.common.config.AbstractConfig]
2023-09-13 21:55:38,949 INFO || [Worker clientId=connect-1, groupId=naument] Connector vk_nau57_src config updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:38,949 INFO || [Worker clientId=connect-1, groupId=naument] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:38,949 INFO || [Worker clientId=connect-1, groupId=naument] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:38,950 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:34 +0000] "POST /connectors HTTP/1.1" 201 1491 "-" "ReactorNetty/1.1.6" 4079 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:38,950 INFO || [Worker clientId=connect-1, groupId=naument] Successfully joined group with generation Generation{generationId=157, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:38,952 INFO || [Worker clientId=connect-1, groupId=naument] Successfully synced group in generation Generation{generationId=157, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:38,952 INFO || [Worker clientId=connect-1, groupId=naument] Joined group at generation 157 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', leaderUrl='http://172.18.0.6:8083/', offset=2999, connectorIds=[vk_nau57_src], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:38,952 INFO || [Worker clientId=connect-1, groupId=naument] Starting connectors and tasks using config offset 2999 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:38,952 INFO || [Worker clientId=connect-1, groupId=naument] Starting connector vk_nau57_src [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:38,952 INFO || Creating connector vk_nau57_src of type io.debezium.connector.oracle.OracleConnector [org.apache.kafka.connect.runtime.Worker]
2023-09-13 21:55:38,953 INFO || SourceConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.oracle.OracleConnector
    errors.log.enable = true
    errors.log.include.messages = false
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = vk_nau57_src
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = []
    value.converter = class io.confluent.connect.avro.AvroConverter
 [org.apache.kafka.connect.runtime.SourceConnectorConfig]
2023-09-13 21:55:38,953 INFO || EnrichedConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.oracle.OracleConnector
    errors.log.enable = true
    errors.log.include.messages = false
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = vk_nau57_src
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = []
    value.converter = class io.confluent.connect.avro.AvroConverter
 [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2023-09-13 21:55:38,953 INFO || EnrichedSourceConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.oracle.OracleConnector
    errors.log.enable = true
    errors.log.include.messages = false
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = vk_nau57_src
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    topic.creation.default.exclude = []
    topic.creation.default.include = [.*]
    topic.creation.default.partitions = 1
    topic.creation.default.replication.factor = 1
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = []
    value.converter = class io.confluent.connect.avro.AvroConverter
 [org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig]
2023-09-13 21:55:38,953 INFO || EnrichedConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.oracle.OracleConnector
    errors.log.enable = true
    errors.log.include.messages = false
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = vk_nau57_src
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    topic.creation.default.exclude = []
    topic.creation.default.include = [.*]
    topic.creation.default.partitions = 1
    topic.creation.default.replication.factor = 1
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = []
    value.converter = class io.confluent.connect.avro.AvroConverter
 [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2023-09-13 21:55:38,953 INFO || Instantiated connector vk_nau57_src with version 2.4.0.Beta1 of type class io.debezium.connector.oracle.OracleConnector [org.apache.kafka.connect.runtime.Worker]
2023-09-13 21:55:38,953 INFO || Finished creating connector vk_nau57_src [org.apache.kafka.connect.runtime.Worker]
2023-09-13 21:55:38,953 INFO || [Worker clientId=connect-1, groupId=naument] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:38,954 INFO || SourceConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.oracle.OracleConnector
    errors.log.enable = true
    errors.log.include.messages = false
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = vk_nau57_src
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = []
    value.converter = class io.confluent.connect.avro.AvroConverter
 [org.apache.kafka.connect.runtime.SourceConnectorConfig]
2023-09-13 21:55:38,954 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:38 +0000] "GET /connectors/vk_nau57_src HTTP/1.1" 200 1491 "-" "ReactorNetty/1.1.6" 3 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:38,954 INFO || EnrichedConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.oracle.OracleConnector
    errors.log.enable = true
    errors.log.include.messages = false
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = vk_nau57_src
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = []
    value.converter = class io.confluent.connect.avro.AvroConverter
 [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2023-09-13 21:55:38,954 INFO || EnrichedSourceConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.oracle.OracleConnector
    errors.log.enable = true
    errors.log.include.messages = false
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = vk_nau57_src
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    topic.creation.default.exclude = []
    topic.creation.default.include = [.*]
    topic.creation.default.partitions = 1
    topic.creation.default.replication.factor = 1
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = []
    value.converter = class io.confluent.connect.avro.AvroConverter
 [org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig]
2023-09-13 21:55:38,955 INFO || EnrichedConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.oracle.OracleConnector
    errors.log.enable = true
    errors.log.include.messages = false
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = vk_nau57_src
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    topic.creation.default.exclude = []
    topic.creation.default.include = [.*]
    topic.creation.default.partitions = 1
    topic.creation.default.replication.factor = 1
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = []
    value.converter = class io.confluent.connect.avro.AvroConverter
 [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2023-09-13 21:55:38,957 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:38 +0000] "GET /connectors/vk_nau57_src/status HTTP/1.1" 200 112 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:38,961 INFO || [Worker clientId=connect-1, groupId=naument] Tasks [vk_nau57_src-0] configs updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:38,961 INFO || [Worker clientId=connect-1, groupId=naument] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:38,961 INFO || [Worker clientId=connect-1, groupId=naument] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:38,962 INFO || [Worker clientId=connect-1, groupId=naument] Successfully joined group with generation Generation{generationId=158, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:38,964 INFO || [Worker clientId=connect-1, groupId=naument] Successfully synced group in generation Generation{generationId=158, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2023-09-13 21:55:38,964 INFO || [Worker clientId=connect-1, groupId=naument] Joined group at generation 158 with protocol version 2 and got assignment: Assignment{error=0,
leader='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', leaderUrl='http://172.18.0.6:8083/', offset=3001, connectorIds=[vk_nau57_src], taskIds=[vk_nau57_src-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2023-09-13 21:55:38,964 INFO || [Worker clientId=connect-1, groupId=naument] Starting connectors and tasks using config offset 3001 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2023-09-13 21:55:38,964 INFO || [Worker clientId=connect-1, groupId=naument] Starting task vk_nau57_src-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2023-09-13 21:55:38,964 INFO || Creating task vk_nau57_src-0 [org.apache.kafka.connect.runtime.Worker] 2023-09-13 21:55:38,964 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = true errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = vk_nau57_src predicates = [] tasks.max = 1 transforms = [] value.converter = class io.confluent.connect.avro.AvroConverter [org.apache.kafka.connect.runtime.ConnectorConfig] 2023-09-13 21:55:38,964 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = true errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = vk_nau57_src predicates = [] tasks.max = 1 transforms = [] value.converter = class io.confluent.connect.avro.AvroConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2023-09-13 21:55:38,965 INFO || TaskConfig values: task.class = class 
io.debezium.connector.oracle.OracleConnectorTask [org.apache.kafka.connect.runtime.TaskConfig] 2023-09-13 21:55:38,965 INFO || Instantiated task vk_nau57_src-0 with version 2.4.0.Beta1 of type io.debezium.connector.oracle.OracleConnectorTask [org.apache.kafka.connect.runtime.Worker] 2023-09-13 21:55:38,965 INFO || AvroConverterConfig values: auto.register.schemas = true basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.cache.expiry.buffer.seconds = 300 bearer.auth.client.id = null bearer.auth.client.secret = null bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.custom.provider.class = null bearer.auth.identity.pool.id = null bearer.auth.issuer.endpoint.url = null bearer.auth.logical.cluster = null bearer.auth.scope = null bearer.auth.scope.claim.name = scope bearer.auth.sub.claim.name = sub bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy http.connect.timeout.ms = 60000 http.read.timeout.ms = 60000 id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.cache.size = 1000 latest.cache.ttl.sec = -1 latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 rule.actions = [] rule.executors = [] rule.service.loader.enable = true schema.format = null schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null 
schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://naument-sr:8081] use.latest.version = false use.latest.with.metadata = null use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.connect.avro.AvroConverterConfig] 2023-09-13 21:55:38,966 INFO || KafkaAvroSerializerConfig values: auto.register.schemas = true avro.reflection.allow.null = false avro.remove.java.properties = false avro.use.logical.type.converters = false basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.cache.expiry.buffer.seconds = 300 bearer.auth.client.id = null bearer.auth.client.secret = null bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.custom.provider.class = null bearer.auth.identity.pool.id = null bearer.auth.issuer.endpoint.url = null bearer.auth.logical.cluster = null bearer.auth.scope = null bearer.auth.scope.claim.name = scope bearer.auth.sub.claim.name = sub bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy http.connect.timeout.ms = 60000 http.read.timeout.ms = 60000 id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.cache.size = 1000 latest.cache.ttl.sec = -1 latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 rule.actions = [] rule.executors = [] rule.service.loader.enable = true schema.format = null schema.reflection = false schema.registry.basic.auth.user.info = [hidden] 
schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://naument-sr:8081] use.latest.version = false use.latest.with.metadata = null use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroSerializerConfig] 2023-09-13 21:55:38,966 INFO || KafkaAvroDeserializerConfig values: auto.register.schemas = true avro.reflection.allow.null = false avro.use.logical.type.converters = false basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.cache.expiry.buffer.seconds = 300 bearer.auth.client.id = null bearer.auth.client.secret = null bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.custom.provider.class = null bearer.auth.identity.pool.id = null bearer.auth.issuer.endpoint.url = null bearer.auth.logical.cluster = null bearer.auth.scope = null bearer.auth.scope.claim.name = scope bearer.auth.sub.claim.name = sub bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy http.connect.timeout.ms = 60000 http.read.timeout.ms = 60000 
id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.cache.size = 1000 latest.cache.ttl.sec = -1 latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 rule.actions = [] rule.executors = [] rule.service.loader.enable = true schema.format = null schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://naument-sr:8081] specific.avro.key.type = null specific.avro.reader = false specific.avro.value.type = null use.latest.version = false use.latest.with.metadata = null use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroDeserializerConfig] 2023-09-13 21:55:38,967 INFO || AvroDataConfig values: allow.optional.map.keys = false connect.meta.data = true discard.type.doc.default = false enhanced.avro.schema.support = false generalized.sum.type.support = false ignore.default.for.nullables = false 
schemas.cache.config = 1000 scrub.invalid.names = false [io.confluent.connect.avro.AvroDataConfig] 2023-09-13 21:55:38,967 INFO || AvroConverterConfig values: auto.register.schemas = true basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.cache.expiry.buffer.seconds = 300 bearer.auth.client.id = null bearer.auth.client.secret = null bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.custom.provider.class = null bearer.auth.identity.pool.id = null bearer.auth.issuer.endpoint.url = null bearer.auth.logical.cluster = null bearer.auth.scope = null bearer.auth.scope.claim.name = scope bearer.auth.sub.claim.name = sub bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy http.connect.timeout.ms = 60000 http.read.timeout.ms = 60000 id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.cache.size = 1000 latest.cache.ttl.sec = -1 latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 rule.actions = [] rule.executors = [] rule.service.loader.enable = true schema.format = null schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm 
= PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://naument-sr:8081] use.latest.version = false use.latest.with.metadata = null use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.connect.avro.AvroConverterConfig] 2023-09-13 21:55:38,967 INFO || KafkaAvroSerializerConfig values: auto.register.schemas = true avro.reflection.allow.null = false avro.remove.java.properties = false avro.use.logical.type.converters = false basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.cache.expiry.buffer.seconds = 300 bearer.auth.client.id = null bearer.auth.client.secret = null bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.custom.provider.class = null bearer.auth.identity.pool.id = null bearer.auth.issuer.endpoint.url = null bearer.auth.logical.cluster = null bearer.auth.scope = null bearer.auth.scope.claim.name = scope bearer.auth.sub.claim.name = sub bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy http.connect.timeout.ms = 60000 http.read.timeout.ms = 60000 id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.cache.size = 1000 latest.cache.ttl.sec = -1 latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 rule.actions = [] rule.executors = [] rule.service.loader.enable = true schema.format = null schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null 
schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://naument-sr:8081] use.latest.version = false use.latest.with.metadata = null use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroSerializerConfig] 2023-09-13 21:55:38,967 INFO || KafkaAvroDeserializerConfig values: auto.register.schemas = true avro.reflection.allow.null = false avro.use.logical.type.converters = false basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.cache.expiry.buffer.seconds = 300 bearer.auth.client.id = null bearer.auth.client.secret = null bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.custom.provider.class = null bearer.auth.identity.pool.id = null bearer.auth.issuer.endpoint.url = null bearer.auth.logical.cluster = null bearer.auth.scope = null bearer.auth.scope.claim.name = scope bearer.auth.sub.claim.name = sub bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy http.connect.timeout.ms = 60000 http.read.timeout.ms = 60000 id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.cache.size = 1000 latest.cache.ttl.sec = -1 latest.compatibility.strict = true 
max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 rule.actions = [] rule.executors = [] rule.service.loader.enable = true schema.format = null schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://naument-sr:8081] specific.avro.key.type = null specific.avro.reader = false specific.avro.value.type = null use.latest.version = false use.latest.with.metadata = null use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroDeserializerConfig] 2023-09-13 21:55:38,968 INFO || AvroDataConfig values: allow.optional.map.keys = false connect.meta.data = true discard.type.doc.default = false enhanced.avro.schema.support = false generalized.sum.type.support = false ignore.default.for.nullables = false schemas.cache.config = 1000 scrub.invalid.names = false [io.confluent.connect.avro.AvroDataConfig] 2023-09-13 21:55:38,968 INFO || Set up the key converter class io.confluent.connect.avro.AvroConverter for task vk_nau57_src-0 
using the connector config [org.apache.kafka.connect.runtime.Worker] 2023-09-13 21:55:38,968 INFO || Set up the value converter class io.confluent.connect.avro.AvroConverter for task vk_nau57_src-0 using the connector config [org.apache.kafka.connect.runtime.Worker] 2023-09-13 21:55:38,968 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task vk_nau57_src-0 using the worker config [org.apache.kafka.connect.runtime.Worker] 2023-09-13 21:55:38,968 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = true errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = vk_nau57_src offsets.storage.topic = null predicates = [] tasks.max = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = class io.confluent.connect.avro.AvroConverter [org.apache.kafka.connect.runtime.SourceConnectorConfig] 2023-09-13 21:55:38,968 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = true errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = vk_nau57_src offsets.storage.topic = null predicates = [] tasks.max = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = class io.confluent.connect.avro.AvroConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2023-09-13 21:55:38,968 INFO || 
EnrichedSourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = true errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = vk_nau57_src offsets.storage.topic = null predicates = [] tasks.max = 1 topic.creation.default.exclude = [] topic.creation.default.include = [.*] topic.creation.default.partitions = 1 topic.creation.default.replication.factor = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = class io.confluent.connect.avro.AvroConverter [org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig] 2023-09-13 21:55:38,968 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.oracle.OracleConnector errors.log.enable = true errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = vk_nau57_src offsets.storage.topic = null predicates = [] tasks.max = 1 topic.creation.default.exclude = [] topic.creation.default.include = [.*] topic.creation.default.partitions = 1 topic.creation.default.replication.factor = 1 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = class io.confluent.connect.avro.AvroConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2023-09-13 21:55:38,968 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{} [org.apache.kafka.connect.runtime.Worker] 2023-09-13 21:55:38,968 INFO || ProducerConfig values: 
acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [broker1:29092, broker2:29092, broker3:29092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = connector-producer-vk_nau57_src-0 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 9223372036854775807 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig] 2023-09-13 21:55:38,970 INFO || These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. 
[org.apache.kafka.clients.producer.ProducerConfig] 2023-09-13 21:55:38,970 INFO || Kafka version: 3.5.1 [org.apache.kafka.common.utils.AppInfoParser] 2023-09-13 21:55:38,970 INFO || Kafka commitId: 2c6fb6c54472e90a [org.apache.kafka.common.utils.AppInfoParser] 2023-09-13 21:55:38,970 INFO || Kafka startTimeMs: 1694631338970 [org.apache.kafka.common.utils.AppInfoParser] 2023-09-13 21:55:38,971 INFO || AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.servers = [broker1:29092, broker2:29092, broker3:29092] client.dns.lookup = use_all_dns_ips client.id = connector-adminclient-vk_nau57_src-0 connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null 
sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS [org.apache.kafka.clients.admin.AdminClientConfig] 2023-09-13 21:55:38,972 INFO || These configurations '[group.id, max.partition.fetch.bytes, rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, message.max.bytes, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, max.request.size, replica.fetch.max.bytes, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. 
[org.apache.kafka.clients.admin.AdminClientConfig]
2023-09-13 21:55:38,973 INFO || Kafka version: 3.5.1 [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:55:38,973 INFO || Kafka commitId: 2c6fb6c54472e90a [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:55:38,973 INFO || Kafka startTimeMs: 1694631338972 [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:55:38,972 INFO || [Producer clientId=connector-producer-vk_nau57_src-0] Cluster ID: gVJjK6cZTd-nXsXP2EIHEQ [org.apache.kafka.clients.Metadata]
2023-09-13 21:55:38,974 INFO || [Worker clientId=connect-1, groupId=naument] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:55:38,974 INFO || Starting OracleConnectorTask with configuration: [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,974 INFO || connector.class = io.debezium.connector.oracle.OracleConnector [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,974 INFO || topic.creation.default.partitions = 1 [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,974 INFO || tasks.max = 1 [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,974 INFO || schema.history.internal.store.only.captured.tables.ddl = true [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,974 INFO || schema.history.internal.store.only.captured.databases.ddl = true [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,974 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:38 +0000] "GET /connectors/vk_nau57_src HTTP/1.1" 200 1528 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:38,974 INFO || include.schema.changes = true [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || topic.prefix = vk_nau57 [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || decimal.handling.mode = precise [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || schema.history.internal.kafka.topic = vk_nau57_src.schema-changes [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || topic.creation.default.include = vk_nau57\.* [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || topic.creation.default.replication.factor = 1 [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || lob.enabled = true [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || value.converter = io.confluent.connect.avro.AvroConverter [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || errors.log.enable = true [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || key.converter = io.confluent.connect.avro.AvroConverter [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || snapshot.lock.timeout.ms = 5000 [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || database.user = debezium [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || database.dbname = NAUMENT1 [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || datatype.propagate.source.type = .* [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || topic.creation.default.compression.type = lz4 [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || database.connection.adapter = logminer [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || schema.history.internal.kafka.bootstrap.servers = broker1:29092,broker3:29092,broker3:29092 [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || topic.creation.default.retention.ms = 432000000 [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || database.port = 1521 [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || topic.creation.enable = true [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || value.converter.schema.registry.url = http://naument-sr:8081 [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || log.mining.session.max.ms = 120000 [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || task.class = io.debezium.connector.oracle.OracleConnectorTask [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || database.hostname = naumen-db-test.rgs.ru [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || database.password = ******** [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || name = vk_nau57_src [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || table.include.list = DEBEZIUM.GBC_TBL_SERVICECALL_NC57 [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || key.converter.schema.registry.url = http://naument-sr:8081 [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || snapshot.mode = always [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:38,975 INFO || Loading the custom source info struct maker plugin: io.debezium.connector.oracle.OracleSourceInfoStructMaker [io.debezium.config.CommonConnectorConfig]
2023-09-13 21:55:38,976 INFO || Loading the custom topic naming strategy plugin: io.debezium.schema.SchemaTopicNamingStrategy [io.debezium.config.CommonConnectorConfig]
2023-09-13 21:55:38,976 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:38 +0000] "GET /connectors/vk_nau57_src/tasks HTTP/1.1" 200 1551 "-" "ReactorNetty/1.1.6" 3 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:38,979 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:38 +0000] "GET /connectors/vk_nau57_src/status HTTP/1.1" 200 112 "-" "ReactorNetty/1.1.6" 3 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:38,980 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:38 +0000] "GET
/connectors/vk_nau57_src/tasks/0/status HTTP/1.1" 404 70 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:39,027 INFO || Database Version: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production [io.debezium.connector.oracle.OracleConnection]
2023-09-13 21:55:39,029 INFO || KafkaSchemaHistory Consumer config: {key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, enable.auto.commit=false, group.id=vk_nau57-schemahistory, bootstrap.servers=broker1:29092,broker3:29092,broker3:29092, fetch.min.bytes=1, session.timeout.ms=10000, auto.offset.reset=earliest, client.id=vk_nau57-schemahistory} [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2023-09-13 21:55:39,029 INFO || KafkaSchemaHistory Producer config: {retries=1, value.serializer=org.apache.kafka.common.serialization.StringSerializer, acks=1, batch.size=32768, max.block.ms=10000, bootstrap.servers=broker1:29092,broker3:29092,broker3:29092, buffer.memory=1048576, key.serializer=org.apache.kafka.common.serialization.StringSerializer, client.id=vk_nau57-schemahistory, linger.ms=0} [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2023-09-13 21:55:39,029 INFO || Requested thread factory for connector OracleConnector, id = vk_nau57 named = db-history-config-check [io.debezium.util.Threads]
2023-09-13 21:55:39,030 INFO || Idempotence will be disabled because acks is set to 1, not set to 'all'.
[org.apache.kafka.clients.producer.ProducerConfig] 2023-09-13 21:55:39,030 INFO || ProducerConfig values: acks = 1 auto.include.jmx.reporter = true batch.size = 32768 bootstrap.servers = [broker1:29092, broker3:29092, broker3:29092] buffer.memory = 1048576 client.dns.lookup = use_all_dns_ips client.id = vk_nau57-schemahistory compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 10000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 1 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig]
2023-09-13 21:55:39,031 INFO || Kafka version: 3.5.1 [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:55:39,031 INFO || Kafka commitId: 2c6fb6c54472e90a [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:55:39,031 INFO || Kafka startTimeMs: 1694631339031 [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:55:39,033 INFO || [Producer clientId=vk_nau57-schemahistory] Cluster ID: gVJjK6cZTd-nXsXP2EIHEQ [org.apache.kafka.clients.Metadata]
2023-09-13 21:55:39,034 INFO || No previous offsets found [io.debezium.connector.common.BaseSourceTask]
2023-09-13 21:55:39,034 INFO || Connector started for the first time, database schema history recovery will not be executed [io.debezium.connector.oracle.OracleConnectorTask]
2023-09-13 21:55:39,035
INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [broker1:29092, broker3:29092, broker3:29092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = vk_nau57-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = vk_nau57-schemahistory group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 
sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig] 2023-09-13 21:55:39,036 INFO || Kafka version: 3.5.1 [org.apache.kafka.common.utils.AppInfoParser] 2023-09-13 21:55:39,036 INFO || Kafka commitId: 2c6fb6c54472e90a [org.apache.kafka.common.utils.AppInfoParser] 2023-09-13 21:55:39,036 INFO || Kafka startTimeMs: 1694631339036 [org.apache.kafka.common.utils.AppInfoParser] 2023-09-13 21:55:39,037 INFO || [Consumer clientId=vk_nau57-schemahistory, groupId=vk_nau57-schemahistory] Cluster ID: gVJjK6cZTd-nXsXP2EIHEQ [org.apache.kafka.clients.Metadata] 2023-09-13 21:55:39,039 INFO || [Consumer 
clientId=vk_nau57-schemahistory, groupId=vk_nau57-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2023-09-13 21:55:39,039 INFO || [Consumer clientId=vk_nau57-schemahistory, groupId=vk_nau57-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2023-09-13 21:55:39,039 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2023-09-13 21:55:39,039 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2023-09-13 21:55:39,039 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2023-09-13 21:55:39,040 INFO || App info kafka.consumer for vk_nau57-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser] 2023-09-13 21:55:39,040 INFO || AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.servers = [broker1:29092, broker3:29092, broker3:29092] client.dns.lookup = use_all_dns_ips client.id = vk_nau57-schemahistory connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 1 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 
sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS [org.apache.kafka.clients.admin.AdminClientConfig] 2023-09-13 21:55:39,041 INFO || These configurations '[value.serializer, acks, batch.size, max.block.ms, buffer.memory, key.serializer, linger.ms]' were supplied but are not used yet. 
[org.apache.kafka.clients.admin.AdminClientConfig]
2023-09-13 21:55:39,041 INFO || Kafka version: 3.5.1 [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:55:39,041 INFO || Kafka commitId: 2c6fb6c54472e90a [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:55:39,041 INFO || Kafka startTimeMs: 1694631339041 [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:55:39,073 INFO || Database schema history topic '(name=vk_nau57_src.schema-changes, numPartitions=1, replicationFactor=default, replicasAssignments=null, configs={cleanup.policy=delete, retention.ms=9223372036854775807, retention.bytes=-1})' created [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2023-09-13 21:55:39,074 INFO || App info kafka.admin.client for vk_nau57-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:55:39,074 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2023-09-13 21:55:39,074 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2023-09-13 21:55:39,075 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2023-09-13 21:55:39,075 INFO || Requested thread factory for connector OracleConnector, id = vk_nau57 named = SignalProcessor [io.debezium.util.Threads]
2023-09-13 21:55:39,076 INFO || Requested thread factory for connector OracleConnector, id = vk_nau57 named = change-event-source-coordinator [io.debezium.util.Threads]
2023-09-13 21:55:39,076 INFO || Requested thread factory for connector OracleConnector, id = vk_nau57 named = blocking-snapshot [io.debezium.util.Threads]
2023-09-13 21:55:39,076 INFO || Creating thread debezium-oracleconnector-vk_nau57-change-event-source-coordinator [io.debezium.util.Threads]
2023-09-13 21:55:39,076 INFO Oracle|vk_nau57|snapshot Metrics registered [io.debezium.pipeline.ChangeEventSourceCoordinator]
2023-09-13 21:55:39,076 INFO Oracle|vk_nau57|snapshot Context created [io.debezium.pipeline.ChangeEventSourceCoordinator]
2023-09-13 21:55:39,076 INFO || SignalProcessor started. Scheduling it every 5000ms [io.debezium.pipeline.signal.SignalProcessor]
2023-09-13 21:55:39,076 INFO Oracle|vk_nau57|snapshot Snapshot mode is set to ALWAYS, not checking exiting offset. [io.debezium.connector.oracle.OracleSnapshotChangeEventSource]
2023-09-13 21:55:39,076 INFO || Creating thread debezium-oracleconnector-vk_nau57-SignalProcessor [io.debezium.util.Threads]
2023-09-13 21:55:39,076 INFO Oracle|vk_nau57|snapshot According to the connector configuration both schema and data will be snapshot. [io.debezium.connector.oracle.OracleSnapshotChangeEventSource]
2023-09-13 21:55:39,076 INFO Oracle|vk_nau57|snapshot Snapshot step 1 - Preparing [io.debezium.relational.RelationalSnapshotChangeEventSource]
2023-09-13 21:55:39,077 INFO Oracle|vk_nau57|snapshot Snapshot step 2 - Determining captured tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2023-09-13 21:55:39,077 INFO Oracle|vk_nau57|snapshot WorkerSourceTask{id=vk_nau57_src-0} Source task finished initialization and start [org.apache.kafka.connect.runtime.AbstractWorkerSourceTask]
2023-09-13 21:55:40,171 INFO Oracle|vk_nau57|snapshot Adding table NAUMENT1.DEBEZIUM.GBC_TBL_SERVICECALL_NC57 to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2023-09-13 21:55:40,175 INFO Oracle|vk_nau57|snapshot Created connection pool with 1 threads [io.debezium.relational.RelationalSnapshotChangeEventSource]
2023-09-13 21:55:40,175 INFO Oracle|vk_nau57|snapshot Snapshot step 3 - Locking captured tables [NAUMENT1.DEBEZIUM.GBC_TBL_SERVICECALL_NC57] [io.debezium.relational.RelationalSnapshotChangeEventSource]
2023-09-13 21:55:40,189 INFO Oracle|vk_nau57|snapshot Snapshot step 4 - Determining snapshot offset [io.debezium.relational.RelationalSnapshotChangeEventSource]
2023-09-13 21:55:41,297 INFO Oracle|vk_nau57|snapshot No in-progress transactions will be captured. [io.debezium.connector.oracle.logminer.LogMinerAdapter]
2023-09-13 21:55:41,299 INFO Oracle|vk_nau57|snapshot Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
2023-09-13 21:55:41,299 INFO Oracle|vk_nau57|snapshot Snapshot step 5 - Reading structure of captured tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2023-09-13 21:55:41,299 INFO Oracle|vk_nau57|snapshot Only captured tables schema should be captured, capturing: [NAUMENT1.DEBEZIUM.GBC_TBL_SERVICECALL_NC57] [io.debezium.connector.oracle.OracleSnapshotChangeEventSource]
2023-09-13 21:55:41,345 INFO Oracle|vk_nau57|snapshot Snapshot step 6 - Persisting schema history [io.debezium.relational.RelationalSnapshotChangeEventSource]
2023-09-13 21:55:41,345 INFO Oracle|vk_nau57|snapshot Capturing structure of table NAUMENT1.DEBEZIUM.GBC_TBL_SERVICECALL_NC57 [io.debezium.relational.RelationalSnapshotChangeEventSource]
2023-09-13 21:55:41,491 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:41 +0000] "GET /connectors HTTP/1.1" 200 16 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:41,494 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:41 +0000] "GET /connectors/vk_nau57_src HTTP/1.1" 200 1528 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:41,495 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:41 +0000] "GET /connectors/vk_nau57_src/status HTTP/1.1" 200 168 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:41,497 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:41 +0000] "GET /connectors/vk_nau57_src/config HTTP/1.1" 200 1431 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:41,499 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:41 +0000] "GET /connectors/vk_nau57_src/tasks HTTP/1.1" 200 1551 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:41,500 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:41
+0000] "GET /connectors/vk_nau57_src/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 0 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:41,502 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:41 +0000] "GET /connectors/vk_nau57_src/topics HTTP/1.1" 200 30 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:55:41,732 INFO Oracle|vk_nau57|snapshot Already applied 1 database changes [io.debezium.relational.history.SchemaHistoryMetrics]
2023-09-13 21:55:41,733 INFO Oracle|vk_nau57|snapshot Snapshot step 7 - Snapshotting data [io.debezium.relational.RelationalSnapshotChangeEventSource]
2023-09-13 21:55:41,733 INFO Oracle|vk_nau57|snapshot Creating snapshot worker pool with 1 worker thread(s) [io.debezium.relational.RelationalSnapshotChangeEventSource]
2023-09-13 21:55:41,733 INFO Oracle|vk_nau57|snapshot For table 'NAUMENT1.DEBEZIUM.GBC_TBL_SERVICECALL_NC57' using select statement: 'SELECT "ID", "CREATION_DATE", "CLAIM_TRANSFERDATE", "TITLE", "CLIENT_EMAIL", "FLOAT_ATTR_1", "FLOAT_ATTR_2" FROM "DEBEZIUM"."GBC_TBL_SERVICECALL_NC57" AS OF SCN 290261520203' [io.debezium.relational.RelationalSnapshotChangeEventSource]
2023-09-13 21:55:41,734 INFO Oracle|vk_nau57|snapshot Exporting data from table 'NAUMENT1.DEBEZIUM.GBC_TBL_SERVICECALL_NC57' (1 of 1 tables) [io.debezium.relational.RelationalSnapshotChangeEventSource]
2023-09-13 21:55:41,741 INFO Oracle|vk_nau57|snapshot Finished exporting 3 records for table 'NAUMENT1.DEBEZIUM.GBC_TBL_SERVICECALL_NC57' (1 of 1 tables); total duration '00:00:00.007' [io.debezium.relational.RelationalSnapshotChangeEventSource]
2023-09-13 21:55:41,743 INFO Oracle|vk_nau57|snapshot Snapshot - Final stage [io.debezium.pipeline.source.AbstractSnapshotChangeEventSource]
2023-09-13 21:55:41,743 INFO Oracle|vk_nau57|snapshot Snapshot completed [io.debezium.pipeline.source.AbstractSnapshotChangeEventSource]
2023-09-13 21:55:41,743 INFO Oracle|vk_nau57|snapshot Snapshot ended with SnapshotResult [status=COMPLETED, offset=OracleOffsetContext [scn=290261520203, commit_scn=[]]] [io.debezium.pipeline.ChangeEventSourceCoordinator]
2023-09-13 21:55:41,743 INFO Oracle|vk_nau57|streaming Connected metrics set to 'true' [io.debezium.pipeline.ChangeEventSourceCoordinator]
2023-09-13 21:55:41,743 INFO Oracle|vk_nau57|streaming Starting streaming [io.debezium.pipeline.ChangeEventSourceCoordinator]
2023-09-13 21:55:41,874 WARN Oracle|vk_nau57|streaming Database table 'NAUMENT1.DEBEZIUM.GBC_TBL_SERVICECALL_NC57' not configured with supplemental logging "(ALL) COLUMNS"; only explicitly changed columns will be captured. Use: ALTER TABLE DEBEZIUM.GBC_TBL_SERVICECALL_NC57 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:55:41,875 INFO Oracle|vk_nau57|streaming Redo Log Group Sizes: [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:55:41,875 INFO Oracle|vk_nau57|streaming Group #1: 536870912 bytes [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:55:41,875 INFO Oracle|vk_nau57|streaming Group #2: 536870912 bytes [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:55:41,875 INFO Oracle|vk_nau57|streaming Group #3: 536870912 bytes [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:55:41,875 INFO Oracle|vk_nau57|streaming Group #4: 536870912 bytes [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:55:41,875 INFO Oracle|vk_nau57|streaming Group #5: 536870912 bytes [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:55:41,875 INFO Oracle|vk_nau57|streaming Group #6: 536870912 bytes [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:55:41,875 INFO Oracle|vk_nau57|streaming Group #7: 536870912 bytes
[io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
2023-09-13 21:55:42,104 INFO Oracle|vk_nau57|snapshot The task will send records to topic 'vk_nau57' for the first time. Checking whether topic exists [org.apache.kafka.connect.runtime.AbstractWorkerSourceTask]
2023-09-13 21:55:42,106 INFO Oracle|vk_nau57|snapshot Creating topic 'vk_nau57' [org.apache.kafka.connect.runtime.AbstractWorkerSourceTask]
2023-09-13 21:55:42,135 INFO Oracle|vk_nau57|snapshot Created topic (name=vk_nau57, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={compression.type=lz4, retention.ms=432000000}) on brokers at broker1:29092,broker2:29092,broker3:29092 [org.apache.kafka.connect.util.TopicAdmin]
2023-09-13 21:55:42,135 INFO Oracle|vk_nau57|snapshot Created topic '(name=vk_nau57, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={compression.type=lz4, retention.ms=432000000})' using creation group TopicCreationGroup{name='default', inclusionPattern=.*, exclusionPattern=, numPartitions=1, replicationFactor=1, otherConfigs={compression.type=lz4, retention.ms=432000000}} [org.apache.kafka.connect.runtime.AbstractWorkerSourceTask]
2023-09-13 21:55:42,137 WARN || [Producer clientId=connector-producer-vk_nau57_src-0] Error while fetching metadata with correlation id 3 : {vk_nau57=UNKNOWN_TOPIC_OR_PARTITION} [org.apache.kafka.clients.NetworkClient]
2023-09-13 21:55:42,238 WARN || [Producer clientId=connector-producer-vk_nau57_src-0] Error while fetching metadata with correlation id 4 : {vk_nau57=UNKNOWN_TOPIC_OR_PARTITION} [org.apache.kafka.clients.NetworkClient]
2023-09-13 21:55:42,339 WARN || [Producer clientId=connector-producer-vk_nau57_src-0] Error while fetching metadata with correlation id 5 : {vk_nau57=UNKNOWN_TOPIC_OR_PARTITION} [org.apache.kafka.clients.NetworkClient]
2023-09-13 21:55:42,440 WARN || [Producer clientId=connector-producer-vk_nau57_src-0] Error while fetching metadata with correlation id 6 : {vk_nau57=UNKNOWN_TOPIC_OR_PARTITION} [org.apache.kafka.clients.NetworkClient]
2023-09-13 21:55:42,554 INFO Oracle|vk_nau57|snapshot The task will send records to topic 'vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57' for the first time. Checking whether topic exists [org.apache.kafka.connect.runtime.AbstractWorkerSourceTask]
2023-09-13 21:55:42,555 INFO Oracle|vk_nau57|snapshot Creating topic 'vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57' [org.apache.kafka.connect.runtime.AbstractWorkerSourceTask]
2023-09-13 21:55:42,583 INFO Oracle|vk_nau57|snapshot Created topic (name=vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={compression.type=lz4, retention.ms=432000000}) on brokers at broker1:29092,broker2:29092,broker3:29092 [org.apache.kafka.connect.util.TopicAdmin]
2023-09-13 21:55:42,583 INFO Oracle|vk_nau57|snapshot Created topic '(name=vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={compression.type=lz4, retention.ms=432000000})' using creation group TopicCreationGroup{name='default', inclusionPattern=.*, exclusionPattern=, numPartitions=1, replicationFactor=1, otherConfigs={compression.type=lz4, retention.ms=432000000}} [org.apache.kafka.connect.runtime.AbstractWorkerSourceTask]
2023-09-13 21:55:42,584 WARN || [Producer clientId=connector-producer-vk_nau57_src-0] Error while fetching metadata with correlation id 10 : {vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57=UNKNOWN_TOPIC_OR_PARTITION} [org.apache.kafka.clients.NetworkClient]
2023-09-13 21:55:42,685 INFO || [Producer clientId=connector-producer-vk_nau57_src-0] Resetting the last seen epoch of partition vk_nau57-0 to 0 since the associated topicId changed from null to YzfopH4uRSSO-EGm7T6imw [org.apache.kafka.clients.Metadata]
2023-09-13 21:55:44,393 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:44 +0000] "GET /connectors HTTP/1.1" 200 16 "-" "ReactorNetty/1.1.6" 1
[org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:55:44,396 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:44 +0000] "GET /connectors/vk_nau57_src HTTP/1.1" 200 1528 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:55:44,397 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:44 +0000] "GET /connectors/vk_nau57_src/status HTTP/1.1" 200 168 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:55:44,399 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:44 +0000] "GET /connectors/vk_nau57_src/config HTTP/1.1" 200 1431 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:55:44,401 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:44 +0000] "GET /connectors/vk_nau57_src/tasks HTTP/1.1" 200 1551 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:55:44,402 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:44 +0000] "GET /connectors/vk_nau57_src/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 0 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:55:44,404 INFO || 10.0.2.5 - - [13/Sep/2023:18:55:44 +0000] "GET /connectors/vk_nau57_src/topics HTTP/1.1" 200 85 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:00,051 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:00 +0000] "GET /connectors HTTP/1.1" 200 16 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:00,055 INFO || AbstractConfig values: [org.apache.kafka.common.config.AbstractConfig] 2023-09-13 21:56:00,058 INFO || [Worker clientId=connect-1, groupId=naument] Connector vk_nau57_sink config updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2023-09-13 21:56:00,058 INFO || [Worker clientId=connect-1, groupId=naument] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2023-09-13 21:56:00,058 INFO || [Worker clientId=connect-1, 
groupId=naument] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2023-09-13 21:56:00,059 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:00 +0000] "POST /connectors HTTP/1.1" 201 868 "-" "ReactorNetty/1.1.6" 7 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:00,060 INFO || [Worker clientId=connect-1, groupId=naument] Successfully joined group with generation Generation{generationId=159, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2023-09-13 21:56:00,063 INFO || [Worker clientId=connect-1, groupId=naument] Successfully synced group in generation Generation{generationId=159, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2023-09-13 21:56:00,063 INFO || [Worker clientId=connect-1, groupId=naument] Joined group at generation 159 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', leaderUrl='http://172.18.0.6:8083/', offset=3002, connectorIds=[vk_nau57_sink, vk_nau57_src], taskIds=[vk_nau57_src-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2023-09-13 21:56:00,063 INFO || [Worker clientId=connect-1, groupId=naument] Starting connectors and tasks using config offset 3002 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2023-09-13 21:56:00,063 INFO || [Worker clientId=connect-1, groupId=naument] Starting connector vk_nau57_sink [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2023-09-13 21:56:00,063 INFO || Creating connector vk_nau57_sink of type io.debezium.connector.jdbc.JdbcSinkConnector [org.apache.kafka.connect.runtime.Worker] 2023-09-13 21:56:00,063 INFO || SinkConnectorConfig values: config.action.reload = restart 
connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = vk_nau57_sink predicates = [] tasks.max = 1 topics = [] topics.regex = vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57 transforms = [] value.converter = class io.confluent.connect.avro.AvroConverter [org.apache.kafka.connect.runtime.SinkConnectorConfig] 2023-09-13 21:56:00,063 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = vk_nau57_sink predicates = [] tasks.max = 1 topics = [] topics.regex = vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57 transforms = [] value.converter = class io.confluent.connect.avro.AvroConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2023-09-13 21:56:00,064 INFO || Instantiated connector vk_nau57_sink with version 2.4.0.Beta1 of type class io.debezium.connector.jdbc.JdbcSinkConnector [org.apache.kafka.connect.runtime.Worker] 2023-09-13 21:56:00,064 INFO || Finished creating connector vk_nau57_sink [org.apache.kafka.connect.runtime.Worker] 2023-09-13 21:56:00,064 INFO || [Worker clientId=connect-1, groupId=naument] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 
2023-09-13 21:56:00,064 INFO || SinkConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = vk_nau57_sink predicates = [] tasks.max = 1 topics = [] topics.regex = vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57 transforms = [] value.converter = class io.confluent.connect.avro.AvroConverter [org.apache.kafka.connect.runtime.SinkConnectorConfig] 2023-09-13 21:56:00,064 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = vk_nau57_sink predicates = [] tasks.max = 1 topics = [] topics.regex = vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57 transforms = [] value.converter = class io.confluent.connect.avro.AvroConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2023-09-13 21:56:00,065 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:00 +0000] "GET /connectors/vk_nau57_sink HTTP/1.1" 200 868 "-" "ReactorNetty/1.1.6" 5 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:00,068 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:00 +0000] "GET /connectors/vk_nau57_sink/status HTTP/1.1" 404 74 "-" "ReactorNetty/1.1.6" 3 [org.apache.kafka.connect.runtime.rest.RestServer] 
2023-09-13 21:56:00,072 INFO || [Worker clientId=connect-1, groupId=naument] Tasks [vk_nau57_sink-0] configs updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2023-09-13 21:56:00,073 INFO || [Worker clientId=connect-1, groupId=naument] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2023-09-13 21:56:00,073 INFO || [Worker clientId=connect-1, groupId=naument] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2023-09-13 21:56:00,074 INFO || [Worker clientId=connect-1, groupId=naument] Successfully joined group with generation Generation{generationId=160, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2023-09-13 21:56:00,075 INFO || [Worker clientId=connect-1, groupId=naument] Successfully synced group in generation Generation{generationId=160, memberId='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2023-09-13 21:56:00,075 INFO || [Worker clientId=connect-1, groupId=naument] Joined group at generation 160 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-e6c3877a-a74d-4220-9079-58bada7b10b7', leaderUrl='http://172.18.0.6:8083/', offset=3004, connectorIds=[vk_nau57_sink, vk_nau57_src], taskIds=[vk_nau57_sink-0, vk_nau57_src-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2023-09-13 21:56:00,075 INFO || [Worker clientId=connect-1, groupId=naument] Starting connectors and tasks using config offset 3004 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2023-09-13 21:56:00,075 INFO || [Worker clientId=connect-1, groupId=naument] Starting task vk_nau57_sink-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2023-09-13 21:56:00,075 INFO || 
Creating task vk_nau57_sink-0 [org.apache.kafka.connect.runtime.Worker] 2023-09-13 21:56:00,076 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = vk_nau57_sink predicates = [] tasks.max = 1 transforms = [] value.converter = class io.confluent.connect.avro.AvroConverter [org.apache.kafka.connect.runtime.ConnectorConfig] 2023-09-13 21:56:00,076 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = vk_nau57_sink predicates = [] tasks.max = 1 transforms = [] value.converter = class io.confluent.connect.avro.AvroConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2023-09-13 21:56:00,076 INFO || TaskConfig values: task.class = class io.debezium.connector.jdbc.JdbcSinkConnectorTask [org.apache.kafka.connect.runtime.TaskConfig] 2023-09-13 21:56:00,076 INFO || Instantiated task vk_nau57_sink-0 with version 2.4.0.Beta1 of type io.debezium.connector.jdbc.JdbcSinkConnectorTask [org.apache.kafka.connect.runtime.Worker] 2023-09-13 21:56:00,076 INFO || AvroConverterConfig values: auto.register.schemas = true basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.cache.expiry.buffer.seconds = 300 bearer.auth.client.id = null bearer.auth.client.secret = null bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.custom.provider.class = null bearer.auth.identity.pool.id = null 
bearer.auth.issuer.endpoint.url = null bearer.auth.logical.cluster = null bearer.auth.scope = null bearer.auth.scope.claim.name = scope bearer.auth.sub.claim.name = sub bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy http.connect.timeout.ms = 60000 http.read.timeout.ms = 60000 id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.cache.size = 1000 latest.cache.ttl.sec = -1 latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 rule.actions = [] rule.executors = [] rule.service.loader.enable = true schema.format = null schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://naument-sr:8081] use.latest.version = false use.latest.with.metadata = null use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.connect.avro.AvroConverterConfig] 2023-09-13 21:56:00,077 
INFO || KafkaAvroSerializerConfig values: auto.register.schemas = true avro.reflection.allow.null = false avro.remove.java.properties = false avro.use.logical.type.converters = false basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.cache.expiry.buffer.seconds = 300 bearer.auth.client.id = null bearer.auth.client.secret = null bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.custom.provider.class = null bearer.auth.identity.pool.id = null bearer.auth.issuer.endpoint.url = null bearer.auth.logical.cluster = null bearer.auth.scope = null bearer.auth.scope.claim.name = scope bearer.auth.sub.claim.name = sub bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy http.connect.timeout.ms = 60000 http.read.timeout.ms = 60000 id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.cache.size = 1000 latest.cache.ttl.sec = -1 latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 rule.actions = [] rule.executors = [] rule.service.loader.enable = true schema.format = null schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = 
PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://naument-sr:8081] use.latest.version = false use.latest.with.metadata = null use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroSerializerConfig] 2023-09-13 21:56:00,077 INFO || KafkaAvroDeserializerConfig values: auto.register.schemas = true avro.reflection.allow.null = false avro.use.logical.type.converters = false basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.cache.expiry.buffer.seconds = 300 bearer.auth.client.id = null bearer.auth.client.secret = null bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.custom.provider.class = null bearer.auth.identity.pool.id = null bearer.auth.issuer.endpoint.url = null bearer.auth.logical.cluster = null bearer.auth.scope = null bearer.auth.scope.claim.name = scope bearer.auth.sub.claim.name = sub bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy http.connect.timeout.ms = 60000 http.read.timeout.ms = 60000 id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.cache.size = 1000 latest.cache.ttl.sec = -1 latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 rule.actions = [] rule.executors = [] rule.service.loader.enable = true schema.format = null schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = 
null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://naument-sr:8081] specific.avro.key.type = null specific.avro.reader = false specific.avro.value.type = null use.latest.version = false use.latest.with.metadata = null use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroDeserializerConfig] 2023-09-13 21:56:00,077 INFO || AvroDataConfig values: allow.optional.map.keys = false connect.meta.data = true discard.type.doc.default = false enhanced.avro.schema.support = false generalized.sum.type.support = false ignore.default.for.nullables = false schemas.cache.config = 1000 scrub.invalid.names = false [io.confluent.connect.avro.AvroDataConfig] 2023-09-13 21:56:00,078 INFO || AvroConverterConfig values: auto.register.schemas = true basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.cache.expiry.buffer.seconds = 300 bearer.auth.client.id = null bearer.auth.client.secret = null bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.custom.provider.class = null bearer.auth.identity.pool.id = null bearer.auth.issuer.endpoint.url = null bearer.auth.logical.cluster = null bearer.auth.scope = null bearer.auth.scope.claim.name = scope bearer.auth.sub.claim.name = sub bearer.auth.token = [hidden] context.name.strategy = class 
io.confluent.kafka.serializers.context.NullContextNameStrategy http.connect.timeout.ms = 60000 http.read.timeout.ms = 60000 id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.cache.size = 1000 latest.cache.ttl.sec = -1 latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 rule.actions = [] rule.executors = [] rule.service.loader.enable = true schema.format = null schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://naument-sr:8081] use.latest.version = false use.latest.with.metadata = null use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.connect.avro.AvroConverterConfig] 2023-09-13 21:56:00,078 INFO || KafkaAvroSerializerConfig values: auto.register.schemas = true avro.reflection.allow.null = false avro.remove.java.properties = false avro.use.logical.type.converters = false basic.auth.credentials.source = URL 
basic.auth.user.info = [hidden] bearer.auth.cache.expiry.buffer.seconds = 300 bearer.auth.client.id = null bearer.auth.client.secret = null bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.custom.provider.class = null bearer.auth.identity.pool.id = null bearer.auth.issuer.endpoint.url = null bearer.auth.logical.cluster = null bearer.auth.scope = null bearer.auth.scope.claim.name = scope bearer.auth.sub.claim.name = sub bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy http.connect.timeout.ms = 60000 http.read.timeout.ms = 60000 id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.cache.size = 1000 latest.cache.ttl.sec = -1 latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 rule.actions = [] rule.executors = [] rule.service.loader.enable = true schema.format = null schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = 
[http://naument-sr:8081] use.latest.version = false use.latest.with.metadata = null use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroSerializerConfig] 2023-09-13 21:56:00,078 INFO || KafkaAvroDeserializerConfig values: auto.register.schemas = true avro.reflection.allow.null = false avro.use.logical.type.converters = false basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.cache.expiry.buffer.seconds = 300 bearer.auth.client.id = null bearer.auth.client.secret = null bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.custom.provider.class = null bearer.auth.identity.pool.id = null bearer.auth.issuer.endpoint.url = null bearer.auth.logical.cluster = null bearer.auth.scope = null bearer.auth.scope.claim.name = scope bearer.auth.sub.claim.name = sub bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy http.connect.timeout.ms = 60000 http.read.timeout.ms = 60000 id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.cache.size = 1000 latest.cache.ttl.sec = -1 latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 rule.actions = [] rule.executors = [] rule.service.loader.enable = true schema.format = null schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null 
schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://naument-sr:8081] specific.avro.key.type = null specific.avro.reader = false specific.avro.value.type = null use.latest.version = false use.latest.with.metadata = null use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy [io.confluent.kafka.serializers.KafkaAvroDeserializerConfig] 2023-09-13 21:56:00,078 INFO || AvroDataConfig values: allow.optional.map.keys = false connect.meta.data = true discard.type.doc.default = false enhanced.avro.schema.support = false generalized.sum.type.support = false ignore.default.for.nullables = false schemas.cache.config = 1000 scrub.invalid.names = false [io.confluent.connect.avro.AvroDataConfig] 2023-09-13 21:56:00,078 INFO || Set up the key converter class io.confluent.connect.avro.AvroConverter for task vk_nau57_sink-0 using the connector config [org.apache.kafka.connect.runtime.Worker] 2023-09-13 21:56:00,078 INFO || Set up the value converter class io.confluent.connect.avro.AvroConverter for task vk_nau57_sink-0 using the connector config [org.apache.kafka.connect.runtime.Worker] 2023-09-13 21:56:00,079 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task vk_nau57_sink-0 using the worker config [org.apache.kafka.connect.runtime.Worker] 2023-09-13 21:56:00,079 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{} [org.apache.kafka.connect.runtime.Worker] 2023-09-13 21:56:00,079 INFO || SinkConnectorConfig values: 
config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = vk_nau57_sink predicates = [] tasks.max = 1 topics = [] topics.regex = vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57 transforms = [] value.converter = class io.confluent.connect.avro.AvroConverter [org.apache.kafka.connect.runtime.SinkConnectorConfig] 2023-09-13 21:56:00,079 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = vk_nau57_sink predicates = [] tasks.max = 1 topics = [] topics.regex = vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57 transforms = [] value.converter = class io.confluent.connect.avro.AvroConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2023-09-13 21:56:00,079 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [broker1:29092, broker2:29092, broker3:29092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = connector-consumer-vk_nau57_sink-0 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false 
exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = connect-vk_nau57_sink group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope 
sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig] 2023-09-13 21:56:00,082 INFO || These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. 
[org.apache.kafka.clients.consumer.ConsumerConfig]
2023-09-13 21:56:00,082 INFO || Kafka version: 3.5.1 [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:56:00,082 INFO || Kafka commitId: 2c6fb6c54472e90a [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:56:00,082 INFO || Kafka startTimeMs: 1694631360082 [org.apache.kafka.common.utils.AppInfoParser]
2023-09-13 21:56:00,083 INFO || [Worker clientId=connect-1, groupId=naument] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2023-09-13 21:56:00,083 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Subscribed to pattern: 'vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57' [org.apache.kafka.clients.consumer.KafkaConsumer]
2023-09-13 21:56:00,085 INFO || Starting JdbcSinkConnectorConfig with configuration: [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || connector.class = io.debezium.connector.jdbc.JdbcSinkConnector [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || table.name.format = vk_nau57_tbl_servicecall [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || connection.password = ******** [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || primary.key.mode = record_key [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || tasks.max = 1 [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || connection.username = debeziumt [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || quote.identifiers = false [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || topics.regex = vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57 [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || value.converter.schema.registry.url = http://naument-sr:8081 [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || delete.enabled = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || schema.evolution = basic [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || auto.evolve = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || task.class = io.debezium.connector.jdbc.JdbcSinkConnectorTask [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || name = vk_nau57_sink [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || auto.create = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || connection.url = jdbc:postgresql://dwh-db-test.rgs.ru:5438/db_ods_test?currentSchema=naument1 [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || value.converter = io.confluent.connect.avro.AvroConverter [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || insert.mode = upsert [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || key.converter.schema.registry.url = http://naument-sr:8081 [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || key.converter = io.confluent.connect.avro.AvroConverter [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2023-09-13 21:56:00,085 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:00 +0000] "GET /connectors/vk_nau57_sink HTTP/1.1" 200 906 "-" "ReactorNetty/1.1.6" 4 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:00,086 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:00 +0000] "GET /connectors/vk_nau57_sink/tasks HTTP/1.1" 200 930 "-" "ReactorNetty/1.1.6" 4 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:00,089 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:00 +0000] "GET /connectors/vk_nau57_sink/tasks/0/status HTTP/1.1" 404 71 "-" "ReactorNetty/1.1.6" 2
[org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:00,089 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:00 +0000] "GET /connectors/vk_nau57_sink/status HTTP/1.1" 200 111 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:00,092 INFO || HHH000130: Instantiating explicit connection provider: org.hibernate.c3p0.internal.C3P0ConnectionProvider [org.hibernate.engine.jdbc.connections.internal.ConnectionProviderInitiator]
2023-09-13 21:56:00,092 INFO || HHH010002: C3P0 using driver: null at URL: jdbc:postgresql://dwh-db-test.rgs.ru:5438/db_ods_test?currentSchema=naument1 [org.hibernate.orm.connections.pooling.c3p0]
2023-09-13 21:56:00,092 INFO || HHH10001001: Connection properties: {password=****, user=debeziumt} [org.hibernate.orm.connections.pooling.c3p0]
2023-09-13 21:56:00,092 INFO || HHH10001003: Autocommit mode: false [org.hibernate.orm.connections.pooling.c3p0]
2023-09-13 21:56:00,092 WARN || HHH10001006: No JDBC Driver class was specified by property hibernate.connection.driver_class [org.hibernate.orm.connections.pooling.c3p0]
2023-09-13 21:56:00,115 INFO || HHH10001007: JDBC isolation level: [org.hibernate.orm.connections.pooling.c3p0]
2023-09-13 21:56:00,116 INFO || Initializing c3p0 pool... 
com.mchange.v2.c3p0.PoolBackedDataSource@984d6114 [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@b282d6d9 [ acquireIncrement -> 32, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, debugUnreturnedConnectionStackTraces -> false, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, identityToken -> 2rvy88ayj9hfi81jom5bf|7497420, idleConnectionTestPeriod -> 0, initialPoolSize -> 5, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 32, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource@9fbba7fb [ description -> null, driverClass -> null, factoryClassLocation -> null, forceUseNamedDriverClass -> false, identityToken -> 2rvy88ayj9hfi81jom5bf|1c7e951, jdbcUrl -> jdbc:postgresql://dwh-db-test.rgs.ru:5438/db_ods_test?currentSchema=naument1, properties -> {password=******, user=******} ], preferredTestQuery -> null, privilegeSpawnedThreads -> false, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> null, extensions -> {}, factoryClassLocation -> null, identityToken -> 2rvy88ayj9hfi81jom5bf|1fae148, numHelperThreads -> 3 ] [com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource] 2023-09-13 21:56:00,137 INFO || HHH000400: Using dialect: org.hibernate.dialect.PostgreSQLDialect [SQL dialect] 2023-09-13 21:56:00,149 INFO || HHH000490: Using JtaPlatform implementation: 
[org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] [org.hibernate.engine.transaction.jta.platform.internal.JtaPlatformInitiator]
2023-09-13 21:56:00,150 INFO || Using dialect io.debezium.connector.jdbc.dialect.postgres.PostgresDatabaseDialect [io.debezium.connector.jdbc.dialect.DatabaseDialectResolver]
2023-09-13 21:56:00,153 INFO || Database TimeZone: Europe/Moscow [io.debezium.connector.jdbc.dialect.GeneralDatabaseDialect]
2023-09-13 21:56:00,153 INFO || Database version 13.2.0 [io.debezium.connector.jdbc.JdbcChangeEventSink]
2023-09-13 21:56:00,153 INFO || WorkerSinkTask{id=vk_nau57_sink-0} Sink task finished initialization and start [org.apache.kafka.connect.runtime.WorkerSinkTask]
2023-09-13 21:56:00,154 INFO || WorkerSinkTask{id=vk_nau57_sink-0} Executing sink task [org.apache.kafka.connect.runtime.WorkerSinkTask]
2023-09-13 21:56:00,157 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Cluster ID: gVJjK6cZTd-nXsXP2EIHEQ [org.apache.kafka.clients.Metadata]
2023-09-13 21:56:00,157 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Discovered group coordinator broker1:29092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2023-09-13 21:56:00,157 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2023-09-13 21:56:00,159 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Request joining group due to: need to re-join with the given member-id: connector-consumer-vk_nau57_sink-0-7d7e5fa1-4376-43a2-b46c-692b1fb52cb6 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2023-09-13 21:56:00,161 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2023-09-13 21:56:00,161 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2023-09-13 21:56:00,162 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Successfully joined group with generation Generation{generationId=1, memberId='connector-consumer-vk_nau57_sink-0-7d7e5fa1-4376-43a2-b46c-692b1fb52cb6', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2023-09-13 21:56:00,162 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Finished assignment for group at generation 1: {connector-consumer-vk_nau57_sink-0-7d7e5fa1-4376-43a2-b46c-692b1fb52cb6=Assignment(partitions=[vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57-0])} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2023-09-13 21:56:00,164 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Successfully synced group in generation Generation{generationId=1, memberId='connector-consumer-vk_nau57_sink-0-7d7e5fa1-4376-43a2-b46c-692b1fb52cb6', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2023-09-13 21:56:00,164 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Notifying assignor about the new Assignment(partitions=[vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57-0]) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2023-09-13 21:56:00,164 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Adding newly assigned partitions: vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57-0 
[org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2023-09-13 21:56:00,164 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Found no committed offset for partition vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2023-09-13 21:56:00,165 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Resetting offset for partition vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[broker2:29092 (id: 2 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
Hibernate: CREATE TABLE vk_nau57_tbl_servicecall (ID decimal(19,0) NOT NULL, CREATION_DATE timestamp(6) NOT NULL, CLAIM_TRANSFERDATE timestamp(6) NULL, TITLE varchar(4000) NULL, CLIENT_EMAIL varchar(255) NULL, FLOAT_ATTR_1 double precision NULL, FLOAT_ATTR_1 double precision NULL, PRIMARY KEY(ID))
2023-09-13 21:56:00,182 WARN || SQL Error: 0, SQLState: 42701 [org.hibernate.engine.jdbc.spi.SqlExceptionHelper]
2023-09-13 21:56:00,182 ERROR || ERROR: column "float_attr_1" specified more than once [org.hibernate.engine.jdbc.spi.SqlExceptionHelper]
2023-09-13 21:56:00,183 ERROR || Failed to process record: Failed to process a sink record [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
org.apache.kafka.connect.errors.ConnectException: Failed to process a sink record
    at io.debezium.connector.jdbc.JdbcChangeEventSink.execute(JdbcChangeEventSink.java:82)
    at io.debezium.connector.jdbc.JdbcSinkConnectorTask.put(JdbcSinkConnectorTask.java:93)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:587)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:336)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:237)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:206)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: jakarta.persistence.PersistenceException: Converting `org.hibernate.exception.SQLGrammarException` to JPA `PersistenceException` : JDBC exception executing SQL [CREATE TABLE vk_nau57_tbl_servicecall (ID decimal(19,0) NOT NULL, CREATION_DATE timestamp(6) NOT NULL, CLAIM_TRANSFERDATE timestamp(6) NULL, TITLE varchar(4000) NULL, CLIENT_EMAIL varchar(255) NULL, FLOAT_ATTR_1 double precision NULL, FLOAT_ATTR_1 double precision NULL, PRIMARY KEY(ID))]
    at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:165)
    at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:175)
    at org.hibernate.query.spi.AbstractQuery.executeUpdate(AbstractQuery.java:654)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.createTable(JdbcChangeEventSink.java:156)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.checkAndApplyTableChangesIfNeeded(JdbcChangeEventSink.java:109)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.execute(JdbcChangeEventSink.java:78)
    ... 13 more
Caused by: org.hibernate.exception.SQLGrammarException: JDBC exception executing SQL [CREATE TABLE vk_nau57_tbl_servicecall (ID decimal(19,0) NOT NULL, CREATION_DATE timestamp(6) NOT NULL, CLAIM_TRANSFERDATE timestamp(6) NULL, TITLE varchar(4000) NULL, CLIENT_EMAIL varchar(255) NULL, FLOAT_ATTR_1 double precision NULL, FLOAT_ATTR_1 double precision NULL, PRIMARY KEY(ID))]
    at org.hibernate.exception.internal.SQLStateConversionDelegate.convert(SQLStateConversionDelegate.java:89)
    at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:56)
    at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:109)
    at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:95)
    at org.hibernate.sql.exec.internal.StandardJdbcMutationExecutor.execute(StandardJdbcMutationExecutor.java:97)
    at org.hibernate.query.sql.internal.NativeNonSelectQueryPlanImpl.executeUpdate(NativeNonSelectQueryPlanImpl.java:78)
    at org.hibernate.query.sql.internal.NativeQueryImpl.doExecuteUpdate(NativeQueryImpl.java:820)
    at org.hibernate.query.spi.AbstractQuery.executeUpdate(AbstractQuery.java:643)
    ... 16 more
Caused by: org.postgresql.util.PSQLException: ERROR: column "float_attr_1" specified more than once
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2713)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2401)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:368)
    at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:498)
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:415)
    at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:190)
    at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:152)
    at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:1502)
    at org.hibernate.sql.exec.internal.StandardJdbcMutationExecutor.execute(StandardJdbcMutationExecutor.java:84)
    ... 19 more
2023-09-13 21:56:02,131 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:02 +0000] "GET /connectors HTTP/1.1" 200 32 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:02,134 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:02 +0000] "GET /connectors/vk_nau57_sink HTTP/1.1" 200 906 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:02,134 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:02 +0000] "GET /connectors/vk_nau57_src HTTP/1.1" 200 1528 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:02,135 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:02 +0000] "GET /connectors/vk_nau57_sink/status HTTP/1.1" 200 167 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:02,136 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:02 +0000] "GET /connectors/vk_nau57_src/status HTTP/1.1" 200 168 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:02,137 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:02 +0000] "GET 
/connectors/vk_nau57_sink/config HTTP/1.1" 200 809 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:02,139 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:02 +0000] "GET /connectors/vk_nau57_src/config HTTP/1.1" 200 1431 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:02,139 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:02 +0000] "GET /connectors/vk_nau57_sink/tasks HTTP/1.1" 200 930 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:02,141 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:02 +0000] "GET /connectors/vk_nau57_src/tasks HTTP/1.1" 200 1551 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:02,141 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:02 +0000] "GET /connectors/vk_nau57_sink/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:02,142 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:02 +0000] "GET /connectors/vk_nau57_sink/topics HTTP/1.1" 200 75 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:02,142 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:02 +0000] "GET /connectors/vk_nau57_src/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:02,144 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:02 +0000] "GET /connectors/vk_nau57_src/topics HTTP/1.1" 200 85 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:27,661 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:27 +0000] "GET /connectors HTTP/1.1" 200 32 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:27,664 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:27 +0000] "GET /connectors/vk_nau57_sink HTTP/1.1" 200 906 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 
21:56:27,664 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:27 +0000] "GET /connectors/vk_nau57_src HTTP/1.1" 200 1528 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:27,666 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:27 +0000] "GET /connectors/vk_nau57_src/status HTTP/1.1" 200 168 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:27,667 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:27 +0000] "GET /connectors/vk_nau57_sink/status HTTP/1.1" 200 167 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:27,669 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:27 +0000] "GET /connectors/vk_nau57_sink/config HTTP/1.1" 200 809 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:27,669 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:27 +0000] "GET /connectors/vk_nau57_src/config HTTP/1.1" 200 1431 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:27,671 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:27 +0000] "GET /connectors/vk_nau57_sink/tasks HTTP/1.1" 200 930 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:27,672 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:27 +0000] "GET /connectors/vk_nau57_src/tasks HTTP/1.1" 200 1551 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:27,673 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:27 +0000] "GET /connectors/vk_nau57_sink/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:27,674 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:27 +0000] "GET /connectors/vk_nau57_src/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:27,675 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:27 +0000] "GET /connectors/vk_nau57_sink/topics HTTP/1.1" 200 75 "-" 
"ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:27,676 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:27 +0000] "GET /connectors/vk_nau57_src/topics HTTP/1.1" 200 85 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:31,173 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:31 +0000] "GET /connectors HTTP/1.1" 200 32 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:31,176 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:31 +0000] "GET /connectors/vk_nau57_sink HTTP/1.1" 200 906 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:31,176 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:31 +0000] "GET /connectors/vk_nau57_src HTTP/1.1" 200 1528 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:31,178 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:31 +0000] "GET /connectors/vk_nau57_sink/status HTTP/1.1" 200 167 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:31,179 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:31 +0000] "GET /connectors/vk_nau57_src/status HTTP/1.1" 200 168 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:31,180 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:31 +0000] "GET /connectors/vk_nau57_sink/config HTTP/1.1" 200 809 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:31,181 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:31 +0000] "GET /connectors/vk_nau57_src/config HTTP/1.1" 200 1431 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:31,182 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:31 +0000] "GET /connectors/vk_nau57_sink/tasks HTTP/1.1" 200 930 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:31,183 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:31 +0000] "GET 
/connectors/vk_nau57_src/tasks HTTP/1.1" 200 1551 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:31,184 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:31 +0000] "GET /connectors/vk_nau57_sink/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:31,185 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:31 +0000] "GET /connectors/vk_nau57_src/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:31,186 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:31 +0000] "GET /connectors/vk_nau57_sink/topics HTTP/1.1" 200 75 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:31,186 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:31 +0000] "GET /connectors/vk_nau57_src/topics HTTP/1.1" 200 85 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:32,181 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:32 +0000] "GET /connectors HTTP/1.1" 200 32 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:32,184 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:32 +0000] "GET /connectors/vk_nau57_sink HTTP/1.1" 200 906 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:32,184 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:32 +0000] "GET /connectors/vk_nau57_src HTTP/1.1" 200 1528 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:32,185 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:32 +0000] "GET /connectors/vk_nau57_src/status HTTP/1.1" 200 168 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:32,185 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:32 +0000] "GET /connectors/vk_nau57_sink/status HTTP/1.1" 200 167 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:32,188 
INFO || 10.0.2.5 - - [13/Sep/2023:18:56:32 +0000] "GET /connectors/vk_nau57_src/config HTTP/1.1" 200 1431 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:32,188 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:32 +0000] "GET /connectors/vk_nau57_sink/config HTTP/1.1" 200 809 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:32,190 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:32 +0000] "GET /connectors/vk_nau57_src/tasks HTTP/1.1" 200 1551 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:32,190 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:32 +0000] "GET /connectors/vk_nau57_sink/tasks HTTP/1.1" 200 930 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:32,191 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:32 +0000] "GET /connectors/vk_nau57_src/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 0 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:32,192 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:32 +0000] "GET /connectors/vk_nau57_sink/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:32,193 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:32 +0000] "GET /connectors/vk_nau57_src/topics HTTP/1.1" 200 85 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:32,194 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:32 +0000] "GET /connectors/vk_nau57_sink/topics HTTP/1.1" 200 75 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:33,412 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:33 +0000] "GET /connectors HTTP/1.1" 200 32 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:56:33,415 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:33 +0000] "GET /connectors/vk_nau57_src HTTP/1.1" 200 1528 "-" "ReactorNetty/1.1.6" 2 
[org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:33,415 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:33 +0000] "GET /connectors/vk_nau57_sink HTTP/1.1" 200 906 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:33,417 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:33 +0000] "GET /connectors/vk_nau57_src/status HTTP/1.1" 200 168 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:33,417 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:33 +0000] "GET /connectors/vk_nau57_sink/status HTTP/1.1" 200 167 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:33,420 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:33 +0000] "GET /connectors/vk_nau57_src/config HTTP/1.1" 200 1431 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:33,421 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:33 +0000] "GET /connectors/vk_nau57_sink/config HTTP/1.1" 200 809 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:33,423 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:33 +0000] "GET /connectors/vk_nau57_src/tasks HTTP/1.1" 200 1551 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:33,424 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:33 +0000] "GET /connectors/vk_nau57_sink/tasks HTTP/1.1" 200 930 "-" "ReactorNetty/1.1.6" 3 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:33,426 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:33 +0000] "GET /connectors/vk_nau57_src/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:33,426 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:33 +0000] "GET /connectors/vk_nau57_sink/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:33,428 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:33 +0000] "GET /connectors/vk_nau57_src/topics HTTP/1.1" 200 85 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:33,428 INFO || 10.0.2.5 - - [13/Sep/2023:18:56:33 +0000] "GET /connectors/vk_nau57_sink/topics HTTP/1.1" 200 75 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer]
2023-09-13 21:56:38,974 INFO || WorkerSourceTask{id=vk_nau57_src-0} Committing offsets for 4 acknowledged messages [org.apache.kafka.connect.runtime.WorkerSourceTask]
2023-09-13 21:57:00,183 ERROR || WorkerSinkTask{id=vk_nau57_sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: JDBC sink connector failure [org.apache.kafka.connect.runtime.WorkerSinkTask]
org.apache.kafka.connect.errors.ConnectException: JDBC sink connector failure
    at io.debezium.connector.jdbc.JdbcSinkConnectorTask.put(JdbcSinkConnectorTask.java:83)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:587)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:336)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:237)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:206)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to process a sink record
    at io.debezium.connector.jdbc.JdbcChangeEventSink.execute(JdbcChangeEventSink.java:82)
    at io.debezium.connector.jdbc.JdbcSinkConnectorTask.put(JdbcSinkConnectorTask.java:93)
    ... 12 more
Caused by: jakarta.persistence.PersistenceException: Converting `org.hibernate.exception.SQLGrammarException` to JPA `PersistenceException` : JDBC exception executing SQL [CREATE TABLE vk_nau57_tbl_servicecall (ID decimal(19,0) NOT NULL, CREATION_DATE timestamp(6) NOT NULL, CLAIM_TRANSFERDATE timestamp(6) NULL, TITLE varchar(4000) NULL, CLIENT_EMAIL varchar(255) NULL, FLOAT_ATTR_1 double precision NULL, FLOAT_ATTR_1 double precision NULL, PRIMARY KEY(ID))]
    at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:165)
    at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:175)
    at org.hibernate.query.spi.AbstractQuery.executeUpdate(AbstractQuery.java:654)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.createTable(JdbcChangeEventSink.java:156)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.checkAndApplyTableChangesIfNeeded(JdbcChangeEventSink.java:109)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.execute(JdbcChangeEventSink.java:78)
    ... 13 more
Caused by: org.hibernate.exception.SQLGrammarException: JDBC exception executing SQL [CREATE TABLE vk_nau57_tbl_servicecall (ID decimal(19,0) NOT NULL, CREATION_DATE timestamp(6) NOT NULL, CLAIM_TRANSFERDATE timestamp(6) NULL, TITLE varchar(4000) NULL, CLIENT_EMAIL varchar(255) NULL, FLOAT_ATTR_1 double precision NULL, FLOAT_ATTR_1 double precision NULL, PRIMARY KEY(ID))]
    at org.hibernate.exception.internal.SQLStateConversionDelegate.convert(SQLStateConversionDelegate.java:89)
    at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:56)
    at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:109)
    at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:95)
    at org.hibernate.sql.exec.internal.StandardJdbcMutationExecutor.execute(StandardJdbcMutationExecutor.java:97)
    at org.hibernate.query.sql.internal.NativeNonSelectQueryPlanImpl.executeUpdate(NativeNonSelectQueryPlanImpl.java:78)
    at org.hibernate.query.sql.internal.NativeQueryImpl.doExecuteUpdate(NativeQueryImpl.java:820)
    at org.hibernate.query.spi.AbstractQuery.executeUpdate(AbstractQuery.java:643)
    ... 
16 more Caused by: org.postgresql.util.PSQLException: ERROR: column "float_attr_1" specified more than once at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2713) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2401) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:368) at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:498) at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:415) at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:190) at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:152) at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:1502) at org.hibernate.sql.exec.internal.StandardJdbcMutationExecutor.execute(StandardJdbcMutationExecutor.java:84) ... 19 more 2023-09-13 21:57:00,184 ERROR || WorkerSinkTask{id=vk_nau57_sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted [org.apache.kafka.connect.runtime.WorkerTask] org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception. 
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:618) at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:336) at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:237) at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:206) at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204) at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259) at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: org.apache.kafka.connect.errors.ConnectException: JDBC sink connector failure at io.debezium.connector.jdbc.JdbcSinkConnectorTask.put(JdbcSinkConnectorTask.java:83) at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:587) ... 11 more Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to process a sink record at io.debezium.connector.jdbc.JdbcChangeEventSink.execute(JdbcChangeEventSink.java:82) at io.debezium.connector.jdbc.JdbcSinkConnectorTask.put(JdbcSinkConnectorTask.java:93) ... 
12 more Caused by: jakarta.persistence.PersistenceException: Converting `org.hibernate.exception.SQLGrammarException` to JPA `PersistenceException` : JDBC exception executing SQL [CREATE TABLE vk_nau57_tbl_servicecall (ID decimal(19,0) NOT NULL, CREATION_DATE timestamp(6) NOT NULL, CLAIM_TRANSFERDATE timestamp(6) NULL, TITLE varchar(4000) NULL, CLIENT_EMAIL varchar(255) NULL, FLOAT_ATTR_1 double precision NULL, FLOAT_ATTR_1 double precision NULL, PRIMARY KEY(ID))] at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:165) at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:175) at org.hibernate.query.spi.AbstractQuery.executeUpdate(AbstractQuery.java:654) at io.debezium.connector.jdbc.JdbcChangeEventSink.createTable(JdbcChangeEventSink.java:156) at io.debezium.connector.jdbc.JdbcChangeEventSink.checkAndApplyTableChangesIfNeeded(JdbcChangeEventSink.java:109) at io.debezium.connector.jdbc.JdbcChangeEventSink.execute(JdbcChangeEventSink.java:78) ... 
13 more Caused by: org.hibernate.exception.SQLGrammarException: JDBC exception executing SQL [CREATE TABLE vk_nau57_tbl_servicecall (ID decimal(19,0) NOT NULL, CREATION_DATE timestamp(6) NOT NULL, CLAIM_TRANSFERDATE timestamp(6) NULL, TITLE varchar(4000) NULL, CLIENT_EMAIL varchar(255) NULL, FLOAT_ATTR_1 double precision NULL, FLOAT_ATTR_1 double precision NULL, PRIMARY KEY(ID))] at org.hibernate.exception.internal.SQLStateConversionDelegate.convert(SQLStateConversionDelegate.java:89) at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:56) at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:109) at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:95) at org.hibernate.sql.exec.internal.StandardJdbcMutationExecutor.execute(StandardJdbcMutationExecutor.java:97) at org.hibernate.query.sql.internal.NativeNonSelectQueryPlanImpl.executeUpdate(NativeNonSelectQueryPlanImpl.java:78) at org.hibernate.query.sql.internal.NativeQueryImpl.doExecuteUpdate(NativeQueryImpl.java:820) at org.hibernate.query.spi.AbstractQuery.executeUpdate(AbstractQuery.java:643) ... 
16 more Caused by: org.postgresql.util.PSQLException: ERROR: column "float_attr_1" specified more than once at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2713) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2401) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:368) at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:498) at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:415) at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:190) at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:152) at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:1502) at org.hibernate.sql.exec.internal.StandardJdbcMutationExecutor.execute(StandardJdbcMutationExecutor.java:84) ... 19 more 2023-09-13 21:57:00,184 INFO || Closing session. [io.debezium.connector.jdbc.JdbcChangeEventSink] 2023-09-13 21:57:00,184 INFO || Closing the session factory [io.debezium.connector.jdbc.JdbcChangeEventSink] 2023-09-13 21:57:00,186 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Revoke previously assigned partitions vk_nau57.DEBEZIUM.GBC_TBL_SERVICECALL_NC57-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2023-09-13 21:57:00,186 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Member connector-consumer-vk_nau57_sink-0-7d7e5fa1-4376-43a2-b46c-692b1fb52cb6 sending LeaveGroup request to coordinator broker1:29092 (id: 2147483646 rack: null) due to the consumer is being closed [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2023-09-13 21:57:00,186 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Resetting generation and member id due to: consumer pro-actively leaving the group 
[org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2023-09-13 21:57:00,187 INFO || [Consumer clientId=connector-consumer-vk_nau57_sink-0, groupId=connect-vk_nau57_sink] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2023-09-13 21:57:00,246 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2023-09-13 21:57:00,246 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2023-09-13 21:57:00,246 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2023-09-13 21:57:00,247 INFO || App info kafka.consumer for connector-consumer-vk_nau57_sink-0 unregistered [org.apache.kafka.common.utils.AppInfoParser] 2023-09-13 21:57:08,758 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:08 +0000] "GET /connectors HTTP/1.1" 200 32 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:08,762 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:08 +0000] "GET /connectors/vk_nau57_sink HTTP/1.1" 200 906 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:08,762 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:08 +0000] "GET /connectors/vk_nau57_src HTTP/1.1" 200 1528 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:08,763 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:08 +0000] "GET /connectors/vk_nau57_src/status HTTP/1.1" 200 168 "-" "ReactorNetty/1.1.6" 0 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:08,765 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:08 +0000] "GET /connectors/vk_nau57_sink/status HTTP/1.1" 200 5072 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:08,766 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:08 +0000] "GET /connectors/vk_nau57_src/config HTTP/1.1" 200 1431 "-" "ReactorNetty/1.1.6" 2 
[org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:08,767 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:08 +0000] "GET /connectors/vk_nau57_sink/config HTTP/1.1" 200 809 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:08,768 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:08 +0000] "GET /connectors/vk_nau57_src/tasks HTTP/1.1" 200 1551 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:08,769 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:08 +0000] "GET /connectors/vk_nau57_sink/tasks HTTP/1.1" 200 930 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:08,770 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:08 +0000] "GET /connectors/vk_nau57_src/tasks/0/status HTTP/1.1" 200 56 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:08,771 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:08 +0000] "GET /connectors/vk_nau57_sink/tasks/0/status HTTP/1.1" 200 4961 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:08,772 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:08 +0000] "GET /connectors/vk_nau57_src/topics HTTP/1.1" 200 85 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:08,773 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:08 +0000] "GET /connectors/vk_nau57_sink/topics HTTP/1.1" 200 75 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:10,307 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:10 +0000] "GET /connectors/vk_nau57_sink HTTP/1.1" 200 906 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:10,307 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:10 +0000] "GET /connectors/vk_nau57_sink/tasks HTTP/1.1" 200 930 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:10,309 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:10 +0000] 
"GET /connectors/vk_nau57_sink/tasks/0/status HTTP/1.1" 200 4961 "-" "ReactorNetty/1.1.6" 1 [org.apache.kafka.connect.runtime.rest.RestServer] 2023-09-13 21:57:10,310 INFO || 10.0.2.3 - - [13/Sep/2023:18:57:10 +0000] "GET /connectors/vk_nau57_sink/status HTTP/1.1" 200 5072 "-" "ReactorNetty/1.1.6" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
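Editor's note on the failure above: both unrecoverable-exception traces bottom out in the same PostgreSQL error, because the DDL the sink generated names the column FLOAT_ATTR_1 twice. The sketch below is illustrative only (it is not part of the log, and the `duplicate_columns` helper is a hypothetical name, not a Debezium API); it simply shows that the quoted CREATE TABLE statement itself contains the duplicate that PostgreSQL rejects with `column "float_attr_1" specified more than once`.

```python
# The CREATE TABLE statement quoted verbatim in the stack traces above.
ddl = (
    "CREATE TABLE vk_nau57_tbl_servicecall ("
    "ID decimal(19,0) NOT NULL, "
    "CREATION_DATE timestamp(6) NOT NULL, "
    "CLAIM_TRANSFERDATE timestamp(6) NULL, "
    "TITLE varchar(4000) NULL, "
    "CLIENT_EMAIL varchar(255) NULL, "
    "FLOAT_ATTR_1 double precision NULL, "
    "FLOAT_ATTR_1 double precision NULL, "
    "PRIMARY KEY(ID))"
)


def top_level_split(s: str) -> list[str]:
    """Split a column list on commas, ignoring commas nested in parentheses
    such as the one inside decimal(19,0)."""
    parts, depth, cur = [], 0, []
    for ch in s:
        if ch == "," and depth == 0:
            parts.append("".join(cur).strip())
            cur = []
            continue
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        cur.append(ch)
    parts.append("".join(cur).strip())
    return parts


def duplicate_columns(create_stmt: str) -> list[str]:
    """Return column names that appear more than once in a CREATE TABLE body."""
    body = create_stmt[create_stmt.index("(") + 1 : create_stmt.rindex(")")]
    names = [
        item.split()[0].lower()
        for item in top_level_split(body)
        if item and not item.upper().startswith("PRIMARY KEY")
    ]
    seen, dups = set(), []
    for name in names:
        if name in seen and name not in dups:
            dups.append(name)
        seen.add(name)
    return dups


print(duplicate_columns(ddl))  # ['float_attr_1']
```

Because the sink task is killed by this error, it will not recover until the duplicate source column mapping is resolved and the task is manually restarted, as the WorkerSinkTask log line states.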