[2025-10-07 11:38:20,807] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:21,307] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:21,307] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:21,307] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:21,307] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:21,307] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:21,307] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:21,307] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:21,308] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260)
[2025-10-07 11:38:21,308] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:21,808] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:21,808] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:21,808] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:21,808] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:21,808] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:21,808] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:21,808] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:21,808] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:21,808] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:21,808] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:21,808] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:21,808] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:21,808] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:21,808] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:21,809] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:21,809] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260)
[2025-10-07 11:38:21,809] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:22,309] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:22,309] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:22,309] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:22,309] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:22,309] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:22,309] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:22,309] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:22,309] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:22,309] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:22,309] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:22,309] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:22,310] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260)
[2025-10-07 11:38:22,310] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:22,810] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:22,810] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:22,810] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:22,810] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:22,810] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:22,810] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:22,810] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:22,810] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:22,810] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:22,811] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260)
[2025-10-07 11:38:22,811] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:23,311] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:23,311] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:23,311] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:23,311] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:23,311] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:23,311] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:23,311] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:23,311] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:23,311] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:23,311] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:23,311] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:23,312] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260)
[2025-10-07 11:38:23,312] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:23,646] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors HTTP/1.1" 200 197 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,648] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/SI_source_trans_requests HTTP/1.1" 200 3164 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,649] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/SI_source_trans_requests/config HTTP/1.1" 200 3043 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,649] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/SI_source_trans_requests/tasks HTTP/1.1" 200 3179 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,649] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/SI_source_trans_requests/topics HTTP/1.1" 200 98 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,650] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_request1s/topics HTTP/1.1" 200 107 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,650] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_request1s HTTP/1.1" 200 3261 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,651] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_request1s/tasks HTTP/1.1" 200 3275 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,650] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_request1s/config HTTP/1.1" 200 3138 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,651] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_request3s HTTP/1.1" 200 3261 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,653] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_request3s/config HTTP/1.1" 200 3138 "-" "ReactorNetty/1.1.10" 3 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,654] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_sumd_card_funds HTTP/1.1" 200 3261 "-" "ReactorNetty/1.1.10" 3 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,654] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_requests/config HTTP/1.1" 200 3135 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,654] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_request3s/topics HTTP/1.1" 200 107 "-" "ReactorNetty/1.1.10" 3 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,654] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_sumd_card_funds/config HTTP/1.1" 200 3138 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,654] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_request3s/tasks HTTP/1.1" 200 3275 "-" "ReactorNetty/1.1.10" 3 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,655] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_sumd_card_funds/topics HTTP/1.1" 200 107 "-" "ReactorNetty/1.1.10" 3 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,656] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_requests HTTP/1.1" 200 3256 "-" "ReactorNetty/1.1.10" 3 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,654] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_sumd_card_funds/tasks HTTP/1.1" 200 3275 "-" "ReactorNetty/1.1.10" 3 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,655] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_requests/topics HTTP/1.1" 200 106 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,657] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/SI_source_trans_request1s/topics HTTP/1.1" 200 100 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,657] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_requests/tasks HTTP/1.1" 200 3271 "-" "ReactorNetty/1.1.10" 4 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,657] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/SI_source_trans_request1s/tasks HTTP/1.1" 200 3183 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,658] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_cdc_signal_heartbeat HTTP/1.1" 200 3354 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,658] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_cdc_signal_heartbeat/config HTTP/1.1" 200 3227 "-" "ReactorNetty/1.1.10" 3 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,658] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_cdc_signal_heartbeat/topics HTTP/1.1" 200 150 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,659] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/SI_source_trans_requests/status HTTP/1.1" 200 182 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,659] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_cdc_signal_heartbeat/tasks HTTP/1.1" 200 3366 "-" "ReactorNetty/1.1.10" 3 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,659] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_request1s/tasks/0/status HTTP/1.1" 200 57 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,660] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_request3s/status HTTP/1.1" 200 183 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,660] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/SI_source_trans_request1s/config HTTP/1.1" 200 3046 "-" "ReactorNetty/1.1.10" 4 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,660] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_request1s/status HTTP/1.1" 200 183 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,661] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_request3s/tasks/0/status HTTP/1.1" 200 57 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,661] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_sumd_card_funds/tasks/0/status HTTP/1.1" 200 57 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,662] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/SI_source_trans_requests/tasks/0/status HTTP/1.1" 200 57 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,662] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_sumd_card_funds/status HTTP/1.1" 200 183 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,662] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/SI_source_trans_request1s HTTP/1.1" 200 3169 "-" "ReactorNetty/1.1.10" 6 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,663] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_requests/status HTTP/1.1" 200 182 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,663] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/SI_source_trans_request1s/tasks/0/status HTTP/1.1" 200 57 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,663] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_cdc_signal_heartbeat/status HTTP/1.1" 200 187 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,664] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_20_trans_requests/tasks/0/status HTTP/1.1" 200 57 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,665] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/source_cdc_signal_heartbeat/tasks/0/status HTTP/1.1" 200 58 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,666] INFO 10.11.57.201 - - [07/Oct/2025:06:38:23 +0000] "GET /connectors/SI_source_trans_request1s/status HTTP/1.1" 200 183 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:23,812] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:23,812] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:23,812] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:23,812] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:23,812] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:23,812] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:23,812] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:23,812] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:23,812] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:23,812] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:23,812] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:23,812] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:23,813] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:23,813] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260)
[2025-10-07 11:38:23,813] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:24,313] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:24,313] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:24,313] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:24,313] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:24,313] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:24,313] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:24,314] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:24,314] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260)
[2025-10-07 11:38:24,314] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:24,814] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:24,814] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:24,814] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:24,814] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:24,814] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:24,814] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:24,814] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:24,814] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:24,814] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:24,814] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:24,814] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:24,814] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:24,815] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:24,815] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260)
[2025-10-07 11:38:24,815] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:24,878] INFO 10.11.57.201 - - [07/Oct/2025:06:38:24 +0000] "GET /connectors/source_cdc_signal_heartbeat HTTP/1.1" 200 3354 "-" "ReactorNetty/1.1.10" 3 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:24,878] INFO 10.11.57.201 - - [07/Oct/2025:06:38:24 +0000] "GET /connectors/source_cdc_signal_heartbeat/tasks HTTP/1.1" 200 3366 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:24,879] INFO 10.11.57.201 - - [07/Oct/2025:06:38:24 +0000] "GET /connectors/source_cdc_signal_heartbeat/status HTTP/1.1" 200 187 "-" "ReactorNetty/1.1.10" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:24,880] INFO 10.11.57.201 - - [07/Oct/2025:06:38:24 +0000] "GET /connectors/source_cdc_signal_heartbeat/tasks/0/status HTTP/1.1" 200 58 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:25,315] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:25,315] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:25,315] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:25,315] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:25,315] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:25,315] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:25,315] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:25,315] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:25,316] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:25,316] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260)
[2025-10-07 11:38:25,316] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:25,816] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:25,816] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:25,816] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:25,816] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:25,816] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:25,816] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:25,816] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:25,816] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:25,816] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:25,817] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260)
[2025-10-07 11:38:25,817] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:25,989] INFO 10.11.57.201 - - [07/Oct/2025:06:38:25 +0000] "GET /connectors/source_cdc_signal_heartbeat/config HTTP/1.1" 200 3227 "-" "ReactorNetty/1.1.10" 3 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:26,317] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:26,317] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:26,317] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:26,317] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:26,317] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:26,317] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:26,317] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:26,317] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:26,317] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:26,317] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:26,318] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:26,318] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260)
[2025-10-07 11:38:26,318] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:26,818] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:26,818] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:26,818] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:26,818] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:26,818] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:26,818] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:26,818] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:26,819] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260)
[2025-10-07 11:38:26,819] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:27,319] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:27,319] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:27,319] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:27,319] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:27,319] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:27,319] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:27,319] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:27,319] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:27,319] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279)
[2025-10-07 11:38:27,320] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260)
[2025-10-07 11:38:27,320] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271)
[2025-10-07 11:38:27,820] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records...
(io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:27,820] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:27,820] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:27,820] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:27,820] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:27,820] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:27,820] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:27,820] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:27,821] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:27,821] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260) [2025-10-07 11:38:27,821] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:28,321] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... 
(io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:28,321] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:28,321] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:28,321] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:28,321] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:28,321] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:28,321] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:28,321] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:28,321] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:28,321] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:28,322] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:28,322] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... 
(io.debezium.connector.base.ChangeEventQueue:260) [2025-10-07 11:38:28,322] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:28,822] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:28,822] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:28,822] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:28,822] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:28,822] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:28,823] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260) [2025-10-07 11:38:28,823] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:29,323] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:29,323] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:29,323] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... 
(io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:29,323] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:29,323] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:29,323] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:29,323] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:29,324] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260) [2025-10-07 11:38:29,324] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:29,824] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:29,824] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:29,824] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:29,824] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:29,824] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... 
(io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:29,824] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:29,824] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:29,824] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:29,824] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:29,824] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:29,824] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:29,824] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:29,824] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:29,824] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:29,824] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:29,825] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... 
(io.debezium.connector.base.ChangeEventQueue:260) [2025-10-07 11:38:29,825] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:30,325] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:30,325] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:30,325] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:30,325] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:30,325] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:30,325] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:30,325] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:30,325] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:30,325] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:30,325] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... 
(io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:30,325] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:30,326] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260) [2025-10-07 11:38:30,326] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:30,826] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:30,826] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:30,826] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:30,826] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:30,826] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:30,826] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:30,826] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:30,826] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:30,826] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... 
(io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:30,826] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:30,826] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:30,826] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:30,826] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:30,826] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:30,826] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:30,827] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260) [2025-10-07 11:38:30,827] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,327] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,327] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,327] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... 
(io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,327] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,327] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,327] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,327] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,327] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,327] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,327] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,327] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,327] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,327] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,327] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... 
(io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,327] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,328] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260) [2025-10-07 11:38:31,328] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... 
(io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,828] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:31,829] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:31,829] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260) [2025-10-07 11:38:31,829] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:32,329] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... 
(io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:32,329] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:32,329] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:32,329] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:32,329] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:32,329] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:32,329] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:32,329] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:32,329] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:32,329] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:32,330] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:32,330] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... 
(io.debezium.connector.base.ChangeEventQueue:260) [2025-10-07 11:38:32,330] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:32,830] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:32,830] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:32,830] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:32,830] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:32,830] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:32,830] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:32,830] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:32,830] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:32,830] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:32,830] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... 
(io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:32,830] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:32,830] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:32,830] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:32,831] DEBUG [source_cdc_signal_heartbeat|task-0] polling records... (io.debezium.connector.base.ChangeEventQueue:260) [2025-10-07 11:38:32,831] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:33,331] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:33,331] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:33,331] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:33,331] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:33,331] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:33,331] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:33,331] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... 
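The repeated `ChangeEventQueue` DEBUG records above are Debezium's idle polling loop (check for records, find none, sleep, repeat). When triaging a log like this, it helps to collapse that noise into counts per distinct message. A minimal sketch of such a summarizer follows; the `LOG_ENTRY` pattern and `summarize` helper are my own illustration, not part of Debezium or Kafka Connect:

```python
import re
from collections import Counter

# Matches one Kafka Connect / Debezium log record of the form:
#   [2025-10-07 11:38:33,331] DEBUG [task-tag] message text (logger.Name:123)
LOG_ENTRY = re.compile(
    r"\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}\]\s+"  # timestamp
    r"(?P<level>[A-Z]+)\s+"                              # log level
    r"(?:\[(?P<task>[^\]]*)\]\s+)?"                      # optional connector|task tag
    r"(?P<message>.*?)\s+"                               # message text (non-greedy)
    r"\((?P<logger>[^)]*)\)"                             # trailing (logger:line)
)

def summarize(log_text: str) -> Counter:
    """Count (level, message) pairs across all records in the text,
    so repeated idle-polling records collapse into a few counters."""
    counts: Counter = Counter()
    for m in LOG_ENTRY.finditer(log_text):
        counts[(m.group("level"), m.group("message"))] += 1
    return counts
```

Running `summarize` over the section above would show the three polling messages dominating the counts, making the handful of INFO records that follow easy to spot.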
[2025-10-07 11:38:40,392] INFO Loading the custom source info struct maker plugin: io.debezium.connector.informix.InformixSourceInfoStructMaker (io.debezium.config.CommonConnectorConfig:1929)
[2025-10-07 11:38:40,414] DEBUG Connected to jdbc:informix-sqli://10.11.56.182:9260/cards_1952:user=kafka;password=Lahore@556677 with {server.name=inst_kafka_net_41, connection.retries=5, connection.retry.interval.ms=1000} (io.debezium.jdbc.JdbcConnection:256)
[2025-10-07 11:38:40,418] INFO Successfully tested connection for jdbc:informix-sqli://10.11.56.182:9260/cards_1952:user=kafka;password=Lahore@556677 with user 'kafka' (io.debezium.connector.informix.InformixConnector:82)
[2025-10-07 11:38:40,419] INFO Requested thread factory for component JdbcConnection, id = JdbcConnection named = jdbc-connection-close (io.debezium.util.Threads:273)
[2025-10-07 11:38:40,419] INFO Creating thread debezium-jdbcconnection-JdbcConnection-jdbc-connection-close (io.debezium.util.Threads:290)
[2025-10-07 11:38:40,420] INFO Connection gracefully closed (io.debezium.jdbc.JdbcConnection:988)
[2025-10-07 11:38:40,423] INFO AbstractConfig values: (org.apache.kafka.common.config.AbstractConfig:372)
[2025-10-07 11:38:40,432] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Connector source_cdc_signal_heartbeat config updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2425)
[2025-10-07 11:38:40,434] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Handling connector-only config update by restarting connector source_cdc_signal_heartbeat (org.apache.kafka.connect.runtime.distributed.DistributedHerder:716)
[2025-10-07 11:38:40,435] INFO [source_cdc_signal_heartbeat|worker] Stopping connector source_cdc_signal_heartbeat (org.apache.kafka.connect.runtime.Worker:451)
[2025-10-07 11:38:40,435] INFO [source_cdc_signal_heartbeat|worker] Scheduled shutdown for WorkerConnector{id=source_cdc_signal_heartbeat} (org.apache.kafka.connect.runtime.WorkerConnector:294)
[2025-10-07 11:38:40,435] INFO [source_cdc_signal_heartbeat|worker] Completed shutdown for WorkerConnector{id=source_cdc_signal_heartbeat} (org.apache.kafka.connect.runtime.WorkerConnector:314)
[2025-10-07 11:38:40,436] INFO 10.11.57.201 - - [07/Oct/2025:06:38:40 +0000] "PUT /connectors/source_cdc_signal_heartbeat/config HTTP/1.1" 200 3341 "-" "ReactorNetty/1.1.10" 67 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:40,436] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Starting connector source_cdc_signal_heartbeat (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2077)
[2025-10-07 11:38:40,437] INFO [source_cdc_signal_heartbeat|worker] Creating connector source_cdc_signal_heartbeat of type io.debezium.connector.informix.InformixConnector (org.apache.kafka.connect.runtime.Worker:312)
[2025-10-07 11:38:40,437] INFO [source_cdc_signal_heartbeat|worker] SourceConnectorConfig values:
	config.action.reload = restart
	connector.class = io.debezium.connector.informix.InformixConnector
	errors.log.enable = true
	errors.log.include.messages = true
	errors.retry.delay.max.ms = 60000
	errors.retry.timeout = 0
	errors.tolerance = none
	exactly.once.support = requested
	header.converter = null
	key.converter = class io.confluent.connect.avro.AvroConverter
	name = source_cdc_signal_heartbeat
	offsets.storage.topic = null
	predicates = []
	tasks.max = 1
	tasks.max.enforce = true
	topic.creation.groups = []
	transaction.boundary = poll
	transaction.boundary.interval.ms = null
	transforms = [unwrap]
	value.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.SourceConnectorConfig:372)
[2025-10-07 11:38:40,437] INFO [source_cdc_signal_heartbeat|worker] EnrichedConnectorConfig values:
	config.action.reload = restart
	connector.class = io.debezium.connector.informix.InformixConnector
	errors.log.enable = true
	errors.log.include.messages = true
	errors.retry.delay.max.ms = 60000
	errors.retry.timeout = 0
	errors.tolerance = none
	exactly.once.support = requested
	header.converter = null
	key.converter = class io.confluent.connect.avro.AvroConverter
	name = source_cdc_signal_heartbeat
	offsets.storage.topic = null
	predicates = []
	tasks.max = 1
	tasks.max.enforce = true
	topic.creation.groups = []
	transaction.boundary = poll
	transaction.boundary.interval.ms = null
	transforms = [unwrap]
	transforms.unwrap.add.fields = []
	transforms.unwrap.add.fields.prefix = __
	transforms.unwrap.add.headers = []
	transforms.unwrap.add.headers.prefix = __
	transforms.unwrap.delete.tombstone.handling.mode = tombstone
	transforms.unwrap.drop.fields.from.key = false
	transforms.unwrap.drop.fields.header.name = null
	transforms.unwrap.drop.fields.keep.schema.compatible = true
	transforms.unwrap.negate = false
	transforms.unwrap.predicate = null
	transforms.unwrap.route.by.field = 
	transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState
	value.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:372)
[2025-10-07 11:38:40,438] INFO [source_cdc_signal_heartbeat|worker] EnrichedSourceConnectorConfig values:
	config.action.reload = restart
	connector.class = 
io.debezium.connector.informix.InformixConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = source_cdc_signal_heartbeat offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.default.exclude = [] topic.creation.default.include = [.*] topic.creation.default.partitions = 1 topic.creation.default.replication.factor = 3 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [unwrap] value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig:372) [2025-10-07 11:38:40,438] INFO [source_cdc_signal_heartbeat|worker] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = source_cdc_signal_heartbeat offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.default.exclude = [] topic.creation.default.include = [.*] topic.creation.default.partitions = 1 topic.creation.default.replication.factor = 3 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [unwrap] transforms.unwrap.add.fields = [] transforms.unwrap.add.fields.prefix = __ transforms.unwrap.add.headers = [] transforms.unwrap.add.headers.prefix = __ transforms.unwrap.delete.tombstone.handling.mode = tombstone transforms.unwrap.drop.fields.from.key = false 
transforms.unwrap.drop.fields.header.name = null transforms.unwrap.drop.fields.keep.schema.compatible = true transforms.unwrap.negate = false transforms.unwrap.predicate = null transforms.unwrap.route.by.field = transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:372) [2025-10-07 11:38:40,439] INFO [source_cdc_signal_heartbeat|worker] Instantiated connector source_cdc_signal_heartbeat with version 3.2.3.Final of type class io.debezium.connector.informix.InformixConnector (org.apache.kafka.connect.runtime.Worker:334) [2025-10-07 11:38:40,439] INFO [source_cdc_signal_heartbeat|worker] Finished creating connector source_cdc_signal_heartbeat (org.apache.kafka.connect.runtime.Worker:355) [2025-10-07 11:38:40,440] INFO SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = source_cdc_signal_heartbeat offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [unwrap] value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.SourceConnectorConfig:372) [2025-10-07 11:38:40,440] INFO EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null 
key.converter = class io.confluent.connect.avro.AvroConverter name = source_cdc_signal_heartbeat offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [unwrap] transforms.unwrap.add.fields = [] transforms.unwrap.add.fields.prefix = __ transforms.unwrap.add.headers = [] transforms.unwrap.add.headers.prefix = __ transforms.unwrap.delete.tombstone.handling.mode = tombstone transforms.unwrap.drop.fields.from.key = false transforms.unwrap.drop.fields.header.name = null transforms.unwrap.drop.fields.keep.schema.compatible = true transforms.unwrap.negate = false transforms.unwrap.predicate = null transforms.unwrap.route.by.field = transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:372) [2025-10-07 11:38:40,441] INFO EnrichedSourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = source_cdc_signal_heartbeat offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.default.exclude = [] topic.creation.default.include = [.*] topic.creation.default.partitions = 1 topic.creation.default.replication.factor = 3 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [unwrap] value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig:372) [2025-10-07 11:38:40,441] INFO 
EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = source_cdc_signal_heartbeat offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.default.exclude = [] topic.creation.default.include = [.*] topic.creation.default.partitions = 1 topic.creation.default.replication.factor = 3 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [unwrap] transforms.unwrap.add.fields = [] transforms.unwrap.add.fields.prefix = __ transforms.unwrap.add.headers = [] transforms.unwrap.add.headers.prefix = __ transforms.unwrap.delete.tombstone.handling.mode = tombstone transforms.unwrap.drop.fields.from.key = false transforms.unwrap.drop.fields.header.name = null transforms.unwrap.drop.fields.keep.schema.compatible = true transforms.unwrap.negate = false transforms.unwrap.predicate = null transforms.unwrap.route.by.field = transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:372) [2025-10-07 11:38:40,451] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Tasks [source_cdc_signal_heartbeat-0] configs updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2440) [2025-10-07 11:38:40,453] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Handling task config update by stopping tasks [source_cdc_signal_heartbeat-0], which will be restarted after rebalance if still assigned to this worker 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder:784) [2025-10-07 11:38:40,454] INFO [source_cdc_signal_heartbeat|task-0] Stopping task source_cdc_signal_heartbeat-0 (org.apache.kafka.connect.runtime.Worker:1047) [2025-10-07 11:38:40,846] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:40,846] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:40,846] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:40,846] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:40,846] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:40,846] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:40,846] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:40,846] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:40,846] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:40,846] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... 
(io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:40,846] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:40,846] DEBUG [source_cdc_signal_heartbeat|task-0] no records available or batch size not reached yet, sleeping a bit... (io.debezium.connector.base.ChangeEventQueue:271) [2025-10-07 11:38:40,847] DEBUG [source_cdc_signal_heartbeat|task-0] checking for more records... (io.debezium.connector.base.ChangeEventQueue:279) [2025-10-07 11:38:40,847] INFO [source_cdc_signal_heartbeat|task-0] Stopping down connector (io.debezium.connector.common.BaseSourceTask:476) [2025-10-07 11:38:44,848] WARN [source_cdc_signal_heartbeat|task-0] Coordinator didn't stop in the expected time, shutting down executor now (io.debezium.pipeline.ChangeEventSourceCoordinator:379) [2025-10-07 11:38:45,455] ERROR [source_cdc_signal_heartbeat|task-0] Graceful stop of task source_cdc_signal_heartbeat-0 failed. (org.apache.kafka.connect.runtime.Worker:1074) [2025-10-07 11:38:45,457] INFO [source_cdc_signal_heartbeat|task-0] [Producer clientId=connector-producer-source_cdc_signal_heartbeat-0] Closing the Kafka producer with timeoutMillis = 0 ms. (org.apache.kafka.clients.producer.KafkaProducer:1373) [2025-10-07 11:38:45,457] INFO [source_cdc_signal_heartbeat|task-0] [Producer clientId=connector-producer-source_cdc_signal_heartbeat-0] Proceeding to force close the producer since pending requests could not be completed within timeout 0 ms. 
(org.apache.kafka.clients.producer.KafkaProducer:1407) [2025-10-07 11:38:45,458] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Rebalance started (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:242) [2025-10-07 11:38:45,458] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] (Re-)joining group (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:604) [2025-10-07 11:38:45,459] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Successfully joined group with generation Generation{generationId=87, memberId='connect-10.11.57.201:8083-e6f6bb3f-e5ea-4f85-825a-18c0417ba713', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:665) [2025-10-07 11:38:45,463] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684) [2025-10-07 11:38:45,463] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688) [2025-10-07 11:38:45,463] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688) [2025-10-07 11:38:45,463] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694) [2025-10-07 11:38:45,463] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.producer for connector-producer-source_cdc_signal_heartbeat-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:88) [2025-10-07 11:38:45,464] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Successfully synced group in generation Generation{generationId=87, memberId='connect-10.11.57.201:8083-e6f6bb3f-e5ea-4f85-825a-18c0417ba713', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:842) [2025-10-07 
11:38:45,464] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Joined group at generation 87 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-10.11.57.201:8083-e6f6bb3f-e5ea-4f85-825a-18c0417ba713', leaderUrl='http://10.11.57.201:8083/', offset=2486, connectorIds=[SI_source_trans_requests, source_20_trans_request1s, source_20_trans_request3s, source_20_sumd_card_funds, SI_source_trans_request1s, source_20_trans_requests, source_cdc_signal_heartbeat], taskIds=[SI_source_trans_requests-0, source_20_trans_request1s-0, source_20_trans_request3s-0, source_20_sumd_card_funds-0, SI_source_trans_request1s-0, source_20_trans_requests-0, source_cdc_signal_heartbeat-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2621) [2025-10-07 11:38:45,465] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Starting connectors and tasks using config offset 2486 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1959) [2025-10-07 11:38:45,465] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Starting task source_cdc_signal_heartbeat-0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2002) [2025-10-07 11:38:45,465] INFO [source_cdc_signal_heartbeat|task-0] Creating task source_cdc_signal_heartbeat-0 (org.apache.kafka.connect.runtime.Worker:645) [2025-10-07 11:38:45,466] INFO [source_cdc_signal_heartbeat|task-0] ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = source_cdc_signal_heartbeat predicates = [] tasks.max = 1 tasks.max.enforce = true transforms 
= [unwrap] value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.ConnectorConfig:372) [2025-10-07 11:38:45,466] INFO [source_cdc_signal_heartbeat|task-0] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = source_cdc_signal_heartbeat predicates = [] tasks.max = 1 tasks.max.enforce = true transforms = [unwrap] transforms.unwrap.add.fields = [] transforms.unwrap.add.fields.prefix = __ transforms.unwrap.add.headers = [] transforms.unwrap.add.headers.prefix = __ transforms.unwrap.delete.tombstone.handling.mode = tombstone transforms.unwrap.drop.fields.from.key = false transforms.unwrap.drop.fields.header.name = null transforms.unwrap.drop.fields.keep.schema.compatible = true transforms.unwrap.negate = false transforms.unwrap.predicate = null transforms.unwrap.route.by.field = transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:372) [2025-10-07 11:38:45,467] INFO [source_cdc_signal_heartbeat|task-0] TaskConfig values: task.class = class io.debezium.connector.informix.InformixConnectorTask (org.apache.kafka.connect.runtime.TaskConfig:372) [2025-10-07 11:38:45,468] INFO [source_cdc_signal_heartbeat|task-0] Instantiated task source_cdc_signal_heartbeat-0 with version 3.2.3.Final of type io.debezium.connector.informix.InformixConnectorTask (org.apache.kafka.connect.runtime.Worker:664) [2025-10-07 11:38:45,468] INFO [source_cdc_signal_heartbeat|task-0] AvroConverterConfig values: auto.register.schemas = true basic.auth.credentials.source = URL basic.auth.user.info 
= [hidden] bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://10.11.57.201:8081] use.latest.version = false use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy (io.confluent.connect.avro.AvroConverterConfig:372) [2025-10-07 11:38:45,468] INFO [source_cdc_signal_heartbeat|task-0] KafkaAvroSerializerConfig values: auto.register.schemas = true avro.reflection.allow.null = false avro.remove.java.properties = false avro.use.logical.type.converters = false basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN 
bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://10.11.57.201:8081] use.latest.version = false use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy (io.confluent.kafka.serializers.KafkaAvroSerializerConfig:372) [2025-10-07 11:38:45,469] INFO [source_cdc_signal_heartbeat|task-0] KafkaAvroDeserializerConfig values: auto.register.schemas = true avro.reflection.allow.null = false avro.use.logical.type.converters = false basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.token = [hidden] context.name.strategy = class 
io.confluent.kafka.serializers.context.NullContextNameStrategy id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://10.11.57.201:8081] specific.avro.reader = false use.latest.version = false use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy (io.confluent.kafka.serializers.KafkaAvroDeserializerConfig:372) [2025-10-07 11:38:45,469] INFO [source_cdc_signal_heartbeat|task-0] AvroDataConfig values: connect.meta.data = true discard.type.doc.default = false enhanced.avro.schema.support = false schemas.cache.config = 1000 scrub.invalid.names = false (io.confluent.connect.avro.AvroDataConfig:372) [2025-10-07 11:38:45,469] INFO [source_cdc_signal_heartbeat|task-0] AvroConverterConfig values: auto.register.schemas = true basic.auth.credentials.source 
= URL basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://10.11.57.201:8081] use.latest.version = false use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy (io.confluent.connect.avro.AvroConverterConfig:372) [2025-10-07 11:38:45,469] INFO [source_cdc_signal_heartbeat|task-0] KafkaAvroSerializerConfig values: auto.register.schemas = true avro.reflection.allow.null = false avro.remove.java.properties = false avro.use.logical.type.converters = false basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.credentials.source = 
STATIC_TOKEN bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://10.11.57.201:8081] use.latest.version = false use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy (io.confluent.kafka.serializers.KafkaAvroSerializerConfig:372) [2025-10-07 11:38:45,470] INFO [source_cdc_signal_heartbeat|task-0] KafkaAvroDeserializerConfig values: auto.register.schemas = true avro.reflection.allow.null = false avro.use.logical.type.converters = false basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.token = [hidden] context.name.strategy = class 
io.confluent.kafka.serializers.context.NullContextNameStrategy id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://10.11.57.201:8081] specific.avro.reader = false use.latest.version = false use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy (io.confluent.kafka.serializers.KafkaAvroDeserializerConfig:372) [2025-10-07 11:38:45,470] INFO [source_cdc_signal_heartbeat|task-0] AvroDataConfig values: connect.meta.data = true discard.type.doc.default = false enhanced.avro.schema.support = false schemas.cache.config = 1000 scrub.invalid.names = false (io.confluent.connect.avro.AvroDataConfig:372) [2025-10-07 11:38:45,470] INFO [source_cdc_signal_heartbeat|task-0] Set up the key converter class io.confluent.connect.avro.AvroConverter for task 
source_cdc_signal_heartbeat-0 using the connector config (org.apache.kafka.connect.runtime.Worker:679) [2025-10-07 11:38:45,470] INFO [source_cdc_signal_heartbeat|task-0] Set up the value converter class io.confluent.connect.avro.AvroConverter for task source_cdc_signal_heartbeat-0 using the connector config (org.apache.kafka.connect.runtime.Worker:685) [2025-10-07 11:38:45,470] INFO [source_cdc_signal_heartbeat|task-0] Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task source_cdc_signal_heartbeat-0 using the worker config (org.apache.kafka.connect.runtime.Worker:690) [2025-10-07 11:38:45,471] INFO [source_cdc_signal_heartbeat|task-0] Initializing: org.apache.kafka.connect.runtime.TransformationChain{io.debezium.transforms.ExtractNewRecordState} (org.apache.kafka.connect.runtime.Worker:1794) [2025-10-07 11:38:45,471] INFO [source_cdc_signal_heartbeat|task-0] SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = source_cdc_signal_heartbeat offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [unwrap] value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.SourceConnectorConfig:372) [2025-10-07 11:38:45,472] INFO [source_cdc_signal_heartbeat|task-0] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 
	errors.tolerance = none
	exactly.once.support = requested
	header.converter = null
	key.converter = class io.confluent.connect.avro.AvroConverter
	name = source_cdc_signal_heartbeat
	offsets.storage.topic = null
	predicates = []
	tasks.max = 1
	tasks.max.enforce = true
	topic.creation.groups = []
	transaction.boundary = poll
	transaction.boundary.interval.ms = null
	transforms = [unwrap]
	transforms.unwrap.add.fields = []
	transforms.unwrap.add.fields.prefix = __
	transforms.unwrap.add.headers = []
	transforms.unwrap.add.headers.prefix = __
	transforms.unwrap.delete.tombstone.handling.mode = tombstone
	transforms.unwrap.drop.fields.from.key = false
	transforms.unwrap.drop.fields.header.name = null
	transforms.unwrap.drop.fields.keep.schema.compatible = true
	transforms.unwrap.negate = false
	transforms.unwrap.predicate = null
	transforms.unwrap.route.by.field = 
	transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState
	value.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:372)
[2025-10-07 11:38:45,472] INFO [source_cdc_signal_heartbeat|task-0] EnrichedSourceConnectorConfig values:
	config.action.reload = restart
	connector.class = io.debezium.connector.informix.InformixConnector
	errors.log.enable = true
	errors.log.include.messages = true
	errors.retry.delay.max.ms = 60000
	errors.retry.timeout = 0
	errors.tolerance = none
	exactly.once.support = requested
	header.converter = null
	key.converter = class io.confluent.connect.avro.AvroConverter
	name = source_cdc_signal_heartbeat
	offsets.storage.topic = null
	predicates = []
	tasks.max = 1
	tasks.max.enforce = true
	topic.creation.default.exclude = []
	topic.creation.default.include = [.*]
	topic.creation.default.partitions = 1
	topic.creation.default.replication.factor = 3
	topic.creation.groups = []
	transaction.boundary = poll
	transaction.boundary.interval.ms = null
	transforms = [unwrap]
	value.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig:372)
[2025-10-07 11:38:45,473] INFO [source_cdc_signal_heartbeat|task-0] EnrichedConnectorConfig values:
	config.action.reload = restart
	connector.class = io.debezium.connector.informix.InformixConnector
	errors.log.enable = true
	errors.log.include.messages = true
	errors.retry.delay.max.ms = 60000
	errors.retry.timeout = 0
	errors.tolerance = none
	exactly.once.support = requested
	header.converter = null
	key.converter = class io.confluent.connect.avro.AvroConverter
	name = source_cdc_signal_heartbeat
	offsets.storage.topic = null
	predicates = []
	tasks.max = 1
	tasks.max.enforce = true
	topic.creation.default.exclude = []
	topic.creation.default.include = [.*]
	topic.creation.default.partitions = 1
	topic.creation.default.replication.factor = 3
	topic.creation.groups = []
	transaction.boundary = poll
	transaction.boundary.interval.ms = null
	transforms = [unwrap]
	transforms.unwrap.add.fields = []
	transforms.unwrap.add.fields.prefix = __
	transforms.unwrap.add.headers = []
	transforms.unwrap.add.headers.prefix = __
	transforms.unwrap.delete.tombstone.handling.mode = tombstone
	transforms.unwrap.drop.fields.from.key = false
	transforms.unwrap.drop.fields.header.name = null
	transforms.unwrap.drop.fields.keep.schema.compatible = true
	transforms.unwrap.negate = false
	transforms.unwrap.predicate = null
	transforms.unwrap.route.by.field = 
	transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState
	value.converter = class io.confluent.connect.avro.AvroConverter
 (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:372)
[2025-10-07 11:38:45,473] INFO [source_cdc_signal_heartbeat|task-0] ProducerConfig values:
	acks = -1
	auto.include.jmx.reporter = true
	batch.size = 16384
	bootstrap.servers = [10.11.57.201:9092, 10.11.57.202:9092, 10.11.57.203:9092]
	buffer.memory = 33554432
	client.dns.lookup = use_all_dns_ips
	client.id = connector-producer-source_cdc_signal_heartbeat-0
	compression.gzip.level = -1
	compression.lz4.level = 9
	compression.type = none
	connections.max.idle.ms = 540000
	delivery.timeout.ms = 2147483647
	enable.idempotence = false
	enable.metrics.push = true
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
	linger.ms = 0
	max.block.ms = 9223372036854775807
	max.in.flight.requests.per.connection = 1
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metadata.max.idle.ms = 300000
	metadata.recovery.strategy = none
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.adaptive.partitioning.enable = true
	partitioner.availability.timeout.ms = 0
	partitioner.class = null
	partitioner.ignore.keys = false
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 2147483647
	retry.backoff.max.ms = 1000
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.connect.timeout.ms = null
	sasl.login.read.timeout.ms = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.login.retry.backoff.max.ms = 10000
	sasl.login.retry.backoff.ms = 100
	sasl.mechanism = GSSAPI
	sasl.oauthbearer.clock.skew.seconds = 30
	sasl.oauthbearer.expected.audience = null
	sasl.oauthbearer.expected.issuer = null
	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
	sasl.oauthbearer.jwks.endpoint.url = null
	sasl.oauthbearer.scope.claim.name = scope
	sasl.oauthbearer.sub.claim.name = sub
	sasl.oauthbearer.token.endpoint.url = null
	security.protocol = PLAINTEXT
	security.providers = null
	send.buffer.bytes = 131072
	socket.connection.setup.timeout.max.ms = 30000
	socket.connection.setup.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
	ssl.endpoint.identification.algorithm = https
	ssl.engine.factory.class = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.certificate.chain = null
	ssl.keystore.key = null
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLSv1.3
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.certificates = null
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig:372)
[2025-10-07 11:38:45,473] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269)
[2025-10-07 11:38:45,476] INFO [source_cdc_signal_heartbeat|task-0] These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet.
 (org.apache.kafka.clients.producer.ProducerConfig:381)
[2025-10-07 11:38:45,476] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124)
[2025-10-07 11:38:45,476] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-10-07 11:38:45,476] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759819125476 (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-10-07 11:38:45,477] INFO [source_cdc_signal_heartbeat|task-0] AdminClientConfig values:
	auto.include.jmx.reporter = true
	bootstrap.controllers = []
	bootstrap.servers = [10.11.57.201:9092, 10.11.57.202:9092, 10.11.57.203:9092]
	client.dns.lookup = use_all_dns_ips
	client.id = connector-adminclient-source_cdc_signal_heartbeat-0
	connections.max.idle.ms = 300000
	default.api.timeout.ms = 60000
	enable.metrics.push = true
	metadata.max.age.ms = 300000
	metadata.recovery.strategy = none
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 2147483647
	retry.backoff.max.ms = 1000
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.connect.timeout.ms = null
	sasl.login.read.timeout.ms = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.login.retry.backoff.max.ms = 10000
	sasl.login.retry.backoff.ms = 100
	sasl.mechanism = GSSAPI
	sasl.oauthbearer.clock.skew.seconds = 30
	sasl.oauthbearer.expected.audience = null
	sasl.oauthbearer.expected.issuer = null
	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
	sasl.oauthbearer.jwks.endpoint.url = null
	sasl.oauthbearer.scope.claim.name = scope
	sasl.oauthbearer.sub.claim.name = sub
	sasl.oauthbearer.token.endpoint.url = null
	security.protocol = PLAINTEXT
	security.providers = null
	send.buffer.bytes = 131072
	socket.connection.setup.timeout.max.ms = 30000
	socket.connection.setup.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
	ssl.endpoint.identification.algorithm = https
	ssl.engine.factory.class = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.certificate.chain = null
	ssl.keystore.key = null
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLSv1.3
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.certificates = null
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
 (org.apache.kafka.clients.admin.AdminClientConfig:372)
[2025-10-07 11:38:45,478] INFO [source_cdc_signal_heartbeat|task-0] These configurations '[config.storage.topic, metrics.context.connect.group.id, group.id, status.storage.topic, plugin.path, config.storage.replication.factor, offset.flush.interval.ms, key.converter.schemas.enable, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, value.converter.schemas.enable, log.cleanup.policy, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet.
 (org.apache.kafka.clients.admin.AdminClientConfig:381)
[2025-10-07 11:38:45,478] INFO [source_cdc_signal_heartbeat|task-0] The mbean of App info: [kafka.admin.client], id: [connector-adminclient-source_cdc_signal_heartbeat-0] already exists, so skipping a new mbean creation. (org.apache.kafka.common.utils.AppInfoParser:65)
[2025-10-07 11:38:45,478] INFO [source_cdc_signal_heartbeat|task-0] [Producer clientId=connector-producer-source_cdc_signal_heartbeat-0] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364)
[2025-10-07 11:38:45,480] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1988)
[2025-10-07 11:38:45,480] DEBUG [source_cdc_signal_heartbeat|task-0] Setting task state to 'INITIAL', previous state was 'INITIAL' (io.debezium.connector.common.BaseSourceTask:596)
[2025-10-07 11:38:45,480] DEBUG [source_cdc_signal_heartbeat|task-0] Calling init for connector informix and config {connector.class=io.debezium.connector.informix.InformixConnector, errors.log.include.messages=true, topic.creation.default.partitions=1, value.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, key.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, transforms=unwrap, errors.deadletterqueue.context.headers.enable=true, heartbeat.action.query=UPDATE cdc_signal_heartbeat SET ts = CURRENT, transforms.unwrap.drop.tombstones=false, topic.creation.default.replication.factor=3, errors.deadletterqueue.topic.replication.factor=3, transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState, errors.log.enable=true, key.converter=io.confluent.connect.avro.AvroConverter, database.dbname=cards_1952, topic.creation.default.compression.type=lz4, database.user=kafka, column.skip.list=cards_1952.mcp.ach_accounts.ivr_ach_act_nick,cards_1952.mcp.alert_executed.alert_msg,cards_1952.mcp.alert_executed.alert_template,cards_1952.mcp.alert_executed.alert_data,cards_1952.mcp.alert_executed.description,cards_1952.mcp.campaign_insts.ivr_message,cards_1952.mcp.campaign_insts.push_message,cards_1952.mcp.maa_sent_msg_log.message,cards_1952.mcp.merchants.merchant_image,cards_1952.mcp.stake_holders.stake_holder_logo,cards_1952.mcp.stake_holders.stake_holder_thumbnail,cards_1952.mcp.file_store_binary.file_data,cards_1952.mcp.push_notify_comm_logs.req_payload,cards_1952.mcp.web_activity_log.request_parameters, heartbeat.interval.ms=1800000, schema.history.internal.kafka.bootstrap.servers=10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092, value.converter.schema.registry.url=http://10.11.57.201:8081, schema.history.internal.kafka.topic.replication.factor=3, errors.max.retries=0, errors.deadletterqueue.topic.name=informix-gpdb-source-errors, database.password=Lahore@556677, name=source_cdc_signal_heartbeat, errors.tolerance=none, skipped.operations=d, pk.mode=kafka, snapshot.mode=schema_only, max.queue.size=100000, tasks.max=1, retriable.restart.connector.wait.ms=60000, database.connection.retry.interval.ms=1000, schema.history.internal.store.only.captured.databases.ddl=true, schema.history.internal.store.only.captured.tables.ddl=true, tombstones.on.delete=true, topic.prefix=inst_kafka_net_41, decimal.handling.mode=double, schema.history.internal.kafka.topic=cards_1952_schema-history-trans_requests, connection.pool.max.size=50, value.converter=io.confluent.connect.avro.AvroConverter, openlineage.integration.enabled=false, topic.creation.default.cleanup.policy=compact, time.precision.mode=connect, database.server.name=inst_kafka_net_41, snapshot.isolation.mode=read_committed, topic.creation.default.retention.ms=604800000, database.port=9260, schema.history.internal.kafka.recovery.poll.interval.ms=30000, offset.flush.interval.ms=10000, task.class=io.debezium.connector.informix.InformixConnectorTask, database.hostname=10.11.56.182, database.connection.retries=5, table.include.list=cards_1952.mcp.cdc_signal_heartbeat, key.converter.schema.registry.url=http://10.11.57.201:8081} (io.debezium.openlineage.DebeziumOpenLineageEmitter:58)
[2025-10-07 11:38:45,480] DEBUG [source_cdc_signal_heartbeat|task-0] Emitter instance for connector informix: io.debezium.openlineage.emitter.NoOpLineageEmitter@7bf19a70 (io.debezium.openlineage.DebeziumOpenLineageEmitter:80)
[2025-10-07 11:38:45,480] DEBUG [source_cdc_signal_heartbeat|task-0] Emitting lineage event for INITIAL (io.debezium.openlineage.emitter.NoOpLineageEmitter:38)
[2025-10-07 11:38:45,481] INFO [source_cdc_signal_heartbeat|task-0] Starting InformixConnectorTask with configuration:
	connector.class = io.debezium.connector.informix.InformixConnector
	errors.log.include.messages = true
	topic.creation.default.partitions = 1
	value.converter.schema.registry.subject.name.strategy = io.confluent.kafka.serializers.subject.TopicNameStrategy
	key.converter.schema.registry.subject.name.strategy = io.confluent.kafka.serializers.subject.TopicNameStrategy
	transforms = unwrap
	errors.deadletterqueue.context.headers.enable = true
	heartbeat.action.query = UPDATE cdc_signal_heartbeat SET ts = CURRENT
	transforms.unwrap.drop.tombstones = false
	topic.creation.default.replication.factor = 3
	errors.deadletterqueue.topic.replication.factor = 3
	transforms.unwrap.type = io.debezium.transforms.ExtractNewRecordState
	errors.log.enable = true
	key.converter = io.confluent.connect.avro.AvroConverter
	database.dbname = cards_1952
	topic.creation.default.compression.type = lz4
	database.user = kafka
	column.skip.list = cards_1952.mcp.ach_accounts.ivr_ach_act_nick,cards_1952.mcp.alert_executed.alert_msg,cards_1952.mcp.alert_executed.alert_template,cards_1952.mcp.alert_executed.alert_data,cards_1952.mcp.alert_executed.description,cards_1952.mcp.campaign_insts.ivr_message,cards_1952.mcp.campaign_insts.push_message,cards_1952.mcp.maa_sent_msg_log.message,cards_1952.mcp.merchants.merchant_image,cards_1952.mcp.stake_holders.stake_holder_logo,cards_1952.mcp.stake_holders.stake_holder_thumbnail,cards_1952.mcp.file_store_binary.file_data,cards_1952.mcp.push_notify_comm_logs.req_payload,cards_1952.mcp.web_activity_log.request_parameters
	heartbeat.interval.ms = 1800000
	schema.history.internal.kafka.bootstrap.servers = 10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092
	value.converter.schema.registry.url = http://10.11.57.201:8081
	schema.history.internal.kafka.topic.replication.factor = 3
	errors.max.retries = 0
	errors.deadletterqueue.topic.name = informix-gpdb-source-errors
	database.password = ********
	name = source_cdc_signal_heartbeat
	errors.tolerance = none
	skipped.operations = d
	pk.mode = kafka
	snapshot.mode = schema_only
	max.queue.size = 100000
	tasks.max = 1
	retriable.restart.connector.wait.ms = 60000
	database.connection.retry.interval.ms = 1000
	schema.history.internal.store.only.captured.databases.ddl = true
	schema.history.internal.store.only.captured.tables.ddl = true
	tombstones.on.delete = true
	topic.prefix = inst_kafka_net_41
	decimal.handling.mode = double
	schema.history.internal.kafka.topic = cards_1952_schema-history-trans_requests
	connection.pool.max.size = 50
	value.converter = io.confluent.connect.avro.AvroConverter
	openlineage.integration.enabled = false
	topic.creation.default.cleanup.policy = compact
	time.precision.mode = connect
	database.server.name = inst_kafka_net_41
	snapshot.isolation.mode = read_committed
	topic.creation.default.retention.ms = 604800000
	database.port = 9260
	schema.history.internal.kafka.recovery.poll.interval.ms = 30000
	offset.flush.interval.ms = 10000
	task.class = io.debezium.connector.informix.InformixConnectorTask
	database.hostname = 10.11.56.182
	database.connection.retries = 5
	table.include.list = cards_1952.mcp.cdc_signal_heartbeat
	key.converter.schema.registry.url = http://10.11.57.201:8081
 (io.debezium.connector.common.BaseSourceTask:257)
[2025-10-07 11:38:45,481] INFO [source_cdc_signal_heartbeat|task-0] Loading the custom source info struct maker plugin: io.debezium.connector.informix.InformixSourceInfoStructMaker (io.debezium.config.CommonConnectorConfig:1929)
[2025-10-07 11:38:45,482] INFO [source_cdc_signal_heartbeat|task-0] Loading the custom topic naming strategy plugin: io.debezium.schema.SchemaTopicNamingStrategy (io.debezium.config.CommonConnectorConfig:1617)
[2025-10-07 11:38:45,483] INFO 10.11.57.201 - - [07/Oct/2025:06:38:40 +0000] "GET /connectors/source_cdc_signal_heartbeat HTTP/1.1" 200 3341 "-" "ReactorNetty/1.1.10" 5039 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:45,483] INFO 10.11.57.201 - - [07/Oct/2025:06:38:40 +0000] "GET /connectors/source_cdc_signal_heartbeat/tasks HTTP/1.1" 200 3353 "-" "ReactorNetty/1.1.10" 5038 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:45,483] INFO 10.11.57.201 - - [07/Oct/2025:06:38:40 +0000] "GET /connectors/source_cdc_signal_heartbeat/config HTTP/1.1" 200 3214 "-" "ReactorNetty/1.1.10" 5028 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:45,483] INFO [source_cdc_signal_heartbeat|task-0] KafkaSchemaHistory Consumer config: {key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, enable.auto.commit=false, group.id=inst_kafka_net_41-schemahistory, bootstrap.servers=10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092, fetch.min.bytes=1, session.timeout.ms=10000, auto.offset.reset=earliest, client.id=inst_kafka_net_41-schemahistory}
 (io.debezium.storage.kafka.history.KafkaSchemaHistory:249)
[2025-10-07 11:38:45,484] INFO [source_cdc_signal_heartbeat|task-0] KafkaSchemaHistory Producer config: {enable.idempotence=false, value.serializer=org.apache.kafka.common.serialization.StringSerializer, batch.size=32768, bootstrap.servers=10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092, max.in.flight.requests.per.connection=1, buffer.memory=1048576, key.serializer=org.apache.kafka.common.serialization.StringSerializer, client.id=inst_kafka_net_41-schemahistory} (io.debezium.storage.kafka.history.KafkaSchemaHistory:250)
[2025-10-07 11:38:45,484] INFO [source_cdc_signal_heartbeat|task-0] Requested thread factory for component InformixConnector, id = inst_kafka_net_41 named = db-history-config-check (io.debezium.util.Threads:273)
[2025-10-07 11:38:45,484] WARN [source_cdc_signal_heartbeat|task-0] Unable to register metrics as an old set with the same name: 'debezium.informix_server:type=connector-metrics,context=schema-history,server=inst_kafka_net_41' exists, retrying in PT5S (attempt 1 out of 12) (io.debezium.pipeline.JmxUtils:55)
[2025-10-07 11:38:45,485] INFO 10.11.57.201 - - [07/Oct/2025:06:38:45 +0000] "GET /connectors/source_cdc_signal_heartbeat/tasks/0/status HTTP/1.1" 200 58 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:45,485] INFO 10.11.57.201 - - [07/Oct/2025:06:38:45 +0000] "GET /connectors/source_cdc_signal_heartbeat/status HTTP/1.1" 200 187 "-" "ReactorNetty/1.1.10" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-10-07 11:38:48,849] INFO [source_cdc_signal_heartbeat|task-0] SignalProcessor stopped (io.debezium.pipeline.signal.SignalProcessor:122)
[2025-10-07 11:38:48,850] INFO [source_cdc_signal_heartbeat|task-0] Debezium ServiceRegistry stopped. (io.debezium.service.DefaultServiceRegistry:105)
[2025-10-07 11:38:48,850] INFO [source_cdc_signal_heartbeat|task-0] Requested thread factory for component JdbcConnection, id = JdbcConnection named = jdbc-connection-close (io.debezium.util.Threads:273)
[2025-10-07 11:38:48,850] INFO [source_cdc_signal_heartbeat|task-0] Creating thread debezium-jdbcconnection-JdbcConnection-jdbc-connection-close (io.debezium.util.Threads:290)
[2025-10-07 11:38:48,851] INFO [source_cdc_signal_heartbeat|task-0] Connection gracefully closed (io.debezium.jdbc.JdbcConnection:988)
[2025-10-07 11:38:48,851] INFO [source_cdc_signal_heartbeat|task-0] Requested thread factory for component JdbcConnection, id = JdbcConnection named = jdbc-connection-close (io.debezium.util.Threads:273)
[2025-10-07 11:38:48,851] INFO [source_cdc_signal_heartbeat|task-0] Creating thread debezium-jdbcconnection-JdbcConnection-jdbc-connection-close (io.debezium.util.Threads:290)
[2025-10-07 11:38:50,485] WARN [source_cdc_signal_heartbeat|task-0] Unable to register metrics as an old set with the same name: 'debezium.informix_server:type=connector-metrics,context=schema-history,server=inst_kafka_net_41' exists, retrying in PT5S (attempt 2 out of 12) (io.debezium.pipeline.JmxUtils:55)
[2025-10-07 11:38:54,555] ERROR [source_cdc_signal_heartbeat|task-0] Caught Exception (io.debezium.connector.informix.InformixStreamingChangeEventSource:212)
com.informix.stream.impl.IfxStreamException: Unable to end cdc capture
	at com.informix.stream.cdc.IfxCDCEngine.endCapture(IfxCDCEngine.java:422)
	at com.informix.stream.cdc.IfxCDCEngine.unwatchTable(IfxCDCEngine.java:402)
	at com.informix.stream.cdc.IfxCDCEngine.close(IfxCDCEngine.java:470)
	at io.debezium.connector.informix.InformixCdcTransactionEngine.close(InformixCdcTransactionEngine.java:181)
	at io.debezium.connector.informix.InformixStreamingChangeEventSource.execute(InformixStreamingChangeEventSource.java:205)
	at io.debezium.connector.informix.InformixStreamingChangeEventSource.execute(InformixStreamingChangeEventSource.java:37)
	at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:326)
	at io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:207)
	at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:147)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: java.sql.SQLException: ResultSet not open, operation 'next' not permitted. Verify that autocommit is OFF
	at com.informix.util.IfxErrMsg.buildExceptionWithMessage(IfxErrMsg.java:424)
	at com.informix.util.IfxErrMsg.buildException(IfxErrMsg.java:399)
	at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:381)
	at com.informix.jdbc.IfxResultSet.getMetaData(IfxResultSet.java:902)
	at com.informix.jdbc.IfxResultSet.executeQuery(IfxResultSet.java:187)
	at com.informix.jdbc.IfxStatement.executeQueryImpl(IfxStatement.java:909)
	at com.informix.jdbc.IfxPreparedStatement.executeQuery(IfxPreparedStatement.java:296)
	at com.informix.jdbc.IfxCallableStatement.executeQuery(IfxCallableStatement.java:226)
	at com.informix.stream.cdc.IfxCDCEngine.endCapture(IfxCDCEngine.java:413)
	... 13 more
[2025-10-07 11:38:54,556] ERROR [source_cdc_signal_heartbeat|task-0] Producer failure (io.debezium.pipeline.ErrorHandler:52)
com.informix.stream.impl.IfxStreamException: Unable to end cdc capture
	at com.informix.stream.cdc.IfxCDCEngine.endCapture(IfxCDCEngine.java:422)
	at com.informix.stream.cdc.IfxCDCEngine.unwatchTable(IfxCDCEngine.java:402)
	at com.informix.stream.cdc.IfxCDCEngine.close(IfxCDCEngine.java:470)
	at io.debezium.connector.informix.InformixCdcTransactionEngine.close(InformixCdcTransactionEngine.java:181)
	at io.debezium.connector.informix.InformixStreamingChangeEventSource.execute(InformixStreamingChangeEventSource.java:205)
	at io.debezium.connector.informix.InformixStreamingChangeEventSource.execute(InformixStreamingChangeEventSource.java:37)
	at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:326)
	at io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:207)
	at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:147)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: java.sql.SQLException: ResultSet not open, operation 'next' not permitted. Verify that autocommit is OFF
	at com.informix.util.IfxErrMsg.buildExceptionWithMessage(IfxErrMsg.java:424)
	at com.informix.util.IfxErrMsg.buildException(IfxErrMsg.java:399)
	at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:381)
	at com.informix.jdbc.IfxResultSet.getMetaData(IfxResultSet.java:902)
	at com.informix.jdbc.IfxResultSet.executeQuery(IfxResultSet.java:187)
	at com.informix.jdbc.IfxStatement.executeQueryImpl(IfxStatement.java:909)
	at com.informix.jdbc.IfxPreparedStatement.executeQuery(IfxPreparedStatement.java:296)
	at com.informix.jdbc.IfxCallableStatement.executeQuery(IfxCallableStatement.java:226)
	at com.informix.stream.cdc.IfxCDCEngine.endCapture(IfxCDCEngine.java:413)
	... 13 more
[2025-10-07 11:38:54,556] INFO [source_cdc_signal_heartbeat|task-0] Connection gracefully closed (io.debezium.jdbc.JdbcConnection:988)
[2025-10-07 11:38:54,556] ERROR [source_cdc_signal_heartbeat|task-0] The maximum number of 0 retries has been attempted (io.debezium.pipeline.ErrorHandler:129)
[2025-10-07 11:38:54,556] INFO [source_cdc_signal_heartbeat|task-0] Finished streaming (io.debezium.pipeline.ChangeEventSourceCoordinator:327)
[2025-10-07 11:38:54,556] INFO [source_cdc_signal_heartbeat|task-0] [Producer clientId=inst_kafka_net_41-schemahistory] Closing the Kafka producer with timeoutMillis = 30000 ms.
(org.apache.kafka.clients.producer.KafkaProducer:1373)
[2025-10-07 11:38:54,556] INFO [source_cdc_signal_heartbeat|task-0] Connected metrics set to 'false' (io.debezium.pipeline.ChangeEventSourceCoordinator:492)
[2025-10-07 11:38:54,559] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-10-07 11:38:54,559] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:54,559] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:54,559] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-10-07 11:38:54,559] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.producer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-10-07 11:38:54,559] DEBUG [source_cdc_signal_heartbeat|task-0] Setting task state to 'STOPPED', previous state was 'RUNNING' (io.debezium.connector.common.BaseSourceTask:596)
[2025-10-07 11:38:54,560] DEBUG [source_cdc_signal_heartbeat|task-0] Emitting lineage event for STOPPED (io.debezium.openlineage.emitter.NoOpLineageEmitter:38)
[2025-10-07 11:38:54,565] DEBUG [source_cdc_signal_heartbeat|task-0] Cleaned up emitter for connector ConnectorContext[connectorLogicalName=inst_kafka_net_41, connectorName=informix, taskId=0, version=3.2.3.Final, config={connector.class=io.debezium.connector.informix.InformixConnector, errors.log.include.messages=true, topic.creation.default.partitions=1, value.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, key.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, transforms=unwrap, errors.deadletterqueue.context.headers.enable=true, heartbeat.action.query=UPDATE cdc_signal_heartbeat SET ts = CURRENT where id = 1, transforms.unwrap.drop.tombstones=false, topic.creation.default.replication.factor=3, errors.deadletterqueue.topic.replication.factor=3, transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState, errors.log.enable=true, key.converter=io.confluent.connect.avro.AvroConverter, database.dbname=cards_1952, topic.creation.default.compression.type=lz4, database.user=kafka, column.skip.list=cards_1952.mcp.ach_accounts.ivr_ach_act_nick,cards_1952.mcp.alert_executed.alert_msg,cards_1952.mcp.alert_executed.alert_template,cards_1952.mcp.alert_executed.alert_data,cards_1952.mcp.alert_executed.description,cards_1952.mcp.campaign_insts.ivr_message,cards_1952.mcp.campaign_insts.push_message,cards_1952.mcp.maa_sent_msg_log.message,cards_1952.mcp.merchants.merchant_image,cards_1952.mcp.stake_holders.stake_holder_logo,cards_1952.mcp.stake_holders.stake_holder_thumbnail,cards_1952.mcp.file_store_binary.file_data,cards_1952.mcp.push_notify_comm_logs.req_payload,cards_1952.mcp.web_activity_log.request_parameters, schema.history.internal.kafka.bootstrap.servers=10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092, heartbeat.interval.ms=1800000, value.converter.schema.registry.url=http://10.11.57.201:8081, schema.history.internal.kafka.topic.replication.factor=3, errors.max.retries=0, errors.deadletterqueue.topic.name=informix-gpdb-source-errors, database.password=Lahore@556677, name=source_cdc_signal_heartbeat, errors.tolerance=none, skipped.operations=d, pk.mode=kafka, snapshot.mode=schema_only, max.queue.size=100000, tasks.max=1, retriable.restart.connector.wait.ms=60000, database.connection.retry.interval.ms=1000, schema.history.internal.store.only.captured.databases.ddl=true, schema.history.internal.store.only.captured.tables.ddl=true, tombstones.on.delete=true, topic.prefix=inst_kafka_net_41, decimal.handling.mode=double, schema.history.internal.kafka.topic=cards_1952_schema-history-trans_requests, connection.pool.max.size=50, value.converter=io.confluent.connect.avro.AvroConverter, openlineage.integration.enabled=false, topic.creation.default.cleanup.policy=compact, time.precision.mode=connect, database.server.name=inst_kafka_net_41, snapshot.isolation.mode=read_committed, topic.creation.default.retention.ms=604800000, database.port=9260, schema.history.internal.kafka.recovery.poll.interval.ms=30000, offset.flush.interval.ms=10000, task.class=io.debezium.connector.informix.InformixConnectorTask, database.connection.retries=5, database.hostname=10.11.56.182, table.include.list=cards_1952.mcp.cdc_signal_heartbeat, key.converter.schema.registry.url=http://10.11.57.201:8081}] (io.debezium.openlineage.DebeziumOpenLineageEmitter:92)
[2025-10-07 11:38:54,565] INFO [source_cdc_signal_heartbeat|task-0] [Producer clientId=connector-producer-source_cdc_signal_heartbeat-0] Closing the Kafka producer with timeoutMillis = 30000 ms.
(org.apache.kafka.clients.producer.KafkaProducer:1373)
[2025-10-07 11:38:54,565] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-10-07 11:38:54,565] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:54,565] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:54,565] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-10-07 11:38:54,565] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.producer for connector-producer-source_cdc_signal_heartbeat-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-10-07 11:38:54,565] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.admin.client for connector-adminclient-source_cdc_signal_heartbeat-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-10-07 11:38:54,566] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-10-07 11:38:54,566] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:54,567] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-10-07 11:38:55,486] INFO [source_cdc_signal_heartbeat|task-0] ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 32768 bootstrap.servers = [10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092] buffer.memory = 1048576 client.dns.lookup = use_all_dns_ips client.id = inst_kafka_net_41-schemahistory compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer (org.apache.kafka.clients.producer.ProducerConfig:372)
[2025-10-07 11:38:55,487] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269)
[2025-10-07 11:38:55,489] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124)
[2025-10-07 11:38:55,489] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-10-07 11:38:55,489] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759819135489 (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-10-07 11:38:55,489] INFO [source_cdc_signal_heartbeat|task-0] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = inst_kafka_net_41-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = inst_kafka_net_41-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:372)
[2025-10-07 11:38:55,490] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269)
[2025-10-07 11:38:55,493] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124)
[2025-10-07 11:38:55,493] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-10-07 11:38:55,493] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759819135492 (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-10-07 11:38:55,493] INFO
[source_cdc_signal_heartbeat|task-0] [Producer clientId=inst_kafka_net_41-schemahistory] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364)
[2025-10-07 11:38:55,496] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364)
[2025-10-07 11:38:55,499] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1055)
[2025-10-07 11:38:55,499] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1102)
[2025-10-07 11:38:55,499] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-10-07 11:38:55,499] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:55,499] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:55,499] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-10-07 11:38:55,501] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.consumer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-10-07 11:38:55,504] INFO [source_cdc_signal_heartbeat|task-0] Found previous partition offset InformixPartition [sourcePartition={databaseName=inst_kafka_net_41}]: {begin_lsn=769374671650840, commit_lsn=769374671651056, change_lsn=769374671650976} (io.debezium.connector.common.BaseSourceTask:576)
[2025-10-07 11:38:55,507] INFO [source_cdc_signal_heartbeat|task-0] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = inst_kafka_net_41-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = inst_kafka_net_41-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:372)
[2025-10-07 11:38:55,507] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269)
[2025-10-07 11:38:55,509] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124)
[2025-10-07 11:38:55,509] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-10-07 11:38:55,509] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759819135509 (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-10-07 11:38:55,512] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364)
[2025-10-07 11:38:55,514] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1055)
[2025-10-07 11:38:55,514] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1102)
[2025-10-07 11:38:55,514] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-10-07 11:38:55,514] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:55,515] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:55,515] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed
(org.apache.kafka.common.metrics.Metrics:694)
[2025-10-07 11:38:55,516] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.consumer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-10-07 11:38:55,516] INFO [source_cdc_signal_heartbeat|task-0] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = inst_kafka_net_41-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = inst_kafka_net_41-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:372)
[2025-10-07 11:38:55,516] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269)
[2025-10-07 11:38:55,519] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124)
[2025-10-07 11:38:55,519] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-10-07 11:38:55,519] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759819135519 (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-10-07 11:38:55,519] INFO [source_cdc_signal_heartbeat|task-0] Creating thread debezium-informixconnector-inst_kafka_net_41-db-history-config-check (io.debezium.util.Threads:290)
[2025-10-07 11:38:55,519] INFO [source_cdc_signal_heartbeat|task-0] AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.controllers = [] bootstrap.servers = [10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092] client.dns.lookup = use_all_dns_ips client.id = inst_kafka_net_41-schemahistory-topic-check connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 enable.metrics.push = true metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS (org.apache.kafka.clients.admin.AdminClientConfig:372)
[2025-10-07 11:38:55,521] INFO [source_cdc_signal_heartbeat|task-0] These configurations '[enable.idempotence, value.serializer, batch.size, max.in.flight.requests.per.connection, buffer.memory, key.serializer]' were supplied but are not used yet. (org.apache.kafka.clients.admin.AdminClientConfig:381)
[2025-10-07 11:38:55,522] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124)
[2025-10-07 11:38:55,522] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-10-07 11:38:55,522] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759819135522 (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-10-07 11:38:55,523] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364)
[2025-10-07 11:38:55,525] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1055)
[2025-10-07 11:38:55,525] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1102)
[2025-10-07 11:38:55,526] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-10-07 11:38:55,526] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:55,526] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:55,526] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-10-07 11:38:55,527] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.consumer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-10-07 11:38:55,527] INFO [source_cdc_signal_heartbeat|task-0] Database schema history topic 'cards_1952_schema-history-trans_requests' has correct settings (io.debezium.storage.kafka.history.KafkaSchemaHistory:492)
[2025-10-07 11:38:55,528] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.admin.client for inst_kafka_net_41-schemahistory-topic-check unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-10-07 11:38:55,529] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-10-07 11:38:55,529] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:55,529] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-10-07 11:38:55,547] DEBUG [source_cdc_signal_heartbeat|task-0] Connected to jdbc:informix-sqli://10.11.56.182:9260/cards_1952:user=kafka;password=Lahore@556677 with {server.name=inst_kafka_net_41, connection.retries=5, connection.retry.interval.ms=1000} (io.debezium.jdbc.JdbcConnection:256)
[2025-10-07 11:38:55,550] INFO [source_cdc_signal_heartbeat|task-0] Requested thread factory for component InformixConnector, id = inst_kafka_net_41 named = SignalProcessor (io.debezium.util.Threads:273)
[2025-10-07 11:38:55,551] INFO [source_cdc_signal_heartbeat|task-0] Requested thread factory for component InformixConnector, id = inst_kafka_net_41 named = change-event-source-coordinator (io.debezium.util.Threads:273)
[2025-10-07 11:38:55,551] INFO [source_cdc_signal_heartbeat|task-0] Requested thread factory for component InformixConnector, id = inst_kafka_net_41 named =
blocking-snapshot (io.debezium.util.Threads:273) [2025-10-07 11:38:55,551] INFO [source_cdc_signal_heartbeat|task-0] Creating thread debezium-informixconnector-inst_kafka_net_41-change-event-source-coordinator (io.debezium.util.Threads:290) [2025-10-07 11:38:55,552] DEBUG [source_cdc_signal_heartbeat|task-0] Setting task state to 'RUNNING', previous state was 'INITIAL' (io.debezium.connector.common.BaseSourceTask:596) [2025-10-07 11:38:55,552] INFO [source_cdc_signal_heartbeat|task-0] Metrics registered (io.debezium.pipeline.ChangeEventSourceCoordinator:137) [2025-10-07 11:38:55,552] INFO [source_cdc_signal_heartbeat|task-0] Context created (io.debezium.pipeline.ChangeEventSourceCoordinator:140) [2025-10-07 11:38:55,552] ERROR [source_cdc_signal_heartbeat|task-0] WorkerSourceTask{id=source_cdc_signal_heartbeat-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:233) java.lang.IllegalStateException: DebeziumOpenLineageEmitter not initialized for connector ConnectorContext[connectorLogicalName=inst_kafka_net_41, connectorName=informix, taskId=0, version=3.2.3.Final, config={connector.class=io.debezium.connector.informix.InformixConnector, errors.log.include.messages=true, topic.creation.default.partitions=1, value.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, key.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, transforms=unwrap, errors.deadletterqueue.context.headers.enable=true, heartbeat.action.query=UPDATE cdc_signal_heartbeat SET ts = CURRENT, transforms.unwrap.drop.tombstones=false, topic.creation.default.replication.factor=3, errors.deadletterqueue.topic.replication.factor=3, transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState, errors.log.enable=true, key.converter=io.confluent.connect.avro.AvroConverter, 
database.dbname=cards_1952, topic.creation.default.compression.type=lz4, database.user=kafka, column.skip.list=cards_1952.mcp.ach_accounts.ivr_ach_act_nick,cards_1952.mcp.alert_executed.alert_msg,cards_1952.mcp.alert_executed.alert_template,cards_1952.mcp.alert_executed.alert_data,cards_1952.mcp.alert_executed.description,cards_1952.mcp.campaign_insts.ivr_message,cards_1952.mcp.campaign_insts.push_message,cards_1952.mcp.maa_sent_msg_log.message,cards_1952.mcp.merchants.merchant_image,cards_1952.mcp.stake_holders.stake_holder_logo,cards_1952.mcp.stake_holders.stake_holder_thumbnail,cards_1952.mcp.file_store_binary.file_data,cards_1952.mcp.push_notify_comm_logs.req_payload,cards_1952.mcp.web_activity_log.request_parameters, heartbeat.interval.ms=1800000, schema.history.internal.kafka.bootstrap.servers=10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092, value.converter.schema.registry.url=http://10.11.57.201:8081, schema.history.internal.kafka.topic.replication.factor=3, errors.max.retries=0, errors.deadletterqueue.topic.name=informix-gpdb-source-errors, database.password=Lahore@556677, name=source_cdc_signal_heartbeat, errors.tolerance=none, skipped.operations=d, pk.mode=kafka, snapshot.mode=schema_only, max.queue.size=100000, tasks.max=1, retriable.restart.connector.wait.ms=60000, database.connection.retry.interval.ms=1000, schema.history.internal.store.only.captured.databases.ddl=true, schema.history.internal.store.only.captured.tables.ddl=true, tombstones.on.delete=true, topic.prefix=inst_kafka_net_41, decimal.handling.mode=double, schema.history.internal.kafka.topic=cards_1952_schema-history-trans_requests, connection.pool.max.size=50, value.converter=io.confluent.connect.avro.AvroConverter, openlineage.integration.enabled=false, topic.creation.default.cleanup.policy=compact, time.precision.mode=connect, database.server.name=inst_kafka_net_41, snapshot.isolation.mode=read_committed, topic.creation.default.retention.ms=604800000, database.port=9260, 
schema.history.internal.kafka.recovery.poll.interval.ms=30000, offset.flush.interval.ms=10000, task.class=io.debezium.connector.informix.InformixConnectorTask, database.hostname=10.11.56.182, database.connection.retries=5, table.include.list=cards_1952.mcp.cdc_signal_heartbeat, key.converter.schema.registry.url=http://10.11.57.201:8081}]. Call init() first.
    at io.debezium.openlineage.DebeziumOpenLineageEmitter.getEmitter(DebeziumOpenLineageEmitter.java:158)
    at io.debezium.openlineage.DebeziumOpenLineageEmitter.emit(DebeziumOpenLineageEmitter.java:108)
    at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:263)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.initializeAndStart(AbstractWorkerSourceTask.java:278)
    at org.apache.kafka.connect.runtime.WorkerTask.doStart(WorkerTask.java:175)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:224)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:280)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:78)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:237)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:840)
[2025-10-07 11:38:55,553] INFO [source_cdc_signal_heartbeat|task-0] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = inst_kafka_net_41-schemahistory client.rack = connections.max.idle.ms = 540000
default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = inst_kafka_net_41-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 
3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:372)
[2025-10-07 11:38:55,553] INFO [source_cdc_signal_heartbeat|task-0] Stopping down connector (io.debezium.connector.common.BaseSourceTask:476)
[2025-10-07 11:38:55,553] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269)
[2025-10-07 11:38:55,555] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124)
[2025-10-07 11:38:55,555] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-10-07 11:38:55,555] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759819135555 (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-10-07 11:38:55,558] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364)
[2025-10-07 11:38:55,560] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1055)
[2025-10-07 11:38:55,560] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1102)
[2025-10-07 11:38:55,560] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-10-07 11:38:55,560] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:55,560] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:55,560] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-10-07 11:38:55,561] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.consumer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-10-07 11:38:55,562] INFO [source_cdc_signal_heartbeat|task-0] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers =
[10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = inst_kafka_net_41-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = inst_kafka_net_41-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 
sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:372)
[2025-10-07 11:38:55,562] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269)
[2025-10-07 11:38:55,564] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124)
[2025-10-07 11:38:55,564] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-10-07 11:38:55,564] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759819135564 (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-10-07 11:38:55,567] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364)
[2025-10-07 11:38:55,569] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1055)
[2025-10-07 11:38:55,569] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1102)
[2025-10-07 11:38:55,570] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-10-07 11:38:55,570] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:55,570] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:55,570] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-10-07 11:38:55,571] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.consumer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-10-07 11:38:55,572] INFO [source_cdc_signal_heartbeat|task-0] Started database schema history recovery (io.debezium.relational.history.SchemaHistoryMetrics:115)
[2025-10-07
11:38:55,572] INFO [source_cdc_signal_heartbeat|task-0] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = inst_kafka_net_41-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = inst_kafka_net_41-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null 
sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:372)
[2025-10-07 11:38:55,572] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269)
[2025-10-07 11:38:55,574] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124)
[2025-10-07 11:38:55,575] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-10-07 11:38:55,575] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759819135574 (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-10-07 11:38:55,575] DEBUG [source_cdc_signal_heartbeat|task-0] Subscribing to database schema history topic 'cards_1952_schema-history-trans_requests' (io.debezium.storage.kafka.history.KafkaSchemaHistory:310)
[2025-10-07 11:38:55,575] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Subscribed to topic(s): cards_1952_schema-history-trans_requests (org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer:476)
[2025-10-07 11:38:55,578] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364)
[2025-10-07 11:38:55,580] DEBUG [source_cdc_signal_heartbeat|task-0] End offset of database schema history topic is 169 (io.debezium.storage.kafka.history.KafkaSchemaHistory:319)
[2025-10-07 11:38:55,581] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Discovered group coordinator 10.11.57.203:9092 (id: 2147483644 rack: null) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:936)
[2025-10-07 11:38:55,581] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] (Re-)joining group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:604)
[2025-10-07 11:38:55,584] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Request joining group due to: need to re-join with the given member-id: inst_kafka_net_41-schemahistory-d36e56b6-4cf2-4418-83ca-71946551a27b (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1102)
[2025-10-07 11:38:55,584] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] (Re-)joining group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:604)
[2025-10-07 11:38:55,585] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Successfully joined group with generation Generation{generationId=7, memberId='inst_kafka_net_41-schemahistory-d36e56b6-4cf2-4418-83ca-71946551a27b', protocol='range'} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:665)
[2025-10-07 11:38:55,585] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Finished assignment for group at generation 7: {inst_kafka_net_41-schemahistory-d36e56b6-4cf2-4418-83ca-71946551a27b=Assignment(partitions=[cards_1952_schema-history-trans_requests-0])} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:663)
[2025-10-07 11:38:55,587] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Successfully synced group in generation Generation{generationId=7, memberId='inst_kafka_net_41-schemahistory-d36e56b6-4cf2-4418-83ca-71946551a27b', protocol='range'} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:842)
[2025-10-07 11:38:55,588] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Notifying assignor about the new Assignment(partitions=[cards_1952_schema-history-trans_requests-0]) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:323)
[2025-10-07 11:38:55,588] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Adding newly assigned partitions: cards_1952_schema-history-trans_requests-0 (org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker:57)
[2025-10-07 11:38:55,589] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Found no committed offset for partition cards_1952_schema-history-trans_requests-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1506)
[2025-10-07 11:38:55,589] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Resetting offset for partition cards_1952_schema-history-trans_requests-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.11.57.201:9092 (id: 1 rack: null)], epoch=4}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:398)
[2025-10-07 11:38:55,605] INFO [source_cdc_signal_heartbeat|task-0] Database schema history recovery in progress, recovered 1 records (io.debezium.relational.history.SchemaHistoryMetrics:130)
[2025-10-07 11:38:55,606] INFO [source_cdc_signal_heartbeat|task-0] Already applied 1 database changes (io.debezium.relational.history.SchemaHistoryMetrics:140)
[2025-10-07 11:38:55,660] DEBUG [source_cdc_signal_heartbeat|task-0] Processed 23 records from database schema history (io.debezium.storage.kafka.history.KafkaSchemaHistory:371)
[2025-10-07 11:38:55,717] DEBUG [source_cdc_signal_heartbeat|task-0] Processed 31 records from database schema history (io.debezium.storage.kafka.history.KafkaSchemaHistory:371)
[2025-10-07 11:38:55,753] DEBUG [source_cdc_signal_heartbeat|task-0] Processed 25 records from database schema history (io.debezium.storage.kafka.history.KafkaSchemaHistory:371)
[2025-10-07 11:38:55,760] DEBUG
[source_cdc_signal_heartbeat|task-0] Processed 3 records from database schema history (io.debezium.storage.kafka.history.KafkaSchemaHistory:371)
[2025-10-07 11:38:55,783] DEBUG [source_cdc_signal_heartbeat|task-0] Processed 19 records from database schema history (io.debezium.storage.kafka.history.KafkaSchemaHistory:371)
[2025-10-07 11:38:55,811] DEBUG [source_cdc_signal_heartbeat|task-0] Processed 16 records from database schema history (io.debezium.storage.kafka.history.KafkaSchemaHistory:371)
[2025-10-07 11:38:55,825] DEBUG [source_cdc_signal_heartbeat|task-0] Processed 30 records from database schema history (io.debezium.storage.kafka.history.KafkaSchemaHistory:371)
[2025-10-07 11:38:55,846] DEBUG [source_cdc_signal_heartbeat|task-0] Processed 22 records from database schema history (io.debezium.storage.kafka.history.KafkaSchemaHistory:371)
[2025-10-07 11:38:56,329] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Revoke previously assigned partitions cards_1952_schema-history-trans_requests-0 (org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker:79)
[2025-10-07 11:38:56,330] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Member inst_kafka_net_41-schemahistory-d36e56b6-4cf2-4418-83ca-71946551a27b sending LeaveGroup request to coordinator 10.11.57.203:9092 (id: 2147483644 rack: null) due to the consumer is being closed (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1173)
[2025-10-07 11:38:56,330] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1055)
[2025-10-07 11:38:56,330] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1102)
[2025-10-07 11:38:56,331] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-10-07 11:38:56,332] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:56,332] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:56,332] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-10-07 11:38:56,334] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.consumer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-10-07 11:38:56,334] INFO [source_cdc_signal_heartbeat|task-0] Finished database schema history recovery of 169 change(s) in 762 ms (io.debezium.relational.history.SchemaHistoryMetrics:121)
[2025-10-07 11:38:56,334] DEBUG [source_cdc_signal_heartbeat|task-0] Mapping table 'cards_1952.mcp.cdc_signal_heartbeat' to schemas under 'inst_kafka_net_41.mcp.cdc_signal_heartbeat' (io.debezium.relational.TableSchemaBuilder:181)
[2025-10-07 11:38:56,334] DEBUG [source_cdc_signal_heartbeat|task-0] Building schema for column id of type 4 named serial with constraints (10,Optional[0]) (io.debezium.connector.informix.InformixValueConverters:56)
[2025-10-07 11:38:56,334] DEBUG [source_cdc_signal_heartbeat|task-0] JdbcValueConverters returned 'org.apache.kafka.connect.data.SchemaBuilder' for column 'id' (io.debezium.connector.informix.InformixValueConverters:69)
[2025-10-07 11:38:56,334] DEBUG [source_cdc_signal_heartbeat|task-0] - field 'id' (INT32) from column id serial(10, 0) NOT NULL AUTO_INCREMENTED (io.debezium.relational.TableSchemaBuilder:476)
[2025-10-07 11:38:56,335] DEBUG [source_cdc_signal_heartbeat|task-0] Building schema for column id of type 4 named serial with constraints (10,Optional[0]) (io.debezium.connector.informix.InformixValueConverters:56)
[2025-10-07 11:38:56,335] DEBUG [source_cdc_signal_heartbeat|task-0] JdbcValueConverters returned 'org.apache.kafka.connect.data.SchemaBuilder' for column 'id' (io.debezium.connector.informix.InformixValueConverters:69)
[2025-10-07 11:38:56,335] DEBUG [source_cdc_signal_heartbeat|task-0] - field 'id' (INT32) from column id serial(10, 0) NOT NULL AUTO_INCREMENTED (io.debezium.relational.TableSchemaBuilder:476)
[2025-10-07 11:38:56,335] INFO [source_cdc_signal_heartbeat|task-0] Parsing default value for column 'ts' with expression 'current' (io.debezium.connector.informix.InformixDefaultValueConverter:50)
[2025-10-07 11:38:56,335] DEBUG [source_cdc_signal_heartbeat|task-0] Building schema for column ts of type 93 named datetime year to fraction(3) with constraints (23,Optional[0]) (io.debezium.connector.informix.InformixValueConverters:56)
[2025-10-07 11:38:56,335] DEBUG [source_cdc_signal_heartbeat|task-0] JdbcValueConverters returned 'org.apache.kafka.connect.data.SchemaBuilder' for column 'ts' (io.debezium.connector.informix.InformixValueConverters:69)
[2025-10-07 11:38:56,335] DEBUG [source_cdc_signal_heartbeat|task-0] - field 'ts' (INT64) from column ts datetime year to fraction(3)(23, 0) DEFAULT VALUE current (io.debezium.relational.TableSchemaBuilder:476)
[2025-10-07 11:38:56,335] DEBUG [source_cdc_signal_heartbeat|task-0] Mapped primary key for table 'cards_1952.mcp.cdc_signal_heartbeat' to schema: {"name" : "inst_kafka_net_41.mcp.cdc_signal_heartbeat.Key", "type" : "STRUCT", "optional" : "false", "default" : null, "fields" : [{"name" : "id", "index" : "0", "schema" : {"type" : "INT32", "optional" : "false", "default" : null}}]} (io.debezium.relational.TableSchemaBuilder:207)
[2025-10-07 11:38:56,335] DEBUG [source_cdc_signal_heartbeat|task-0] Mapped columns for table 'cards_1952.mcp.cdc_signal_heartbeat' to schema: {"name" : "inst_kafka_net_41.mcp.cdc_signal_heartbeat.Value", "type" : "STRUCT", "optional" : "true", "default" : null, "fields" : [{"name" : "id", "index" : "0", "schema" : {"type" : "INT32", "optional" : "false", "default" : null}}, {"name" : "ts", "index" : "1", "schema" : {"name" : "org.apache.kafka.connect.data.Timestamp", "type" : "INT64", "optional" : "true", "default" : null, "version" : "1"}}]} (io.debezium.relational.TableSchemaBuilder:208)
[2025-10-07 11:38:56,336] ERROR [source_cdc_signal_heartbeat|task-0] Producer failure (io.debezium.pipeline.ErrorHandler:52)
java.lang.IllegalStateException: DebeziumOpenLineageEmitter not initialized for connector ConnectorContext[connectorLogicalName=inst_kafka_net_41, connectorName=informix, taskId=0, version=3.2.3.Final, config={connector.class=io.debezium.connector.informix.InformixConnector, errors.log.include.messages=true, topic.creation.default.partitions=1, value.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, key.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, transforms=unwrap, errors.deadletterqueue.context.headers.enable=true, heartbeat.action.query=UPDATE cdc_signal_heartbeat SET ts = CURRENT, transforms.unwrap.drop.tombstones=false, topic.creation.default.replication.factor=3, errors.deadletterqueue.topic.replication.factor=3, transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState, errors.log.enable=true, key.converter=io.confluent.connect.avro.AvroConverter, database.dbname=cards_1952, topic.creation.default.compression.type=lz4, database.user=kafka,
column.skip.list=cards_1952.mcp.ach_accounts.ivr_ach_act_nick,cards_1952.mcp.alert_executed.alert_msg,cards_1952.mcp.alert_executed.alert_template,cards_1952.mcp.alert_executed.alert_data,cards_1952.mcp.alert_executed.description,cards_1952.mcp.campaign_insts.ivr_message,cards_1952.mcp.campaign_insts.push_message,cards_1952.mcp.maa_sent_msg_log.message,cards_1952.mcp.merchants.merchant_image,cards_1952.mcp.stake_holders.stake_holder_logo,cards_1952.mcp.stake_holders.stake_holder_thumbnail,cards_1952.mcp.file_store_binary.file_data,cards_1952.mcp.push_notify_comm_logs.req_payload,cards_1952.mcp.web_activity_log.request_parameters, heartbeat.interval.ms=1800000, schema.history.internal.kafka.bootstrap.servers=10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092, value.converter.schema.registry.url=http://10.11.57.201:8081, schema.history.internal.kafka.topic.replication.factor=3, errors.max.retries=0, errors.deadletterqueue.topic.name=informix-gpdb-source-errors, database.password=********, name=source_cdc_signal_heartbeat, errors.tolerance=none, skipped.operations=d, pk.mode=kafka, snapshot.mode=schema_only, max.queue.size=100000, tasks.max=1, retriable.restart.connector.wait.ms=60000, database.connection.retry.interval.ms=1000, schema.history.internal.store.only.captured.databases.ddl=true, schema.history.internal.store.only.captured.tables.ddl=true, tombstones.on.delete=true, topic.prefix=inst_kafka_net_41, decimal.handling.mode=double, schema.history.internal.kafka.topic=cards_1952_schema-history-trans_requests, connection.pool.max.size=50, value.converter=io.confluent.connect.avro.AvroConverter, openlineage.integration.enabled=false, topic.creation.default.cleanup.policy=compact, time.precision.mode=connect, database.server.name=inst_kafka_net_41, snapshot.isolation.mode=read_committed, topic.creation.default.retention.ms=604800000, database.port=9260, schema.history.internal.kafka.recovery.poll.interval.ms=30000, offset.flush.interval.ms=10000,
task.class=io.debezium.connector.informix.InformixConnectorTask, database.hostname=10.11.56.182, database.connection.retries=5, table.include.list=cards_1952.mcp.cdc_signal_heartbeat, key.converter.schema.registry.url=http://10.11.57.201:8081}]. Call init() first.
    at io.debezium.openlineage.DebeziumOpenLineageEmitter.getEmitter(DebeziumOpenLineageEmitter.java:158)
    at io.debezium.openlineage.DebeziumOpenLineageEmitter.emit(DebeziumOpenLineageEmitter.java:136)
    at io.debezium.relational.RelationalDatabaseSchema.buildAndRegisterSchema(RelationalDatabaseSchema.java:132)
    at io.debezium.relational.HistorizedRelationalDatabaseSchema.recover(HistorizedRelationalDatabaseSchema.java:68)
    at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:143)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:840)
[2025-10-07 11:38:56,336] INFO [source_cdc_signal_heartbeat|task-0] Connected metrics set to 'false' (io.debezium.pipeline.ChangeEventSourceCoordinator:492)
[2025-10-07 11:38:56,336] INFO [source_cdc_signal_heartbeat|task-0] Creating thread debezium-informixconnector-inst_kafka_net_41-SignalProcessor (io.debezium.util.Threads:290)
[2025-10-07 11:38:56,337] INFO [source_cdc_signal_heartbeat|task-0] SignalProcessor stopped (io.debezium.pipeline.signal.SignalProcessor:122)
[2025-10-07 11:38:56,337] INFO [source_cdc_signal_heartbeat|task-0] Debezium ServiceRegistry stopped. (io.debezium.service.DefaultServiceRegistry:105)
[2025-10-07 11:38:56,337] INFO [source_cdc_signal_heartbeat|task-0] Requested thread factory for component JdbcConnection, id = JdbcConnection named = jdbc-connection-close (io.debezium.util.Threads:273)
[2025-10-07 11:38:56,337] INFO [source_cdc_signal_heartbeat|task-0] Creating thread debezium-jdbcconnection-JdbcConnection-jdbc-connection-close (io.debezium.util.Threads:290)
[2025-10-07 11:38:56,338] INFO [source_cdc_signal_heartbeat|task-0] Connection gracefully closed (io.debezium.jdbc.JdbcConnection:988)
[2025-10-07 11:38:56,338] INFO [source_cdc_signal_heartbeat|task-0] [Producer clientId=inst_kafka_net_41-schemahistory] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1373)
[2025-10-07 11:38:56,341] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-10-07 11:38:56,341] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:56,341] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:56,341] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-10-07 11:38:56,341] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.producer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-10-07 11:38:56,341] DEBUG [source_cdc_signal_heartbeat|task-0] Setting task state to 'STOPPED', previous state was 'RUNNING' (io.debezium.connector.common.BaseSourceTask:596)
[2025-10-07 11:38:56,342] WARN [source_cdc_signal_heartbeat|task-0] Failed to close source task with type
org.apache.kafka.connect.runtime.AbstractWorkerSourceTask$$Lambda$1689/0x00007f2e24ca8490 (org.apache.kafka.common.utils.Utils:1119)
java.lang.IllegalStateException: DebeziumOpenLineageEmitter not initialized for connector ConnectorContext[connectorLogicalName=inst_kafka_net_41, connectorName=informix, taskId=0, version=3.2.3.Final, config={connector.class=io.debezium.connector.informix.InformixConnector, errors.log.include.messages=true, topic.creation.default.partitions=1, value.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, key.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, transforms=unwrap, errors.deadletterqueue.context.headers.enable=true, heartbeat.action.query=UPDATE cdc_signal_heartbeat SET ts = CURRENT, transforms.unwrap.drop.tombstones=false, topic.creation.default.replication.factor=3, errors.deadletterqueue.topic.replication.factor=3, transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState, errors.log.enable=true, key.converter=io.confluent.connect.avro.AvroConverter, database.dbname=cards_1952, topic.creation.default.compression.type=lz4, database.user=kafka, column.skip.list=cards_1952.mcp.ach_accounts.ivr_ach_act_nick,cards_1952.mcp.alert_executed.alert_msg,cards_1952.mcp.alert_executed.alert_template,cards_1952.mcp.alert_executed.alert_data,cards_1952.mcp.alert_executed.description,cards_1952.mcp.campaign_insts.ivr_message,cards_1952.mcp.campaign_insts.push_message,cards_1952.mcp.maa_sent_msg_log.message,cards_1952.mcp.merchants.merchant_image,cards_1952.mcp.stake_holders.stake_holder_logo,cards_1952.mcp.stake_holders.stake_holder_thumbnail,cards_1952.mcp.file_store_binary.file_data,cards_1952.mcp.push_notify_comm_logs.req_payload,cards_1952.mcp.web_activity_log.request_parameters, heartbeat.interval.ms=1800000, schema.history.internal.kafka.bootstrap.servers=10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092,
value.converter.schema.registry.url=http://10.11.57.201:8081, schema.history.internal.kafka.topic.replication.factor=3, errors.max.retries=0, errors.deadletterqueue.topic.name=informix-gpdb-source-errors, database.password=********, name=source_cdc_signal_heartbeat, errors.tolerance=none, skipped.operations=d, pk.mode=kafka, snapshot.mode=schema_only, max.queue.size=100000, tasks.max=1, retriable.restart.connector.wait.ms=60000, database.connection.retry.interval.ms=1000, schema.history.internal.store.only.captured.databases.ddl=true, schema.history.internal.store.only.captured.tables.ddl=true, tombstones.on.delete=true, topic.prefix=inst_kafka_net_41, decimal.handling.mode=double, schema.history.internal.kafka.topic=cards_1952_schema-history-trans_requests, connection.pool.max.size=50, value.converter=io.confluent.connect.avro.AvroConverter, openlineage.integration.enabled=false, topic.creation.default.cleanup.policy=compact, time.precision.mode=connect, database.server.name=inst_kafka_net_41, snapshot.isolation.mode=read_committed, topic.creation.default.retention.ms=604800000, database.port=9260, schema.history.internal.kafka.recovery.poll.interval.ms=30000, offset.flush.interval.ms=10000, task.class=io.debezium.connector.informix.InformixConnectorTask, database.hostname=10.11.56.182, database.connection.retries=5, table.include.list=cards_1952.mcp.cdc_signal_heartbeat, key.converter.schema.registry.url=http://10.11.57.201:8081}]. Call init() first.
    at io.debezium.openlineage.DebeziumOpenLineageEmitter.getEmitter(DebeziumOpenLineageEmitter.java:158)
    at io.debezium.openlineage.DebeziumOpenLineageEmitter.emit(DebeziumOpenLineageEmitter.java:108)
    at io.debezium.connector.common.BaseSourceTask.stop(BaseSourceTask.java:501)
    at io.debezium.connector.common.BaseSourceTask.stop(BaseSourceTask.java:464)
    at org.apache.kafka.common.utils.Utils.closeQuietly(Utils.java:1117)
    at org.apache.kafka.common.utils.Utils.closeQuietly(Utils.java:1100)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.close(AbstractWorkerSourceTask.java:312)
    at org.apache.kafka.connect.runtime.WorkerTask.doClose(WorkerTask.java:202)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:237)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:280)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:78)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:237)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:840)
[2025-10-07 11:38:56,342] INFO [source_cdc_signal_heartbeat|task-0] [Producer clientId=connector-producer-source_cdc_signal_heartbeat-0] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1373)
[2025-10-07 11:38:56,344] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-10-07 11:38:56,344] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:56,344] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:56,344] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-10-07 11:38:56,344] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.producer for connector-producer-source_cdc_signal_heartbeat-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-10-07 11:38:56,344] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.admin.client for connector-adminclient-source_cdc_signal_heartbeat-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-10-07 11:38:56,345] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-10-07 11:38:56,345] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-10-07 11:38:56,345] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-10-07 11:39:00,206] INFO 10.11.56.164 - - [07/Oct/2025:06:39:00 +0000] "GET / HTTP/1.1" 200 119 "-" "axios/1.12.2" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)