[2025-09-29 15:24:03,119] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Connector source_cdc_signal_heartbeat config updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2425)
[2025-09-29 15:24:03,122] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Handling connector-only config update by restarting connector source_cdc_signal_heartbeat (org.apache.kafka.connect.runtime.distributed.DistributedHerder:716)
[2025-09-29 15:24:03,122] INFO [source_cdc_signal_heartbeat|worker] Stopping connector source_cdc_signal_heartbeat (org.apache.kafka.connect.runtime.Worker:451)
[2025-09-29 15:24:03,122] INFO [source_cdc_signal_heartbeat|worker] Scheduled shutdown for WorkerConnector{id=source_cdc_signal_heartbeat} (org.apache.kafka.connect.runtime.WorkerConnector:294)
[2025-09-29 15:24:03,122] INFO 10.11.57.201 - - [29/Sep/2025:10:24:03 +0000] "PUT /connectors/source_cdc_signal_heartbeat/config HTTP/1.1" 200 3354 "-" "ReactorNetty/1.1.10" 30 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-09-29 15:24:03,123] INFO [source_cdc_signal_heartbeat|worker] Completed shutdown for WorkerConnector{id=source_cdc_signal_heartbeat} (org.apache.kafka.connect.runtime.WorkerConnector:314)
[2025-09-29 15:24:03,123] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Starting connector source_cdc_signal_heartbeat (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2077)
[2025-09-29 15:24:03,123] INFO [source_cdc_signal_heartbeat|worker] Creating connector source_cdc_signal_heartbeat of type io.debezium.connector.informix.InformixConnector (org.apache.kafka.connect.runtime.Worker:312)
[2025-09-29 15:24:03,123] INFO [source_cdc_signal_heartbeat|worker] SourceConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.informix.InformixConnector
    errors.log.enable = true
    errors.log.include.messages = true
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = source_cdc_signal_heartbeat
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    tasks.max.enforce = true
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = [unwrap]
    value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.SourceConnectorConfig:372)
[2025-09-29 15:24:03,124] INFO [source_cdc_signal_heartbeat|worker] EnrichedConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.informix.InformixConnector
    errors.log.enable = true
    errors.log.include.messages = true
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = source_cdc_signal_heartbeat
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    tasks.max.enforce = true
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = [unwrap]
    transforms.unwrap.add.fields = []
    transforms.unwrap.add.fields.prefix = __
    transforms.unwrap.add.headers = []
    transforms.unwrap.add.headers.prefix = __
    transforms.unwrap.delete.tombstone.handling.mode = tombstone
    transforms.unwrap.drop.fields.from.key = false
    transforms.unwrap.drop.fields.header.name = null
    transforms.unwrap.drop.fields.keep.schema.compatible = true
    transforms.unwrap.negate = false
    transforms.unwrap.predicate = null
    transforms.unwrap.route.by.field = 
    transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState
    value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:372)
[2025-09-29 15:24:03,124] INFO [source_cdc_signal_heartbeat|worker] EnrichedSourceConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.informix.InformixConnector
    errors.log.enable = true
    errors.log.include.messages = true
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = source_cdc_signal_heartbeat
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    tasks.max.enforce = true
    topic.creation.default.exclude = []
    topic.creation.default.include = [.*]
    topic.creation.default.partitions = 1
    topic.creation.default.replication.factor = 3
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = [unwrap]
    value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig:372)
[2025-09-29 15:24:03,124] INFO [source_cdc_signal_heartbeat|worker] EnrichedConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.informix.InformixConnector
    errors.log.enable = true
    errors.log.include.messages = true
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = source_cdc_signal_heartbeat
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    tasks.max.enforce = true
    topic.creation.default.exclude = []
    topic.creation.default.include = [.*]
    topic.creation.default.partitions = 1
    topic.creation.default.replication.factor = 3
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = [unwrap]
    transforms.unwrap.add.fields = []
    transforms.unwrap.add.fields.prefix = __
    transforms.unwrap.add.headers = []
    transforms.unwrap.add.headers.prefix = __
    transforms.unwrap.delete.tombstone.handling.mode = tombstone
    transforms.unwrap.drop.fields.from.key = false
    transforms.unwrap.drop.fields.header.name = null
    transforms.unwrap.drop.fields.keep.schema.compatible = true
    transforms.unwrap.negate = false
    transforms.unwrap.predicate = null
    transforms.unwrap.route.by.field = 
    transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState
    value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:372)
[2025-09-29 15:24:03,124] INFO [source_cdc_signal_heartbeat|worker] Instantiated connector source_cdc_signal_heartbeat with version 3.2.3.Final of type class io.debezium.connector.informix.InformixConnector (org.apache.kafka.connect.runtime.Worker:334)
[2025-09-29 15:24:03,124] INFO [source_cdc_signal_heartbeat|worker] Finished creating connector source_cdc_signal_heartbeat (org.apache.kafka.connect.runtime.Worker:355)
[2025-09-29 15:24:03,125] INFO SourceConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.informix.InformixConnector
    errors.log.enable = true
    errors.log.include.messages = true
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = source_cdc_signal_heartbeat
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    tasks.max.enforce = true
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = [unwrap]
    value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.SourceConnectorConfig:372)
[2025-09-29 15:24:03,125] INFO EnrichedConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.informix.InformixConnector
    errors.log.enable = true
    errors.log.include.messages = true
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = source_cdc_signal_heartbeat
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    tasks.max.enforce = true
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = [unwrap]
    transforms.unwrap.add.fields = []
    transforms.unwrap.add.fields.prefix = __
    transforms.unwrap.add.headers = []
    transforms.unwrap.add.headers.prefix = __
    transforms.unwrap.delete.tombstone.handling.mode = tombstone
    transforms.unwrap.drop.fields.from.key = false
    transforms.unwrap.drop.fields.header.name = null
    transforms.unwrap.drop.fields.keep.schema.compatible = true
    transforms.unwrap.negate = false
    transforms.unwrap.predicate = null
    transforms.unwrap.route.by.field = 
    transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState
    value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:372)
[2025-09-29 15:24:03,125] INFO EnrichedSourceConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.informix.InformixConnector
    errors.log.enable = true
    errors.log.include.messages = true
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = source_cdc_signal_heartbeat
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    tasks.max.enforce = true
    topic.creation.default.exclude = []
    topic.creation.default.include = [.*]
    topic.creation.default.partitions = 1
    topic.creation.default.replication.factor = 3
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = [unwrap]
    value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig:372)
[2025-09-29 15:24:03,126] INFO EnrichedConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.informix.InformixConnector
    errors.log.enable = true
    errors.log.include.messages = true
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    exactly.once.support = requested
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = source_cdc_signal_heartbeat
    offsets.storage.topic = null
    predicates = []
    tasks.max = 1
    tasks.max.enforce = true
    topic.creation.default.exclude = []
    topic.creation.default.include = [.*]
    topic.creation.default.partitions = 1
    topic.creation.default.replication.factor = 3
    topic.creation.groups = []
    transaction.boundary = poll
    transaction.boundary.interval.ms = null
    transforms = [unwrap]
    transforms.unwrap.add.fields = []
    transforms.unwrap.add.fields.prefix = __
    transforms.unwrap.add.headers = []
    transforms.unwrap.add.headers.prefix = __
    transforms.unwrap.delete.tombstone.handling.mode = tombstone
    transforms.unwrap.drop.fields.from.key = false
    transforms.unwrap.drop.fields.header.name = null
    transforms.unwrap.drop.fields.keep.schema.compatible = true
    transforms.unwrap.negate = false
    transforms.unwrap.predicate = null
    transforms.unwrap.route.by.field = 
    transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState
    value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:372)
[2025-09-29 15:24:03,131] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Tasks [source_cdc_signal_heartbeat-0] configs updated (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2440)
[2025-09-29 15:24:03,132] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Handling task config update by stopping tasks [source_cdc_signal_heartbeat-0], which will be restarted after rebalance if still assigned to this worker (org.apache.kafka.connect.runtime.distributed.DistributedHerder:784)
[2025-09-29 15:24:03,132] INFO [source_cdc_signal_heartbeat|task-0] Stopping task source_cdc_signal_heartbeat-0 (org.apache.kafka.connect.runtime.Worker:1047)
[2025-09-29 15:24:03,310] INFO 10.11.0.20 - - [29/Sep/2025:10:24:03 +0000] "GET /connectors/source_mcp_852_trans_request3s/status HTTP/1.1" 404 91 "-" "axios/1.9.0" 1 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-09-29 15:24:03,403] INFO [sink_gpdb_dev_trans_request3s|task-0] Received 0 records (io.confluent.connect.jdbc.sink.JdbcSinkTask:105)
[2025-09-29 15:24:03,500] INFO [source_cdc_signal_heartbeat|task-0] Stopping down connector (io.debezium.connector.common.BaseSourceTask:476)
[2025-09-29 15:24:03,710] INFO 10.11.56.164 - - [29/Sep/2025:10:24:03 +0000] "GET /connectors/source_mcp_852_ach_batches1_ach_files_activity_log1s_activity_logs_activity_sessions/status HTTP/1.1" 404 145 "-" "axios/1.12.2" 0 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-09-29 15:24:03,715] INFO 10.11.56.164 - - [29/Sep/2025:10:24:03 +0000] "GET /connectors/name/status HTTP/1.1" 200 164 "-" "axios/1.12.2" 0 (org.apache.kafka.connect.runtime.rest.RestServer:62)
[2025-09-29 15:24:05,974] INFO [sink_trans_requests|task-0] Received 0 records (io.confluent.connect.jdbc.sink.JdbcSinkTask:105)
[2025-09-29 15:24:05,975] INFO [sink_sumd_card_funds|task-0] Received 0 records (io.confluent.connect.jdbc.sink.JdbcSinkTask:105)
[2025-09-29 15:24:07,500] WARN [source_cdc_signal_heartbeat|task-0] Coordinator didn't stop in the expected time, shutting down executor now (io.debezium.pipeline.ChangeEventSourceCoordinator:379)
[2025-09-29 15:24:08,132] ERROR [source_cdc_signal_heartbeat|task-0] Graceful stop of task source_cdc_signal_heartbeat-0 failed. (org.apache.kafka.connect.runtime.Worker:1074)
[2025-09-29 15:24:08,133] INFO [source_cdc_signal_heartbeat|task-0] [Producer clientId=connector-producer-source_cdc_signal_heartbeat-0] Closing the Kafka producer with timeoutMillis = 0 ms. (org.apache.kafka.clients.producer.KafkaProducer:1373)
[2025-09-29 15:24:08,133] INFO [source_cdc_signal_heartbeat|task-0] [Producer clientId=connector-producer-source_cdc_signal_heartbeat-0] Proceeding to force close the producer since pending requests could not be completed within timeout 0 ms. (org.apache.kafka.clients.producer.KafkaProducer:1407)
[2025-09-29 15:24:08,133] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Rebalance started (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:242)
[2025-09-29 15:24:08,133] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] (Re-)joining group (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:604)
[2025-09-29 15:24:08,135] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Successfully joined group with generation Generation{generationId=973, memberId='connect-10.11.57.201:8083-8a17a372-3428-4f55-a55d-d0cbeaeba9a7', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:665)
[2025-09-29 15:24:08,135] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-09-29 15:24:08,135] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-09-29 15:24:08,135] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-09-29 15:24:08,135] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-09-29 15:24:08,135] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.producer for connector-producer-source_cdc_signal_heartbeat-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-09-29 15:24:08,137] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Successfully synced group in generation Generation{generationId=973, memberId='connect-10.11.57.201:8083-8a17a372-3428-4f55-a55d-d0cbeaeba9a7', protocol='sessioned'} (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:842)
[2025-09-29 15:24:08,138] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Joined group at generation 973 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-10.11.57.201:8083-8a17a372-3428-4f55-a55d-d0cbeaeba9a7', leaderUrl='http://10.11.57.201:8083/', offset=2030, connectorIds=[name, sink_gpdb_dev_trans_request3s, source_cdc_signal_heartbeat, source_19_trans_request1s, sink_sumd_card_funds, source_19_trans_request3s, source_19_sumd_card_funds, source_19_trans_requests, sink_trans_request1s, sink_trans_requests], taskIds=[name-0, sink_gpdb_dev_trans_request3s-0, source_cdc_signal_heartbeat-0, source_19_trans_request1s-0, sink_sumd_card_funds-0, source_19_trans_request3s-0, source_19_sumd_card_funds-0, source_19_trans_requests-0, sink_trans_request1s-0, sink_trans_requests-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2621)
[2025-09-29 15:24:08,138] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Starting connectors and tasks using config offset 2030 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1959)
[2025-09-29 15:24:08,138] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Starting task source_cdc_signal_heartbeat-0 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2002)
[2025-09-29 15:24:08,138] INFO [source_cdc_signal_heartbeat|task-0] Creating task source_cdc_signal_heartbeat-0 (org.apache.kafka.connect.runtime.Worker:645)
[2025-09-29 15:24:08,138] INFO [source_cdc_signal_heartbeat|task-0] ConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.informix.InformixConnector
    errors.log.enable = true
    errors.log.include.messages = true
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = source_cdc_signal_heartbeat
    predicates = []
    tasks.max = 1
    tasks.max.enforce = true
    transforms = [unwrap]
    value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.ConnectorConfig:372)
[2025-09-29 15:24:08,139] INFO [source_cdc_signal_heartbeat|task-0] EnrichedConnectorConfig values:
    config.action.reload = restart
    connector.class = io.debezium.connector.informix.InformixConnector
    errors.log.enable = true
    errors.log.include.messages = true
    errors.retry.delay.max.ms = 60000
    errors.retry.timeout = 0
    errors.tolerance = none
    header.converter = null
    key.converter = class io.confluent.connect.avro.AvroConverter
    name = source_cdc_signal_heartbeat
    predicates = []
    tasks.max = 1
    tasks.max.enforce = true
    transforms = [unwrap]
    transforms.unwrap.add.fields = []
    transforms.unwrap.add.fields.prefix = __
    transforms.unwrap.add.headers = []
    transforms.unwrap.add.headers.prefix = __
    transforms.unwrap.delete.tombstone.handling.mode = tombstone
    transforms.unwrap.drop.fields.from.key = false
    transforms.unwrap.drop.fields.header.name = null
    transforms.unwrap.drop.fields.keep.schema.compatible = true
    transforms.unwrap.negate = false
    transforms.unwrap.predicate = null
    transforms.unwrap.route.by.field = 
    transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState
    value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:372)
[2025-09-29 15:24:08,139] INFO [source_cdc_signal_heartbeat|task-0] TaskConfig values:
    task.class = class io.debezium.connector.informix.InformixConnectorTask (org.apache.kafka.connect.runtime.TaskConfig:372)
[2025-09-29 15:24:08,140] INFO [source_cdc_signal_heartbeat|task-0] Instantiated task source_cdc_signal_heartbeat-0 with version 3.2.3.Final of type io.debezium.connector.informix.InformixConnectorTask (org.apache.kafka.connect.runtime.Worker:664)
[2025-09-29 15:24:08,140] INFO [source_cdc_signal_heartbeat|task-0] AvroConverterConfig values:
    auto.register.schemas = true
    basic.auth.credentials.source = URL
    basic.auth.user.info = [hidden]
    bearer.auth.credentials.source = STATIC_TOKEN
    bearer.auth.token = [hidden]
    context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy
    id.compatibility.strict = true
    key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
    latest.compatibility.strict = true
    max.schemas.per.subject = 1000
    normalize.schemas = false
    proxy.host = 
    proxy.port = -1
    schema.reflection = false
    schema.registry.basic.auth.user.info = [hidden]
    schema.registry.ssl.cipher.suites = null
    schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    schema.registry.ssl.endpoint.identification.algorithm = https
    schema.registry.ssl.engine.factory.class = null
    schema.registry.ssl.key.password = null
    schema.registry.ssl.keymanager.algorithm = SunX509
    schema.registry.ssl.keystore.certificate.chain = null
    schema.registry.ssl.keystore.key = null
    schema.registry.ssl.keystore.location = null
    schema.registry.ssl.keystore.password = null
    schema.registry.ssl.keystore.type = JKS
    schema.registry.ssl.protocol = TLSv1.3
    schema.registry.ssl.provider = null
    schema.registry.ssl.secure.random.implementation = null
    schema.registry.ssl.trustmanager.algorithm = PKIX
    schema.registry.ssl.truststore.certificates = null
    schema.registry.ssl.truststore.location = null
    schema.registry.ssl.truststore.password = null
    schema.registry.ssl.truststore.type = JKS
    schema.registry.url = [http://10.11.57.201:8081]
    use.latest.version = false
    use.schema.id = -1
    value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy (io.confluent.connect.avro.AvroConverterConfig:372)
[2025-09-29 15:24:08,140] INFO [source_cdc_signal_heartbeat|task-0] KafkaAvroSerializerConfig values:
    auto.register.schemas = true
    avro.reflection.allow.null = false
    avro.remove.java.properties = false
    avro.use.logical.type.converters = false
    basic.auth.credentials.source = URL
    basic.auth.user.info = [hidden]
    bearer.auth.credentials.source = STATIC_TOKEN
    bearer.auth.token = [hidden]
    context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy
    id.compatibility.strict = true
    key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
    latest.compatibility.strict = true
    max.schemas.per.subject = 1000
    normalize.schemas = false
    proxy.host = 
    proxy.port = -1
    schema.reflection = false
    schema.registry.basic.auth.user.info = [hidden]
    schema.registry.ssl.cipher.suites = null
    schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    schema.registry.ssl.endpoint.identification.algorithm = https
    schema.registry.ssl.engine.factory.class = null
    schema.registry.ssl.key.password = null
    schema.registry.ssl.keymanager.algorithm = SunX509
    schema.registry.ssl.keystore.certificate.chain = null
    schema.registry.ssl.keystore.key = null
    schema.registry.ssl.keystore.location = null
    schema.registry.ssl.keystore.password = null
    schema.registry.ssl.keystore.type = JKS
    schema.registry.ssl.protocol = TLSv1.3
    schema.registry.ssl.provider = null
    schema.registry.ssl.secure.random.implementation = null
    schema.registry.ssl.trustmanager.algorithm = PKIX
    schema.registry.ssl.truststore.certificates = null
    schema.registry.ssl.truststore.location = null
    schema.registry.ssl.truststore.password = null
    schema.registry.ssl.truststore.type = JKS
    schema.registry.url = [http://10.11.57.201:8081]
    use.latest.version = false
    use.schema.id = -1
    value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy (io.confluent.kafka.serializers.KafkaAvroSerializerConfig:372)
[2025-09-29 15:24:08,140] INFO [source_cdc_signal_heartbeat|task-0] KafkaAvroDeserializerConfig values:
    auto.register.schemas = true
    avro.reflection.allow.null = false
    avro.use.logical.type.converters = false
    basic.auth.credentials.source = URL
    basic.auth.user.info = [hidden]
    bearer.auth.credentials.source = STATIC_TOKEN
    bearer.auth.token = [hidden]
    context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy
    id.compatibility.strict = true
    key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
    latest.compatibility.strict = true
    max.schemas.per.subject = 1000
    normalize.schemas = false
    proxy.host = 
    proxy.port = -1
    schema.reflection = false
    schema.registry.basic.auth.user.info = [hidden]
    schema.registry.ssl.cipher.suites = null
    schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    schema.registry.ssl.endpoint.identification.algorithm = https
    schema.registry.ssl.engine.factory.class = null
    schema.registry.ssl.key.password = null
    schema.registry.ssl.keymanager.algorithm = SunX509
    schema.registry.ssl.keystore.certificate.chain = null
    schema.registry.ssl.keystore.key = null
    schema.registry.ssl.keystore.location = null
    schema.registry.ssl.keystore.password = null
    schema.registry.ssl.keystore.type = JKS
    schema.registry.ssl.protocol = TLSv1.3
    schema.registry.ssl.provider = null
    schema.registry.ssl.secure.random.implementation = null
    schema.registry.ssl.trustmanager.algorithm = PKIX
    schema.registry.ssl.truststore.certificates = null
    schema.registry.ssl.truststore.location = null
    schema.registry.ssl.truststore.password = null
    schema.registry.ssl.truststore.type = JKS
    schema.registry.url = [http://10.11.57.201:8081]
    specific.avro.reader = false
    use.latest.version = false
    use.schema.id = -1
    value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy (io.confluent.kafka.serializers.KafkaAvroDeserializerConfig:372)
[2025-09-29 15:24:08,140] INFO [source_cdc_signal_heartbeat|task-0] AvroDataConfig values:
    connect.meta.data = true
    discard.type.doc.default = false
    enhanced.avro.schema.support = false
    schemas.cache.config = 1000
    scrub.invalid.names = false (io.confluent.connect.avro.AvroDataConfig:372)
[2025-09-29 15:24:08,140] INFO [source_cdc_signal_heartbeat|task-0] AvroConverterConfig values:
    auto.register.schemas = true
    basic.auth.credentials.source = URL
    basic.auth.user.info = [hidden]
    bearer.auth.credentials.source = STATIC_TOKEN
    bearer.auth.token = [hidden]
    context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy
    id.compatibility.strict = true
    key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
    latest.compatibility.strict = true
    max.schemas.per.subject = 1000
    normalize.schemas = false
    proxy.host = 
    proxy.port = -1
    schema.reflection = false
    schema.registry.basic.auth.user.info = [hidden]
    schema.registry.ssl.cipher.suites = null
    schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    schema.registry.ssl.endpoint.identification.algorithm = https
    schema.registry.ssl.engine.factory.class = null
    schema.registry.ssl.key.password = null
    schema.registry.ssl.keymanager.algorithm = SunX509
    schema.registry.ssl.keystore.certificate.chain = null
    schema.registry.ssl.keystore.key = null
    schema.registry.ssl.keystore.location = null
    schema.registry.ssl.keystore.password = null
    schema.registry.ssl.keystore.type = JKS
    schema.registry.ssl.protocol = TLSv1.3
    schema.registry.ssl.provider = null
    schema.registry.ssl.secure.random.implementation = null
    schema.registry.ssl.trustmanager.algorithm = PKIX
    schema.registry.ssl.truststore.certificates = null
    schema.registry.ssl.truststore.location = null
    schema.registry.ssl.truststore.password = null
    schema.registry.ssl.truststore.type = JKS
    schema.registry.url = [http://10.11.57.201:8081]
    use.latest.version = false
    use.schema.id = -1
    value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy (io.confluent.connect.avro.AvroConverterConfig:372)
[2025-09-29 15:24:08,141] INFO [source_cdc_signal_heartbeat|task-0] KafkaAvroSerializerConfig values:
    auto.register.schemas = true
    avro.reflection.allow.null = false
    avro.remove.java.properties = false
avro.use.logical.type.converters = false basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://10.11.57.201:8081] use.latest.version = false use.schema.id = -1 value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy (io.confluent.kafka.serializers.KafkaAvroSerializerConfig:372) [2025-09-29 15:24:08,141] INFO [source_cdc_signal_heartbeat|task-0] KafkaAvroDeserializerConfig values: auto.register.schemas = true avro.reflection.allow.null = false avro.use.logical.type.converters = false basic.auth.credentials.source = URL basic.auth.user.info = [hidden] bearer.auth.credentials.source = STATIC_TOKEN bearer.auth.token = [hidden] context.name.strategy = class io.confluent.kafka.serializers.context.NullContextNameStrategy id.compatibility.strict = true key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy latest.compatibility.strict = true max.schemas.per.subject = 1000 normalize.schemas = false proxy.host = proxy.port = -1 schema.reflection = false schema.registry.basic.auth.user.info = [hidden] schema.registry.ssl.cipher.suites = null schema.registry.ssl.enabled.protocols = [TLSv1.2, TLSv1.3] schema.registry.ssl.endpoint.identification.algorithm = https schema.registry.ssl.engine.factory.class = null schema.registry.ssl.key.password = null schema.registry.ssl.keymanager.algorithm = SunX509 schema.registry.ssl.keystore.certificate.chain = null schema.registry.ssl.keystore.key = null schema.registry.ssl.keystore.location = null schema.registry.ssl.keystore.password = null schema.registry.ssl.keystore.type = JKS schema.registry.ssl.protocol = TLSv1.3 schema.registry.ssl.provider = null schema.registry.ssl.secure.random.implementation = null schema.registry.ssl.trustmanager.algorithm = PKIX schema.registry.ssl.truststore.certificates = null schema.registry.ssl.truststore.location = null schema.registry.ssl.truststore.password = null schema.registry.ssl.truststore.type = JKS schema.registry.url = [http://10.11.57.201:8081] specific.avro.reader = false use.latest.version = false use.schema.id = -1 value.subject.name.strategy = class 
io.confluent.kafka.serializers.subject.TopicNameStrategy (io.confluent.kafka.serializers.KafkaAvroDeserializerConfig:372) [2025-09-29 15:24:08,141] INFO [source_cdc_signal_heartbeat|task-0] AvroDataConfig values: connect.meta.data = true discard.type.doc.default = false enhanced.avro.schema.support = false schemas.cache.config = 1000 scrub.invalid.names = false (io.confluent.connect.avro.AvroDataConfig:372) [2025-09-29 15:24:08,141] INFO [source_cdc_signal_heartbeat|task-0] Set up the key converter class io.confluent.connect.avro.AvroConverter for task source_cdc_signal_heartbeat-0 using the connector config (org.apache.kafka.connect.runtime.Worker:679) [2025-09-29 15:24:08,141] INFO [source_cdc_signal_heartbeat|task-0] Set up the value converter class io.confluent.connect.avro.AvroConverter for task source_cdc_signal_heartbeat-0 using the connector config (org.apache.kafka.connect.runtime.Worker:685) [2025-09-29 15:24:08,141] INFO [source_cdc_signal_heartbeat|task-0] Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task source_cdc_signal_heartbeat-0 using the worker config (org.apache.kafka.connect.runtime.Worker:690) [2025-09-29 15:24:08,141] INFO [source_cdc_signal_heartbeat|task-0] Initializing: org.apache.kafka.connect.runtime.TransformationChain{io.debezium.transforms.ExtractNewRecordState} (org.apache.kafka.connect.runtime.Worker:1794) [2025-09-29 15:24:08,142] INFO [source_cdc_signal_heartbeat|task-0] SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = source_cdc_signal_heartbeat offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [unwrap] value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.SourceConnectorConfig:372) [2025-09-29 15:24:08,142] INFO [source_cdc_signal_heartbeat|task-0] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = source_cdc_signal_heartbeat offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [unwrap] transforms.unwrap.add.fields = [] transforms.unwrap.add.fields.prefix = __ transforms.unwrap.add.headers = [] transforms.unwrap.add.headers.prefix = __ transforms.unwrap.delete.tombstone.handling.mode = tombstone transforms.unwrap.drop.fields.from.key = false transforms.unwrap.drop.fields.header.name = null transforms.unwrap.drop.fields.keep.schema.compatible = true transforms.unwrap.negate = false transforms.unwrap.predicate = null transforms.unwrap.route.by.field = transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState value.converter = class io.confluent.connect.avro.AvroConverter 
(org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:372) [2025-09-29 15:24:08,142] INFO [source_cdc_signal_heartbeat|task-0] EnrichedSourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = source_cdc_signal_heartbeat offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.default.exclude = [] topic.creation.default.include = [.*] topic.creation.default.partitions = 1 topic.creation.default.replication.factor = 3 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [unwrap] value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.SourceConnectorConfig$EnrichedSourceConnectorConfig:372) [2025-09-29 15:24:08,142] INFO [source_cdc_signal_heartbeat|task-0] EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = true errors.log.include.messages = true errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = class io.confluent.connect.avro.AvroConverter name = source_cdc_signal_heartbeat offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.default.exclude = [] topic.creation.default.include = [.*] topic.creation.default.partitions = 1 topic.creation.default.replication.factor = 3 topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [unwrap] transforms.unwrap.add.fields = [] transforms.unwrap.add.fields.prefix = __ transforms.unwrap.add.headers = [] transforms.unwrap.add.headers.prefix = __ transforms.unwrap.delete.tombstone.handling.mode = tombstone transforms.unwrap.drop.fields.from.key = false transforms.unwrap.drop.fields.header.name = null transforms.unwrap.drop.fields.keep.schema.compatible = true transforms.unwrap.negate = false transforms.unwrap.predicate = null transforms.unwrap.route.by.field = transforms.unwrap.type = class io.debezium.transforms.ExtractNewRecordState value.converter = class io.confluent.connect.avro.AvroConverter (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:372) [2025-09-29 15:24:08,142] INFO [source_cdc_signal_heartbeat|task-0] ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [10.11.57.201:9092, 10.11.57.202:9092, 10.11.57.203:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = connector-producer-source_cdc_signal_heartbeat-0 compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 9223372036854775807 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = 
[] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig:372) [2025-09-29 15:24:08,143] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269) [2025-09-29 15:24:08,144] INFO [source_cdc_signal_heartbeat|task-0] These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. 
(org.apache.kafka.clients.producer.ProducerConfig:381) [2025-09-29 15:24:08,144] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124) [2025-09-29 15:24:08,144] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125) [2025-09-29 15:24:08,144] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759141448144 (org.apache.kafka.common.utils.AppInfoParser:126) [2025-09-29 15:24:08,145] INFO [source_cdc_signal_heartbeat|task-0] AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.controllers = [] bootstrap.servers = [10.11.57.201:9092, 10.11.57.202:9092, 10.11.57.203:9092] client.dns.lookup = use_all_dns_ips client.id = connector-adminclient-source_cdc_signal_heartbeat-0 connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 enable.metrics.push = true metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS (org.apache.kafka.clients.admin.AdminClientConfig:372) [2025-09-29 15:24:08,145] INFO [source_cdc_signal_heartbeat|task-0] These configurations '[config.storage.topic, metrics.context.connect.group.id, group.id, status.storage.topic, plugin.path, config.storage.replication.factor, offset.flush.interval.ms, key.converter.schemas.enable, 
metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, value.converter.schemas.enable, log.cleanup.policy, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. (org.apache.kafka.clients.admin.AdminClientConfig:381) [2025-09-29 15:24:08,145] INFO [source_cdc_signal_heartbeat|task-0] The mbean of App info: [kafka.admin.client], id: [connector-adminclient-source_cdc_signal_heartbeat-0] already exists, so skipping a new mbean creation. (org.apache.kafka.common.utils.AppInfoParser:65) [2025-09-29 15:24:08,146] INFO [source_cdc_signal_heartbeat|task-0] [Producer clientId=connector-producer-source_cdc_signal_heartbeat-0] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364) [2025-09-29 15:24:08,146] INFO [source_cdc_signal_heartbeat|task-0] Starting InformixConnectorTask with configuration: connector.class = io.debezium.connector.informix.InformixConnector errors.log.include.messages = true topic.creation.default.partitions = 1 value.converter.schema.registry.subject.name.strategy = io.confluent.kafka.serializers.subject.TopicNameStrategy key.converter.schema.registry.subject.name.strategy = io.confluent.kafka.serializers.subject.TopicNameStrategy transforms = unwrap errors.deadletterqueue.context.headers.enable = true heartbeat.action.query = UPDATE cdc_signal_heartbeat SET ts = CURRENT where id = 1 transforms.unwrap.drop.tombstones = false topic.creation.default.replication.factor = 3 errors.deadletterqueue.topic.replication.factor = 3 transforms.unwrap.type = io.debezium.transforms.ExtractNewRecordState errors.log.enable = true key.converter = io.confluent.connect.avro.AvroConverter topic.creation.default.compression.type = lz4 database.user = kafka database.dbname = cards_1952 column.skip.list = cards_1952.mcp.ach_accounts.ivr_ach_act_nick,cards_1952.mcp.alert_executed.alert_msg,cards_1952.mcp.alert_executed.alert_template,cards_1952.mcp.alert_executed.alert_data,cards_1952.mcp.alert_executed.description,cards_1952.mcp.campaign_insts.ivr_message,cards_1952.mcp.campaign_insts.push_message,cards_1952.mcp.maa_sent_msg_log.message,cards_1952.mcp.merchants.merchant_image,cards_1952.mcp.stake_holders.stake_holder_logo,cards_1952.mcp.stake_holders.stake_holder_thumbnail,cards_1952.mcp.file_store_binary.file_data,cards_1952.mcp.push_notify_comm_logs.req_payload,cards_1952.mcp.web_activity_log.request_parameters heartbeat.interval.ms = 1800000 schema.history.internal.kafka.bootstrap.servers = 10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092 value.converter.schema.registry.url = http://10.11.57.201:8081 schema.history.internal.kafka.topic.replication.factor = 3 errors.max.retries = 0 errors.deadletterqueue.topic.name = informix-gpdb-source-errors database.password = ******** name = source_cdc_signal_heartbeat errors.tolerance = none skipped.operations = d pk.mode = kafka snapshot.mode = schema_only max.queue.size = 100000 tasks.max = 1 retriable.restart.connector.wait.ms = 60000 database.connection.retry.interval.ms = 1000 schema.history.internal.store.only.captured.databases.ddl = true schema.history.internal.store.only.captured.tables.ddl = true tombstones.on.delete = true topic.prefix = inst_kafka_net_41 decimal.handling.mode = double schema.history.internal.kafka.topic = cards_1952_schema-history-trans_requests connection.pool.max.size = 50 value.converter = io.confluent.connect.avro.AvroConverter openlineage.integration.enabled = false topic.creation.default.cleanup.policy = 
compact time.precision.mode = connect database.server.name = inst_kafka_net_41 snapshot.isolation.mode = read_committed topic.creation.default.retention.ms = 604800000 database.port = 9260 schema.history.internal.kafka.recovery.poll.interval.ms = 30000 offset.flush.interval.ms = 10000 task.class = io.debezium.connector.informix.InformixConnectorTask database.connection.retries = 5 database.hostname = 10.11.56.182 table.include.list = cards_1952.mcp.cdc_signal_heartbeat key.converter.schema.registry.url = http://10.11.57.201:8081 (io.debezium.connector.common.BaseSourceTask:257) [2025-09-29 15:24:08,147] INFO [source_cdc_signal_heartbeat|task-0] Loading the custom source info struct maker plugin: io.debezium.connector.informix.InformixSourceInfoStructMaker (io.debezium.config.CommonConnectorConfig:1929) [2025-09-29 15:24:08,147] INFO [Worker clientId=connect-10.11.57.201:8083, groupId=connect-cluster-dev] Finished starting connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1988) [2025-09-29 15:24:08,147] INFO 10.11.57.201 - - [29/Sep/2025:10:24:03 +0000] "GET /connectors/source_cdc_signal_heartbeat HTTP/1.1" 200 3354 "-" "ReactorNetty/1.1.10" 5017 (org.apache.kafka.connect.runtime.rest.RestServer:62) [2025-09-29 15:24:08,147] INFO [source_cdc_signal_heartbeat|task-0] Loading the custom topic naming strategy plugin: io.debezium.schema.SchemaTopicNamingStrategy (io.debezium.config.CommonConnectorConfig:1617) [2025-09-29 15:24:08,147] INFO 10.11.57.201 - - [29/Sep/2025:10:24:03 +0000] "GET /connectors/source_cdc_signal_heartbeat/tasks HTTP/1.1" 200 3366 "-" "ReactorNetty/1.1.10" 5016 (org.apache.kafka.connect.runtime.rest.RestServer:62) [2025-09-29 15:24:08,147] INFO 10.11.57.201 - - [29/Sep/2025:10:24:03 +0000] "GET /connectors/source_cdc_signal_heartbeat/config HTTP/1.1" 200 3227 "-" "ReactorNetty/1.1.10" 5015 (org.apache.kafka.connect.runtime.rest.RestServer:62) [2025-09-29 15:24:08,147] INFO 10.11.0.20 - - [29/Sep/2025:10:24:03 +0000] "DELETE /connectors/source_mcp_852_trans_request3s HTTP/1.1" 404 81 "-" "axios/1.9.0" 4789 (org.apache.kafka.connect.runtime.rest.RestServer:62) [2025-09-29 15:24:08,148] INFO [source_cdc_signal_heartbeat|task-0] KafkaSchemaHistory Consumer config: {key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, enable.auto.commit=false, group.id=inst_kafka_net_41-schemahistory, bootstrap.servers=10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092, fetch.min.bytes=1, session.timeout.ms=10000, auto.offset.reset=earliest, client.id=inst_kafka_net_41-schemahistory} (io.debezium.storage.kafka.history.KafkaSchemaHistory:249) [2025-09-29 15:24:08,148] INFO [source_cdc_signal_heartbeat|task-0] KafkaSchemaHistory Producer config: {enable.idempotence=false, value.serializer=org.apache.kafka.common.serialization.StringSerializer, batch.size=32768, bootstrap.servers=10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092, max.in.flight.requests.per.connection=1, buffer.memory=1048576, key.serializer=org.apache.kafka.common.serialization.StringSerializer, client.id=inst_kafka_net_41-schemahistory} (io.debezium.storage.kafka.history.KafkaSchemaHistory:250) [2025-09-29 15:24:08,148] INFO [source_cdc_signal_heartbeat|task-0] Requested thread factory for component InformixConnector, id = inst_kafka_net_41 named = db-history-config-check (io.debezium.util.Threads:273) [2025-09-29 15:24:08,148] WARN [source_cdc_signal_heartbeat|task-0] Unable to 
register metrics as an old set with the same name: 'debezium.informix_server:type=connector-metrics,context=schema-history,server=inst_kafka_net_41' exists, retrying in PT5S (attempt 1 out of 12) (io.debezium.pipeline.JmxUtils:55) [2025-09-29 15:24:08,149] INFO 10.11.57.201 - - [29/Sep/2025:10:24:08 +0000] "GET /connectors/source_cdc_signal_heartbeat/status HTTP/1.1" 200 187 "-" "ReactorNetty/1.1.10" 0 (org.apache.kafka.connect.runtime.rest.RestServer:62) [2025-09-29 15:24:08,149] INFO 10.11.57.201 - - [29/Sep/2025:10:24:08 +0000] "GET /connectors/source_cdc_signal_heartbeat/tasks/0/status HTTP/1.1" 200 58 "-" "ReactorNetty/1.1.10" 0 (org.apache.kafka.connect.runtime.rest.RestServer:62) [2025-09-29 15:24:08,156] INFO 10.11.0.20 - - [29/Sep/2025:10:24:08 +0000] "GET /connectors/sink_gpdb_dev_trans_request3s/status HTTP/1.1" 200 187 "-" "axios/1.9.0" 0 (org.apache.kafka.connect.runtime.rest.RestServer:62) [2025-09-29 15:24:08,166] INFO 10.11.0.20 - - [29/Sep/2025:10:24:08 +0000] "GET /connectors/name/status HTTP/1.1" 200 164 "-" "axios/1.9.0" 0 (org.apache.kafka.connect.runtime.rest.RestServer:62) [2025-09-29 15:24:08,179] INFO 10.11.0.20 - - [29/Sep/2025:10:24:08 +0000] "GET /connectors/Broker_1/status HTTP/1.1" 404 69 "-" "axios/1.9.0" 0 (org.apache.kafka.connect.runtime.rest.RestServer:62) [2025-09-29 15:24:08,259] INFO 10.11.0.20 - - [29/Sep/2025:10:24:08 +0000] "DELETE /connectors/Broker_1 HTTP/1.1" 404 59 "-" "axios/1.9.0" 0 (org.apache.kafka.connect.runtime.rest.RestServer:62) [2025-09-29 15:24:11,500] INFO [source_cdc_signal_heartbeat|task-0] SignalProcessor stopped (io.debezium.pipeline.signal.SignalProcessor:122) [2025-09-29 15:24:11,501] INFO [source_cdc_signal_heartbeat|task-0] Requested thread factory for component JdbcConnection, id = JdbcConnection named = jdbc-connection-close (io.debezium.util.Threads:273) [2025-09-29 15:24:11,501] INFO [source_cdc_signal_heartbeat|task-0] Creating thread debezium-jdbcconnection-JdbcConnection-jdbc-connection-close (io.debezium.util.Threads:290) [2025-09-29 15:24:11,502] INFO [source_cdc_signal_heartbeat|task-0] Connection gracefully closed (io.debezium.jdbc.JdbcConnection:988) [2025-09-29 15:24:11,502] INFO [source_cdc_signal_heartbeat|task-0] Debezium ServiceRegistry stopped. 
(io.debezium.service.DefaultServiceRegistry:105) [2025-09-29 15:24:11,502] INFO [source_cdc_signal_heartbeat|task-0] Requested thread factory for component JdbcConnection, id = JdbcConnection named = jdbc-connection-close (io.debezium.util.Threads:273) [2025-09-29 15:24:11,502] INFO [source_cdc_signal_heartbeat|task-0] Creating thread debezium-jdbcconnection-JdbcConnection-jdbc-connection-close (io.debezium.util.Threads:290) [2025-09-29 15:24:11,503] INFO [source_cdc_signal_heartbeat|task-0] Connection gracefully closed (io.debezium.jdbc.JdbcConnection:988) [2025-09-29 15:24:11,503] INFO [source_cdc_signal_heartbeat|task-0] Requested thread factory for component JdbcConnection, id = JdbcConnection named = jdbc-connection-close (io.debezium.util.Threads:273) [2025-09-29 15:24:11,503] INFO [source_cdc_signal_heartbeat|task-0] Creating thread debezium-jdbcconnection-JdbcConnection-jdbc-connection-close (io.debezium.util.Threads:290) [2025-09-29 15:24:13,149] WARN [source_cdc_signal_heartbeat|task-0] Unable to register metrics as an old set with the same name: 'debezium.informix_server:type=connector-metrics,context=schema-history,server=inst_kafka_net_41' exists, retrying in PT5S (attempt 2 out of 12) (io.debezium.pipeline.JmxUtils:55) [2025-09-29 15:24:13,403] INFO [sink_gpdb_dev_trans_request3s|task-0] Received 0 records (io.confluent.connect.jdbc.sink.JdbcSinkTask:105) [2025-09-29 15:24:15,974] INFO [sink_trans_requests|task-0] Received 0 records (io.confluent.connect.jdbc.sink.JdbcSinkTask:105) [2025-09-29 15:24:15,975] INFO [sink_sumd_card_funds|task-0] Received 0 records (io.confluent.connect.jdbc.sink.JdbcSinkTask:105) [2025-09-29 15:24:16,937] ERROR [source_cdc_signal_heartbeat|task-0] Caught Exception (io.debezium.connector.informix.InformixStreamingChangeEventSource:212) com.informix.stream.impl.IfxStreamException: Unable to end cdc capture at com.informix.stream.cdc.IfxCDCEngine.endCapture(IfxCDCEngine.java:422) at com.informix.stream.cdc.IfxCDCEngine.unwatchTable(IfxCDCEngine.java:402) at com.informix.stream.cdc.IfxCDCEngine.close(IfxCDCEngine.java:470) at io.debezium.connector.informix.InformixCdcTransactionEngine.close(InformixCdcTransactionEngine.java:181) at io.debezium.connector.informix.InformixStreamingChangeEventSource.execute(InformixStreamingChangeEventSource.java:205) at io.debezium.connector.informix.InformixStreamingChangeEventSource.execute(InformixStreamingChangeEventSource.java:37) at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:326) at io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:207) at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:147) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:840) Caused by: java.sql.SQLException: ResultSet not open, operation 'next' not permitted. 
[2025-09-29 15:24:16,937] ERROR [source_cdc_signal_heartbeat|task-0] Producer failure (io.debezium.pipeline.ErrorHandler:52)
com.informix.stream.impl.IfxStreamException: Unable to end cdc capture
    at com.informix.stream.cdc.IfxCDCEngine.endCapture(IfxCDCEngine.java:422)
    at com.informix.stream.cdc.IfxCDCEngine.unwatchTable(IfxCDCEngine.java:402)
    at com.informix.stream.cdc.IfxCDCEngine.close(IfxCDCEngine.java:470)
    at io.debezium.connector.informix.InformixCdcTransactionEngine.close(InformixCdcTransactionEngine.java:181)
    at io.debezium.connector.informix.InformixStreamingChangeEventSource.execute(InformixStreamingChangeEventSource.java:205)
    at io.debezium.connector.informix.InformixStreamingChangeEventSource.execute(InformixStreamingChangeEventSource.java:37)
    at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:326)
    at io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:207)
    at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:147)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: java.sql.SQLException: ResultSet not open, operation 'next' not permitted. Verify that autocommit is OFF
    at com.informix.util.IfxErrMsg.buildExceptionWithMessage(IfxErrMsg.java:424)
    at com.informix.util.IfxErrMsg.buildException(IfxErrMsg.java:399)
    at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:381)
    at com.informix.jdbc.IfxResultSet.getMetaData(IfxResultSet.java:902)
    at com.informix.jdbc.IfxResultSet.executeQuery(IfxResultSet.java:187)
    at com.informix.jdbc.IfxStatement.executeQueryImpl(IfxStatement.java:909)
    at com.informix.jdbc.IfxPreparedStatement.executeQuery(IfxPreparedStatement.java:296)
    at com.informix.jdbc.IfxCallableStatement.executeQuery(IfxCallableStatement.java:226)
    at com.informix.stream.cdc.IfxCDCEngine.endCapture(IfxCDCEngine.java:413)
    ... 13 more
[2025-09-29 15:24:16,937] ERROR [source_cdc_signal_heartbeat|task-0] The maximum number of 0 retries has been attempted (io.debezium.pipeline.ErrorHandler:129)
[2025-09-29 15:24:16,937] INFO [source_cdc_signal_heartbeat|task-0] Finished streaming (io.debezium.pipeline.ChangeEventSourceCoordinator:327)
[2025-09-29 15:24:16,937] INFO [source_cdc_signal_heartbeat|task-0] Connected metrics set to 'false' (io.debezium.pipeline.ChangeEventSourceCoordinator:492)
[2025-09-29 15:24:16,937] INFO [source_cdc_signal_heartbeat|task-0] Connection gracefully closed (io.debezium.jdbc.JdbcConnection:988)
[2025-09-29 15:24:16,938] INFO [source_cdc_signal_heartbeat|task-0] [Producer clientId=inst_kafka_net_41-schemahistory] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1373)
[2025-09-29 15:24:16,939] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-09-29 15:24:16,939] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-09-29 15:24:16,939] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-09-29 15:24:16,939] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-09-29 15:24:16,939] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.producer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-09-29 15:24:16,939] INFO [source_cdc_signal_heartbeat|task-0] [Producer clientId=connector-producer-source_cdc_signal_heartbeat-0] Closing the Kafka producer with timeoutMillis = 30000 ms.
(org.apache.kafka.clients.producer.KafkaProducer:1373) [2025-09-29 15:24:16,939] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684) [2025-09-29 15:24:16,940] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688) [2025-09-29 15:24:16,940] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688) [2025-09-29 15:24:16,940] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694) [2025-09-29 15:24:16,940] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.producer for connector-producer-source_cdc_signal_heartbeat-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:88) [2025-09-29 15:24:16,940] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.admin.client for connector-adminclient-source_cdc_signal_heartbeat-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:88) [2025-09-29 15:24:16,940] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684) [2025-09-29 15:24:16,940] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688) [2025-09-29 15:24:16,941] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694) [2025-09-29 15:24:18,149] INFO [source_cdc_signal_heartbeat|task-0] ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 32768 bootstrap.servers = [10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092] buffer.memory = 1048576 client.dns.lookup = use_all_dns_ips client.id = inst_kafka_net_41-schemahistory compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI 
sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer (org.apache.kafka.clients.producer.ProducerConfig:372) [2025-09-29 15:24:18,149] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269) [2025-09-29 15:24:18,151] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124) [2025-09-29 15:24:18,151] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125) [2025-09-29 15:24:18,151] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759141458151 (org.apache.kafka.common.utils.AppInfoParser:126) [2025-09-29 15:24:18,151] INFO [source_cdc_signal_heartbeat|task-0] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = inst_kafka_net_41-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = inst_kafka_net_41-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 
reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:372) [2025-09-29 15:24:18,151] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269) [2025-09-29 15:24:18,152] INFO [source_cdc_signal_heartbeat|task-0] [Producer clientId=inst_kafka_net_41-schemahistory] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364) [2025-09-29 15:24:18,153] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124) [2025-09-29 15:24:18,153] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125) [2025-09-29 15:24:18,153] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759141458153 (org.apache.kafka.common.utils.AppInfoParser:126) [2025-09-29 15:24:18,155] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364) [2025-09-29 15:24:18,156] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1055) 
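Note: the client churn above, and the near-identical ProducerConfig/ConsumerConfig dumps that repeat through the rest of this excerpt, come from Debezium's KafkaSchemaHistory. On task start it spins up short-lived producer/consumer pairs under client.id inst_kafka_net_41-schemahistory to validate the history topic, then replays it from offset 0 to rebuild the in-memory schema, leaving the consumer group as soon as each step completes; every instantiation logs the full client config. A minimal sketch for inspecting that topic directly, assuming the bootstrap servers and topic name shown in this log; the group.id here is arbitrary and hypothetical:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Minimal sketch: dump the schema history topic this task recovers from.
public class DumpSchemaHistory {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.11.57.201:9092,10.11.57.203:9092,10.11.57.202:9092");
        props.put("group.id", "schema-history-inspect"); // hypothetical, any unused group works
        props.put("auto.offset.reset", "earliest");      // read the topic from the beginning
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("cards_1952_schema-history-trans_requests"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("offset=%d %s%n", r.offset(), r.value()); // JSON DDL history entries
            }
        }
    }
}
```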
[2025-09-29 15:24:18,156] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1102) [2025-09-29 15:24:18,156] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684) [2025-09-29 15:24:18,156] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688) [2025-09-29 15:24:18,156] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688) [2025-09-29 15:24:18,156] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694) [2025-09-29 15:24:18,157] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.consumer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88) [2025-09-29 15:24:18,159] INFO [source_cdc_signal_heartbeat|task-0] Found previous partition offset InformixPartition [sourcePartition={databaseName=inst_kafka_net_41}]: {begin_lsn=766131971338264, commit_lsn=766131971338480, change_lsn=766131971338400} (io.debezium.connector.common.BaseSourceTask:576) [2025-09-29 15:24:18,162] INFO [source_cdc_signal_heartbeat|task-0] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = inst_kafka_net_41-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = inst_kafka_net_41-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 
sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:372) [2025-09-29 15:24:18,162] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269) [2025-09-29 15:24:18,163] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124) [2025-09-29 15:24:18,163] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125) [2025-09-29 15:24:18,163] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759141458163 (org.apache.kafka.common.utils.AppInfoParser:126) [2025-09-29 15:24:18,165] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364) [2025-09-29 15:24:18,166] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1055) [2025-09-29 15:24:18,166] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1102) [2025-09-29 15:24:18,166] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684) [2025-09-29 15:24:18,166] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688) [2025-09-29 15:24:18,166] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter 
(org.apache.kafka.common.metrics.Metrics:688) [2025-09-29 15:24:18,166] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694) [2025-09-29 15:24:18,167] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.consumer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88) [2025-09-29 15:24:18,167] INFO [source_cdc_signal_heartbeat|task-0] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = inst_kafka_net_41-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = inst_kafka_net_41-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null 
ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:372) [2025-09-29 15:24:18,168] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269) [2025-09-29 15:24:18,169] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124) [2025-09-29 15:24:18,169] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125) [2025-09-29 15:24:18,169] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759141458169 (org.apache.kafka.common.utils.AppInfoParser:126) [2025-09-29 15:24:18,169] INFO [source_cdc_signal_heartbeat|task-0] Creating thread debezium-informixconnector-inst_kafka_net_41-db-history-config-check (io.debezium.util.Threads:290) [2025-09-29 15:24:18,170] INFO [source_cdc_signal_heartbeat|task-0] AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.controllers = [] bootstrap.servers = [10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092] client.dns.lookup = use_all_dns_ips client.id = inst_kafka_net_41-schemahistory-topic-check connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 enable.metrics.push = true metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = 
null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS (org.apache.kafka.clients.admin.AdminClientConfig:372) [2025-09-29 15:24:18,170] INFO [source_cdc_signal_heartbeat|task-0] These configurations '[enable.idempotence, value.serializer, batch.size, max.in.flight.requests.per.connection, buffer.memory, key.serializer]' were supplied but are not used yet. (org.apache.kafka.clients.admin.AdminClientConfig:381) [2025-09-29 15:24:18,170] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124) [2025-09-29 15:24:18,170] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125) [2025-09-29 15:24:18,170] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759141458170 (org.apache.kafka.common.utils.AppInfoParser:126) [2025-09-29 15:24:18,171] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364) [2025-09-29 15:24:18,173] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1055) [2025-09-29 15:24:18,173] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1102) [2025-09-29 15:24:18,173] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684) [2025-09-29 15:24:18,173] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688) [2025-09-29 15:24:18,173] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688) [2025-09-29 15:24:18,173] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694) [2025-09-29 15:24:18,174] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.consumer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88) [2025-09-29 15:24:18,175] INFO [source_cdc_signal_heartbeat|task-0] Database schema history topic 'cards_1952_schema-history-trans_requests' has correct settings (io.debezium.storage.kafka.history.KafkaSchemaHistory:492) [2025-09-29 15:24:18,175] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.admin.client for inst_kafka_net_41-schemahistory-topic-check unregistered (org.apache.kafka.common.utils.AppInfoParser:88) [2025-09-29 15:24:18,175] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684) [2025-09-29 15:24:18,175] INFO 
[source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-09-29 15:24:18,175] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-09-29 15:24:18,194] INFO [source_cdc_signal_heartbeat|task-0] Requested thread factory for component InformixConnector, id = inst_kafka_net_41 named = SignalProcessor (io.debezium.util.Threads:273)
[2025-09-29 15:24:18,195] INFO [source_cdc_signal_heartbeat|task-0] Requested thread factory for component InformixConnector, id = inst_kafka_net_41 named = change-event-source-coordinator (io.debezium.util.Threads:273)
[2025-09-29 15:24:18,195] INFO [source_cdc_signal_heartbeat|task-0] Requested thread factory for component InformixConnector, id = inst_kafka_net_41 named = blocking-snapshot (io.debezium.util.Threads:273)
[2025-09-29 15:24:18,195] INFO [source_cdc_signal_heartbeat|task-0] Creating thread debezium-informixconnector-inst_kafka_net_41-change-event-source-coordinator (io.debezium.util.Threads:290)
[2025-09-29 15:24:18,195] ERROR [source_cdc_signal_heartbeat|task-0] WorkerSourceTask{id=source_cdc_signal_heartbeat-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:233)
java.lang.IllegalStateException: DebeziumOpenLineageEmitter not initialized for connector ConnectorContext[connectorLogicalName=inst_kafka_net_41, connectorName=informix, taskId=0, version=3.2.3.Final, config={connector.class=io.debezium.connector.informix.InformixConnector, errors.log.include.messages=true, topic.creation.default.partitions=1, value.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, key.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, transforms=unwrap, errors.deadletterqueue.context.headers.enable=true, heartbeat.action.query=UPDATE cdc_signal_heartbeat SET ts = CURRENT where id = 1, transforms.unwrap.drop.tombstones=false, topic.creation.default.replication.factor=3, errors.deadletterqueue.topic.replication.factor=3, transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState, errors.log.enable=true, key.converter=io.confluent.connect.avro.AvroConverter, topic.creation.default.compression.type=lz4, database.user=kafka, database.dbname=cards_1952, column.skip.list=cards_1952.mcp.ach_accounts.ivr_ach_act_nick,cards_1952.mcp.alert_executed.alert_msg,cards_1952.mcp.alert_executed.alert_template,cards_1952.mcp.alert_executed.alert_data,cards_1952.mcp.alert_executed.description,cards_1952.mcp.campaign_insts.ivr_message,cards_1952.mcp.campaign_insts.push_message,cards_1952.mcp.maa_sent_msg_log.message,cards_1952.mcp.merchants.merchant_image,cards_1952.mcp.stake_holders.stake_holder_logo,cards_1952.mcp.stake_holders.stake_holder_thumbnail,cards_1952.mcp.file_store_binary.file_data,cards_1952.mcp.push_notify_comm_logs.req_payload,cards_1952.mcp.web_activity_log.request_parameters, heartbeat.interval.ms=1800000, schema.history.internal.kafka.bootstrap.servers=10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092, value.converter.schema.registry.url=http://10.11.57.201:8081, schema.history.internal.kafka.topic.replication.factor=3, errors.max.retries=0, errors.deadletterqueue.topic.name=informix-gpdb-source-errors, database.password=Lahore@556677, name=source_cdc_signal_heartbeat, errors.tolerance=none, skipped.operations=d, pk.mode=kafka, snapshot.mode=schema_only, max.queue.size=100000, tasks.max=1, retriable.restart.connector.wait.ms=60000, database.connection.retry.interval.ms=1000, schema.history.internal.store.only.captured.databases.ddl=true, schema.history.internal.store.only.captured.tables.ddl=true, tombstones.on.delete=true, topic.prefix=inst_kafka_net_41, decimal.handling.mode=double, schema.history.internal.kafka.topic=cards_1952_schema-history-trans_requests, connection.pool.max.size=50, value.converter=io.confluent.connect.avro.AvroConverter, openlineage.integration.enabled=false, topic.creation.default.cleanup.policy=compact, time.precision.mode=connect, database.server.name=inst_kafka_net_41, snapshot.isolation.mode=read_committed, topic.creation.default.retention.ms=604800000, database.port=9260, schema.history.internal.kafka.recovery.poll.interval.ms=30000, offset.flush.interval.ms=10000, task.class=io.debezium.connector.informix.InformixConnectorTask, database.connection.retries=5, database.hostname=10.11.56.182, table.include.list=cards_1952.mcp.cdc_signal_heartbeat, key.converter.schema.registry.url=http://10.11.57.201:8081}]. Call init() first.
    at io.debezium.openlineage.DebeziumOpenLineageEmitter.getEmitter(DebeziumOpenLineageEmitter.java:158)
    at io.debezium.openlineage.DebeziumOpenLineageEmitter.emit(DebeziumOpenLineageEmitter.java:108)
    at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:263)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.initializeAndStart(AbstractWorkerSourceTask.java:278)
    at org.apache.kafka.connect.runtime.WorkerTask.doStart(WorkerTask.java:175)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:224)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:280)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:78)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:237)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:840)
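Note: this uncaught exception is what actually kills task-0 on the restart. BaseSourceTask.start calls DebeziumOpenLineageEmitter.emit before init() has run, even though openlineage.integration.enabled=false in the task config, which points at a task lifecycle issue in this 3.2.3.Final build rather than at the connector configuration. As the WorkerTask message says, the task stays FAILED until restarted manually. A minimal sketch of that restart against the standard Connect REST API, assuming the worker address shown in this log:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch: restart the killed task via the Connect REST API.
public class RestartTask {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://10.11.57.201:8083/connectors/source_cdc_signal_heartbeat/tasks/0/restart"))
                .POST(HttpRequest.BodyPublishers.noBody()) // restart takes no body
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // 204 on success
    }
}
```

If the emitter bug is deterministic on this code path, the restarted task will fail the same way, which is exactly what the tail of this log shows.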
[2025-09-29 15:24:18,196] INFO [source_cdc_signal_heartbeat|task-0] Metrics registered (io.debezium.pipeline.ChangeEventSourceCoordinator:137)
[2025-09-29 15:24:18,196] INFO [source_cdc_signal_heartbeat|task-0] Context created (io.debezium.pipeline.ChangeEventSourceCoordinator:140)
[2025-09-29 15:24:18,196] INFO [source_cdc_signal_heartbeat|task-0] Stopping down connector (io.debezium.connector.common.BaseSourceTask:476)
[2025-09-29 15:24:18,196] INFO [source_cdc_signal_heartbeat|task-0] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = inst_kafka_net_41-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = inst_kafka_net_41-schemahistory group.instance.id = null group.protocol = classic
group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:372) [2025-09-29 15:24:18,196] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269) [2025-09-29 15:24:18,197] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124) [2025-09-29 15:24:18,197] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125) [2025-09-29 15:24:18,197] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759141458197 
(org.apache.kafka.common.utils.AppInfoParser:126) [2025-09-29 15:24:18,200] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364) [2025-09-29 15:24:18,200] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1055) [2025-09-29 15:24:18,201] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1102) [2025-09-29 15:24:18,201] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684) [2025-09-29 15:24:18,201] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688) [2025-09-29 15:24:18,201] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688) [2025-09-29 15:24:18,201] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694) [2025-09-29 15:24:18,202] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.consumer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88) [2025-09-29 15:24:18,202] INFO [source_cdc_signal_heartbeat|task-0] ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = inst_kafka_net_41-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = inst_kafka_net_41-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null 
sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:372) [2025-09-29 15:24:18,202] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269) [2025-09-29 15:24:18,203] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124) [2025-09-29 15:24:18,203] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125) [2025-09-29 15:24:18,203] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759141458203 (org.apache.kafka.common.utils.AppInfoParser:126) [2025-09-29 15:24:18,206] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364) [2025-09-29 15:24:18,207] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1055) [2025-09-29 15:24:18,207] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1102) [2025-09-29 15:24:18,207] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684) [2025-09-29 15:24:18,207] INFO 
[source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-09-29 15:24:18,207] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-09-29 15:24:18,207] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-09-29 15:24:18,208] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.consumer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-09-29 15:24:18,208] INFO [source_cdc_signal_heartbeat|task-0] Started database schema history recovery (io.debezium.relational.history.SchemaHistoryMetrics:115)
[2025-09-29 15:24:18,208] INFO [source_cdc_signal_heartbeat|task-0] ConsumerConfig values:
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.include.jmx.reporter = true
    auto.offset.reset = earliest
    bootstrap.servers = [10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092]
    check.crcs = true
    client.dns.lookup = use_all_dns_ips
    client.id = inst_kafka_net_41-schemahistory
    client.rack =
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    enable.metrics.push = true
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = inst_kafka_net_41-schemahistory
    group.instance.id = null
    group.protocol = classic
    group.remote.assignor = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    internal.throw.on.fetch.stable.offset.unsupported = false
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metadata.recovery.strategy = none
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.max.ms = 1000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism = GSSAPI
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer (org.apache.kafka.clients.consumer.ConsumerConfig:372)
[2025-09-29 15:24:18,209] INFO [source_cdc_signal_heartbeat|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:269)
[2025-09-29 15:24:18,210] INFO [source_cdc_signal_heartbeat|task-0] Kafka version: 7.8.2-ccs (org.apache.kafka.common.utils.AppInfoParser:124)
[2025-09-29 15:24:18,210] INFO [source_cdc_signal_heartbeat|task-0] Kafka commitId: 753ac432ef38a79b7f27781cd77b656d5ffc2e8e (org.apache.kafka.common.utils.AppInfoParser:125)
[2025-09-29 15:24:18,210] INFO [source_cdc_signal_heartbeat|task-0] Kafka startTimeMs: 1759141458210 (org.apache.kafka.common.utils.AppInfoParser:126)
[2025-09-29 15:24:18,210] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Subscribed to topic(s): cards_1952_schema-history-trans_requests (org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer:476)
[2025-09-29 15:24:18,212] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Cluster ID: xrvxrofITEuXCboOsCdMfg (org.apache.kafka.clients.Metadata:364)
[2025-09-29 15:24:18,214] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Discovered group coordinator 10.11.57.203:9092 (id: 2147483644 rack: null) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:936)
[2025-09-29 15:24:18,215] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] (Re-)joining group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:604)
[2025-09-29 15:24:18,216] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Request joining group due to: need to re-join with the given member-id: inst_kafka_net_41-schemahistory-f0cb1290-3614-4bc0-b90d-23364fd9a00e (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1102)
[2025-09-29 15:24:18,217] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] (Re-)joining group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:604)
[2025-09-29 15:24:18,218] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Successfully joined group with generation Generation{generationId=1, memberId='inst_kafka_net_41-schemahistory-f0cb1290-3614-4bc0-b90d-23364fd9a00e', protocol='range'} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:665)
[2025-09-29 15:24:18,218] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Finished assignment for group at generation 1: {inst_kafka_net_41-schemahistory-f0cb1290-3614-4bc0-b90d-23364fd9a00e=Assignment(partitions=[cards_1952_schema-history-trans_requests-0])} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:663)
[2025-09-29 15:24:18,221] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Successfully synced group in generation Generation{generationId=1, memberId='inst_kafka_net_41-schemahistory-f0cb1290-3614-4bc0-b90d-23364fd9a00e', protocol='range'} (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:842)
[2025-09-29 15:24:18,221] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Notifying assignor about the new Assignment(partitions=[cards_1952_schema-history-trans_requests-0]) (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:323)
[2025-09-29 15:24:18,221] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Adding newly assigned partitions: cards_1952_schema-history-trans_requests-0 (org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker:57)
[2025-09-29 15:24:18,221] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Found no committed offset for partition cards_1952_schema-history-trans_requests-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1506)
[2025-09-29 15:24:18,222] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Resetting offset for partition cards_1952_schema-history-trans_requests-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.11.57.201:9092 (id: 1 rack: null)], epoch=2}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState:398)
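The consumer above is Debezium's internal schema-history reader: it subscribes to cards_1952_schema-history-trans_requests in its own single-member group, finds no committed offset, and replays the topic from offset 0 to rebuild the captured tables' schema before streaming starts. If the recovery counts reported below look suspect, the topic contents can be inspected directly with the stock console consumer (a sketch; assumes the Kafka CLI tools are available on a host that can reach the brokers in bootstrap.servers):

    kafka-console-consumer --bootstrap-server 10.11.57.201:9092 \
        --topic cards_1952_schema-history-trans_requests \
        --from-beginning --max-messages 10

Each record is a JSON document describing one DDL change, so the output is readable as-is.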
[2025-09-29 15:24:18,230] INFO [source_cdc_signal_heartbeat|task-0] Database schema history recovery in progress, recovered 1 records (io.debezium.relational.history.SchemaHistoryMetrics:130)
[2025-09-29 15:24:18,230] INFO [source_cdc_signal_heartbeat|task-0] Already applied 1 database changes (io.debezium.relational.history.SchemaHistoryMetrics:140)
[2025-09-29 15:24:18,839] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Revoke previously assigned partitions cards_1952_schema-history-trans_requests-0 (org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker:79)
[2025-09-29 15:24:18,839] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Member inst_kafka_net_41-schemahistory-f0cb1290-3614-4bc0-b90d-23364fd9a00e sending LeaveGroup request to coordinator 10.11.57.203:9092 (id: 2147483644 rack: null) due to the consumer is being closed (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1173)
[2025-09-29 15:24:18,839] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1055)
[2025-09-29 15:24:18,839] INFO [source_cdc_signal_heartbeat|task-0] [Consumer clientId=inst_kafka_net_41-schemahistory, groupId=inst_kafka_net_41-schemahistory] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:1102)
[2025-09-29 15:24:18,841] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-09-29 15:24:18,841] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-09-29 15:24:18,841] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-09-29 15:24:18,841] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-09-29 15:24:18,842] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.consumer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-09-29 15:24:18,842] INFO [source_cdc_signal_heartbeat|task-0] Finished database schema history recovery of 134 change(s) in 634 ms (io.debezium.relational.history.SchemaHistoryMetrics:121)
[2025-09-29 15:24:18,843] INFO [source_cdc_signal_heartbeat|task-0] Parsing default value for column 'ts' with expression 'current' (io.debezium.connector.informix.InformixDefaultValueConverter:50)
[2025-09-29 15:24:18,843] ERROR [source_cdc_signal_heartbeat|task-0] Producer failure (io.debezium.pipeline.ErrorHandler:52)
java.lang.IllegalStateException: DebeziumOpenLineageEmitter not initialized for connector ConnectorContext[connectorLogicalName=inst_kafka_net_41, connectorName=informix, taskId=0, version=3.2.3.Final, config={connector.class=io.debezium.connector.informix.InformixConnector, errors.log.include.messages=true, topic.creation.default.partitions=1, value.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, key.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, transforms=unwrap, errors.deadletterqueue.context.headers.enable=true, heartbeat.action.query=UPDATE cdc_signal_heartbeat SET ts = CURRENT where id = 1, transforms.unwrap.drop.tombstones=false, topic.creation.default.replication.factor=3, errors.deadletterqueue.topic.replication.factor=3, transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState, errors.log.enable=true, key.converter=io.confluent.connect.avro.AvroConverter, topic.creation.default.compression.type=lz4, database.user=kafka, database.dbname=cards_1952, column.skip.list=cards_1952.mcp.ach_accounts.ivr_ach_act_nick,cards_1952.mcp.alert_executed.alert_msg,cards_1952.mcp.alert_executed.alert_template,cards_1952.mcp.alert_executed.alert_data,cards_1952.mcp.alert_executed.description,cards_1952.mcp.campaign_insts.ivr_message,cards_1952.mcp.campaign_insts.push_message,cards_1952.mcp.maa_sent_msg_log.message,cards_1952.mcp.merchants.merchant_image,cards_1952.mcp.stake_holders.stake_holder_logo,cards_1952.mcp.stake_holders.stake_holder_thumbnail,cards_1952.mcp.file_store_binary.file_data,cards_1952.mcp.push_notify_comm_logs.req_payload,cards_1952.mcp.web_activity_log.request_parameters, heartbeat.interval.ms=1800000, schema.history.internal.kafka.bootstrap.servers=10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092, value.converter.schema.registry.url=http://10.11.57.201:8081, schema.history.internal.kafka.topic.replication.factor=3, errors.max.retries=0, errors.deadletterqueue.topic.name=informix-gpdb-source-errors, database.password=Lahore@556677, name=source_cdc_signal_heartbeat, errors.tolerance=none, skipped.operations=d, pk.mode=kafka, snapshot.mode=schema_only, max.queue.size=100000, tasks.max=1, retriable.restart.connector.wait.ms=60000, database.connection.retry.interval.ms=1000, schema.history.internal.store.only.captured.databases.ddl=true, schema.history.internal.store.only.captured.tables.ddl=true, tombstones.on.delete=true, topic.prefix=inst_kafka_net_41, decimal.handling.mode=double, schema.history.internal.kafka.topic=cards_1952_schema-history-trans_requests, connection.pool.max.size=50, value.converter=io.confluent.connect.avro.AvroConverter, openlineage.integration.enabled=false, topic.creation.default.cleanup.policy=compact, time.precision.mode=connect, database.server.name=inst_kafka_net_41, snapshot.isolation.mode=read_committed, topic.creation.default.retention.ms=604800000, database.port=9260, schema.history.internal.kafka.recovery.poll.interval.ms=30000, offset.flush.interval.ms=10000, task.class=io.debezium.connector.informix.InformixConnectorTask, database.connection.retries=5, database.hostname=10.11.56.182, table.include.list=cards_1952.mcp.cdc_signal_heartbeat, key.converter.schema.registry.url=http://10.11.57.201:8081}]. Call init() first.
    at io.debezium.openlineage.DebeziumOpenLineageEmitter.getEmitter(DebeziumOpenLineageEmitter.java:158)
    at io.debezium.openlineage.DebeziumOpenLineageEmitter.emit(DebeziumOpenLineageEmitter.java:136)
    at io.debezium.relational.RelationalDatabaseSchema.buildAndRegisterSchema(RelationalDatabaseSchema.java:132)
    at io.debezium.relational.HistorizedRelationalDatabaseSchema.recover(HistorizedRelationalDatabaseSchema.java:68)
    at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:143)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:840)
[2025-09-29 15:24:18,843] INFO [source_cdc_signal_heartbeat|task-0] Connected metrics set to 'false' (io.debezium.pipeline.ChangeEventSourceCoordinator:492)
[2025-09-29 15:24:18,843] INFO [source_cdc_signal_heartbeat|task-0] Creating thread debezium-informixconnector-inst_kafka_net_41-SignalProcessor (io.debezium.util.Threads:290)
[2025-09-29 15:24:18,844] INFO [source_cdc_signal_heartbeat|task-0] SignalProcessor stopped (io.debezium.pipeline.signal.SignalProcessor:122)
[2025-09-29 15:24:18,844] INFO [source_cdc_signal_heartbeat|task-0] Debezium ServiceRegistry stopped. (io.debezium.service.DefaultServiceRegistry:105)
[2025-09-29 15:24:18,844] INFO [source_cdc_signal_heartbeat|task-0] Requested thread factory for component JdbcConnection, id = JdbcConnection named = jdbc-connection-close (io.debezium.util.Threads:273)
[2025-09-29 15:24:18,844] INFO [source_cdc_signal_heartbeat|task-0] Creating thread debezium-jdbcconnection-JdbcConnection-jdbc-connection-close (io.debezium.util.Threads:290)
[2025-09-29 15:24:18,845] INFO [source_cdc_signal_heartbeat|task-0] Connection gracefully closed (io.debezium.jdbc.JdbcConnection:988)
[2025-09-29 15:24:18,845] INFO [source_cdc_signal_heartbeat|task-0] [Producer clientId=inst_kafka_net_41-schemahistory] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1373)
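The ERROR record above is the root failure; everything that follows is orderly teardown. The IllegalStateException is thrown by DebeziumOpenLineageEmitter.getEmitter() while schema recovery is registering the recovered table (buildAndRegisterSchema), and it fires even though the config carries openlineage.integration.enabled=false: the emit() call is reached before init() has run for this task context, exactly as the "Call init() first." message says. That points at a connector-lifecycle problem inside Debezium 3.2.3.Final rather than anything wrong with the Informix database or the Connect worker, so a reasonable next step is to confirm which connector build the worker is actually running and whether a newer build behaves differently. The installed plugin versions can be listed from the worker REST API seen earlier in this log (host and port taken from the log; python3 -m json.tool is only used for pretty-printing):

    curl -s http://10.11.57.201:8083/connector-plugins | python3 -m json.tool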
[2025-09-29 15:24:18,846] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-09-29 15:24:18,846] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-09-29 15:24:18,846] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-09-29 15:24:18,847] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-09-29 15:24:18,847] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.producer for inst_kafka_net_41-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-09-29 15:24:18,847] WARN [source_cdc_signal_heartbeat|task-0] Failed to close source task with type org.apache.kafka.connect.runtime.AbstractWorkerSourceTask$$Lambda$1870/0x00007f9d40e0ec50 (org.apache.kafka.common.utils.Utils:1119)
java.lang.IllegalStateException: DebeziumOpenLineageEmitter not initialized for connector ConnectorContext[connectorLogicalName=inst_kafka_net_41, connectorName=informix, taskId=0, version=3.2.3.Final, config={connector.class=io.debezium.connector.informix.InformixConnector, errors.log.include.messages=true, topic.creation.default.partitions=1, value.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, key.converter.schema.registry.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy, transforms=unwrap, errors.deadletterqueue.context.headers.enable=true, heartbeat.action.query=UPDATE cdc_signal_heartbeat SET ts = CURRENT where id = 1, transforms.unwrap.drop.tombstones=false, topic.creation.default.replication.factor=3, errors.deadletterqueue.topic.replication.factor=3, transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState, errors.log.enable=true, key.converter=io.confluent.connect.avro.AvroConverter, topic.creation.default.compression.type=lz4, database.user=kafka, database.dbname=cards_1952, column.skip.list=cards_1952.mcp.ach_accounts.ivr_ach_act_nick,cards_1952.mcp.alert_executed.alert_msg,cards_1952.mcp.alert_executed.alert_template,cards_1952.mcp.alert_executed.alert_data,cards_1952.mcp.alert_executed.description,cards_1952.mcp.campaign_insts.ivr_message,cards_1952.mcp.campaign_insts.push_message,cards_1952.mcp.maa_sent_msg_log.message,cards_1952.mcp.merchants.merchant_image,cards_1952.mcp.stake_holders.stake_holder_logo,cards_1952.mcp.stake_holders.stake_holder_thumbnail,cards_1952.mcp.file_store_binary.file_data,cards_1952.mcp.push_notify_comm_logs.req_payload,cards_1952.mcp.web_activity_log.request_parameters, heartbeat.interval.ms=1800000, schema.history.internal.kafka.bootstrap.servers=10.11.57.201:9092, 10.11.57.203:9092, 10.11.57.202:9092, value.converter.schema.registry.url=http://10.11.57.201:8081, schema.history.internal.kafka.topic.replication.factor=3, errors.max.retries=0, errors.deadletterqueue.topic.name=informix-gpdb-source-errors, database.password=Lahore@556677, name=source_cdc_signal_heartbeat, errors.tolerance=none, skipped.operations=d, pk.mode=kafka, snapshot.mode=schema_only, max.queue.size=100000, tasks.max=1, retriable.restart.connector.wait.ms=60000, database.connection.retry.interval.ms=1000, schema.history.internal.store.only.captured.databases.ddl=true, schema.history.internal.store.only.captured.tables.ddl=true, tombstones.on.delete=true, topic.prefix=inst_kafka_net_41, decimal.handling.mode=double, schema.history.internal.kafka.topic=cards_1952_schema-history-trans_requests, connection.pool.max.size=50, value.converter=io.confluent.connect.avro.AvroConverter, openlineage.integration.enabled=false, topic.creation.default.cleanup.policy=compact, time.precision.mode=connect, database.server.name=inst_kafka_net_41, snapshot.isolation.mode=read_committed, topic.creation.default.retention.ms=604800000, database.port=9260, schema.history.internal.kafka.recovery.poll.interval.ms=30000, offset.flush.interval.ms=10000, task.class=io.debezium.connector.informix.InformixConnectorTask, database.connection.retries=5, database.hostname=10.11.56.182, table.include.list=cards_1952.mcp.cdc_signal_heartbeat, key.converter.schema.registry.url=http://10.11.57.201:8081}]. Call init() first.
    at io.debezium.openlineage.DebeziumOpenLineageEmitter.getEmitter(DebeziumOpenLineageEmitter.java:158)
    at io.debezium.openlineage.DebeziumOpenLineageEmitter.emit(DebeziumOpenLineageEmitter.java:108)
    at io.debezium.connector.common.BaseSourceTask.stop(BaseSourceTask.java:501)
    at io.debezium.connector.common.BaseSourceTask.stop(BaseSourceTask.java:464)
    at org.apache.kafka.common.utils.Utils.closeQuietly(Utils.java:1117)
    at org.apache.kafka.common.utils.Utils.closeQuietly(Utils.java:1100)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.close(AbstractWorkerSourceTask.java:312)
    at org.apache.kafka.connect.runtime.WorkerTask.doClose(WorkerTask.java:202)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:237)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:280)
    at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:78)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:237)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:840)
[2025-09-29 15:24:18,847] INFO [source_cdc_signal_heartbeat|task-0] [Producer clientId=connector-producer-source_cdc_signal_heartbeat-0] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1373)
[2025-09-29 15:24:18,848] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-09-29 15:24:18,848] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-09-29 15:24:18,848] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-09-29 15:24:18,848] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
[2025-09-29 15:24:18,849] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.producer for connector-producer-source_cdc_signal_heartbeat-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-09-29 15:24:18,849] INFO [source_cdc_signal_heartbeat|task-0] App info kafka.admin.client for connector-adminclient-source_cdc_signal_heartbeat-0 unregistered (org.apache.kafka.common.utils.AppInfoParser:88)
[2025-09-29 15:24:18,849] INFO [source_cdc_signal_heartbeat|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:684)
[2025-09-29 15:24:18,849] INFO [source_cdc_signal_heartbeat|task-0] Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics:688)
[2025-09-29 15:24:18,849] INFO [source_cdc_signal_heartbeat|task-0] Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:694)
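At this point task-0 is dead: the worker has torn down the task's schema-history clients, its main producer (connector-producer-source_cdc_signal_heartbeat-0), and its admin client, and the WARN above shows the same uninitialized-emitter exception also aborting the task's own stop() hook. Once the underlying cause is addressed, the task state can be confirmed and the task restarted through the standard Connect REST endpoints (connector name and worker address taken from this log):

    curl -s http://10.11.57.201:8083/connectors/source_cdc_signal_heartbeat/status
    curl -s -X POST http://10.11.57.201:8083/connectors/source_cdc_signal_heartbeat/tasks/0/restart

Until the restart succeeds, the status call should report task 0 as FAILED with the IllegalStateException trace shown above.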