Using BOOTSTRAP_SERVERS=kafka:9092
Plugins are loaded from /kafka/connect
Using the following environment variables:
  GROUP_ID=1
  CONFIG_STORAGE_TOPIC=my_connect_configs
  OFFSET_STORAGE_TOPIC=my_connect_offsets
  STATUS_STORAGE_TOPIC=my_connect_statuses
  BOOTSTRAP_SERVERS=kafka:9092
  REST_HOST_NAME=172.18.0.5
  REST_PORT=8083
  ADVERTISED_HOST_NAME=172.18.0.5
  ADVERTISED_PORT=8083
  KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
  VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
  OFFSET_FLUSH_INTERVAL_MS=60000
  OFFSET_FLUSH_TIMEOUT_MS=5000
  SHUTDOWN_TIMEOUT=10000
--- Setting property from CONNECT_REST_ADVERTISED_PORT: rest.advertised.port=8083
--- Setting property from CONNECT_OFFSET_STORAGE_TOPIC: offset.storage.topic=my_connect_offsets
--- Setting property from CONNECT_KEY_CONVERTER: key.converter=org.apache.kafka.connect.json.JsonConverter
--- Setting property from CONNECT_CONFIG_STORAGE_TOPIC: config.storage.topic=my_connect_configs
--- Setting property from CONNECT_GROUP_ID: group.id=1
--- Setting property from CONNECT_REST_ADVERTISED_HOST_NAME: rest.advertised.host.name=172.18.0.5
--- Setting property from CONNECT_REST_HOST_NAME: rest.host.name=172.18.0.5
--- Setting property from CONNECT_VALUE_CONVERTER: value.converter=org.apache.kafka.connect.json.JsonConverter
--- Setting property from CONNECT_REST_PORT: rest.port=8083
--- Setting property from CONNECT_STATUS_STORAGE_TOPIC: status.storage.topic=my_connect_statuses
--- Setting property from CONNECT_OFFSET_FLUSH_TIMEOUT_MS: offset.flush.timeout.ms=5000
--- Setting property from CONNECT_PLUGIN_PATH: plugin.path=/kafka/connect
--- Setting property from CONNECT_OFFSET_FLUSH_INTERVAL_MS: offset.flush.interval.ms=60000
--- Setting property from CONNECT_BOOTSTRAP_SERVERS: bootstrap.servers=kafka:9092
--- Setting property from CONNECT_TASK_SHUTDOWN_GRACEFUL_TIMEOUT_MS: task.shutdown.graceful.timeout.ms=10000
2025-06-26 07:43:38,231 INFO || Kafka Connect worker initializing ...
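The "Setting property from CONNECT_*" lines above show the container entrypoint translating environment variables into worker properties: each `CONNECT_FOO_BAR` variable becomes a `foo.bar` entry in the worker config. A minimal sketch of launching such a worker with `docker run`, assuming the `debezium/connect` image and a `kafka` container on a shared network (image tag and network name are illustrative, not taken from this log):

```shell
# Hypothetical invocation; the env-var names mirror the properties
# this log reports (group.id, *.storage.topic, bootstrap.servers, ...).
docker run -d --name connect --net kafka-net -p 8083:8083 \
  -e GROUP_ID=1 \
  -e BOOTSTRAP_SERVERS=kafka:9092 \
  -e CONFIG_STORAGE_TOPIC=my_connect_configs \
  -e OFFSET_STORAGE_TOPIC=my_connect_offsets \
  -e STATUS_STORAGE_TOPIC=my_connect_statuses \
  -e KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter \
  -e VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter \
  debezium/connect:latest
```

Any worker property without a dedicated shortcut variable can be set by prefixing it with `CONNECT_` and upper-casing it, as the `CONNECT_TASK_SHUTDOWN_GRACEFUL_TIMEOUT_MS` line above illustrates.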
[org.apache.kafka.connect.cli.AbstractConnectCli] 2025-06-26 07:43:38,233 INFO || WorkerInfo values: jvm.args = -Xms256M, -Xmx2G, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -XX:MaxInlineLevel=15, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote=true, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=/kafka/logs, -Dlog4j.configuration=file:/kafka/config/log4j.properties jvm.spec = Red Hat, Inc., OpenJDK 64-Bit Server VM, 21.0.7, 21.0.7+6 jvm.classpath = /kafka/libs/activation-1.1.1.jar:/kafka/libs/aopalliance-repackaged-2.6.1.jar:/kafka/libs/argparse4j-0.7.0.jar:/kafka/libs/audience-annotations-0.12.0.jar:/kafka/libs/caffeine-2.9.3.jar:/kafka/libs/commons-beanutils-1.9.4.jar:/kafka/libs/commons-cli-1.4.jar:/kafka/libs/commons-collections-3.2.2.jar:/kafka/libs/commons-digester-2.1.jar:/kafka/libs/commons-io-2.14.0.jar:/kafka/libs/commons-lang3-3.12.0.jar:/kafka/libs/commons-logging-1.2.jar:/kafka/libs/commons-validator-1.7.jar:/kafka/libs/connect-api-3.9.0.jar:/kafka/libs/connect-basic-auth-extension-3.9.0.jar:/kafka/libs/connect-json-3.9.0.jar:/kafka/libs/connect-mirror-3.9.0.jar:/kafka/libs/connect-mirror-client-3.9.0.jar:/kafka/libs/connect-runtime-3.9.0.jar:/kafka/libs/connect-transforms-3.9.0.jar:/kafka/libs/error_prone_annotations-2.10.0.jar:/kafka/libs/hk2-api-2.6.1.jar:/kafka/libs/hk2-locator-2.6.1.jar:/kafka/libs/hk2-utils-2.6.1.jar:/kafka/libs/jackson-annotations-2.16.2.jar:/kafka/libs/jackson-core-2.16.2.jar:/kafka/libs/jackson-databind-2.16.2.jar:/kafka/libs/jackson-dataformat-csv-2.16.2.jar:/kafka/libs/jackson-datatype-jdk8-2.16.2.jar:/kafka/libs/jackson-jaxrs-base-2.16.2.jar:/kafka/libs/jackson-jaxrs-json-provider-2.16.2.jar:/kafka/libs/jackson-module-afterburner-2.16.2.jar:/kafka/libs/jackson-module-jaxb-annotations-2.16.2.jar:/kafka/libs/jackson-module-scala_2.13-2.16.2.jar:/kafka/libs/jakarta.activation-api-1.2.2.jar
:/kafka/libs/jakarta.annotation-api-1.3.5.jar:/kafka/libs/jakarta.inject-2.6.1.jar:/kafka/libs/jakarta.validation-api-2.0.2.jar:/kafka/libs/jakarta.ws.rs-api-2.1.6.jar:/kafka/libs/jakarta.xml.bind-api-2.3.3.jar:/kafka/libs/javassist-3.29.2-GA.jar:/kafka/libs/javax.activation-api-1.2.0.jar:/kafka/libs/javax.annotation-api-1.3.2.jar:/kafka/libs/javax.servlet-api-3.1.0.jar:/kafka/libs/javax.ws.rs-api-2.1.1.jar:/kafka/libs/jaxb-api-2.3.1.jar:/kafka/libs/jersey-client-2.39.1.jar:/kafka/libs/jersey-common-2.39.1.jar:/kafka/libs/jersey-container-servlet-2.39.1.jar:/kafka/libs/jersey-container-servlet-core-2.39.1.jar:/kafka/libs/jersey-hk2-2.39.1.jar:/kafka/libs/jersey-server-2.39.1.jar:/kafka/libs/jetty-client-9.4.56.v20240826.jar:/kafka/libs/jetty-continuation-9.4.56.v20240826.jar:/kafka/libs/jetty-http-9.4.56.v20240826.jar:/kafka/libs/jetty-io-9.4.56.v20240826.jar:/kafka/libs/jetty-security-9.4.56.v20240826.jar:/kafka/libs/jetty-server-9.4.56.v20240826.jar:/kafka/libs/jetty-servlet-9.4.56.v20240826.jar:/kafka/libs/jetty-servlets-9.4.56.v20240826.jar:/kafka/libs/jetty-util-9.4.56.v20240826.jar:/kafka/libs/jetty-util-ajax-9.4.56.v20240826.jar:/kafka/libs/jline-3.25.1.jar:/kafka/libs/jolokia-jvm-1.7.2.jar:/kafka/libs/jopt-simple-5.0.4.jar:/kafka/libs/jose4j-0.9.4.jar:/kafka/libs/jsr305-3.0.2.jar:/kafka/libs/kafka-clients-3.9.0.jar:/kafka/libs/kafka-group-coordinator-3.9.0.jar:/kafka/libs/kafka-group-coordinator-api-3.9.0.jar:/kafka/libs/kafka-metadata-3.9.0.jar:/kafka/libs/kafka-raft-3.9.0.jar:/kafka/libs/kafka-server-3.9.0.jar:/kafka/libs/kafka-server-common-3.9.0.jar:/kafka/libs/kafka-shell-3.9.0.jar:/kafka/libs/kafka-storage-3.9.0.jar:/kafka/libs/kafka-storage-api-3.9.0.jar:/kafka/libs/kafka-streams-3.9.0.jar:/kafka/libs/kafka-streams-examples-3.9.0.jar:/kafka/libs/kafka-streams-scala_2.13-3.9.0.jar:/kafka/libs/kafka-streams-test-utils-3.9.0.jar:/kafka/libs/kafka-tools-3.9.0.jar:/kafka/libs/kafka-tools-api-3.9.0.jar:/kafka/libs/kafka-transaction-coordinator-3.9.0.jar:/ka
fka/libs/kafka_2.13-3.9.0.jar:/kafka/libs/lz4-java-1.8.0.jar:/kafka/libs/maven-artifact-3.9.6.jar:/kafka/libs/metrics-core-2.2.0.jar:/kafka/libs/metrics-core-4.1.12.1.jar:/kafka/libs/netty-buffer-4.1.111.Final.jar:/kafka/libs/netty-codec-4.1.111.Final.jar:/kafka/libs/netty-common-4.1.111.Final.jar:/kafka/libs/netty-handler-4.1.111.Final.jar:/kafka/libs/netty-resolver-4.1.111.Final.jar:/kafka/libs/netty-transport-4.1.111.Final.jar:/kafka/libs/netty-transport-classes-epoll-4.1.111.Final.jar:/kafka/libs/netty-transport-native-epoll-4.1.111.Final.jar:/kafka/libs/netty-transport-native-unix-common-4.1.111.Final.jar:/kafka/libs/opentelemetry-proto-1.0.0-alpha.jar:/kafka/libs/osgi-resource-locator-1.0.3.jar:/kafka/libs/paranamer-2.8.jar:/kafka/libs/pcollections-4.0.1.jar:/kafka/libs/plexus-utils-3.5.1.jar:/kafka/libs/protobuf-java-3.25.5.jar:/kafka/libs/reflections-0.10.2.jar:/kafka/libs/reload4j-1.2.25.jar:/kafka/libs/rocksdbjni-7.9.2.jar:/kafka/libs/scala-collection-compat_2.13-2.10.0.jar:/kafka/libs/scala-java8-compat_2.13-1.0.2.jar:/kafka/libs/scala-library-2.13.14.jar:/kafka/libs/scala-logging_2.13-3.9.5.jar:/kafka/libs/scala-reflect-2.13.14.jar:/kafka/libs/slf4j-api-1.7.36.jar:/kafka/libs/slf4j-reload4j-1.7.36.jar:/kafka/libs/snappy-java-1.1.10.5.jar:/kafka/libs/swagger-annotations-2.2.8.jar:/kafka/libs/trogdor-3.9.0.jar:/kafka/libs/zookeeper-3.8.4.jar:/kafka/libs/zookeeper-jute-3.8.4.jar:/kafka/libs/zstd-jni-1.5.6-4.jar os.spec = Linux, aarch64, 6.10.14-linuxkit os.vcpus = 10 [org.apache.kafka.connect.runtime.WorkerInfo] 2025-06-26 07:43:38,233 INFO || Scanning for plugin classes. This might take a moment ... 
[org.apache.kafka.connect.cli.AbstractConnectCli] 2025-06-26 07:43:38,253 INFO || Loading plugin from: /kafka/connect/debezium-connector-informix [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,285 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2025-06-26 07:43:38,402 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-informix/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,403 INFO || Loading plugin from: /kafka/connect/debezium-connector-oracle [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,419 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2025-06-26 07:43:38,443 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-oracle/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,467 INFO || Loading plugin from: /kafka/connect/debezium-connector-db2 [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,471 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2025-06-26 07:43:38,490 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-db2/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,490 INFO || Loading plugin from: /kafka/connect/debezium-connector-postgres [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,496 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2025-06-26 07:43:38,516 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-postgres/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,518 INFO || Loading plugin from: 
/kafka/connect/debezium-connector-spanner [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,551 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2025-06-26 07:43:38,571 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-spanner/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,571 INFO || Loading plugin from: /kafka/connect/debezium-connector-mariadb [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,582 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2025-06-26 07:43:38,601 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mariadb/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,603 INFO || Loading plugin from: /kafka/connect/debezium-connector-sqlserver [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,613 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2025-06-26 07:43:38,630 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-sqlserver/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,689 INFO || Loading plugin from: /kafka/connect/debezium-connector-vitess [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,696 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2025-06-26 07:43:38,717 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-vitess/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,717 INFO || Loading plugin from: /kafka/connect/debezium-connector-ibmi [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 
07:43:38,722 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2025-06-26 07:43:38,737 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-ibmi/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,744 INFO || Loading plugin from: /kafka/connect/debezium-connector-mysql [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,753 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2025-06-26 07:43:38,781 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mysql/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,785 INFO || Loading plugin from: /kafka/connect/debezium-connector-mongodb [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,790 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2025-06-26 07:43:38,818 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mongodb/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,819 INFO || Loading plugin from: /kafka/connect/debezium-connector-jdbc [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,828 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2025-06-26 07:43:38,847 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-jdbc/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,892 INFO || Loading plugin from: classpath [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,896 INFO || Registered loader: jdk.internal.loader.ClassLoaders$AppClassLoader@5a07e868 [org.apache.kafka.connect.runtime.isolation.PluginScanner] 
2025-06-26 07:43:38,896 INFO || Scanning plugins with ServiceLoaderScanner took 643 ms [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:38,897 INFO || Loading plugin from: /kafka/connect/debezium-connector-informix [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:39,058 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-informix/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:39,058 INFO || Loading plugin from: /kafka/connect/debezium-connector-oracle [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:39,592 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-oracle/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:39,592 INFO || Loading plugin from: /kafka/connect/debezium-connector-db2 [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:39,631 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-db2/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:39,631 INFO || Loading plugin from: /kafka/connect/debezium-connector-postgres [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:39,742 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-postgres/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:39,742 INFO || Loading plugin from: /kafka/connect/debezium-connector-spanner [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:40,295 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-spanner/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:40,295 INFO || Loading plugin from: /kafka/connect/debezium-connector-mariadb 
[org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:40,403 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mariadb/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:40,403 INFO || Loading plugin from: /kafka/connect/debezium-connector-sqlserver [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:40,490 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-sqlserver/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:40,494 INFO || Loading plugin from: /kafka/connect/debezium-connector-vitess [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:40,793 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-vitess/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:40,793 INFO || Loading plugin from: /kafka/connect/debezium-connector-ibmi [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:40,916 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-ibmi/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:40,916 INFO || Loading plugin from: /kafka/connect/debezium-connector-mysql [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:41,028 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mysql/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:41,029 INFO || Loading plugin from: /kafka/connect/debezium-connector-mongodb [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:41,101 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mongodb/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2025-06-26 07:43:41,102 
INFO || Loading plugin from: /kafka/connect/debezium-connector-jdbc [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-26 07:43:41,484 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-jdbc/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-26 07:43:41,488 INFO || Loading plugin from: classpath [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-26 07:43:42,000 INFO || Registered loader: jdk.internal.loader.ClassLoaders$AppClassLoader@5a07e868 [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-26 07:43:42,000 INFO || Scanning plugins with ReflectionScanner took 3103 ms [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-26 07:43:42,002 WARN || One or more plugins are missing ServiceLoader manifests may not be usable with plugin.discovery=service_load: [
  file:/kafka/connect/debezium-connector-mongodb/ io.debezium.connector.mongodb.MongoDbSinkConnector sink 3.1.3.Final
  file:/kafka/connect/debezium-connector-postgres/ io.debezium.connector.postgresql.transforms.DecodeLogicalDecodingMessageContent transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-vitess/ io.debezium.connector.vitess.transforms.FilterTransactionTopicRecords transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-vitess/ io.debezium.connector.vitess.transforms.RemoveField transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-vitess/ io.debezium.connector.vitess.transforms.ReplaceFieldValue transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-vitess/ io.debezium.connector.vitess.transforms.UseLocalVgtid transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-db2/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-ibmi/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-informix/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-jdbc/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-mariadb/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-mongodb/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-mysql/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-oracle/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-postgres/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-spanner/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-sqlserver/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.3.Final
  file:/kafka/connect/debezium-connector-vitess/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.3.Final
] Read the documentation at https://kafka.apache.org/documentation.html#connect_plugindiscovery for instructions on migrating your plugins to take advantage of the performance improvements of service_load mode. To silence this warning, set plugin.discovery=only_scan in the worker config.
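The warning above means the slow ReflectionScanner pass (3103 ms here, versus 643 ms for the ServiceLoaderScanner) is still needed because some plugin JARs lack `META-INF/services` manifests. Two ways to react, sketched below; the `connect-plugin-path.sh` tool ships with Kafka 3.6+ (KIP-898), but the exact flags should be checked against your Kafka version:

```shell
# Option 1: add ServiceLoader manifests to the plugins in place,
# so the worker can run with the faster plugin.discovery=service_load.
# (Flag names assumed from the KIP-898 tooling; verify with --help.)
bin/connect-plugin-path.sh sync-manifests --plugin-path /kafka/connect

# Option 2: silence the warning and keep reflection scanning,
# as the log itself suggests, via the worker config:
echo "plugin.discovery=only_scan" >> config/connect-distributed.properties
```

Option 1 is the forward-looking fix; Option 2 trades startup time for zero changes to the plugin JARs.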
[org.apache.kafka.connect.runtime.isolation.Plugins] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.transforms.Filter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'io.debezium.connector.vitess.transforms.FilterTransactionTopicRecords' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.transforms.DropHeaders' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertHeader' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'io.debezium.connector.mariadb.MariaDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'io.debezium.connector.postgresql.transforms.timescaledb.TimescaleDb' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.transforms.HeaderFrom$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Key' 
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'io.debezium.transforms.ExtractNewRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'io.debezium.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'io.debezium.transforms.outbox.EventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'io.debezium.connector.mongodb.transforms.outbox.MongoEventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'io.debezium.connector.db2as400.smt.RepackageJavaFriendlySchemaRenamer' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'io.debezium.connector.vitess.transforms.UseLocalVgtid' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 
'org.apache.kafka.connect.transforms.ExtractField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'io.debezium.transforms.HeaderToValue' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,003 INFO || Added plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.jdbc.transforms.ConvertCloudEventToSaveableForm' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.informix.InformixConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.common.config.provider.EnvVarConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.transforms.ExtractChangedRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.mongodb.transforms.ExtractNewDocumentState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO 
|| Added plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.mysql.transforms.ReadToInsertEvent' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.converters.BooleanConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.spanner.SpannerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.vitess.transforms.ReplaceFieldValue' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 
2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.converters.BinaryDataConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.transforms.partitions.PartitionRouting' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.jdbc.JdbcSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.transforms.VectorToJsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.vitess.VitessConnector' 
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.mongodb.MongoDbSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.transforms.SchemaChangeEventFilter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 
'io.debezium.transforms.ExtractSchemaToNewRecord' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.db2.Db2Connector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.transforms.ByLogicalTableRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.postgresql.transforms.DecodeLogicalDecodingMessageContent' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.connector.db2as400.As400RpcConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.transforms.TimezoneConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.transforms.HeaderFrom$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,004 INFO || Added plugin 'io.debezium.transforms.tracing.ActivateTracingSpan' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,005 INFO || Added plugin 
'org.apache.kafka.connect.transforms.ExtractField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,005 INFO || Added plugin 'io.debezium.connector.vitess.transforms.RemoveField' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,005 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,005 INFO || Added plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,005 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'VitessConnector' to plugin 'io.debezium.connector.vitess.VitessConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'As400RpcConnector' to plugin 'io.debezium.connector.db2as400.As400RpcConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'VectorToJsonConverter' to plugin 'io.debezium.transforms.VectorToJsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'MySql' to plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'MirrorCheckpointConnector' to plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'HeaderToValue' to plugin 
'io.debezium.transforms.HeaderToValue' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'RepackageJavaFriendlySchemaRenamer' to plugin 'io.debezium.connector.db2as400.smt.RepackageJavaFriendlySchemaRenamer' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'Float' to plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'RemoveField' to plugin 'io.debezium.connector.vitess.transforms.RemoveField' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'SimpleHeaderConverter' to plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'SqlServerConnector' to plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'DirectoryConfigProvider' to plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'TimezoneConverter' to plugin 'io.debezium.transforms.TimezoneConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'BasicAuthSecurityRestExtension' to plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'Simple' to plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 
'AllConnectorClientConfigOverridePolicy' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'MirrorSource' to plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'Directory' to plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'MirrorHeartbeat' to plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'BooleanConverter' to plugin 'org.apache.kafka.connect.converters.BooleanConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'JsonConverter' to plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,006 INFO || Added alias 'FilterTransactionTopicRecords' to plugin 'io.debezium.connector.vitess.transforms.FilterTransactionTopicRecords' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'JdbcSinkConnector' to plugin 'io.debezium.connector.jdbc.JdbcSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'ReplaceFieldValue' to plugin 'io.debezium.connector.vitess.transforms.ReplaceFieldValue' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'SpannerConnector' to plugin 'io.debezium.connector.spanner.SpannerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO 
|| Added alias 'MongoDbSinkConnector' to plugin 'io.debezium.connector.mongodb.MongoDbSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'MongoDb' to plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'Postgres' to plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'Short' to plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'ByLogicalTableRouter' to plugin 'io.debezium.transforms.ByLogicalTableRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'DecodeLogicalDecodingMessageContent' to plugin 'io.debezium.connector.postgresql.transforms.DecodeLogicalDecodingMessageContent' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'FileConfigProvider' to plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'SchemaChangeEventFilter' to plugin 'io.debezium.transforms.SchemaChangeEventFilter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'ConvertCloudEventToSaveableForm' to plugin 'io.debezium.connector.jdbc.transforms.ConvertCloudEventToSaveableForm' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'Long' to plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 
'FloatConverter' to plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'Spanner' to plugin 'io.debezium.connector.spanner.SpannerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'MariaDb' to plugin 'io.debezium.connector.mariadb.MariaDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'ActivateTracingSpan' to plugin 'io.debezium.transforms.tracing.ActivateTracingSpan' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'UseLocalVgtid' to plugin 'io.debezium.connector.vitess.transforms.UseLocalVgtid' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'MirrorHeartbeatConnector' to plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'Oracle' to plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'PrincipalConnectorClientConfigOverridePolicy' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'Integer' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'Filter' to plugin 
'org.apache.kafka.connect.transforms.Filter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'Informix' to plugin 'io.debezium.connector.informix.InformixConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'ExtractNewDocumentState' to plugin 'io.debezium.connector.mongodb.transforms.ExtractNewDocumentState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'RecordIsTombstone' to plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'CloudEventsConverter' to plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'EnvVar' to plugin 'org.apache.kafka.common.config.provider.EnvVarConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'EnvVarConfigProvider' to plugin 'org.apache.kafka.common.config.provider.EnvVarConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'Boolean' to plugin 'org.apache.kafka.connect.converters.BooleanConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'MySqlConnector' to plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'MariaDbConnector' to plugin 'io.debezium.connector.mariadb.MariaDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'PartitionRouting' to plugin 'io.debezium.transforms.partitions.PartitionRouting' 
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'MongoDbSink' to plugin 'io.debezium.connector.mongodb.MongoDbSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,007 INFO || Added alias 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'StringConverter' to plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'MongoDbConnector' to plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'IntegerConverter' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'LongConverter' to plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'DropHeaders' to plugin 'org.apache.kafka.connect.transforms.DropHeaders' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'ExtractSchemaToNewRecord' to plugin 'io.debezium.transforms.ExtractSchemaToNewRecord' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'BinaryData' to plugin 'io.debezium.converters.BinaryDataConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'ReadToInsertEvent' to plugin 'io.debezium.connector.mysql.transforms.ReadToInsertEvent' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 
'ShortConverter' to plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'CloudEvents' to plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'ExtractNewRecordState' to plugin 'io.debezium.transforms.ExtractNewRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'Db2' to plugin 'io.debezium.connector.db2.Db2Connector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'Db2Connector' to plugin 'io.debezium.connector.db2.Db2Connector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'Vitess' to plugin 'io.debezium.connector.vitess.VitessConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'InformixConnector' to plugin 'io.debezium.connector.informix.InformixConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'HasHeaderKey' to plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'MirrorCheckpoint' to plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'ExtractChangedRecordState' to plugin 'io.debezium.transforms.ExtractChangedRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'OracleConnector' to plugin 'io.debezium.connector.oracle.OracleConnector' 
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'None' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'Principal' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'All' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'SqlServer' to plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'JdbcSink' to plugin 'io.debezium.connector.jdbc.JdbcSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'RegexRouter' to plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'NoneConnectorClientConfigOverridePolicy' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'Double' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'EventRouter' to plugin 'io.debezium.transforms.outbox.EventRouter' 
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'File' to plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'DoubleConverter' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'BinaryDataConverter' to plugin 'io.debezium.converters.BinaryDataConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'TimescaleDb' to plugin 'io.debezium.connector.postgresql.transforms.timescaledb.TimescaleDb' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'TopicNameMatches' to plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'InsertHeader' to plugin 'org.apache.kafka.connect.transforms.InsertHeader' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'MirrorSourceConnector' to plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'PostgresConnector' to plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'MongoEventRouter' to plugin 'io.debezium.connector.mongodb.transforms.outbox.MongoEventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,008 INFO || Added alias 'As400Rpc' to plugin 'io.debezium.connector.db2as400.As400RpcConnector' 
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-26 07:43:42,026 INFO || DistributedConfig values: access.control.allow.methods = access.control.allow.origin = admin.listeners = null auto.include.jmx.reporter = true bootstrap.servers = [kafka:9092] client.dns.lookup = use_all_dns_ips client.id = config.providers = [] config.storage.replication.factor = 1 config.storage.topic = my_connect_configs connect.protocol = sessioned connections.max.idle.ms = 540000 connector.client.config.override.policy = All exactly.once.source.support = disabled group.id = 1 header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter heartbeat.interval.ms = 3000 inter.worker.key.generation.algorithm = HmacSHA256 inter.worker.key.size = null inter.worker.key.ttl.ms = 3600000 inter.worker.signature.algorithm = HmacSHA256 inter.worker.verification.algorithms = [HmacSHA256] key.converter = class org.apache.kafka.connect.json.JsonConverter listeners = [http://:8083] metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 offset.flush.interval.ms = 60000 offset.flush.timeout.ms = 5000 offset.storage.partitions = 25 offset.storage.replication.factor = 1 offset.storage.topic = my_connect_offsets plugin.discovery = hybrid_warn plugin.path = [/kafka/connect] rebalance.timeout.ms = 60000 receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 40000 response.http.headers.config = rest.advertised.host.name = 172.18.0.5 rest.advertised.listener = null rest.advertised.port = 8083 rest.extension.classes = [] retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 
sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null scheduled.rebalance.max.delay.ms = 300000 security.protocol = PLAINTEXT send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS status.storage.partitions = 5 status.storage.replication.factor = 1 status.storage.topic = my_connect_statuses task.shutdown.graceful.timeout.ms = 10000 topic.creation.enable = true topic.tracking.allow.reset = true 
topic.tracking.enable = true value.converter = class org.apache.kafka.connect.json.JsonConverter worker.sync.timeout.ms = 3000 worker.unsync.backoff.ms = 300000 [org.apache.kafka.connect.runtime.distributed.DistributedConfig] 2025-06-26 07:43:42,027 INFO || Creating Kafka admin client [org.apache.kafka.connect.runtime.WorkerConfig] 2025-06-26 07:43:42,028 INFO || AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.controllers = [] bootstrap.servers = [kafka:9092] client.dns.lookup = use_all_dns_ips client.id = connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 enable.metrics.push = true metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 
sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS [org.apache.kafka.clients.admin.AdminClientConfig] 2025-06-26 07:43:42,062 INFO || These configurations '[config.storage.topic, rest.advertised.host.name, status.storage.topic, group.id, rest.advertised.port, rest.host.name, task.shutdown.graceful.timeout.ms, plugin.path, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. 
[org.apache.kafka.clients.admin.AdminClientConfig]
2025-06-26 07:43:38,231 is continued by worker startup; records below restored one per line.
2025-06-26 07:43:42,062 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,062 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,062 INFO || Kafka startTimeMs: 1750923822062 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,240 INFO || Kafka cluster ID: BYbX4NyuSGKqhhz9XW4SOg [org.apache.kafka.connect.runtime.WorkerConfig]
2025-06-26 07:43:42,240 INFO || App info kafka.admin.client for adminclient-1 unregistered [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,243 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2025-06-26 07:43:42,243 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-26 07:43:42,243 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2025-06-26 07:43:42,246 INFO || PublicConfig values: access.control.allow.methods = access.control.allow.origin = admin.listeners = null listeners = [http://:8083] response.http.headers.config = rest.advertised.host.name = 172.18.0.5 rest.advertised.listener = null rest.advertised.port = 8083 rest.extension.classes = [] ssl.cipher.suites = null ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS topic.tracking.allow.reset = true topic.tracking.enable = true [org.apache.kafka.connect.runtime.rest.RestServerConfig$PublicConfig]
2025-06-26 07:43:42,251 INFO || Logging initialized @4291ms to org.eclipse.jetty.util.log.Slf4jLog [org.eclipse.jetty.util.log]
2025-06-26 07:43:42,268 INFO || Added connector for http://:8083 [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-26 07:43:42,269 INFO || Initializing REST server [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-26 07:43:42,281 INFO || jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 21.0.7+6 [org.eclipse.jetty.server.Server]
2025-06-26 07:43:42,293 INFO || Started http_8083@5624f7a{HTTP/1.1, (http/1.1)}{0.0.0.0:8083} [org.eclipse.jetty.server.AbstractConnector]
2025-06-26 07:43:42,293 INFO || Started @4333ms [org.eclipse.jetty.server.Server]
2025-06-26 07:43:42,301 INFO || Advertised URI: http://172.18.0.5:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-26 07:43:42,301 INFO || REST server listening at http://172.18.0.5:8083/, advertising URL http://172.18.0.5:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-26 07:43:42,301 INFO || Advertised URI: http://172.18.0.5:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-26 07:43:42,301 INFO || REST admin endpoints at http://172.18.0.5:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-26 07:43:42,301 INFO || Advertised URI: http://172.18.0.5:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-26 07:43:42,301 INFO || Setting up All Policy for ConnectorClientConfigOverride. This will allow all client configurations to be overridden [org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy]
2025-06-26 07:43:42,304 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false [org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-26 07:43:42,311 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,311 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,311 INFO || Kafka startTimeMs: 1750923822311 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,314 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false [org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-26 07:43:42,314 INFO || JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = false [org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-26 07:43:42,322 INFO || Advertised URI: http://172.18.0.5:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-26 07:43:42,335 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,335 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,335 INFO || Kafka startTimeMs: 1750923822335 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,337 INFO || Kafka Connect worker initialization took 4105ms [org.apache.kafka.connect.cli.AbstractConnectCli]
2025-06-26 07:43:42,337 INFO || Kafka Connect starting [org.apache.kafka.connect.runtime.Connect]
2025-06-26 07:43:42,338 INFO || Initializing REST resources [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-26 07:43:42,338 INFO ||
[Worker clientId=connect-172.18.0.5:8083, groupId=1] Herder starting [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-26 07:43:42,339 INFO || Worker starting [org.apache.kafka.connect.runtime.Worker]
2025-06-26 07:43:42,339 INFO || Starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore]
2025-06-26 07:43:42,339 INFO || Starting KafkaBasedLog with topic my_connect_offsets reportErrorsToCallback=false [org.apache.kafka.connect.util.KafkaBasedLog]
2025-06-26 07:43:42,340 INFO || AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.controllers = [] bootstrap.servers = [kafka:9092] client.dns.lookup = use_all_dns_ips client.id = 1-shared-admin connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 enable.metrics.push = true metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS [org.apache.kafka.clients.admin.AdminClientConfig]
2025-06-26 07:43:42,341 INFO || These configurations '[config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, group.id, rest.advertised.port, rest.host.name, task.shutdown.graceful.timeout.ms, plugin.path, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. [org.apache.kafka.clients.admin.AdminClientConfig]
2025-06-26 07:43:42,341 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,341 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,341 INFO || Kafka startTimeMs: 1750923822341 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,354 INFO || Adding admin resources to main listener [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-26 07:43:42,374 INFO || DefaultSessionIdManager workerName=node0 [org.eclipse.jetty.server.session]
2025-06-26 07:43:42,374 INFO || No SessionScavenger set, using defaults [org.eclipse.jetty.server.session]
2025-06-26 07:43:42,374 INFO || node0 Scavenging every 660000ms [org.eclipse.jetty.server.session]
2025-06-26 07:43:42,572 INFO || Started o.e.j.s.ServletContextHandler@a058884{/,null,AVAILABLE} [org.eclipse.jetty.server.handler.ContextHandler]
2025-06-26 07:43:42,572 INFO || REST resources initialized; server is started and ready to handle requests [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-26 07:43:42,572 INFO || Kafka Connect started [org.apache.kafka.connect.runtime.Connect]
2025-06-26 07:43:42,654 INFO || Created topic (name=my_connect_offsets, numPartitions=25, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at kafka:9092 [org.apache.kafka.connect.util.TopicAdmin]
2025-06-26 07:43:42,658 INFO || ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [kafka:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = 1-offsets compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig]
2025-06-26 07:43:42,671 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-26 07:43:42,681 INFO || These configurations '[group.id, rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. [org.apache.kafka.clients.producer.ProducerConfig]
2025-06-26 07:43:42,681 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,681 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,681 INFO || Kafka startTimeMs: 1750923822681 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,685 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = 1-offsets client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = 1 group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-26 07:43:42,688 INFO || [Producer clientId=1-offsets] Cluster ID: BYbX4NyuSGKqhhz9XW4SOg [org.apache.kafka.clients.Metadata]
2025-06-26 07:43:42,692 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-26 07:43:42,706 INFO || These configurations '[rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-26 07:43:42,706 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,706 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,706 INFO || Kafka startTimeMs: 1750923822706 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,711 INFO || [Consumer clientId=1-offsets, groupId=1] Cluster ID: BYbX4NyuSGKqhhz9XW4SOg [org.apache.kafka.clients.Metadata]
2025-06-26 07:43:42,715 INFO || [Consumer clientId=1-offsets, groupId=1] Assigned to partition(s): my_connect_offsets-0, my_connect_offsets-5, my_connect_offsets-10, my_connect_offsets-20, my_connect_offsets-15, my_connect_offsets-9, my_connect_offsets-11, my_connect_offsets-4, my_connect_offsets-16, my_connect_offsets-17, my_connect_offsets-3, my_connect_offsets-24, my_connect_offsets-23, my_connect_offsets-13, my_connect_offsets-18, my_connect_offsets-22, my_connect_offsets-2, my_connect_offsets-8, my_connect_offsets-12, my_connect_offsets-19, my_connect_offsets-14, my_connect_offsets-1, my_connect_offsets-6, my_connect_offsets-7, my_connect_offsets-21 [org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer]
2025-06-26 07:43:42,716 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-5 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-10 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-20 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-15 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-9 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-11 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-4 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-16 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-17 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-3 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-24 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-23 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-13 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-18 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-22 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-2 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-8 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-12 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-19 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-14 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-1 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-6 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-7 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,717 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-21 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,738 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-2 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,738 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-4 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}.
[org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,738 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-6 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,738 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-8 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,738 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-18 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-20 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-22 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-24 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-10 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-12 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-14 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-16 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-3 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-5 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-7 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-9 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-19 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-21 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-23 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-11 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-13 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-15 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,739 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-17 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}.
[org.apache.kafka.clients.consumer.internals.SubscriptionState] 2025-06-26 07:43:42,740 INFO || Finished reading KafkaBasedLog for topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog] 2025-06-26 07:43:42,740 INFO || Started KafkaBasedLog for topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog] 2025-06-26 07:43:42,740 INFO || Finished reading offsets topic and starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore] 2025-06-26 07:43:42,740 INFO || Worker started [org.apache.kafka.connect.runtime.Worker] 2025-06-26 07:43:42,740 INFO || Starting KafkaBasedLog with topic my_connect_statuses reportErrorsToCallback=false [org.apache.kafka.connect.util.KafkaBasedLog] 2025-06-26 07:43:42,793 INFO || Created topic (name=my_connect_statuses, numPartitions=5, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at kafka:9092 [org.apache.kafka.connect.util.TopicAdmin] 2025-06-26 07:43:42,793 INFO || ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [kafka:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = 1-statuses compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false 
receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 0 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX 
ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig]
2025-06-26 07:43:42,793 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-26 07:43:42,796 INFO || These configurations '[group.id, rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. [org.apache.kafka.clients.producer.ProducerConfig]
2025-06-26 07:43:42,796 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,796 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,796 INFO || Kafka startTimeMs: 1750923822796 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,797 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = 1-statuses client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = 1 group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-26 07:43:42,797 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-26 07:43:42,799 INFO || These configurations '[rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-26 07:43:42,799 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,799 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,799 INFO || Kafka startTimeMs: 1750923822799 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:43:42,800 INFO || [Producer clientId=1-statuses] Cluster ID: BYbX4NyuSGKqhhz9XW4SOg [org.apache.kafka.clients.Metadata]
2025-06-26 07:43:42,803 INFO || [Consumer clientId=1-statuses, groupId=1] Cluster ID: BYbX4NyuSGKqhhz9XW4SOg [org.apache.kafka.clients.Metadata]
2025-06-26 07:43:42,804 INFO || [Consumer clientId=1-statuses, groupId=1] Assigned to partition(s): my_connect_statuses-0, my_connect_statuses-1, my_connect_statuses-4, my_connect_statuses-2, my_connect_statuses-3 [org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer]
2025-06-26 07:43:42,804 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition my_connect_statuses-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,804 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition my_connect_statuses-1 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,804 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition my_connect_statuses-4 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,804 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition my_connect_statuses-2 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,804 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition my_connect_statuses-3 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,811 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition my_connect_statuses-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,811 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition my_connect_statuses-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,811 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition my_connect_statuses-2 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,811 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition my_connect_statuses-3 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,811 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition my_connect_statuses-4 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-26 07:43:42,811 INFO || Finished reading KafkaBasedLog for topic my_connect_statuses [org.apache.kafka.connect.util.KafkaBasedLog]
2025-06-26 07:43:42,812 INFO || Started KafkaBasedLog for topic my_connect_statuses [org.apache.kafka.connect.util.KafkaBasedLog]
2025-06-26 07:43:42,813 INFO || Starting KafkaConfigBackingStore [org.apache.kafka.connect.storage.KafkaConfigBackingStore]
2025-06-26 07:43:42,813 INFO || Starting KafkaBasedLog with topic my_connect_configs reportErrorsToCallback=false [org.apache.kafka.connect.util.KafkaBasedLog]
2025-06-26 07:43:42,839 INFO || Created topic (name=my_connect_configs, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at kafka:9092 [org.apache.kafka.connect.util.TopicAdmin]
2025-06-26 07:43:42,839 INFO || ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [kafka:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = 1-configs compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = []
metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = 
null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig] 2025-06-26 07:43:42,840 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector] 2025-06-26 07:43:42,842 INFO || These configurations '[group.id, rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. 
[org.apache.kafka.clients.producer.ProducerConfig] 2025-06-26 07:43:42,842 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser] 2025-06-26 07:43:42,842 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser] 2025-06-26 07:43:42,842 INFO || Kafka startTimeMs: 1750923822842 [org.apache.kafka.common.utils.AppInfoParser] 2025-06-26 07:43:42,842 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = 1-configs client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = 1 group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 
sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig] 2025-06-26 07:43:42,843 INFO || initializing Kafka metrics 
collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector] 2025-06-26 07:43:42,844 INFO || These configurations '[rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. [org.apache.kafka.clients.consumer.ConsumerConfig] 2025-06-26 07:43:42,844 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser] 2025-06-26 07:43:42,844 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser] 2025-06-26 07:43:42,844 INFO || Kafka startTimeMs: 1750923822844 [org.apache.kafka.common.utils.AppInfoParser] 2025-06-26 07:43:42,845 INFO || [Producer clientId=1-configs] Cluster ID: BYbX4NyuSGKqhhz9XW4SOg [org.apache.kafka.clients.Metadata] 2025-06-26 07:43:42,847 INFO || [Consumer clientId=1-configs, groupId=1] Cluster ID: BYbX4NyuSGKqhhz9XW4SOg [org.apache.kafka.clients.Metadata] 2025-06-26 07:43:42,848 INFO || [Consumer clientId=1-configs, groupId=1] Assigned to partition(s): my_connect_configs-0 [org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer] 2025-06-26 07:43:42,848 INFO || [Consumer clientId=1-configs, groupId=1] Seeking to earliest offset of partition my_connect_configs-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2025-06-26 07:43:42,854 INFO || [Consumer clientId=1-configs, groupId=1] Resetting offset for partition my_connect_configs-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[172.18.0.4:9092 (id: 1 rack: null)], 
epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2025-06-26 07:43:42,854 INFO || Finished reading KafkaBasedLog for topic my_connect_configs [org.apache.kafka.connect.util.KafkaBasedLog] 2025-06-26 07:43:42,854 INFO || Started KafkaBasedLog for topic my_connect_configs [org.apache.kafka.connect.util.KafkaBasedLog] 2025-06-26 07:43:42,854 INFO || Started KafkaConfigBackingStore [org.apache.kafka.connect.storage.KafkaConfigBackingStore] 2025-06-26 07:43:42,859 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Cluster ID: BYbX4NyuSGKqhhz9XW4SOg [org.apache.kafka.clients.Metadata] 2025-06-26 07:43:43,623 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Discovered group coordinator 172.18.0.4:9092 (id: 2147483646 rack: null) [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-26 07:43:43,624 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-26 07:43:43,624 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-26 07:43:43,633 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-26 07:43:43,638 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Successfully joined group with generation Generation{generationId=1, memberId='connect-172.18.0.5:8083-efd2c920-9435-447d-871d-9ad901c5fc1f', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-26 07:43:43,660 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Successfully synced group in generation Generation{generationId=1, memberId='connect-172.18.0.5:8083-efd2c920-9435-447d-871d-9ad901c5fc1f', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-26 07:43:43,660 
INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Joined group at generation 1 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-172.18.0.5:8083-efd2c920-9435-447d-871d-9ad901c5fc1f', leaderUrl='http://172.18.0.5:8083/', offset=-1, connectorIds=[], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-26 07:43:43,660 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Herder started [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-26 07:43:43,660 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Starting connectors and tasks using config offset -1 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-26 07:43:43,661 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-26 07:43:43,693 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Session key updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-26 07:50:17,885 INFO || Loading the custom source info struct maker plugin: io.debezium.connector.informix.InformixSourceInfoStructMaker [io.debezium.config.CommonConnectorConfig] 2025-06-26 07:50:17,951 INFO || Successfully tested connection for jdbc:informix-sqli://ifxserver:9088/sysuser:user=informix;password=in4mix with user 'informix' [io.debezium.connector.informix.InformixConnector] 2025-06-26 07:50:17,953 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection] 2025-06-26 07:50:17,954 INFO || AbstractConfig values: [org.apache.kafka.common.config.AbstractConfig] 2025-06-26 07:50:17,961 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Connector inventory-connector config updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-26 07:50:17,962 INFO || 
[Worker clientId=connect-172.18.0.5:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-26 07:50:17,962 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-26 07:50:17,964 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Successfully joined group with generation Generation{generationId=2, memberId='connect-172.18.0.5:8083-efd2c920-9435-447d-871d-9ad901c5fc1f', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-26 07:50:17,974 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Successfully synced group in generation Generation{generationId=2, memberId='connect-172.18.0.5:8083-efd2c920-9435-447d-871d-9ad901c5fc1f', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-26 07:50:17,974 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Joined group at generation 2 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-172.18.0.5:8083-efd2c920-9435-447d-871d-9ad901c5fc1f', leaderUrl='http://172.18.0.5:8083/', offset=2, connectorIds=[inventory-connector], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-26 07:50:17,974 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Starting connectors and tasks using config offset 2 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-26 07:50:17,974 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Starting connector inventory-connector [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-26 07:50:17,976 INFO || Creating connector inventory-connector of type io.debezium.connector.informix.InformixConnector [org.apache.kafka.connect.runtime.Worker] 2025-06-26 
07:50:17,976 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = inventory-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.SourceConnectorConfig] 2025-06-26 07:50:17,976 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = inventory-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2025-06-26 07:50:17,979 INFO || Instantiated connector inventory-connector with version 3.1.3.Final of type class io.debezium.connector.informix.InformixConnector [org.apache.kafka.connect.runtime.Worker] 2025-06-26 07:50:17,979 INFO || Finished creating connector inventory-connector [org.apache.kafka.connect.runtime.Worker] 2025-06-26 07:50:17,980 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-26 07:50:17,984 INFO || SourceConnectorConfig values: config.action.reload = restart 
connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = inventory-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.SourceConnectorConfig] 2025-06-26 07:50:17,984 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = inventory-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2025-06-26 07:50:17,988 INFO || 192.168.65.1 - - [26/Jun/2025:07:50:17 +0000] "POST /connectors/ HTTP/1.1" 201 476 "-" "curl/8.7.1" 175 [org.apache.kafka.connect.runtime.rest.RestServer] 2025-06-26 07:50:17,995 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Tasks [inventory-connector-0] configs updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-26 07:50:17,996 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-26 07:50:17,996 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] (Re-)joining group 
[org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-26 07:50:17,997 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Successfully joined group with generation Generation{generationId=3, memberId='connect-172.18.0.5:8083-efd2c920-9435-447d-871d-9ad901c5fc1f', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-26 07:50:17,999 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Successfully synced group in generation Generation{generationId=3, memberId='connect-172.18.0.5:8083-efd2c920-9435-447d-871d-9ad901c5fc1f', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-26 07:50:18,000 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Joined group at generation 3 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-172.18.0.5:8083-efd2c920-9435-447d-871d-9ad901c5fc1f', leaderUrl='http://172.18.0.5:8083/', offset=4, connectorIds=[inventory-connector], taskIds=[inventory-connector-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-26 07:50:18,000 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Starting connectors and tasks using config offset 4 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-26 07:50:18,001 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Starting task inventory-connector-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-26 07:50:18,003 INFO || Creating task inventory-connector-0 [org.apache.kafka.connect.runtime.Worker]
2025-06-26 07:50:18,004 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = inventory-connector predicates = [] tasks.max = 1 tasks.max.enforce = true transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig]
2025-06-26 07:50:18,004 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = inventory-connector predicates = [] tasks.max = 1 tasks.max.enforce = true transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2025-06-26 07:50:18,005 INFO || TaskConfig values: task.class = class io.debezium.connector.informix.InformixConnectorTask [org.apache.kafka.connect.runtime.TaskConfig]
2025-06-26 07:50:18,006 INFO || Instantiated task inventory-connector-0 with version 3.1.3.Final of type io.debezium.connector.informix.InformixConnectorTask [org.apache.kafka.connect.runtime.Worker]
2025-06-26 07:50:18,007 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-26 07:50:18,007 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task inventory-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2025-06-26 07:50:18,007 INFO || JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-26 07:50:18,007 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task inventory-connector-0 using the worker config
[org.apache.kafka.connect.runtime.Worker]
2025-06-26 07:50:18,007 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task inventory-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2025-06-26 07:50:18,009 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{} [org.apache.kafka.connect.runtime.Worker]
2025-06-26 07:50:18,009 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = inventory-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.SourceConnectorConfig]
2025-06-26 07:50:18,009 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.informix.InformixConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = inventory-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2025-06-26 07:50:18,009 INFO || ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [kafka:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = connector-producer-inventory-connector-0 compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 9223372036854775807 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig]
2025-06-26 07:50:18,009 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-26 07:50:18,011 INFO || These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. [org.apache.kafka.clients.producer.ProducerConfig]
2025-06-26 07:50:18,011 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,011 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,011 INFO || Kafka startTimeMs: 1750924218011 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,014 INFO || [Producer clientId=connector-producer-inventory-connector-0] Cluster ID: BYbX4NyuSGKqhhz9XW4SOg [org.apache.kafka.clients.Metadata]
2025-06-26 07:50:18,016 INFO || [Worker clientId=connect-172.18.0.5:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-26 07:50:18,017 INFO || Starting InformixConnectorTask with configuration: connector.class = io.debezium.connector.informix.InformixConnector database.user = informix database.dbname = sysuser topic.prefix = ifxserver schema.history.internal.kafka.topic = schema-changes.inventory task.class = io.debezium.connector.informix.InformixConnectorTask tasks.max = 1 database.hostname = ifxserver database.password = ******** name = inventory-connector schema.history.internal.kafka.bootstrap.servers = kafka:9092 database.port = 9088 [io.debezium.connector.common.BaseSourceTask]
2025-06-26 07:50:18,017 INFO || Loading the custom source info struct maker plugin: io.debezium.connector.informix.InformixSourceInfoStructMaker [io.debezium.config.CommonConnectorConfig]
2025-06-26 07:50:18,017 INFO || Loading the custom topic naming strategy plugin: io.debezium.schema.SchemaTopicNamingStrategy [io.debezium.config.CommonConnectorConfig]
2025-06-26 07:50:18,053 INFO || KafkaSchemaHistory Consumer config: {key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, enable.auto.commit=false, group.id=ifxserver-schemahistory, bootstrap.servers=kafka:9092, fetch.min.bytes=1, session.timeout.ms=10000, auto.offset.reset=earliest, client.id=ifxserver-schemahistory} [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2025-06-26 07:50:18,053 INFO || KafkaSchemaHistory Producer config: {enable.idempotence=false, value.serializer=org.apache.kafka.common.serialization.StringSerializer, batch.size=32768, bootstrap.servers=kafka:9092, max.in.flight.requests.per.connection=1, buffer.memory=1048576, key.serializer=org.apache.kafka.common.serialization.StringSerializer, client.id=ifxserver-schemahistory} [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2025-06-26 07:50:18,053 INFO || Requested thread factory for component InformixConnector, id = ifxserver named = db-history-config-check [io.debezium.util.Threads]
2025-06-26 07:50:18,055 INFO || ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 32768 bootstrap.servers = [kafka:9092] buffer.memory = 1048576 client.dns.lookup = use_all_dns_ips client.id = ifxserver-schemahistory compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null
sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig]
2025-06-26 07:50:18,055 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-26 07:50:18,056 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,056 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,056 INFO || Kafka startTimeMs: 1750924218056 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,056 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = ifxserver-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = ifxserver-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50
request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-26 07:50:18,057 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-26 07:50:18,058 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,058 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,058 INFO || Kafka startTimeMs: 1750924218058 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,058 INFO || [Producer clientId=ifxserver-schemahistory] Cluster ID: BYbX4NyuSGKqhhz9XW4SOg [org.apache.kafka.clients.Metadata]
2025-06-26 07:50:18,060 INFO || [Consumer clientId=ifxserver-schemahistory, groupId=ifxserver-schemahistory] Cluster ID: BYbX4NyuSGKqhhz9XW4SOg [org.apache.kafka.clients.Metadata]
2025-06-26 07:50:18,062 INFO || [Consumer clientId=ifxserver-schemahistory, groupId=ifxserver-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-26 07:50:18,062 INFO || [Consumer clientId=ifxserver-schemahistory, groupId=ifxserver-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-26 07:50:18,063 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2025-06-26 07:50:18,063 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-26 07:50:18,063 INFO || Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-26 07:50:18,063 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2025-06-26 07:50:18,064 INFO || App info kafka.consumer for ifxserver-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,065 INFO || AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.controllers = [] bootstrap.servers = [kafka:9092] client.dns.lookup = use_all_dns_ips client.id = ifxserver-schemahistory connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 enable.metrics.push = true metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT
security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS [org.apache.kafka.clients.admin.AdminClientConfig]
2025-06-26 07:50:18,066 INFO || These configurations '[enable.idempotence, value.serializer, batch.size, max.in.flight.requests.per.connection, buffer.memory, key.serializer]' were supplied but are not used yet. [org.apache.kafka.clients.admin.AdminClientConfig]
2025-06-26 07:50:18,066 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,066 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,066 INFO || Kafka startTimeMs: 1750924218066 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,094 INFO || Database schema history topic '(name=schema-changes.inventory, numPartitions=1, replicationFactor=default, replicasAssignments=null, configs={cleanup.policy=delete, retention.ms=9223372036854775807, retention.bytes=-1})' created [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2025-06-26 07:50:18,094 INFO || App info kafka.admin.client for ifxserver-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,095 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2025-06-26 07:50:18,095 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-26 07:50:18,095 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2025-06-26 07:50:18,100 INFO || No previous offsets found [io.debezium.connector.common.BaseSourceTask]
2025-06-26 07:50:18,106 INFO || Connector started for the first time. [io.debezium.connector.common.BaseSourceTask]
2025-06-26 07:50:18,106 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = ifxserver-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = ifxserver-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
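The `POST /connectors/ HTTP/1.1" 201` access-log entry earlier in this log corresponds to registering the connector over the Connect REST API, and the configuration echoed by `BaseSourceTask` ("Starting InformixConnectorTask with configuration: ...") reflects what that request carried. The exact request body is not captured in the log; the sketch below is a hypothetical reconstruction assembled from the logged values (the password is redacted in the log, so the placeholder here is an assumption), of the kind you would send with `curl -X POST http://172.18.0.5:8083/connectors -H "Content-Type: application/json" -d @payload.json`:

```python
import json

# Hypothetical reconstruction of the registration payload; every value below
# is taken from the connector configuration echoed in the log, except the
# password, which the log redacts as ********.
register_request = {
    "name": "inventory-connector",
    "config": {
        "connector.class": "io.debezium.connector.informix.InformixConnector",
        "tasks.max": "1",
        "database.hostname": "ifxserver",
        "database.port": "9088",
        "database.user": "informix",
        "database.password": "********",  # redacted in the log; assumption
        "database.dbname": "sysuser",
        "topic.prefix": "ifxserver",
        "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
        "schema.history.internal.kafka.topic": "schema-changes.inventory",
    },
}

# Serialize to the JSON document that would be POSTed to /connectors.
payload = json.dumps(register_request, indent=2)
print(payload)
```

The response to such a request is the `201` with the created connector resource, after which the herder logs the task config update and rebalance seen above.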
2025-06-26 07:50:18,106 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-26 07:50:18,108 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,108 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,108 INFO || Kafka startTimeMs: 1750924218107 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,110 INFO || [Consumer clientId=ifxserver-schemahistory, groupId=ifxserver-schemahistory] Cluster ID: BYbX4NyuSGKqhhz9XW4SOg [org.apache.kafka.clients.Metadata]
2025-06-26 07:50:18,111 INFO || [Consumer clientId=ifxserver-schemahistory, groupId=ifxserver-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-26 07:50:18,111 INFO || [Consumer clientId=ifxserver-schemahistory, groupId=ifxserver-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-26 07:50:18,111 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2025-06-26 07:50:18,111 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-26 07:50:18,111 INFO || Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-26 07:50:18,111 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2025-06-26 07:50:18,112 INFO || App info kafka.consumer for ifxserver-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2025-06-26 07:50:18,116 INFO || Requested thread factory for component InformixConnector, id = ifxserver named = SignalProcessor [io.debezium.util.Threads]
2025-06-26 07:50:18,122 INFO || Requested thread factory for component InformixConnector, id = ifxserver named = change-event-source-coordinator [io.debezium.util.Threads]
2025-06-26 07:50:18,122 INFO || Requested thread factory for component InformixConnector, id = ifxserver named = blocking-snapshot [io.debezium.util.Threads]
2025-06-26 07:50:18,124 INFO || Creating thread debezium-informixconnector-ifxserver-change-event-source-coordinator [io.debezium.util.Threads]
2025-06-26 07:50:18,124 INFO || WorkerSourceTask{id=inventory-connector-0} Source task finished initialization and start [org.apache.kafka.connect.runtime.AbstractWorkerSourceTask]
2025-06-26 07:50:18,125 INFO Informix_Server|ifxserver|snapshot Metrics registered [io.debezium.pipeline.ChangeEventSourceCoordinator]
2025-06-26 07:50:18,125 INFO Informix_Server|ifxserver|snapshot Context created [io.debezium.pipeline.ChangeEventSourceCoordinator]
2025-06-26 07:50:18,128 INFO Informix_Server|ifxserver|snapshot According to the connector configuration both schema and data will be snapshot.
[io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,129 INFO Informix_Server|ifxserver|snapshot Snapshot step 1 - Preparing [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,130 INFO Informix_Server|ifxserver|snapshot Snapshot step 2 - Determining captured tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,150 INFO Informix_Server|ifxserver|snapshot Adding table sysuser.informix.customers to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,150 INFO Informix_Server|ifxserver|snapshot Adding table sysuser.informix.table_default_test to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,150 INFO Informix_Server|ifxserver|snapshot Adding table sysuser.informix.orders to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,150 INFO Informix_Server|ifxserver|snapshot Adding table sysuser.informix.products to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,150 INFO Informix_Server|ifxserver|snapshot Adding table sysuser.informix.products_on_hand to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,151 INFO Informix_Server|ifxserver|snapshot Created connection pool with 1 threads [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,151 INFO Informix_Server|ifxserver|snapshot Snapshot step 3 - Locking captured tables [sysuser.informix.customers, sysuser.informix.orders, sysuser.informix.products, sysuser.informix.products_on_hand, sysuser.informix.table_default_test] [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,153 INFO Informix_Server|ifxserver|snapshot Executing schema locking 
[io.debezium.connector.informix.InformixSnapshotChangeEventSource] 2025-06-26 07:50:18,153 INFO Informix_Server|ifxserver|snapshot Locking table sysuser.informix.customers [io.debezium.connector.informix.InformixSnapshotChangeEventSource] 2025-06-26 07:50:18,154 INFO Informix_Server|ifxserver|snapshot Locking table sysuser.informix.orders [io.debezium.connector.informix.InformixSnapshotChangeEventSource] 2025-06-26 07:50:18,155 INFO Informix_Server|ifxserver|snapshot Locking table sysuser.informix.products [io.debezium.connector.informix.InformixSnapshotChangeEventSource] 2025-06-26 07:50:18,155 INFO Informix_Server|ifxserver|snapshot Locking table sysuser.informix.products_on_hand [io.debezium.connector.informix.InformixSnapshotChangeEventSource] 2025-06-26 07:50:18,156 INFO Informix_Server|ifxserver|snapshot Locking table sysuser.informix.table_default_test [io.debezium.connector.informix.InformixSnapshotChangeEventSource] 2025-06-26 07:50:18,156 INFO Informix_Server|ifxserver|snapshot Snapshot step 4 - Determining snapshot offset [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,161 INFO Informix_Server|ifxserver|snapshot Snapshot step 5 - Reading structure of captured tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,161 INFO Informix_Server|ifxserver|snapshot Reading structure of schema 'informix' [io.debezium.connector.informix.InformixSnapshotChangeEventSource] 2025-06-26 07:50:18,687 INFO Informix_Server|ifxserver|snapshot Snapshot step 6 - Persisting schema history [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,688 INFO Informix_Server|ifxserver|snapshot Capturing structure of table sysuser.informix.customers [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,703 INFO Informix_Server|ifxserver|snapshot Capturing structure of table sysuser.informix.orders [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 
07:50:18,706 INFO Informix_Server|ifxserver|snapshot Capturing structure of table sysuser.informix.products [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,709 INFO Informix_Server|ifxserver|snapshot Capturing structure of table sysuser.informix.products_on_hand [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,711 INFO Informix_Server|ifxserver|snapshot Capturing structure of table sysuser.informix.table_default_test [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,711 INFO Informix_Server|ifxserver|snapshot Parsing default value for column 'test_col' with expression '0,0000000000' [io.debezium.connector.informix.InformixDefaultValueConverter] 2025-06-26 07:50:18,712 WARN Informix_Server|ifxserver|snapshot Cannot parse column default value '0,0000000000' to type '3'. Expression evaluation is not supported. [io.debezium.connector.informix.InformixDefaultValueConverter] java.lang.NumberFormatException: Character , is neither a decimal digit number, decimal point, nor "e" notation exponential mark. 
at java.base/java.math.BigDecimal.(BigDecimal.java:608) at java.base/java.math.BigDecimal.(BigDecimal.java:497) at java.base/java.math.BigDecimal.(BigDecimal.java:903) at io.debezium.connector.informix.InformixDefaultValueConverter.lambda$numericDefaultValueMapper$5(InformixDefaultValueConverter.java:158) at io.debezium.connector.informix.InformixDefaultValueConverter.lambda$nullableDefaultValueMapper$3(InformixDefaultValueConverter.java:138) at io.debezium.connector.informix.InformixDefaultValueConverter.parseDefaultValue(InformixDefaultValueConverter.java:59) at io.debezium.relational.TableSchemaBuilder.lambda$parseDefaultValue$9(TableSchemaBuilder.java:505) at java.base/java.util.Optional.flatMap(Optional.java:289) at io.debezium.relational.TableSchemaBuilder.parseDefaultValue(TableSchemaBuilder.java:505) at io.debezium.relational.TableSchemaBuilder.addField(TableSchemaBuilder.java:439) at io.debezium.relational.TableSchemaBuilder.lambda$create$2(TableSchemaBuilder.java:200) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596) at io.debezium.relational.TableSchemaBuilder.create(TableSchemaBuilder.java:198) at io.debezium.relational.RelationalDatabaseSchema.buildAndRegisterSchema(RelationalDatabaseSchema.java:122) at 
io.debezium.connector.informix.InformixDatabaseSchema.applySchemaChange(InformixDatabaseSchema.java:59) at io.debezium.pipeline.EventDispatcher$SchemaChangeEventReceiver.schemaChangeEvent(EventDispatcher.java:696) at io.debezium.relational.RelationalSnapshotChangeEventSource.lambda$createSchemaChangeEventsForTables$3(RelationalSnapshotChangeEventSource.java:451) at io.debezium.pipeline.EventDispatcher.dispatchSchemaChangeEvent(EventDispatcher.java:402) at io.debezium.relational.RelationalSnapshotChangeEventSource.createSchemaChangeEventsForTables(RelationalSnapshotChangeEventSource.java:449) at io.debezium.relational.RelationalSnapshotChangeEventSource.doExecute(RelationalSnapshotChangeEventSource.java:168) at io.debezium.pipeline.source.AbstractSnapshotChangeEventSource.execute(AbstractSnapshotChangeEventSource.java:96) at io.debezium.pipeline.ChangeEventSourceCoordinator.doSnapshot(ChangeEventSourceCoordinator.java:294) at io.debezium.pipeline.ChangeEventSourceCoordinator.doSnapshot(ChangeEventSourceCoordinator.java:278) at io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:192) at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:143) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) at java.base/java.lang.Thread.run(Thread.java:1583) 2025-06-26 07:50:18,717 INFO Informix_Server|ifxserver|snapshot Schema locks released. 
[io.debezium.connector.informix.InformixSnapshotChangeEventSource] 2025-06-26 07:50:18,717 INFO Informix_Server|ifxserver|snapshot Snapshot step 7 - Snapshotting data [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,717 INFO Informix_Server|ifxserver|snapshot Creating snapshot worker pool with 1 worker thread(s) [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,718 INFO Informix_Server|ifxserver|snapshot For table 'sysuser.informix.customers' using select statement: 'SELECT id, first_name, last_name, email FROM informix.customers' [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,718 INFO Informix_Server|ifxserver|snapshot For table 'sysuser.informix.orders' using select statement: 'SELECT id, order_date, purchaser, quantity, product_id FROM informix.orders' [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,718 INFO Informix_Server|ifxserver|snapshot For table 'sysuser.informix.products' using select statement: 'SELECT id, name, description, weight FROM informix.products' [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,718 INFO Informix_Server|ifxserver|snapshot For table 'sysuser.informix.products_on_hand' using select statement: 'SELECT product_id, quantity FROM informix.products_on_hand' [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,718 INFO Informix_Server|ifxserver|snapshot For table 'sysuser.informix.table_default_test' using select statement: 'SELECT id, test_col FROM informix.table_default_test' [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,719 INFO Informix_Server|ifxserver|snapshot Exporting data from table 'sysuser.informix.customers' (1 of 5 tables) [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,726 INFO Informix_Server|ifxserver|snapshot Finished exporting 4 records for table 
'sysuser.informix.customers' (1 of 5 tables); total duration '00:00:00.007' [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,726 INFO Informix_Server|ifxserver|snapshot Exporting data from table 'sysuser.informix.orders' (2 of 5 tables) [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,728 INFO Informix_Server|ifxserver|snapshot Finished exporting 0 records for table 'sysuser.informix.orders' (2 of 5 tables); total duration '00:00:00.002' [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,728 INFO Informix_Server|ifxserver|snapshot Exporting data from table 'sysuser.informix.products' (3 of 5 tables) [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,732 INFO Informix_Server|ifxserver|snapshot Finished exporting 9 records for table 'sysuser.informix.products' (3 of 5 tables); total duration '00:00:00.004' [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,732 INFO Informix_Server|ifxserver|snapshot Exporting data from table 'sysuser.informix.products_on_hand' (4 of 5 tables) [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,734 INFO Informix_Server|ifxserver|snapshot Finished exporting 9 records for table 'sysuser.informix.products_on_hand' (4 of 5 tables); total duration '00:00:00.002' [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,734 INFO Informix_Server|ifxserver|snapshot Exporting data from table 'sysuser.informix.table_default_test' (5 of 5 tables) [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,735 INFO Informix_Server|ifxserver|snapshot Finished exporting 0 records for table 'sysuser.informix.table_default_test' (5 of 5 tables); total duration '00:00:00.001' [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-26 07:50:18,737 INFO Informix_Server|ifxserver|snapshot Snapshot - Final stage 
[io.debezium.pipeline.source.AbstractSnapshotChangeEventSource] 2025-06-26 07:50:18,737 INFO Informix_Server|ifxserver|snapshot Snapshot completed [io.debezium.pipeline.source.AbstractSnapshotChangeEventSource] 2025-06-26 07:50:18,738 INFO Informix_Server|ifxserver|snapshot Snapshot ended with SnapshotResult [status=COMPLETED, offset=InformixOffsetContext [sourceInfoSchema=Schema{io.debezium.connector.informix.Source:STRUCT}, sourceInfo=SourceInfo [serverName=ifxserver, timestamp=2025-06-26T07:50:18Z, db=sysuser, snapshot=FALSE, commitLsn=17185435648, changeLsn=-1, txId=-1, beginLsn=-1], snapshotCompleted=true]] [io.debezium.pipeline.ChangeEventSourceCoordinator] 2025-06-26 07:50:18,740 INFO Informix_Server|ifxserver|streaming Connected metrics set to 'true' [io.debezium.pipeline.ChangeEventSourceCoordinator] 2025-06-26 07:50:18,742 INFO Informix_Server|ifxserver|streaming SignalProcessor started. Scheduling it every 5000ms [io.debezium.pipeline.signal.SignalProcessor] 2025-06-26 07:50:18,742 INFO Informix_Server|ifxserver|streaming Creating thread debezium-informixconnector-ifxserver-SignalProcessor [io.debezium.util.Threads] 2025-06-26 07:50:18,743 INFO Informix_Server|ifxserver|streaming Starting streaming [io.debezium.pipeline.ChangeEventSourceCoordinator] 2025-06-26 07:50:18,767 INFO Informix_Server|ifxserver|streaming Set CDCEngine's LSN to '17185435648' aka LSN(4,54f000) [io.debezium.connector.informix.InformixStreamingChangeEventSource] 2025-06-26 07:50:18,797 INFO Informix_Server|ifxserver|streaming Parsing default value for column 'test_col' with expression '0,0000000000' [io.debezium.connector.informix.InformixDefaultValueConverter] 2025-06-26 07:50:18,797 WARN Informix_Server|ifxserver|streaming Cannot parse column default value '0,0000000000' to type '3'. Expression evaluation is not supported. 
[io.debezium.connector.informix.InformixDefaultValueConverter] java.lang.NumberFormatException: Character , is neither a decimal digit number, decimal point, nor "e" notation exponential mark. at java.base/java.math.BigDecimal.(BigDecimal.java:608) at java.base/java.math.BigDecimal.(BigDecimal.java:497) at java.base/java.math.BigDecimal.(BigDecimal.java:903) at io.debezium.connector.informix.InformixDefaultValueConverter.lambda$numericDefaultValueMapper$5(InformixDefaultValueConverter.java:158) at io.debezium.connector.informix.InformixDefaultValueConverter.lambda$nullableDefaultValueMapper$3(InformixDefaultValueConverter.java:138) at io.debezium.connector.informix.InformixDefaultValueConverter.parseDefaultValue(InformixDefaultValueConverter.java:59) at io.debezium.relational.TableSchemaBuilder.lambda$parseDefaultValue$9(TableSchemaBuilder.java:505) at java.base/java.util.Optional.flatMap(Optional.java:289) at io.debezium.relational.TableSchemaBuilder.parseDefaultValue(TableSchemaBuilder.java:505) at io.debezium.relational.TableSchemaBuilder.addField(TableSchemaBuilder.java:439) at io.debezium.relational.TableSchemaBuilder.lambda$create$2(TableSchemaBuilder.java:200) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596) at 
io.debezium.relational.TableSchemaBuilder.create(TableSchemaBuilder.java:198) at io.debezium.relational.RelationalDatabaseSchema.buildAndRegisterSchema(RelationalDatabaseSchema.java:122) at io.debezium.connector.informix.InformixDatabaseSchema.applySchemaChange(InformixDatabaseSchema.java:59) at io.debezium.pipeline.EventDispatcher$SchemaChangeEventReceiver.schemaChangeEvent(EventDispatcher.java:696) at io.debezium.connector.informix.InformixStreamingChangeEventSource.lambda$handleMetadata$3(InformixStreamingChangeEventSource.java:427) at io.debezium.pipeline.EventDispatcher.dispatchSchemaChangeEvent(EventDispatcher.java:402) at io.debezium.connector.informix.InformixStreamingChangeEventSource.handleMetadata(InformixStreamingChangeEventSource.java:418) at io.debezium.connector.informix.InformixStreamingChangeEventSource.execute(InformixStreamingChangeEventSource.java:193) at io.debezium.connector.informix.InformixStreamingChangeEventSource.execute(InformixStreamingChangeEventSource.java:37) at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:322) at io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:203) at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:143) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) at java.base/java.lang.Thread.run(Thread.java:1583) 2025-06-26 07:50:19,154 WARN || [Producer clientId=connector-producer-inventory-connector-0] The metadata response from the cluster reported a recoverable issue with correlation id 4 : {ifxserver=LEADER_NOT_AVAILABLE} [org.apache.kafka.clients.NetworkClient] 2025-06-26 
07:50:19,294 WARN || [Producer clientId=connector-producer-inventory-connector-0] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {ifxserver.informix.customers=LEADER_NOT_AVAILABLE} [org.apache.kafka.clients.NetworkClient] 2025-06-26 07:50:19,432 WARN || [Producer clientId=connector-producer-inventory-connector-0] The metadata response from the cluster reported a recoverable issue with correlation id 12 : {ifxserver.informix.products=LEADER_NOT_AVAILABLE} [org.apache.kafka.clients.NetworkClient] 2025-06-26 07:50:19,558 WARN || [Producer clientId=connector-producer-inventory-connector-0] The metadata response from the cluster reported a recoverable issue with correlation id 16 : {ifxserver.informix.products_on_hand=LEADER_NOT_AVAILABLE} [org.apache.kafka.clients.NetworkClient] 2025-06-26 07:51:18,016 INFO || WorkerSourceTask{id=inventory-connector-0} Committing offsets for 32 acknowledged messages [org.apache.kafka.connect.runtime.WorkerSourceTask]
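The two `Cannot parse column default value` warnings above share one root cause: the catalog reports the default for `test_col` as `0,0000000000`, with a comma as the decimal separator (presumably a locale effect on the Informix side), while `java.math.BigDecimal`'s String constructor accepts only `.`. The small Java sketch below reproduces the exact `NumberFormatException` from the log and shows that a locale-aware parse accepts the comma form. The `parseCommaDecimal` helper and the choice of `Locale.GERMANY` are illustrative assumptions, not Debezium's actual code or fix.

```java
import java.math.BigDecimal;
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.text.ParseException;
import java.util.Locale;

public class DefaultValueParseDemo {

    // Hypothetical helper: parse a numeric string that uses ',' as the decimal
    // separator, using a locale whose decimal separator is ',' (German here).
    static BigDecimal parseCommaDecimal(String value) throws ParseException {
        DecimalFormat df = new DecimalFormat("0.##########",
                DecimalFormatSymbols.getInstance(Locale.GERMANY));
        df.setParseBigDecimal(true); // make parse() return a BigDecimal
        return (BigDecimal) df.parse(value);
    }

    public static void main(String[] args) throws ParseException {
        String dbDefault = "0,0000000000"; // the default value string from the log

        // BigDecimal's String constructor only understands '.', so this throws
        // the same NumberFormatException seen in the connector warnings.
        boolean threw = false;
        try {
            new BigDecimal(dbDefault);
        } catch (NumberFormatException e) {
            threw = true;
        }
        System.out.println("BigDecimal constructor threw: " + threw);

        // A locale-aware parse handles the comma-separated form.
        System.out.println("Locale-aware parse: " + parseCommaDecimal(dbDefault));
    }
}
```

Note the warnings are non-fatal here: the connector logs them, skips the default value, and the snapshot still completes, so this matters mainly if downstream consumers rely on column defaults in the emitted schema.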