Using BOOTSTRAP_SERVERS=kafka:9092
Plugins are loaded from /kafka/connect
Using the following environment variables:
    GROUP_ID=1
    CONFIG_STORAGE_TOPIC=my_connect_configs
    OFFSET_STORAGE_TOPIC=my_connect_offsets
    STATUS_STORAGE_TOPIC=my_connect_statuses
    BOOTSTRAP_SERVERS=kafka:9092
    REST_HOST_NAME=10.89.0.4
    REST_PORT=8083
    ADVERTISED_HOST_NAME=10.89.0.4
    ADVERTISED_PORT=8083
    KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
    VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
    OFFSET_FLUSH_INTERVAL_MS=60000
    OFFSET_FLUSH_TIMEOUT_MS=5000
    SHUTDOWN_TIMEOUT=10000
--- Setting property from CONNECT_REST_ADVERTISED_PORT: rest.advertised.port=8083
--- Setting property from CONNECT_OFFSET_STORAGE_TOPIC: offset.storage.topic=my_connect_offsets
--- Setting property from CONNECT_KEY_CONVERTER: key.converter=org.apache.kafka.connect.json.JsonConverter
--- Setting property from CONNECT_CONFIG_STORAGE_TOPIC: config.storage.topic=my_connect_configs
--- Setting property from CONNECT_GROUP_ID: group.id=1
--- Setting property from CONNECT_REST_ADVERTISED_HOST_NAME: rest.advertised.host.name=10.89.0.4
--- Setting property from CONNECT_REST_HOST_NAME: rest.host.name=10.89.0.4
--- Setting property from CONNECT_VALUE_CONVERTER: value.converter=org.apache.kafka.connect.json.JsonConverter
--- Setting property from CONNECT_REST_PORT: rest.port=8083
--- Setting property from CONNECT_STATUS_STORAGE_TOPIC: status.storage.topic=my_connect_statuses
--- Setting property from CONNECT_OFFSET_FLUSH_TIMEOUT_MS: offset.flush.timeout.ms=5000
--- Setting property from CONNECT_PLUGIN_PATH: plugin.path=/kafka/connect
--- Setting property from CONNECT_OFFSET_FLUSH_INTERVAL_MS: offset.flush.interval.ms=60000
--- Setting property from CONNECT_BOOTSTRAP_SERVERS: bootstrap.servers=kafka:9092
--- Setting property from CONNECT_TASK_SHUTDOWN_GRACEFUL_TIMEOUT_MS: task.shutdown.graceful.timeout.ms=10000
2025-06-13 06:22:59,720 INFO || Kafka Connect worker initializing ... [org.apache.kafka.connect.cli.AbstractConnectCli]
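The banner above is printed by the Debezium Connect container entrypoint before the worker starts: each plain variable (GROUP_ID, CONFIG_STORAGE_TOPIC, ...) and each CONNECT_-prefixed variable is translated into a worker property, as the "--- Setting property from CONNECT_..." lines show. A minimal sketch of how such a worker is typically launched; the image name/tag, container name, and network are illustrative assumptions, not values taken from this log:

```bash
# Hypothetical launch command reproducing the environment in the banner above.
# Image tag, container name, and network are assumptions.
docker run -d --name connect --network kafka-net -p 8083:8083 \
  -e BOOTSTRAP_SERVERS=kafka:9092 \
  -e GROUP_ID=1 \
  -e CONFIG_STORAGE_TOPIC=my_connect_configs \
  -e OFFSET_STORAGE_TOPIC=my_connect_offsets \
  -e STATUS_STORAGE_TOPIC=my_connect_statuses \
  quay.io/debezium/connect:3.1
```

Any worker property without a dedicated variable can be injected the same way: the entrypoint lowercases a CONNECT_-prefixed name and turns underscores into dots (for example, CONNECT_OFFSET_FLUSH_INTERVAL_MS becomes offset.flush.interval.ms above).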
2025-06-13 06:22:59,723 INFO || WorkerInfo values:
    jvm.args = -Xms256M, -Xmx2G, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -XX:MaxInlineLevel=15, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote=true, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=/kafka/logs, -Dlog4j.configuration=file:/kafka/config/log4j.properties
    jvm.spec = Red Hat, Inc., OpenJDK 64-Bit Server VM, 21.0.7, 21.0.7+6
    jvm.classpath = /kafka/libs/activation-1.1.1.jar:/kafka/libs/aopalliance-repackaged-2.6.1.jar:/kafka/libs/argparse4j-0.7.0.jar:/kafka/libs/audience-annotations-0.12.0.jar:/kafka/libs/caffeine-2.9.3.jar:/kafka/libs/commons-beanutils-1.9.4.jar:/kafka/libs/commons-cli-1.4.jar:/kafka/libs/commons-collections-3.2.2.jar:/kafka/libs/commons-digester-2.1.jar:/kafka/libs/commons-io-2.14.0.jar:/kafka/libs/commons-lang3-3.12.0.jar:/kafka/libs/commons-logging-1.2.jar:/kafka/libs/commons-validator-1.7.jar:/kafka/libs/connect-api-3.9.0.jar:/kafka/libs/connect-basic-auth-extension-3.9.0.jar:/kafka/libs/connect-json-3.9.0.jar:/kafka/libs/connect-mirror-3.9.0.jar:/kafka/libs/connect-mirror-client-3.9.0.jar:/kafka/libs/connect-runtime-3.9.0.jar:/kafka/libs/connect-transforms-3.9.0.jar:/kafka/libs/error_prone_annotations-2.10.0.jar:/kafka/libs/hk2-api-2.6.1.jar:/kafka/libs/hk2-locator-2.6.1.jar:/kafka/libs/hk2-utils-2.6.1.jar:/kafka/libs/jackson-annotations-2.16.2.jar:/kafka/libs/jackson-core-2.16.2.jar:/kafka/libs/jackson-databind-2.16.2.jar:/kafka/libs/jackson-dataformat-csv-2.16.2.jar:/kafka/libs/jackson-datatype-jdk8-2.16.2.jar:/kafka/libs/jackson-jaxrs-base-2.16.2.jar:/kafka/libs/jackson-jaxrs-json-provider-2.16.2.jar:/kafka/libs/jackson-module-afterburner-2.16.2.jar:/kafka/libs/jackson-module-jaxb-annotations-2.16.2.jar:/kafka/libs/jackson-module-scala_2.13-2.16.2.jar:/kafka/libs/jakarta.activation-api-1.2.2.jar:/kafka/libs/jakarta.annotation-api-1.3.5.jar:/kafka/libs/jakarta.inject-2.6.1.jar:/kafka/libs/jakarta.validation-api-2.0.2.jar:/kafka/libs/jakarta.ws.rs-api-2.1.6.jar:/kafka/libs/jakarta.xml.bind-api-2.3.3.jar:/kafka/libs/javassist-3.29.2-GA.jar:/kafka/libs/javax.activation-api-1.2.0.jar:/kafka/libs/javax.annotation-api-1.3.2.jar:/kafka/libs/javax.servlet-api-3.1.0.jar:/kafka/libs/javax.ws.rs-api-2.1.1.jar:/kafka/libs/jaxb-api-2.3.1.jar:/kafka/libs/jersey-client-2.39.1.jar:/kafka/libs/jersey-common-2.39.1.jar:/kafka/libs/jersey-container-servlet-2.39.1.jar:/kafka/libs/jersey-container-servlet-core-2.39.1.jar:/kafka/libs/jersey-hk2-2.39.1.jar:/kafka/libs/jersey-server-2.39.1.jar:/kafka/libs/jetty-client-9.4.56.v20240826.jar:/kafka/libs/jetty-continuation-9.4.56.v20240826.jar:/kafka/libs/jetty-http-9.4.56.v20240826.jar:/kafka/libs/jetty-io-9.4.56.v20240826.jar:/kafka/libs/jetty-security-9.4.56.v20240826.jar:/kafka/libs/jetty-server-9.4.56.v20240826.jar:/kafka/libs/jetty-servlet-9.4.56.v20240826.jar:/kafka/libs/jetty-servlets-9.4.56.v20240826.jar:/kafka/libs/jetty-util-9.4.56.v20240826.jar:/kafka/libs/jetty-util-ajax-9.4.56.v20240826.jar:/kafka/libs/jline-3.25.1.jar:/kafka/libs/jolokia-jvm-1.7.2.jar:/kafka/libs/jopt-simple-5.0.4.jar:/kafka/libs/jose4j-0.9.4.jar:/kafka/libs/jsr305-3.0.2.jar:/kafka/libs/kafka-clients-3.9.0.jar:/kafka/libs/kafka-group-coordinator-3.9.0.jar:/kafka/libs/kafka-group-coordinator-api-3.9.0.jar:/kafka/libs/kafka-metadata-3.9.0.jar:/kafka/libs/kafka-raft-3.9.0.jar:/kafka/libs/kafka-server-3.9.0.jar:/kafka/libs/kafka-server-common-3.9.0.jar:/kafka/libs/kafka-shell-3.9.0.jar:/kafka/libs/kafka-storage-3.9.0.jar:/kafka/libs/kafka-storage-api-3.9.0.jar:/kafka/libs/kafka-streams-3.9.0.jar:/kafka/libs/kafka-streams-examples-3.9.0.jar:/kafka/libs/kafka-streams-scala_2.13-3.9.0.jar:/kafka/libs/kafka-streams-test-utils-3.9.0.jar:/kafka/libs/kafka-tools-3.9.0.jar:/kafka/libs/kafka-tools-api-3.9.0.jar:/kafka/libs/kafka-transaction-coordinator-3.9.0.jar:/kafka/libs/kafka_2.13-3.9.0.jar:/kafka/libs/lz4-java-1.8.0.jar:/kafka/libs/maven-artifact-3.9.6.jar:/kafka/libs/metrics-core-2.2.0.jar:/kafka/libs/metrics-core-4.1.12.1.jar:/kafka/libs/netty-buffer-4.1.111.Final.jar:/kafka/libs/netty-codec-4.1.111.Final.jar:/kafka/libs/netty-common-4.1.111.Final.jar:/kafka/libs/netty-handler-4.1.111.Final.jar:/kafka/libs/netty-resolver-4.1.111.Final.jar:/kafka/libs/netty-transport-4.1.111.Final.jar:/kafka/libs/netty-transport-classes-epoll-4.1.111.Final.jar:/kafka/libs/netty-transport-native-epoll-4.1.111.Final.jar:/kafka/libs/netty-transport-native-unix-common-4.1.111.Final.jar:/kafka/libs/opentelemetry-proto-1.0.0-alpha.jar:/kafka/libs/osgi-resource-locator-1.0.3.jar:/kafka/libs/paranamer-2.8.jar:/kafka/libs/pcollections-4.0.1.jar:/kafka/libs/plexus-utils-3.5.1.jar:/kafka/libs/protobuf-java-3.25.5.jar:/kafka/libs/reflections-0.10.2.jar:/kafka/libs/reload4j-1.2.25.jar:/kafka/libs/rocksdbjni-7.9.2.jar:/kafka/libs/scala-collection-compat_2.13-2.10.0.jar:/kafka/libs/scala-java8-compat_2.13-1.0.2.jar:/kafka/libs/scala-library-2.13.14.jar:/kafka/libs/scala-logging_2.13-3.9.5.jar:/kafka/libs/scala-reflect-2.13.14.jar:/kafka/libs/slf4j-api-1.7.36.jar:/kafka/libs/slf4j-reload4j-1.7.36.jar:/kafka/libs/snappy-java-1.1.10.5.jar:/kafka/libs/swagger-annotations-2.2.8.jar:/kafka/libs/trogdor-3.9.0.jar:/kafka/libs/zookeeper-3.8.4.jar:/kafka/libs/zookeeper-jute-3.8.4.jar:/kafka/libs/zstd-jni-1.5.6-4.jar
    os.spec = Linux, amd64, 5.4.17-2136.337.5.1.el8uek.x86_64
    os.vcpus = 4 [org.apache.kafka.connect.runtime.WorkerInfo]
2025-06-13 06:22:59,724 INFO || Scanning for plugin classes. This might take a moment ... [org.apache.kafka.connect.cli.AbstractConnectCli]
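The jvm.args line shows the stock Connect heap settings (-Xms256M, -Xmx2G) alongside the G1GC and JMX flags. Kafka's launch scripts read KAFKA_HEAP_OPTS, so in principle the heap can be raised at container start; this is a hedged sketch, and it assumes the image passes the variable through unchanged to connect-distributed.sh:

```bash
# Illustrative only: override the -Xms256M/-Xmx2G defaults seen in jvm.args.
# Add the same -e variables as in the launch sketch above.
docker run -d --name connect -e KAFKA_HEAP_OPTS="-Xms512M -Xmx4G" \
  quay.io/debezium/connect:3.1
```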
2025-06-13 06:22:59,780 INFO || Loading plugin from: /kafka/connect/debezium-connector-mysql [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:22:59,864 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2025-06-13 06:23:00,061 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mysql/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,068 INFO || Loading plugin from: /kafka/connect/debezium-connector-ibmi [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,085 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2025-06-13 06:23:00,119 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-ibmi/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,130 INFO || Loading plugin from: /kafka/connect/debezium-connector-vitess [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,156 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2025-06-13 06:23:00,189 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-vitess/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,190 INFO || Loading plugin from: /kafka/connect/debezium-connector-db2 [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,200 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2025-06-13 06:23:00,233 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-db2/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,233 INFO || Loading plugin from: /kafka/connect/debezium-connector-informix [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,242 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2025-06-13 06:23:00,276 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-informix/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,276 INFO || Loading plugin from: /kafka/connect/debezium-connector-mongodb [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,288 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2025-06-13 06:23:00,337 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mongodb/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,337 INFO || Loading plugin from: /kafka/connect/debezium-connector-jdbc [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,364 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2025-06-13 06:23:00,400 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-jdbc/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,556 INFO || Loading plugin from: /kafka/connect/debezium-connector-sqlserver [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,570 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2025-06-13 06:23:00,607 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-sqlserver/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,634 INFO || Loading plugin from: /kafka/connect/debezium-connector-oracle [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,674 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2025-06-13 06:23:00,708 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-oracle/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,755 INFO || Loading plugin from: /kafka/connect/debezium-connector-mariadb [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,774 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2025-06-13 06:23:00,804 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mariadb/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,806 INFO || Loading plugin from: /kafka/connect/debezium-connector-postgres [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,816 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2025-06-13 06:23:00,854 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-postgres/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,857 INFO || Loading plugin from: /kafka/connect/debezium-connector-spanner [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,898 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter]
2025-06-13 06:23:00,929 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-spanner/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,929 INFO || Loading plugin from: classpath [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,935 INFO || Registered loader: jdk.internal.loader.ClassLoaders$AppClassLoader@5a07e868 [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,936 INFO || Scanning plugins with ServiceLoaderScanner took 1157 ms [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:00,937 INFO || Loading plugin from: /kafka/connect/debezium-connector-mysql [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:01,632 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mysql/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:01,632 INFO || Loading plugin from: /kafka/connect/debezium-connector-ibmi [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:01,841 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-ibmi/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:01,841 INFO || Loading plugin from: /kafka/connect/debezium-connector-vitess [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:02,549 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-vitess/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:02,549 INFO || Loading plugin from: /kafka/connect/debezium-connector-db2 [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:02,606 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-db2/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:02,607 INFO || Loading plugin from: /kafka/connect/debezium-connector-informix [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:02,659 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-informix/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:02,660 INFO || Loading plugin from: /kafka/connect/debezium-connector-mongodb [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:02,755 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mongodb/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:02,755 INFO || Loading plugin from: /kafka/connect/debezium-connector-jdbc [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:03,699 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-jdbc/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:03,713 INFO || Loading plugin from: /kafka/connect/debezium-connector-sqlserver [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:03,808 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-sqlserver/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:03,813 INFO || Loading plugin from: /kafka/connect/debezium-connector-oracle [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:04,586 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-oracle/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:04,586 INFO || Loading plugin from: /kafka/connect/debezium-connector-mariadb [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:04,860 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mariadb/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:04,861 INFO || Loading plugin from: /kafka/connect/debezium-connector-postgres [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:04,983 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-postgres/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:04,983 INFO || Loading plugin from: /kafka/connect/debezium-connector-spanner [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:05,759 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-spanner/} [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:05,760 INFO || Loading plugin from: classpath [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:06,765 INFO || Registered loader: jdk.internal.loader.ClassLoaders$AppClassLoader@5a07e868 [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:06,766 INFO || Scanning plugins with ReflectionScanner took 5829 ms [org.apache.kafka.connect.runtime.isolation.PluginScanner]
2025-06-13 06:23:06,771 WARN || One or more plugins are missing ServiceLoader manifests may not be usable with plugin.discovery=service_load: [
    file:/kafka/connect/debezium-connector-mongodb/ io.debezium.connector.mongodb.MongoDbSinkConnector sink 3.1.1.Final
    file:/kafka/connect/debezium-connector-postgres/ io.debezium.connector.postgresql.transforms.DecodeLogicalDecodingMessageContent transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-vitess/ io.debezium.connector.vitess.transforms.FilterTransactionTopicRecords transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-vitess/ io.debezium.connector.vitess.transforms.RemoveField transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-vitess/ io.debezium.connector.vitess.transforms.ReplaceFieldValue transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-vitess/ io.debezium.connector.vitess.transforms.UseLocalVgtid transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-db2/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-ibmi/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-informix/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-jdbc/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-mariadb/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-mongodb/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-mysql/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-oracle/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-postgres/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-spanner/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-sqlserver/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.1.Final
    file:/kafka/connect/debezium-connector-vitess/ io.debezium.transforms.VectorToJsonConverter transformation 3.1.1.Final
]
Read the documentation at https://kafka.apache.org/documentation.html#connect_plugindiscovery for instructions on migrating your plugins to take advantage of the performance improvements of service_load mode.
To silence this warning, set plugin.discovery=only_scan in the worker config. [org.apache.kafka.connect.runtime.isolation.Plugins]
[org.apache.kafka.connect.runtime.isolation.Plugins] 2025-06-13 06:23:06,772 INFO || Added plugin 'org.apache.kafka.connect.transforms.Filter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,772 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,772 INFO || Added plugin 'io.debezium.connector.vitess.transforms.FilterTransactionTopicRecords' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,772 INFO || Added plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,772 INFO || Added plugin 'org.apache.kafka.connect.transforms.DropHeaders' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,772 INFO || Added plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,772 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertHeader' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,772 INFO || Added plugin 'io.debezium.connector.mariadb.MariaDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'io.debezium.connector.postgresql.transforms.timescaledb.TimescaleDb' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'org.apache.kafka.connect.transforms.HeaderFrom$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'io.debezium.transforms.ExtractNewRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'io.debezium.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'io.debezium.transforms.outbox.EventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'io.debezium.connector.mongodb.transforms.outbox.MongoEventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'io.debezium.connector.db2as400.smt.RepackageJavaFriendlySchemaRenamer' 
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'io.debezium.connector.vitess.transforms.UseLocalVgtid' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'io.debezium.connector.postgresql.rest.DebeziumPostgresConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,773 INFO || Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'io.debezium.transforms.HeaderToValue' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'io.debezium.connector.jdbc.transforms.ConvertCloudEventToSaveableForm' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'io.debezium.connector.informix.InformixConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'org.apache.kafka.common.config.provider.EnvVarConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'io.debezium.transforms.ExtractChangedRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'io.debezium.connector.mongodb.transforms.ExtractNewDocumentState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'io.debezium.connector.mysql.transforms.ReadToInsertEvent' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'org.apache.kafka.connect.converters.BooleanConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 
'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,774 INFO || Added plugin 'io.debezium.connector.sqlserver.rest.DebeziumSqlServerConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,775 INFO || Added plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,775 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,775 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,775 INFO || Added plugin 'io.debezium.connector.spanner.SpannerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,775 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,775 INFO || Added plugin 'io.debezium.connector.vitess.transforms.ReplaceFieldValue' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,775 INFO || Added plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,775 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,775 INFO || Added plugin 'io.debezium.connector.mariadb.rest.DebeziumMariaDbConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,775 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,775 INFO || Added plugin 'io.debezium.converters.BinaryDataConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,775 INFO || Added plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,775 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,775 INFO || Added plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,775 INFO || Added plugin 'io.debezium.transforms.partitions.PartitionRouting' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,776 INFO || Added plugin 'io.debezium.connector.jdbc.JdbcSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,776 INFO || Added plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,776 INFO || Added plugin 'io.debezium.transforms.VectorToJsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,776 INFO || Added plugin 'io.debezium.connector.vitess.VitessConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 
06:23:06,776 INFO || Added plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,776 INFO || Added plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,776 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,776 INFO || Added plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,776 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,776 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,776 INFO || Added plugin 'io.debezium.connector.mongodb.MongoDbSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,776 INFO || Added plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,776 INFO || Added plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,776 INFO || Added plugin 'io.debezium.transforms.SchemaChangeEventFilter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,776 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,776 INFO || Added plugin 'io.debezium.transforms.ExtractSchemaToNewRecord' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,777 INFO || Added plugin 'io.debezium.connector.db2.Db2Connector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,777 INFO || Added plugin 'io.debezium.transforms.ByLogicalTableRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,777 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,777 INFO || Added plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,777 INFO || Added plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,777 INFO || Added plugin 'io.debezium.connector.postgresql.transforms.DecodeLogicalDecodingMessageContent' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,777 INFO || Added plugin 'io.debezium.connector.db2as400.As400RpcConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,777 INFO || Added plugin 'io.debezium.connector.mongodb.rest.DebeziumMongoDbConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,777 INFO || Added plugin 'io.debezium.transforms.TimezoneConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,777 INFO || Added plugin 
'org.apache.kafka.connect.transforms.HeaderFrom$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,777 INFO || Added plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,777 INFO || Added plugin 'io.debezium.transforms.tracing.ActivateTracingSpan' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,777 INFO || Added plugin 'io.debezium.connector.mysql.rest.DebeziumMySqlConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,777 INFO || Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,777 INFO || Added plugin 'io.debezium.connector.vitess.transforms.RemoveField' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,778 INFO || Added plugin 'io.debezium.connector.oracle.rest.DebeziumOracleConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,778 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,778 INFO || Added plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,778 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,780 INFO || Added alias 'VitessConnector' to plugin 'io.debezium.connector.vitess.VitessConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,781 INFO || Added alias 'As400RpcConnector' to plugin 'io.debezium.connector.db2as400.As400RpcConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,781 INFO || Added alias 'VectorToJsonConverter' to plugin 'io.debezium.transforms.VectorToJsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,781 INFO || Added alias 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,781 INFO || Added alias 'MySql' to plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,781 INFO || Added alias 'MirrorCheckpointConnector' to plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,781 INFO || Added alias 'HeaderToValue' to plugin 'io.debezium.transforms.HeaderToValue' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,781 INFO || Added alias 'RepackageJavaFriendlySchemaRenamer' to plugin 'io.debezium.connector.db2as400.smt.RepackageJavaFriendlySchemaRenamer' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,781 INFO || Added alias 'Float' to plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,781 INFO || Added alias 'RemoveField' to plugin 'io.debezium.connector.vitess.transforms.RemoveField' 
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,781 INFO || Added alias 'SimpleHeaderConverter' to plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,781 INFO || Added alias 'SqlServerConnector' to plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,782 INFO || Added alias 'DirectoryConfigProvider' to plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,782 INFO || Added alias 'TimezoneConverter' to plugin 'io.debezium.transforms.TimezoneConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,782 INFO || Added alias 'BasicAuthSecurityRestExtension' to plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,782 INFO || Added alias 'Simple' to plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,782 INFO || Added alias 'DebeziumPostgres' to plugin 'io.debezium.connector.postgresql.rest.DebeziumPostgresConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,782 INFO || Added alias 'AllConnectorClientConfigOverridePolicy' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,782 INFO || Added alias 'MirrorSource' to plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,782 INFO || Added alias 'Directory' to plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,782 INFO || Added alias 'DebeziumMariaDb' to plugin 'io.debezium.connector.mariadb.rest.DebeziumMariaDbConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,782 INFO || Added alias 'MirrorHeartbeat' to plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,782 INFO || Added alias 'BooleanConverter' to plugin 'org.apache.kafka.connect.converters.BooleanConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,782 INFO || Added alias 'JsonConverter' to plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,782 INFO || Added alias 'DebeziumMySql' to plugin 'io.debezium.connector.mysql.rest.DebeziumMySqlConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,782 INFO || Added alias 'FilterTransactionTopicRecords' to plugin 'io.debezium.connector.vitess.transforms.FilterTransactionTopicRecords' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,783 INFO || Added alias 'JdbcSinkConnector' to plugin 'io.debezium.connector.jdbc.JdbcSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 
2025-06-13 06:23:06,783 INFO || Added alias 'ReplaceFieldValue' to plugin 'io.debezium.connector.vitess.transforms.ReplaceFieldValue' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,783 INFO || Added alias 'DebeziumPostgresConnectRestExtension' to plugin 'io.debezium.connector.postgresql.rest.DebeziumPostgresConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,783 INFO || Added alias 'SpannerConnector' to plugin 'io.debezium.connector.spanner.SpannerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,783 INFO || Added alias 'MongoDbSinkConnector' to plugin 'io.debezium.connector.mongodb.MongoDbSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,783 INFO || Added alias 'MongoDb' to plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,783 INFO || Added alias 'Postgres' to plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,783 INFO || Added alias 'Short' to plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,783 INFO || Added alias 'ByLogicalTableRouter' to plugin 'io.debezium.transforms.ByLogicalTableRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,783 INFO || Added alias 'DecodeLogicalDecodingMessageContent' to plugin 'io.debezium.connector.postgresql.transforms.DecodeLogicalDecodingMessageContent' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,783 INFO || Added alias 'FileConfigProvider' to plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,783 INFO || Added alias 'SchemaChangeEventFilter' to plugin 'io.debezium.transforms.SchemaChangeEventFilter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,783 INFO || Added alias 'ConvertCloudEventToSaveableForm' to plugin 'io.debezium.connector.jdbc.transforms.ConvertCloudEventToSaveableForm' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,783 INFO || Added alias 'Long' to plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,784 INFO || Added alias 'FloatConverter' to plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,784 INFO || Added alias 'Spanner' to plugin 'io.debezium.connector.spanner.SpannerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,784 INFO || Added alias 'MariaDb' to plugin 'io.debezium.connector.mariadb.MariaDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,784 INFO || Added alias 'ActivateTracingSpan' to plugin 'io.debezium.transforms.tracing.ActivateTracingSpan' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,784 INFO || Added alias 'DebeziumSqlServerConnectRestExtension' to plugin 'io.debezium.connector.sqlserver.rest.DebeziumSqlServerConnectRestExtension' 
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,784 INFO || Added alias 'UseLocalVgtid' to plugin 'io.debezium.connector.vitess.transforms.UseLocalVgtid' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,784 INFO || Added alias 'MirrorHeartbeatConnector' to plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,784 INFO || Added alias 'Oracle' to plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,784 INFO || Added alias 'PrincipalConnectorClientConfigOverridePolicy' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,784 INFO || Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,784 INFO || Added alias 'Integer' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,784 INFO || Added alias 'Filter' to plugin 'org.apache.kafka.connect.transforms.Filter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,785 INFO || Added alias 'Informix' to plugin 'io.debezium.connector.informix.InformixConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,785 INFO || Added alias 'DebeziumMariaDbConnectRestExtension' to plugin 'io.debezium.connector.mariadb.rest.DebeziumMariaDbConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,785 INFO || Added alias 'ExtractNewDocumentState' to plugin 'io.debezium.connector.mongodb.transforms.ExtractNewDocumentState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,785 INFO || Added alias 'RecordIsTombstone' to plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,785 INFO || Added alias 'CloudEventsConverter' to plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,785 INFO || Added alias 'DebeziumOracle' to plugin 'io.debezium.connector.oracle.rest.DebeziumOracleConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,785 INFO || Added alias 'EnvVar' to plugin 'org.apache.kafka.common.config.provider.EnvVarConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,785 INFO || Added alias 'EnvVarConfigProvider' to plugin 'org.apache.kafka.common.config.provider.EnvVarConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,785 INFO || Added alias 'Boolean' to plugin 'org.apache.kafka.connect.converters.BooleanConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,785 INFO || Added alias 'MySqlConnector' to plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,785 INFO || Added alias 'MariaDbConnector' to plugin 
'io.debezium.connector.mariadb.MariaDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,785 INFO || Added alias 'DebeziumSqlServer' to plugin 'io.debezium.connector.sqlserver.rest.DebeziumSqlServerConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,785 INFO || Added alias 'PartitionRouting' to plugin 'io.debezium.transforms.partitions.PartitionRouting' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,785 INFO || Added alias 'MongoDbSink' to plugin 'io.debezium.connector.mongodb.MongoDbSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'StringConverter' to plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'MongoDbConnector' to plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'IntegerConverter' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'LongConverter' to plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'DropHeaders' to plugin 'org.apache.kafka.connect.transforms.DropHeaders' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'ExtractSchemaToNewRecord' to plugin 'io.debezium.transforms.ExtractSchemaToNewRecord' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'BinaryData' to plugin 'io.debezium.converters.BinaryDataConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'ReadToInsertEvent' to plugin 'io.debezium.connector.mysql.transforms.ReadToInsertEvent' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'ShortConverter' to plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'CloudEvents' to plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'DebeziumOracleConnectRestExtension' to plugin 'io.debezium.connector.oracle.rest.DebeziumOracleConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'ExtractNewRecordState' to plugin 'io.debezium.transforms.ExtractNewRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'DebeziumMongoDb' to plugin 'io.debezium.connector.mongodb.rest.DebeziumMongoDbConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'Db2' to plugin 'io.debezium.connector.db2.Db2Connector' 
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,786 INFO || Added alias 'Db2Connector' to plugin 'io.debezium.connector.db2.Db2Connector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,787 INFO || Added alias 'Vitess' to plugin 'io.debezium.connector.vitess.VitessConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,787 INFO || Added alias 'InformixConnector' to plugin 'io.debezium.connector.informix.InformixConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,787 INFO || Added alias 'DebeziumMongoDbConnectRestExtension' to plugin 'io.debezium.connector.mongodb.rest.DebeziumMongoDbConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,787 INFO || Added alias 'HasHeaderKey' to plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,787 INFO || Added alias 'MirrorCheckpoint' to plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,787 INFO || Added alias 'ExtractChangedRecordState' to plugin 'io.debezium.transforms.ExtractChangedRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,787 INFO || Added alias 'OracleConnector' to plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,787 INFO || Added alias 'None' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,787 INFO || Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,787 INFO || Added alias 'Principal' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,787 INFO || Added alias 'All' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,787 INFO || Added alias 'SqlServer' to plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'DebeziumMySqlConnectRestExtension' to plugin 'io.debezium.connector.mysql.rest.DebeziumMySqlConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'JdbcSink' to plugin 'io.debezium.connector.jdbc.JdbcSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'RegexRouter' to plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'NoneConnectorClientConfigOverridePolicy' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'Double' to plugin 
'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'EventRouter' to plugin 'io.debezium.transforms.outbox.EventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'File' to plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'DoubleConverter' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'BinaryDataConverter' to plugin 'io.debezium.converters.BinaryDataConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'TimescaleDb' to plugin 'io.debezium.connector.postgresql.transforms.timescaledb.TimescaleDb' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'TopicNameMatches' to plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'InsertHeader' to plugin 'org.apache.kafka.connect.transforms.InsertHeader' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'MirrorSourceConnector' to plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'PostgresConnector' to plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'MongoEventRouter' to plugin 'io.debezium.connector.mongodb.transforms.outbox.MongoEventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,788 INFO || Added alias 'As400Rpc' to plugin 'io.debezium.connector.db2as400.As400RpcConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2025-06-13 06:23:06,826 INFO || DistributedConfig values: access.control.allow.methods = access.control.allow.origin = admin.listeners = null auto.include.jmx.reporter = true bootstrap.servers = [kafka:9092] client.dns.lookup = use_all_dns_ips client.id = config.providers = [] config.storage.replication.factor = 1 config.storage.topic = my_connect_configs connect.protocol = sessioned connections.max.idle.ms = 540000 connector.client.config.override.policy = All exactly.once.source.support = disabled group.id = 1 header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter heartbeat.interval.ms = 3000 inter.worker.key.generation.algorithm = HmacSHA256 inter.worker.key.size = null inter.worker.key.ttl.ms = 3600000 inter.worker.signature.algorithm = HmacSHA256 inter.worker.verification.algorithms = [HmacSHA256] key.converter = class org.apache.kafka.connect.json.JsonConverter listeners = [http://:8083] metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 offset.flush.interval.ms = 60000 offset.flush.timeout.ms = 5000 offset.storage.partitions = 25 offset.storage.replication.factor = 1 offset.storage.topic = my_connect_offsets plugin.discovery = 
2025-06-13 06:23:06,826 INFO || DistributedConfig values:
    access.control.allow.methods =
    access.control.allow.origin =
    admin.listeners = null
    auto.include.jmx.reporter = true
    bootstrap.servers = [kafka:9092]
    client.dns.lookup = use_all_dns_ips
    client.id =
    config.providers = []
    config.storage.replication.factor = 1
    config.storage.topic = my_connect_configs
    connect.protocol = sessioned
    connections.max.idle.ms = 540000
    connector.client.config.override.policy = All
    exactly.once.source.support = disabled
    group.id = 1
    header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
    heartbeat.interval.ms = 3000
    inter.worker.key.generation.algorithm = HmacSHA256
    inter.worker.key.size = null
    inter.worker.key.ttl.ms = 3600000
    inter.worker.signature.algorithm = HmacSHA256
    inter.worker.verification.algorithms = [HmacSHA256]
    key.converter = class org.apache.kafka.connect.json.JsonConverter
    listeners = [http://:8083]
    metadata.max.age.ms = 300000
    metadata.recovery.strategy = none
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    offset.flush.interval.ms = 60000
    offset.flush.timeout.ms = 5000
    offset.storage.partitions = 25
    offset.storage.replication.factor = 1
    offset.storage.topic = my_connect_offsets
    plugin.discovery = hybrid_warn
    plugin.path = [/kafka/connect]
    rebalance.timeout.ms = 60000
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 40000
    response.http.headers.config =
    rest.advertised.host.name = 10.89.0.4
    rest.advertised.listener = null
    rest.advertised.port = 8083
    rest.extension.classes = []
    retry.backoff.max.ms = 1000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism = GSSAPI
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.header.urlencode = false
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    scheduled.rebalance.max.delay.ms = 300000
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.client.auth = none
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    status.storage.partitions = 5
    status.storage.replication.factor = 1
    status.storage.topic = my_connect_statuses
    task.shutdown.graceful.timeout.ms = 10000
    topic.creation.enable = true
    topic.tracking.allow.reset = true
    topic.tracking.enable = true
    value.converter = class org.apache.kafka.connect.json.JsonConverter
    worker.sync.timeout.ms = 3000
    worker.unsync.backoff.ms = 300000
[org.apache.kafka.connect.runtime.distributed.DistributedConfig]
2025-06-13 06:23:06,827 INFO || Creating Kafka admin client [org.apache.kafka.connect.runtime.WorkerConfig]
2025-06-13 06:23:06,829 INFO || AdminClientConfig values:
    auto.include.jmx.reporter = true
    bootstrap.controllers = []
    bootstrap.servers = [kafka:9092]
    client.dns.lookup = use_all_dns_ips
    client.id =
    connections.max.idle.ms = 300000
    default.api.timeout.ms = 60000
    enable.metrics.push = true
    metadata.max.age.ms = 300000
    metadata.recovery.strategy = none
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.max.ms = 1000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism = GSSAPI
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.header.urlencode = false
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
[org.apache.kafka.clients.admin.AdminClientConfig]
2025-06-13 06:23:06,885 INFO || These configurations '[config.storage.topic, rest.advertised.host.name, status.storage.topic, group.id, rest.advertised.port, rest.host.name, task.shutdown.graceful.timeout.ms, plugin.path, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. [org.apache.kafka.clients.admin.AdminClientConfig]
2025-06-13 06:23:06,885 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:06,885 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:06,885 INFO || Kafka startTimeMs: 1749795786885 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:07,229 INFO || Kafka cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.connect.runtime.WorkerConfig]
2025-06-13 06:23:07,231 INFO || App info kafka.admin.client for adminclient-1 unregistered [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:07,237 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:23:07,237 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:23:07,237 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:23:07,243 INFO || PublicConfig values:
    access.control.allow.methods =
    access.control.allow.origin =
    admin.listeners = null
    listeners = [http://:8083]
    response.http.headers.config =
    rest.advertised.host.name = 10.89.0.4
    rest.advertised.listener = null
    rest.advertised.port = 8083
    rest.extension.classes = []
    ssl.cipher.suites = null
    ssl.client.auth = none
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    topic.tracking.allow.reset = true
    topic.tracking.enable = true
[org.apache.kafka.connect.runtime.rest.RestServerConfig$PublicConfig]
2025-06-13 06:23:07,253 INFO || Logging initialized @8205ms to org.eclipse.jetty.util.log.Slf4jLog [org.eclipse.jetty.util.log]
2025-06-13 06:23:07,290 INFO || Added connector for http://:8083 [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:23:07,290 INFO || Initializing REST server [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:23:07,314 INFO || jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 21.0.7+6 [org.eclipse.jetty.server.Server]
2025-06-13 06:23:07,349 INFO || Started http_8083@5672b35{HTTP/1.1, (http/1.1)}{0.0.0.0:8083} [org.eclipse.jetty.server.AbstractConnector]
2025-06-13 06:23:07,350 INFO || Started @8302ms [org.eclipse.jetty.server.Server]
2025-06-13 06:23:07,365 INFO || Advertised URI: http://10.89.0.4:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:23:07,365 INFO || REST server listening at http://10.89.0.4:8083/, advertising URL http://10.89.0.4:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:23:07,366 INFO || Advertised URI: http://10.89.0.4:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:23:07,366 INFO || REST admin endpoints at http://10.89.0.4:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:23:07,366 INFO || Advertised URI: http://10.89.0.4:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:23:07,366 INFO || Setting up All Policy for ConnectorClientConfigOverride. This will allow all client configurations to be overridden [org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy]
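That last line matters operationally: because connector.client.config.override.policy = All, any connector registered with this worker may override the worker's Kafka client settings per connector via the producer.override. and consumer.override. key prefixes. A hypothetical registration sketch follows; the connector name, class, and override values are placeholders rather than anything from this log, and a real Debezium registration would also need the database.* connection settings:

import json
import urllib.request

# Illustrative connector config using per-connector client overrides,
# which this worker accepts because its override policy is 'All'.
# With the 'None' policy the same request would be rejected.
config = {
    "name": "example-connector",                # placeholder name
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "producer.override.compression.type": "lz4",
        "consumer.override.max.poll.records": "1000",
    },
}
req = urllib.request.Request(
    "http://10.89.0.4:8083/connectors",
    data=json.dumps(config).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, json.load(resp))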
2025-06-13 06:23:07,371 INFO || JsonConverterConfig values:
    converter.type = key
    decimal.format = BASE64
    replace.null.with.default = true
    schemas.cache.size = 1000
    schemas.enable = false
[org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-13 06:23:07,386 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:07,386 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:07,386 INFO || Kafka startTimeMs: 1749795787386 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:07,392 INFO || JsonConverterConfig values:
    converter.type = key
    decimal.format = BASE64
    replace.null.with.default = true
    schemas.cache.size = 1000
    schemas.enable = false
[org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-13 06:23:07,392 INFO || JsonConverterConfig values:
    converter.type = value
    decimal.format = BASE64
    replace.null.with.default = true
    schemas.cache.size = 1000
    schemas.enable = false
[org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-13 06:23:07,404 INFO || Advertised URI: http://10.89.0.4:8083/ [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:23:07,423 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:07,423 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:07,424 INFO || Kafka startTimeMs: 1749795787423 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:07,426 INFO || Kafka Connect worker initialization took 7705ms [org.apache.kafka.connect.cli.AbstractConnectCli]
2025-06-13 06:23:07,426 INFO || Kafka Connect starting [org.apache.kafka.connect.runtime.Connect]
2025-06-13 06:23:07,431 INFO || Initializing REST resources [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:23:07,431 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Herder starting [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:07,432 INFO || Worker starting [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:07,432 INFO || Starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore]
2025-06-13 06:23:07,433 INFO || Starting KafkaBasedLog with topic my_connect_offsets reportErrorsToCallback=false [org.apache.kafka.connect.util.KafkaBasedLog]
2025-06-13 06:23:07,436 INFO || AdminClientConfig values:
    auto.include.jmx.reporter = true
    bootstrap.controllers = []
    bootstrap.servers = [kafka:9092]
    client.dns.lookup = use_all_dns_ips
    client.id = 1-shared-admin
    connections.max.idle.ms = 300000
    default.api.timeout.ms = 60000
    enable.metrics.push = true
    metadata.max.age.ms = 300000
    metadata.recovery.strategy = none
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.max.ms = 1000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism = GSSAPI
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.header.urlencode = false
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
[org.apache.kafka.clients.admin.AdminClientConfig]
2025-06-13 06:23:07,440 INFO || These configurations '[config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, group.id, rest.advertised.port, rest.host.name, task.shutdown.graceful.timeout.ms, plugin.path, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. [org.apache.kafka.clients.admin.AdminClientConfig]
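This shared admin client (client.id = 1-shared-admin) is what the worker uses to create its three internal, compacted topics when they do not already exist. If you prefer to pre-create them yourself, for example when the worker's principal lacks topic-creation rights, the rough equivalent of what the worker does next for the offsets topic is sketched below; it assumes the third-party kafka-python package and broker access, and mirrors the parameters visible in the log:

from kafka.admin import KafkaAdminClient, NewTopic  # pip install kafka-python (assumed)

admin = KafkaAdminClient(bootstrap_servers="kafka:9092")

# Mirrors the topic the worker creates below: 25 partitions, replication
# factor 1, and log compaction, which Connect requires for internal topics.
offsets_topic = NewTopic(
    name="my_connect_offsets",
    num_partitions=25,
    replication_factor=1,
    topic_configs={"cleanup.policy": "compact"},
)
admin.create_topics([offsets_topic])
admin.close()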
2025-06-13 06:23:07,440 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:07,441 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:07,441 INFO || Kafka startTimeMs: 1749795787440 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:07,458 INFO || Adding admin resources to main listener [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:23:07,493 INFO || DefaultSessionIdManager workerName=node0 [org.eclipse.jetty.server.session]
2025-06-13 06:23:07,493 INFO || No SessionScavenger set, using defaults [org.eclipse.jetty.server.session]
2025-06-13 06:23:07,494 INFO || node0 Scavenging every 660000ms [org.eclipse.jetty.server.session]
2025-06-13 06:23:08,038 INFO || Started o.e.j.s.ServletContextHandler@69825ffc{/,null,AVAILABLE} [org.eclipse.jetty.server.handler.ContextHandler]
2025-06-13 06:23:08,038 INFO || REST resources initialized; server is started and ready to handle requests [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:23:08,039 INFO || Kafka Connect started [org.apache.kafka.connect.runtime.Connect]
2025-06-13 06:23:08,222 INFO || Created topic (name=my_connect_offsets, numPartitions=25, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at kafka:9092 [org.apache.kafka.connect.util.TopicAdmin]
2025-06-13 06:23:08,235 INFO || ProducerConfig values:
    acks = -1
    auto.include.jmx.reporter = true
    batch.size = 16384
    bootstrap.servers = [kafka:9092]
    buffer.memory = 33554432
    client.dns.lookup = use_all_dns_ips
    client.id = 1-offsets
    compression.gzip.level = -1
    compression.lz4.level = 9
    compression.type = none
    compression.zstd.level = 3
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 2147483647
    enable.idempotence = false
    enable.metrics.push = true
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 1
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metadata.max.idle.ms = 300000
    metadata.recovery.strategy = none
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.adaptive.partitioning.enable = true
    partitioner.availability.timeout.ms = 0
    partitioner.class = null
    partitioner.ignore.keys = false
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.max.ms = 1000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism = GSSAPI
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.header.urlencode = false
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
[org.apache.kafka.clients.producer.ProducerConfig]
2025-06-13 06:23:08,256 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:23:08,275 INFO || These configurations '[group.id, rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. [org.apache.kafka.clients.producer.ProducerConfig]
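Note the deliberately conservative settings on this internal producer: acks = -1, effectively unlimited retries and delivery timeout, and max.in.flight.requests.per.connection = 1, trading throughput for strict write ordering on the compacted offsets log. A standalone producer tuned the same way might look like the following sketch; it assumes the third-party kafka-python package, and the topic name is a scratch placeholder rather than a suggestion to write to the worker's internal topics:

from kafka import KafkaProducer  # pip install kafka-python (assumed)

# Mirrors the durability/ordering profile of the worker's 1-offsets producer:
# wait for all in-sync replicas, retry indefinitely, and keep at most one
# request in flight so retries cannot reorder writes.
producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    acks="all",                               # acks = -1 in the dump above
    retries=2147483647,
    max_in_flight_requests_per_connection=1,
)
producer.send("example-topic", key=b"example-key", value=b"example-value")
producer.flush()
producer.close()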
2025-06-13 06:23:08,275 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:08,275 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:08,275 INFO || Kafka startTimeMs: 1749795788275 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:08,283 INFO || ConsumerConfig values:
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.include.jmx.reporter = true
    auto.offset.reset = earliest
    bootstrap.servers = [kafka:9092]
    check.crcs = true
    client.dns.lookup = use_all_dns_ips
    client.id = 1-offsets
    client.rack =
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    enable.metrics.push = true
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = 1
    group.instance.id = null
    group.protocol = classic
    group.remote.assignor = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    internal.throw.on.fetch.stable.offset.unsupported = false
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metadata.recovery.strategy = none
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.max.ms = 1000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism = GSSAPI
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.header.urlencode = false
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 45000
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
[org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-13 06:23:08,286 INFO || [Producer clientId=1-offsets] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata]
2025-06-13 06:23:08,293 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:23:08,318 INFO || These configurations '[rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-13 06:23:08,318 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:08,319 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:08,319 INFO || Kafka startTimeMs: 1749795788318 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:08,330 INFO || [Consumer clientId=1-offsets, groupId=1] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata]
2025-06-13 06:23:08,338 INFO || [Consumer clientId=1-offsets, groupId=1] Assigned to partition(s): my_connect_offsets-0, my_connect_offsets-5, my_connect_offsets-10, my_connect_offsets-20, my_connect_offsets-15, my_connect_offsets-9, my_connect_offsets-11, my_connect_offsets-4, my_connect_offsets-16, my_connect_offsets-17, my_connect_offsets-3, my_connect_offsets-24, my_connect_offsets-23, my_connect_offsets-13, my_connect_offsets-18, my_connect_offsets-22, my_connect_offsets-2, my_connect_offsets-8, my_connect_offsets-12, my_connect_offsets-19, my_connect_offsets-14, my_connect_offsets-1, my_connect_offsets-6, my_connect_offsets-7, my_connect_offsets-21 [org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer]
2025-06-13 06:23:08,340 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,340 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-5 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,340 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-10 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,340 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-20 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-15 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-9 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-11 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-4 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-16 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-17 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-3 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-24 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-23 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-13 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-18 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-22 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-2 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-8 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-12 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-19 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-14 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-1 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-6 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-7 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,341 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-21 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,381 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-2 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,381 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-4 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,382 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-6 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,382 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-8 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,382 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,382 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-18 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,382 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-20 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,382 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-22 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,382 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-24 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,382 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-10 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,382 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-12 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,382 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-14 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,382 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-16 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,382 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-3 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,382 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-5 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,382 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-7 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,383 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-9 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,383 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,383 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-19 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,383 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-21 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,383 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-23 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,383 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-11 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,383 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-13 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,383 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-15 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,383 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-17 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
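All 25 partitions rewind to offset 0 here simply because the topic was just created and is empty; on a worker restart this same pass replays every stored source offset into memory before tasks start. The offsets log itself is plain JSON and can be inspected directly. A read-only sketch, again assuming the third-party kafka-python package; the key/value shapes described in the comments are the usual Connect layout, and the exact contents depend on the connectors you run:

import json
from kafka import KafkaConsumer  # pip install kafka-python (assumed)

# Read-only peek at the Connect offsets log. Keys are JSON arrays of
# [connector name, source partition]; values are the stored source offsets.
consumer = KafkaConsumer(
    "my_connect_offsets",
    bootstrap_servers="kafka:9092",
    auto_offset_reset="earliest",
    enable_auto_commit=False,      # never commit: this is a diagnostic read
    consumer_timeout_ms=5000,      # stop iterating once the log is drained
)
for record in consumer:
    key = json.loads(record.key) if record.key else None
    value = json.loads(record.value) if record.value else None  # None = tombstone
    print(record.partition, key, value)
consumer.close()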
2025-06-13 06:23:08,384 INFO || Finished reading KafkaBasedLog for topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog]
2025-06-13 06:23:08,384 INFO || Started KafkaBasedLog for topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog]
2025-06-13 06:23:08,384 INFO || Finished reading offsets topic and starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore]
2025-06-13 06:23:08,385 INFO || Worker started [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:08,385 INFO || Starting KafkaBasedLog with topic my_connect_statuses reportErrorsToCallback=false [org.apache.kafka.connect.util.KafkaBasedLog]
2025-06-13 06:23:08,484 INFO || Created topic (name=my_connect_statuses, numPartitions=5, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at kafka:9092 [org.apache.kafka.connect.util.TopicAdmin]
2025-06-13 06:23:08,485 INFO || ProducerConfig values:
    acks = -1
    auto.include.jmx.reporter = true
    batch.size = 16384
    bootstrap.servers = [kafka:9092]
    buffer.memory = 33554432
    client.dns.lookup = use_all_dns_ips
    client.id = 1-statuses
    compression.gzip.level = -1
    compression.lz4.level = 9
    compression.type = none
    compression.zstd.level = 3
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 120000
    enable.idempotence = false
    enable.metrics.push = true
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 1
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metadata.max.idle.ms = 300000
    metadata.recovery.strategy = none
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.adaptive.partitioning.enable = true
    partitioner.availability.timeout.ms = 0
    partitioner.class = null
    partitioner.ignore.keys = false
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 0
    retry.backoff.max.ms = 1000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism = GSSAPI
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.header.urlencode = false
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
[org.apache.kafka.clients.producer.ProducerConfig]
2025-06-13 06:23:08,485 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:23:08,491 INFO || These configurations '[group.id, rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. [org.apache.kafka.clients.producer.ProducerConfig]
2025-06-13 06:23:08,491 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:08,491 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:08,491 INFO || Kafka startTimeMs: 1749795788491 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:08,492 INFO || ConsumerConfig values:
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.include.jmx.reporter = true
    auto.offset.reset = earliest
    bootstrap.servers = [kafka:9092]
    check.crcs = true
    client.dns.lookup = use_all_dns_ips
    client.id = 1-statuses
    client.rack =
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    enable.metrics.push = true
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = 1
    group.instance.id = null
    group.protocol = classic
    group.remote.assignor = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    internal.throw.on.fetch.stable.offset.unsupported = false
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metadata.recovery.strategy = none
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.max.ms = 1000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism = GSSAPI
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.header.urlencode = false
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 45000
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
[org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-13 06:23:08,492 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:23:08,497 INFO || These configurations '[rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. [org.apache.kafka.clients.consumer.ConsumerConfig]
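The statuses log is where connector and task state transitions (RUNNING, FAILED, PAUSED) are published, keyed by connector name with a StringSerializer, as the producer and consumer dumps above show. Rather than reading the raw topic, you would normally ask the REST layer, which materializes this log. A sketch follows; the connector name is a placeholder, since no connectors have been registered at this point in the log:

import json
import urllib.request

CONNECT_URL = "http://10.89.0.4:8083"  # advertised address from the log

# The status endpoint reflects whatever the worker has read back from
# my_connect_statuses; 'example-connector' is a hypothetical name.
with urllib.request.urlopen(f"{CONNECT_URL}/connectors/example-connector/status") as resp:
    status = json.load(resp)

print(status["connector"]["state"])           # e.g. RUNNING or FAILED
for task in status["tasks"]:
    print(task["id"], task["state"], task.get("trace", ""))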
2025-06-13 06:23:08,498 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:08,498 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:08,498 INFO || Kafka startTimeMs: 1749795788498 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:08,501 INFO || [Producer clientId=1-statuses] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata]
2025-06-13 06:23:08,503 INFO || [Consumer clientId=1-statuses, groupId=1] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata]
2025-06-13 06:23:08,505 INFO || [Consumer clientId=1-statuses, groupId=1] Assigned to partition(s): my_connect_statuses-0, my_connect_statuses-1, my_connect_statuses-4, my_connect_statuses-2, my_connect_statuses-3 [org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer]
2025-06-13 06:23:08,505 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition my_connect_statuses-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,505 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition my_connect_statuses-1 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,505 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition my_connect_statuses-4 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,505 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition my_connect_statuses-2 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,505 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition my_connect_statuses-3 [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,518 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition my_connect_statuses-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,518 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition my_connect_statuses-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,518 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition my_connect_statuses-2 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,518 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition my_connect_statuses-3 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,518 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition my_connect_statuses-4 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:23:08,519 INFO || Finished reading KafkaBasedLog for topic my_connect_statuses [org.apache.kafka.connect.util.KafkaBasedLog]
2025-06-13 06:23:08,519 INFO || Started KafkaBasedLog for topic my_connect_statuses [org.apache.kafka.connect.util.KafkaBasedLog]
2025-06-13 06:23:08,524 INFO || Starting KafkaConfigBackingStore [org.apache.kafka.connect.storage.KafkaConfigBackingStore]
2025-06-13 06:23:08,524 INFO || Starting KafkaBasedLog with topic my_connect_configs reportErrorsToCallback=false [org.apache.kafka.connect.util.KafkaBasedLog]
2025-06-13 06:23:08,564 INFO || Created topic (name=my_connect_configs, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at kafka:9092 [org.apache.kafka.connect.util.TopicAdmin]
2025-06-13 06:23:08,565 INFO || ProducerConfig values:
    acks = -1
    auto.include.jmx.reporter = true
    batch.size = 16384
    bootstrap.servers = [kafka:9092]
    buffer.memory = 33554432
    client.dns.lookup = use_all_dns_ips
    client.id = 1-configs
    compression.gzip.level = -1
    compression.lz4.level = 9
    compression.type = none
    compression.zstd.level = 3
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 2147483647
    enable.idempotence = false
    enable.metrics.push = true
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 1
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metadata.max.idle.ms = 300000
    metadata.recovery.strategy = none
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.adaptive.partitioning.enable = true
    partitioner.availability.timeout.ms = 0
    partitioner.class = null
    partitioner.ignore.keys = false
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.max.ms = 1000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism = GSSAPI
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.header.urlencode = false
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
[org.apache.kafka.clients.producer.ProducerConfig]
2025-06-13 06:23:08,566 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:23:08,570 INFO || These configurations '[group.id, rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. [org.apache.kafka.clients.producer.ProducerConfig]
2025-06-13 06:23:08,570 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:08,570 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:08,570 INFO || Kafka startTimeMs: 1749795788570 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:08,571 INFO || ConsumerConfig values:
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.include.jmx.reporter = true
    auto.offset.reset = earliest
    bootstrap.servers = [kafka:9092]
    check.crcs = true
    client.dns.lookup = use_all_dns_ips
    client.id = 1-configs
    client.rack =
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    enable.metrics.push = true
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = 1
    group.instance.id = null
    group.protocol = classic
    group.remote.assignor = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    internal.throw.on.fetch.stable.offset.unsupported = false
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metadata.recovery.strategy = none
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.max.ms
= 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig] 2025-06-13 06:23:08,572 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector] 2025-06-13 06:23:08,576 INFO || These configurations '[rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. 
[org.apache.kafka.clients.consumer.ConsumerConfig] 2025-06-13 06:23:08,576 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:23:08,576 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:23:08,576 INFO || Kafka startTimeMs: 1749795788576 [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:23:08,576 INFO || [Producer clientId=1-configs] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata] 2025-06-13 06:23:08,581 INFO || [Consumer clientId=1-configs, groupId=1] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata] 2025-06-13 06:23:08,583 INFO || [Consumer clientId=1-configs, groupId=1] Assigned to partition(s): my_connect_configs-0 [org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer] 2025-06-13 06:23:08,583 INFO || [Consumer clientId=1-configs, groupId=1] Seeking to earliest offset of partition my_connect_configs-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2025-06-13 06:23:08,591 INFO || [Consumer clientId=1-configs, groupId=1] Resetting offset for partition my_connect_configs-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2025-06-13 06:23:08,595 INFO || Finished reading KafkaBasedLog for topic my_connect_configs [org.apache.kafka.connect.util.KafkaBasedLog] 2025-06-13 06:23:08,595 INFO || Started KafkaBasedLog for topic my_connect_configs [org.apache.kafka.connect.util.KafkaBasedLog] 2025-06-13 06:23:08,598 INFO || Started KafkaConfigBackingStore [org.apache.kafka.connect.storage.KafkaConfigBackingStore] 2025-06-13 06:23:08,607 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata] 2025-06-13 06:23:09,341 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Discovered group coordinator 10.89.0.3:9092 (id: 2147483646 rack: null) [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:23:09,359 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:23:09,360 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:23:09,378 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:23:09,390 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully joined group with generation Generation{generationId=1, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:23:09,442 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully synced group in generation Generation{generationId=1, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:23:09,442 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Joined group at generation 1 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', leaderUrl='http://10.89.0.4:8083/', 
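The worker's internal storage topics are now created and read back, and the worker has joined the Connect group at generation 1. To double-check the compacted config topic created above, a minimal probe with the stock Kafka CLI might look like the sketch below; the /kafka/bin path is an assumption based on this image's /kafka layout, so adjust it to your installation:

    /kafka/bin/kafka-topics.sh --bootstrap-server kafka:9092 --describe --topic my_connect_configs

The output should report a single partition, replication factor 1, and cleanup.policy=compact, matching the Created topic line above.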
2025-06-13 06:23:09,443 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Herder started [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:09,443 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connectors and tasks using config offset -1 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:09,443 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:09,507 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Session key updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:15,885 INFO || Loading the custom source info struct maker plugin: io.debezium.connector.mysql.MySqlSourceInfoStructMaker [io.debezium.config.CommonConnectorConfig]
2025-06-13 06:23:16,245 INFO || Using 'SHOW BINARY LOG STATUS' to get binary log status [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:23:16,247 INFO || Successfully tested connection for jdbc:mysql://10.0.1.6:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=CONVERT_TO_NULL&connectTimeout=30000 with user 'repl_s' [io.debezium.connector.binlog.BinlogConnector]
2025-06-13 06:23:16,252 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
2025-06-13 06:23:16,255 INFO || AbstractConfig values: [org.apache.kafka.common.config.AbstractConfig]
2025-06-13 06:23:16,269 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Connector employee-connector config updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,274 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,274 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,280 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully joined group with generation Generation{generationId=2, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,288 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully synced group in generation Generation{generationId=2, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,288 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Joined group at generation 2 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', leaderUrl='http://10.89.0.4:8083/', offset=2, connectorIds=[employee-connector], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,288 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connectors and tasks using config offset 2 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
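At this point the herder is running and the employee-connector configuration submitted over REST has been picked up and assigned in the generation-2 rebalance above. Assuming the advertised REST address from the startup properties (10.89.0.4:8083) is reachable from where you run curl, the standard Connect REST endpoints can confirm worker and connector state:

    curl -s http://10.89.0.4:8083/
    curl -s http://10.89.0.4:8083/connectors
    curl -s http://10.89.0.4:8083/connectors/employee-connector/status

The first call returns the worker version and the Kafka cluster id (NNtE4sSuQo-kXgAZqjN_KA above); the second should now list employee-connector.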
2025-06-13 06:23:16,290 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connector employee-connector [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,294 INFO || Creating connector employee-connector of type io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,294 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = employee-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.SourceConnectorConfig]
2025-06-13 06:23:16,295 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = employee-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2025-06-13 06:23:16,301 INFO || Instantiated connector employee-connector with version 3.1.1.Final of type class io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,305 INFO || Finished creating connector employee-connector [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,306 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,314 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = employee-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.SourceConnectorConfig]
2025-06-13 06:23:16,314 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = employee-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2025-06-13 06:23:16,337 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Tasks [employee-connector-0] configs updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,342 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,343 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,346 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully joined group with generation Generation{generationId=3, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,352 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully synced group in generation Generation{generationId=3, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,352 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Joined group at generation 3 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', leaderUrl='http://10.89.0.4:8083/', offset=4, connectorIds=[employee-connector], taskIds=[employee-connector-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,353 INFO || 10.89.0.4 - - [13/Jun/2025:06:23:15 +0000] "POST /connectors/ HTTP/1.1" 201 640 "-" "curl/7.61.1" 596 [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:23:16,353 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connectors and tasks using config offset 4 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,355 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting task employee-connector-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,358 INFO || Creating task employee-connector-0 [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,359 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = employee-connector predicates = [] tasks.max = 1 tasks.max.enforce = true transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig]
2025-06-13 06:23:16,359 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = employee-connector predicates = [] tasks.max = 1 tasks.max.enforce = true transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2025-06-13 06:23:16,361 INFO || TaskConfig values: task.class = class io.debezium.connector.mysql.MySqlConnectorTask [org.apache.kafka.connect.runtime.TaskConfig]
2025-06-13 06:23:16,363 INFO || Instantiated task employee-connector-0 with version 3.1.1.Final of type io.debezium.connector.mysql.MySqlConnectorTask [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,364 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-13 06:23:16,364 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task employee-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,365 INFO || JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-13 06:23:16,365 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task employee-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,365 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task employee-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,367 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{} [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,368 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = employee-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.SourceConnectorConfig]
2025-06-13 06:23:16,368 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = employee-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2025-06-13 06:23:16,369 INFO || ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = [kafka:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = connector-producer-employee-connector-0 compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 9223372036854775807 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig]
2025-06-13 06:23:16,369 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:23:16,373 INFO || These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. [org.apache.kafka.clients.producer.ProducerConfig]
2025-06-13 06:23:16,373 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,373 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,373 INFO || Kafka startTimeMs: 1749795796373 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,379 INFO || [Producer clientId=connector-producer-employee-connector-0] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata]
2025-06-13 06:23:16,387 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,389 INFO || Starting MySqlConnectorTask with configuration: connector.class = io.debezium.connector.mysql.MySqlConnector snapshot.locking.mode = extended database.user = repl_s database.server.id = 1234 schema.history.internal.kafka.bootstrap.servers = kafka:9092 database.port = 3306 database.ssl.mode = preferred topic.prefix = dbserver2 schema.history.internal.kafka.topic = schema-changes.testcdc task.class = io.debezium.connector.mysql.MySqlConnectorTask database.hostname = 10.0.1.6 database.password = ******** name = employee-connector log.level = DEBUG table.include.list = testcdc.employee database.include.list = testcdc snapshot.mode = always [io.debezium.connector.common.BaseSourceTask]
2025-06-13 06:23:16,389 INFO || Loading the custom source info struct maker plugin: io.debezium.connector.mysql.MySqlSourceInfoStructMaker [io.debezium.config.CommonConnectorConfig]
2025-06-13 06:23:16,391 INFO || Loading the custom topic naming strategy plugin: io.debezium.schema.DefaultTopicNamingStrategy [io.debezium.config.CommonConnectorConfig]
2025-06-13 06:23:16,399 INFO || AbstractConfig values: [org.apache.kafka.common.config.AbstractConfig]
2025-06-13 06:23:16,409 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Connector mysql-sink-connector config updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,409 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,409 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,411 INFO || 10.89.0.4 - - [13/Jun/2025:06:23:16 +0000] "POST /connectors/ HTTP/1.1" 201 783 "-" "curl/7.61.1" 69 [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:23:16,414 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully joined group with generation Generation{generationId=4, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,417 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully synced group in generation Generation{generationId=4, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,418 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Joined group at generation 4 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', leaderUrl='http://10.89.0.4:8083/', offset=5, connectorIds=[mysql-sink-connector, employee-connector], taskIds=[employee-connector-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
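The MySqlConnectorTask configuration logged a few records above is enough to reconstruct, approximately, the registration request behind the earlier "POST /connectors/ HTTP/1.1" 201 from curl. A sketch follows; the password is masked in the log, so the value here is a placeholder, and task.class is injected by the runtime, so it is omitted:

    curl -s -X POST -H "Content-Type: application/json" http://10.89.0.4:8083/connectors/ -d '{
      "name": "employee-connector",
      "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "tasks.max": "1",
        "database.hostname": "10.0.1.6",
        "database.port": "3306",
        "database.user": "repl_s",
        "database.password": "********",
        "database.server.id": "1234",
        "database.include.list": "testcdc",
        "table.include.list": "testcdc.employee",
        "database.ssl.mode": "preferred",
        "snapshot.mode": "always",
        "snapshot.locking.mode": "extended",
        "topic.prefix": "dbserver2",
        "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
        "schema.history.internal.kafka.topic": "schema-changes.testcdc"
      }
    }'

With snapshot.mode = always, the connector re-snapshots testcdc.employee on every start before switching to binlog streaming, which matches the 'SHOW BINARY LOG STATUS' probes in the log.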
2025-06-13 06:23:16,418 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connectors and tasks using config offset 5 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,419 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connector mysql-sink-connector [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,421 INFO || Creating connector mysql-sink-connector of type io.debezium.connector.jdbc.JdbcSinkConnector [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,422 INFO || SinkConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.SinkConnectorConfig]
2025-06-13 06:23:16,422 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2025-06-13 06:23:16,423 INFO || Instantiated connector mysql-sink-connector with version 3.1.1.Final of type class io.debezium.connector.jdbc.JdbcSinkConnector [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,426 INFO || Finished creating connector mysql-sink-connector [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,426 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,428 INFO || SinkConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.SinkConnectorConfig]
2025-06-13 06:23:16,429 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2025-06-13 06:23:16,437 INFO || Using 'SHOW BINARY LOG STATUS' to get binary log status [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:23:16,458 INFO || No previous offsets found [io.debezium.connector.common.BaseSourceTask]
2025-06-13 06:23:16,459 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Tasks [mysql-sink-connector-0, mysql-sink-connector-1, mysql-sink-connector-2] configs updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,459 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,459 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,462 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully joined group with generation Generation{generationId=5, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,466 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully synced group in generation Generation{generationId=5, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:23:16,466 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Joined group at generation 5 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', leaderUrl='http://10.89.0.4:8083/', offset=9, connectorIds=[mysql-sink-connector, employee-connector], taskIds=[mysql-sink-connector-0, mysql-sink-connector-1, mysql-sink-connector-2, employee-connector-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,466 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connectors and tasks using config offset 9 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,473 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting task mysql-sink-connector-1 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,473 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting task mysql-sink-connector-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
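Note that the SinkConnectorConfig dumps above only echo framework-level keys, so the JDBC-specific settings of mysql-sink-connector (target connection URL, credentials, insert and primary-key modes) are not visible in this excerpt. A registration sketch consistent with what is logged would be roughly the following; every connection.* value below is a placeholder, not something recoverable from this log:

    curl -s -X POST -H "Content-Type: application/json" http://10.89.0.4:8083/connectors/ -d '{
      "name": "mysql-sink-connector",
      "config": {
        "connector.class": "io.debezium.connector.jdbc.JdbcSinkConnector",
        "tasks.max": "3",
        "topics.regex": "dbserver2.testcdc.*",
        "key.converter": "org.apache.kafka.connect.json.JsonConverter",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "connection.url": "jdbc:mysql://<target-host>:3306/<target-db>",
        "connection.username": "<user>",
        "connection.password": "<password>"
      }
    }'

With tasks.max = 3, Connect fans the matched topics out across three sink tasks, which is exactly what the generation-5 assignment above shows.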
2025-06-13 06:23:16,475 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting task mysql-sink-connector-2 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,476 INFO || Creating task mysql-sink-connector-2 [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,476 INFO || Creating task mysql-sink-connector-1 [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,487 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig]
2025-06-13 06:23:16,495 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2025-06-13 06:23:16,496 INFO || TaskConfig values: task.class = class io.debezium.connector.jdbc.JdbcSinkConnectorTask [org.apache.kafka.connect.runtime.TaskConfig]
2025-06-13 06:23:16,476 INFO || Creating task mysql-sink-connector-0 [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,497 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig]
2025-06-13 06:23:16,497 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2025-06-13 06:23:16,497 INFO || TaskConfig values: task.class = class io.debezium.connector.jdbc.JdbcSinkConnectorTask [org.apache.kafka.connect.runtime.TaskConfig]
2025-06-13 06:23:16,488 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig]
2025-06-13 06:23:16,498 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2025-06-13 06:23:16,498 INFO || TaskConfig values: task.class = class io.debezium.connector.jdbc.JdbcSinkConnectorTask [org.apache.kafka.connect.runtime.TaskConfig]
2025-06-13 06:23:16,501 INFO || New InternalSinkRecord class found [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,501 INFO || Instantiated task mysql-sink-connector-0 with version 3.1.1.Final of type io.debezium.connector.jdbc.JdbcSinkConnectorTask [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,501 INFO || New InternalSinkRecord class found [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,501 INFO || Instantiated task mysql-sink-connector-2 with version 3.1.1.Final of type io.debezium.connector.jdbc.JdbcSinkConnectorTask [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,501 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-13 06:23:16,501 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-13 06:23:16,501 INFO || JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-13 06:23:16,501 INFO || JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-13 06:23:16,501 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task mysql-sink-connector-2 using the connector config [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,501 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task mysql-sink-connector-0 using the connector config [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,502 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task mysql-sink-connector-2 using the connector config [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,502 INFO || New InternalSinkRecord class found [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,502 INFO || Instantiated task mysql-sink-connector-1 with version 3.1.1.Final of type io.debezium.connector.jdbc.JdbcSinkConnectorTask [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,502 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-13 06:23:16,502 INFO || JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig]
2025-06-13 06:23:16,502 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task mysql-sink-connector-1 using the connector config [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,502 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task mysql-sink-connector-1 using the connector config [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,502 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task mysql-sink-connector-1 using the worker config [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,503 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{} [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,503 INFO || SinkConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.SinkConnectorConfig]
2025-06-13 06:23:16,504 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2025-06-13 06:23:16,504 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = connector-consumer-mysql-sink-connector-1 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = connect-mysql-sink-connector group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-13 06:23:16,505 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:23:16,502 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task mysql-sink-connector-0 using the connector config [org.apache.kafka.connect.runtime.Worker]
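Each of the three sink tasks consumes through the shared consumer group connect-mysql-sink-connector, the group.id visible in the ConsumerConfig dumps above (Connect derives it as connect-<connector name>). Once records start flowing, per-partition lag for those tasks can be checked with the group tool, under the same /kafka/bin path assumption as in the earlier sketch:

    /kafka/bin/kafka-consumer-groups.sh --bootstrap-server kafka:9092 --describe --group connect-mysql-sink-connector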
2025-06-13 06:23:16,505 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task mysql-sink-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:23:16,506 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{} [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:23:16,506 INFO || SinkConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.SinkConnectorConfig] 2025-06-13 06:23:16,507 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2025-06-13 06:23:16,507 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = connector-consumer-mysql-sink-connector-0 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = connect-mysql-sink-connector group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null 
sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig] 2025-06-13 06:23:16,508 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector] 2025-06-13 06:23:16,502 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task mysql-sink-connector-2 using the worker config [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:23:16,509 INFO || These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. 
2025-06-13 06:23:16,509 INFO || These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-13 06:23:16,509 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,509 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,509 INFO || Kafka startTimeMs: 1749795796509 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,512 INFO || KafkaSchemaHistory Consumer config: {key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, enable.auto.commit=false, group.id=dbserver2-schemahistory, bootstrap.servers=kafka:9092, fetch.min.bytes=1, session.timeout.ms=10000, auto.offset.reset=earliest, client.id=dbserver2-schemahistory} [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2025-06-13 06:23:16,512 INFO || KafkaSchemaHistory Producer config: {enable.idempotence=false, value.serializer=org.apache.kafka.common.serialization.StringSerializer, batch.size=32768, bootstrap.servers=kafka:9092, max.in.flight.requests.per.connection=1, buffer.memory=1048576, key.serializer=org.apache.kafka.common.serialization.StringSerializer, client.id=dbserver2-schemahistory} [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2025-06-13 06:23:16,513 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{} [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:23:16,513 INFO || Requested thread factory for component MySqlConnector, id = dbserver2 named = db-history-config-check [io.debezium.util.Threads]
2025-06-13 06:23:16,513 INFO || SinkConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.SinkConnectorConfig]
2025-06-13 06:23:16,514 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig]
2025-06-13 06:23:16,515 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = connector-consumer-mysql-sink-connector-2 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = connect-mysql-sink-connector group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-13 06:23:16,515 INFO || These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-13 06:23:16,515 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,515 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,515 INFO || Kafka startTimeMs: 1749795796515 [org.apache.kafka.common.utils.AppInfoParser]
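All three sink task consumers share group.id connect-mysql-sink-connector and subscribe by pattern rather than by a fixed topic list (the "Subscribed to pattern: 'dbserver2.testcdc.*'" records below). Kafka compiles topics.regex as a Java regular expression matched against the whole topic name, so the unescaped dots match any character, which is broader than the literal prefix it resembles. A small sanity check, using Python's re as a stand-in for Java regex (an assumption; the two engines differ in edge cases):

    import re

    # topics.regex exactly as logged.
    pattern = re.compile(r"dbserver2.testcdc.*")

    for topic in [
        "dbserver2.testcdc.employee",   # the intended match
        "dbserver2_testcdc_employee",   # also matches, since '.' is any char
        "dbserver2.other.table",        # no match
    ]:
        print(topic, bool(pattern.fullmatch(topic)))

    # Anchored, dot-escaped form if only the testcdc topics are wanted:
    strict = re.compile(r"dbserver2\.testcdc\..*")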
2025-06-13 06:23:16,515 INFO || ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 32768 bootstrap.servers = [kafka:9092] buffer.memory = 1048576 client.dns.lookup = use_all_dns_ips client.id = dbserver2-schemahistory compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig]
2025-06-13 06:23:16,516 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:23:16,520 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:23:16,521 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Subscribed to pattern: 'dbserver2.testcdc.*' [org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer]
2025-06-13 06:23:16,527 INFO || These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-13 06:23:16,527 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,529 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,529 ERROR || The 'collection.name.format' value is invalid: Warning: Using deprecated config option "table.name.format". [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,529 INFO || Kafka startTimeMs: 1749795796527 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,530 INFO || Starting JdbcSinkConnectorConfig with configuration: [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || connector.class = io.debezium.connector.jdbc.JdbcSinkConnector [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || table.name.format = ${topic} [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || primary.key.mode = record_key [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || connection.password = ******** [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || tasks.max = 3 [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || connection.username = repl_t [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || quote.identifiers = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || heartbeat.interval.ms = 3000 [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || topics.regex = dbserver2.testcdc.* [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || autoReconnect = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || delete.enabled = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || auto.evolve = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || schema.evolution = basic [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || task.class = io.debezium.connector.jdbc.JdbcSinkConnectorTask [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || name = mysql-sink-connector [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || log.level = DEBUG [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || auto.create = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || value.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,531 INFO || connection.url = jdbc:mysql://10.0.0.142:3306/sink [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,532 INFO || insert.mode = upsert [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,532 INFO || key.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,532 INFO || pk.mode = record_key [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,532 INFO || pk.fields = id [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,533 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Subscribed to pattern: 'dbserver2.testcdc.*' [org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer]
2025-06-13 06:23:16,534 ERROR || The 'collection.name.format' value is invalid: Warning: Using deprecated config option "table.name.format". [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,534 INFO || Starting JdbcSinkConnectorConfig with configuration: [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,534 INFO || connector.class = io.debezium.connector.jdbc.JdbcSinkConnector [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,534 INFO || table.name.format = ${topic} [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,534 INFO || primary.key.mode = record_key [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,534 INFO || connection.password = ******** [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || tasks.max = 3 [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || connection.username = repl_t [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || quote.identifiers = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || heartbeat.interval.ms = 3000 [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || topics.regex = dbserver2.testcdc.* [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || autoReconnect = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || delete.enabled = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || auto.evolve = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || schema.evolution = basic [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || task.class = io.debezium.connector.jdbc.JdbcSinkConnectorTask [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || name = mysql-sink-connector [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || log.level = DEBUG [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || auto.create = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || value.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || connection.url = jdbc:mysql://10.0.0.142:3306/sink [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || insert.mode = upsert [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || key.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || pk.mode = record_key [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,535 INFO || pk.fields = id [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,539 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,540 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,540 INFO || Kafka startTimeMs: 1749795796539 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,551 INFO || [Producer clientId=dbserver2-schemahistory] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata]
2025-06-13 06:23:16,560 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:23:16,563 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Subscribed to pattern: 'dbserver2.testcdc.*' [org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer]
2025-06-13 06:23:16,564 ERROR || The 'collection.name.format' value is invalid: Warning: Using deprecated config option "table.name.format". [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
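Each of the three tasks logs the same ERROR: the configuration still uses table.name.format, and the message itself names collection.name.format as the current key. Since the logged value ${topic} is also the default, the deprecated key can simply be dropped, or renamed as in this sketch against the Connect REST config endpoint (the worker URL is the same assumption as above):

    import requests

    url = "http://10.89.0.4:8083/connectors/mysql-sink-connector/config"

    # Fetch the current config, move the deprecated key to its replacement,
    # and resubmit; PUT /connectors/{name}/config updates the connector.
    config = requests.get(url).json()
    value = config.pop("table.name.format", None)
    if value is not None:
        config["collection.name.format"] = value  # here: "${topic}"
    requests.put(url, json=config).raise_for_status()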
2025-06-13 06:23:16,564 INFO || Starting JdbcSinkConnectorConfig with configuration: [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || connector.class = io.debezium.connector.jdbc.JdbcSinkConnector [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || table.name.format = ${topic} [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || primary.key.mode = record_key [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || connection.password = ******** [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || tasks.max = 3 [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || connection.username = repl_t [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || quote.identifiers = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || heartbeat.interval.ms = 3000 [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || topics.regex = dbserver2.testcdc.* [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || autoReconnect = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || delete.enabled = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || auto.evolve = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || schema.evolution = basic [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || task.class = io.debezium.connector.jdbc.JdbcSinkConnectorTask [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || name = mysql-sink-connector [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || log.level = DEBUG [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || auto.create = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || value.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || connection.url = jdbc:mysql://10.0.0.142:3306/sink [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || insert.mode = upsert [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || key.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || pk.mode = record_key [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,565 INFO || pk.fields = id [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
2025-06-13 06:23:16,604 INFO || HHH000412: Hibernate ORM core version 6.4.8.Final [org.hibernate.Version]
2025-06-13 06:23:16,634 INFO || Using 'SHOW BINARY LOG STATUS' to get binary log status [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:23:16,651 INFO || HHH000026: Second-level cache disabled [org.hibernate.cache.internal.RegionFactoryInitiator]
2025-06-13 06:23:16,651 INFO || HHH000026: Second-level cache disabled [org.hibernate.cache.internal.RegionFactoryInitiator]
2025-06-13 06:23:16,652 INFO || HHH000026: Second-level cache disabled [org.hibernate.cache.internal.RegionFactoryInitiator]
2025-06-13 06:23:16,654 INFO || Snapshot mode is set to ALWAYS, not checking exiting offset. [io.debezium.snapshot.mode.AlwaysSnapshotter]
2025-06-13 06:23:16,654 INFO || Closing connection before starting schema recovery [io.debezium.connector.mysql.MySqlConnectorTask]
2025-06-13 06:23:16,662 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
2025-06-13 06:23:16,663 INFO || Connector started for the first time. [io.debezium.connector.common.BaseSourceTask]
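On the source side, the MySQL connector reports snapshot mode ALWAYS, meaning it re-snapshots schema and data on every restart instead of resuming from a stored offset; the same setting also explains the very long "Adding table ... to the list of capture schema tables" run further down, since schema recovery reads every table's DDL unless told otherwise. If that is not intended, the usual adjustments are sketched below (option names from Debezium's MySQL connector documentation; the actual source connector config is not shown in this log, so these are illustrative overrides, not the deployed values):

    # Illustrative Debezium MySQL source settings, not the ones deployed here.
    source_overrides = {
        # "always" (as logged) re-runs the snapshot on every connector start;
        # "initial" snapshots only when no offset exists yet.
        "snapshot.mode": "initial",
        # Record DDL in the schema history topic only for captured tables,
        # instead of every table on the server.
        "schema.history.internal.store.only.captured.tables.ddl": "true",
        # Optionally constrain capture to the one database seen in this log.
        "database.include.list": "testcdc",
    }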
2025-06-13 06:23:16,663 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = dbserver2-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = dbserver2-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-13 06:23:16,664 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:23:16,668 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,668 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,668 INFO || Kafka startTimeMs: 1749795796668 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,676 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata]
2025-06-13 06:23:16,678 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:23:16,678 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:23:16,682 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:23:16,682 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:23:16,682 INFO || Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:23:16,682 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:23:16,685 INFO || App info kafka.consumer for dbserver2-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
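The dbserver2-schemahistory consumer proactively leaving its group and being torn down right after startup is normal: Debezium only reads the schema history topic during recovery. The topic itself (schema-changes.testcdc, created a little further down with retention.ms = 9223372036854775807, i.e. Long.MAX_VALUE, so effectively infinite retention) holds one JSON document per DDL change and can be inspected the same way. A sketch with kafka-python, which is an assumed client library, not part of this deployment, and the JSON field names are the usual Debezium history fields rather than anything shown in this log:

    import json
    from kafka import KafkaConsumer  # pip install kafka-python (assumption)

    # Read the schema history topic from the beginning, much like the
    # dbserver2-schemahistory consumer above does during recovery.
    consumer = KafkaConsumer(
        "schema-changes.testcdc",
        bootstrap_servers="kafka:9092",
        auto_offset_reset="earliest",
        enable_auto_commit=False,
        consumer_timeout_ms=5000,  # stop iterating once the topic is drained
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    for record in consumer:
        event = record.value
        # "databaseName" and "ddl" are typical Debezium history fields.
        print(event.get("databaseName"), event.get("ddl"))
    consumer.close()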
2025-06-13 06:23:16,686 INFO || AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.controllers = [] bootstrap.servers = [kafka:9092] client.dns.lookup = use_all_dns_ips client.id = dbserver2-schemahistory connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 enable.metrics.push = true metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS [org.apache.kafka.clients.admin.AdminClientConfig]
2025-06-13 06:23:16,688 INFO || These configurations '[enable.idempotence, value.serializer, batch.size, max.in.flight.requests.per.connection, buffer.memory, key.serializer]' were supplied but are not used yet. [org.apache.kafka.clients.admin.AdminClientConfig]
2025-06-13 06:23:16,688 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,688 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,688 INFO || Kafka startTimeMs: 1749795796688 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,758 INFO || HHH000130: Instantiating explicit connection provider: org.hibernate.c3p0.internal.C3P0ConnectionProvider [org.hibernate.engine.jdbc.connections.internal.ConnectionProviderInitiator]
2025-06-13 06:23:16,760 INFO || HHH000130: Instantiating explicit connection provider: org.hibernate.c3p0.internal.C3P0ConnectionProvider [org.hibernate.engine.jdbc.connections.internal.ConnectionProviderInitiator]
2025-06-13 06:23:16,760 INFO || HHH010002: C3P0 using driver: null at URL: jdbc:mysql://10.0.0.142:3306/sink [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:23:16,761 INFO || HHH10001001: Connection properties: {password=****, user=repl_t} [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:23:16,761 INFO || HHH10001003: Autocommit mode: false [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:23:16,761 WARN || HHH10001006: No JDBC Driver class was specified by property `jakarta.persistence.jdbc.driver`, `hibernate.driver` or `javax.persistence.jdbc.driver` [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:23:16,764 INFO || HHH000130: Instantiating explicit connection provider: org.hibernate.c3p0.internal.C3P0ConnectionProvider [org.hibernate.engine.jdbc.connections.internal.ConnectionProviderInitiator]
2025-06-13 06:23:16,765 INFO || HHH010002: C3P0 using driver: null at URL: jdbc:mysql://10.0.0.142:3306/sink [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:23:16,765 INFO || HHH10001001: Connection properties: {password=****, user=repl_t} [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:23:16,765 INFO || HHH10001003: Autocommit mode: false [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:23:16,765 WARN || HHH10001006: No JDBC Driver class was specified by property `jakarta.persistence.jdbc.driver`, `hibernate.driver` or `javax.persistence.jdbc.driver` [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:23:16,770 INFO || HHH010002: C3P0 using driver: null at URL: jdbc:mysql://10.0.0.142:3306/sink [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:23:16,770 INFO || HHH10001001: Connection properties: {password=****, user=repl_t} [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:23:16,770 INFO || HHH10001003: Autocommit mode: false [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:23:16,771 WARN || HHH10001006: No JDBC Driver class was specified by property `jakarta.persistence.jdbc.driver`, `hibernate.driver` or `javax.persistence.jdbc.driver` [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:23:16,783 INFO || MLog clients using slf4j logging. [com.mchange.v2.log.MLog]
2025-06-13 06:23:16,791 INFO || Database schema history topic '(name=schema-changes.testcdc, numPartitions=1, replicationFactor=default, replicasAssignments=null, configs={cleanup.policy=delete, retention.ms=9223372036854775807, retention.bytes=-1})' created [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2025-06-13 06:23:16,792 INFO || App info kafka.admin.client for dbserver2-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:23:16,793 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:23:16,793 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:23:16,794 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:23:16,794 INFO || Reconnecting after finishing schema recovery [io.debezium.connector.mysql.MySqlConnectorTask]
2025-06-13 06:23:16,820 INFO || No previous offset found [io.debezium.connector.mysql.MySqlConnectorTask]
2025-06-13 06:23:16,848 INFO || Initializing c3p0-0.9.5.5 [built 11-December-2019 22:18:33 -0800; debug? true; trace: 10] [com.mchange.v2.c3p0.C3P0Registry]
2025-06-13 06:23:16,859 INFO || Requested thread factory for component MySqlConnector, id = dbserver2 named = SignalProcessor [io.debezium.util.Threads]
2025-06-13 06:23:16,938 INFO || HHH10001007: JDBC isolation level: [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:23:16,942 INFO || HHH10001007: JDBC isolation level: [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:23:16,941 INFO || HHH10001007: JDBC isolation level: [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:23:16,959 INFO || Requested thread factory for component MySqlConnector, id = dbserver2 named = change-event-source-coordinator [io.debezium.util.Threads]
2025-06-13 06:23:16,967 INFO || Requested thread factory for component MySqlConnector, id = dbserver2 named = blocking-snapshot [io.debezium.util.Threads]
2025-06-13 06:23:16,970 INFO || Creating thread debezium-mysqlconnector-dbserver2-change-event-source-coordinator [io.debezium.util.Threads]
2025-06-13 06:23:16,980 INFO || WorkerSourceTask{id=employee-connector-0} Source task finished initialization and start [org.apache.kafka.connect.runtime.AbstractWorkerSourceTask]
2025-06-13 06:23:16,989 INFO MySQL|dbserver2|snapshot Metrics registered [io.debezium.pipeline.ChangeEventSourceCoordinator]
2025-06-13 06:23:16,989 INFO MySQL|dbserver2|snapshot Context created [io.debezium.pipeline.ChangeEventSourceCoordinator]
2025-06-13 06:23:16,996 INFO MySQL|dbserver2|snapshot Snapshot mode is set to ALWAYS, not checking exiting offset. [io.debezium.snapshot.mode.AlwaysSnapshotter]
2025-06-13 06:23:16,996 INFO MySQL|dbserver2|snapshot According to the connector configuration both schema and data will be snapshot. [io.debezium.relational.RelationalSnapshotChangeEventSource]
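Each sink task now brings up its own Hibernate/c3p0 connection pool against jdbc:mysql://10.0.0.142:3306/sink; the three pool dumps that follow show the effective sizing per task (minPoolSize 5, maxPoolSize 32, acquireIncrement 32), so with tasks.max = 3 this deployment can hold up to 96 sink connections. If that is too aggressive for the target database, the Debezium JDBC sink exposes connection.pool.* options that map onto these c3p0 settings; a sketch with illustrative values (option names taken from the Debezium JDBC sink documentation, not from this log):

    # Illustrative pool sizing for the sink connector config; every task gets
    # its own pool, so budget roughly max_size * tasks.max connections.
    pool_overrides = {
        "connection.pool.min_size": "2",           # c3p0 minPoolSize (logged: 5)
        "connection.pool.max_size": "8",           # c3p0 maxPoolSize (logged: 32)
        "connection.pool.acquire_increment": "4",  # c3p0 acquireIncrement (logged: 32)
    }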
2025-06-13 06:23:16,997 INFO || Initializing c3p0 pool... com.mchange.v2.c3p0.PoolBackedDataSource@62228caa [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@bad325b5 [ acquireIncrement -> 32, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, debugUnreturnedConnectionStackTraces -> false, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, identityToken -> 2vcydgbbsue9u6e525yw|6538fff5, idleConnectionTestPeriod -> 0, initialPoolSize -> 5, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 32, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource@633a0563 [ description -> null, driverClass -> null, factoryClassLocation -> null, forceUseNamedDriverClass -> false, identityToken -> 2vcydgbbsue9u6e525yw|78afbaff, jdbcUrl -> jdbc:mysql://10.0.0.142:3306/sink, properties -> {password=******, user=******} ], preferredTestQuery -> null, privilegeSpawnedThreads -> false, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> null, extensions -> {}, factoryClassLocation -> null, identityToken -> 2vcydgbbsue9u6e525yw|49e184f7, numHelperThreads -> 3 ] [com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource]
2025-06-13 06:23:16,998 INFO MySQL|dbserver2|snapshot Snapshot step 1 - Preparing [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,004 INFO MySQL|dbserver2|snapshot Snapshot step 2 - Determining captured tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,004 INFO MySQL|dbserver2|snapshot Read list of available databases [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:23:17,007 INFO || Initializing c3p0 pool...
com.mchange.v2.c3p0.PoolBackedDataSource@892f476d [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@5d66e0a1 [ acquireIncrement -> 32, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, debugUnreturnedConnectionStackTraces -> false, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, identityToken -> 2vcydgbbsue9u6e525yw|331c3080, idleConnectionTestPeriod -> 0, initialPoolSize -> 5, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 32, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource@6635ebd6 [ description -> null, driverClass -> null, factoryClassLocation -> null, forceUseNamedDriverClass -> false, identityToken -> 2vcydgbbsue9u6e525yw|1205cf09, jdbcUrl -> jdbc:mysql://10.0.0.142:3306/sink, properties -> {password=******, user=******} ], preferredTestQuery -> null, privilegeSpawnedThreads -> false, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> null, extensions -> {}, factoryClassLocation -> null, identityToken -> 2vcydgbbsue9u6e525yw|3ac63e19, numHelperThreads -> 3 ] [com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource] 2025-06-13 06:23:17,011 INFO || Initializing c3p0 pool... 
com.mchange.v2.c3p0.PoolBackedDataSource@c7c25bb [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@644d89d [ acquireIncrement -> 32, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, debugUnreturnedConnectionStackTraces -> false, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, identityToken -> 2vcydgbbsue9u6e525yw|33de0439, idleConnectionTestPeriod -> 0, initialPoolSize -> 5, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 32, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource@8964bac5 [ description -> null, driverClass -> null, factoryClassLocation -> null, forceUseNamedDriverClass -> false, identityToken -> 2vcydgbbsue9u6e525yw|6617fddc, jdbcUrl -> jdbc:mysql://10.0.0.142:3306/sink, properties -> {password=******, user=******} ], preferredTestQuery -> null, privilegeSpawnedThreads -> false, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> null, extensions -> {}, factoryClassLocation -> null, identityToken -> 2vcydgbbsue9u6e525yw|12cfe064, numHelperThreads -> 3 ] [com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource] 2025-06-13 06:23:17,012 INFO MySQL|dbserver2|snapshot list of available databases is: [afc, airport, airportdb, information_schema, lakehouse, llm, mydb, mysql, mysql_audit, mysql_option, mysql_task_management, pawn, pawn2, pawn3, performance_schema, ryan, sakilacafe, sink, sys, testcdc, thaidb, tpch, wordpress, wp] [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:17,012 INFO MySQL|dbserver2|snapshot Read list of available tables in each database [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot snapshot continuing with database(s): [testcdc] [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table mysql_audit.audit_log_filter to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table testcdc.employee to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table sakilacafe.product to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_actionscheduler_groups to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table airport.baggage to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_woocommerce_downloadable_product_permissions to the list of 
capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table pawn2.customers to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table airportdb.flight_log to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table afc.budget to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_users to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table llm.ryan_vector to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_postmeta to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table wp.wp_usermeta to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table pawn2.forfeited_items to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table mysql_task_management.task_impl to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table pawn.branches to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,067 INFO MySQL|dbserver2|snapshot Adding table pawn3.branches to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table llm.b2c6eba9f6e15b53039ed17e740aedce to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_actionscheduler_actions to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_category_lookup to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table sink.dbserver2_testcdc_employee to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_commentmeta to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table airportdb.airplane to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_admin_note_actions to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table 
airport.flights to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table pawn2.repayments to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_rate_limits to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table airportdb.airline to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table pawn3.items to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table llm.FAQ_PCB to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table pawn3.loans to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table pawn3.repayments to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table pawn.forfeited_items to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table mysql_option.option_usage to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table tpch.customer to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_woocommerce_payment_tokenmeta to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table airportdb.airport_reachable to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_product_download_directories to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wt_iew_mapping_template to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table sakilacafe.branch to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table pawn2.items to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_actionscheduler_logs to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table llm.t1 to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table 
mysql_task_management.task_id_impl to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table airportdb.employee to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wp.wp_woocommerce_order_itemmeta to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table mydb.web_embeddings_trx to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table afc.class to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_posts to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_terms to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table mysql_task_management.task_log_impl to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_jetpack_sync_queue to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wp.wp_woocommerce_order_items to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_orders to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,068 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_admin_notes to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table llm.policy to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table airportdb.weatherdata to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wt_iew_action_history to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_comments to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table llm.web_embeddings_trx to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_links to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_termmeta to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 
INFO MySQL|dbserver2|snapshot Adding table airportdb.airplane_type to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table tpch.lineitem to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table tpch.region to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_woocommerce_api_keys to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table pawn.repayments to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_product_attributes_lookup to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_order_tax_lookup to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table llm.web_embeddings to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_order_coupon_lookup to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_order_product_lookup to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_reserved_stock to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table tpch.part to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table airport.passengers to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_order_addresses to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_woocommerce_log to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table pawn2.loans to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table pawn2.branches to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table ryan.t2 to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_term_taxonomy to the list of capture schema tables 
[io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table airport.cargo to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table ryan.t1 to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_woocommerce_sessions to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_order_operational_data to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table llm.shadab to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table tpch.partsupp to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table sakilacafe.sales to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_woocommerce_tax_rate_locations to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_customer_lookup to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table mydb.web_embeddings to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_actionscheduler_claims to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,069 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_woocommerce_payment_tokens to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table airportdb.booking to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_usermeta to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table tpch.orders to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wp.wp_wc_orders to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table pawn3.forfeited_items to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table airportdb.passengerdetails to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wp.wp_users to the list of 
capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wp.wp_wc_orders_addresses to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_woocommerce_attribute_taxonomies to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table airportdb.passenger to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_download_log to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_orders_meta to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table airportdb.flight to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_woocommerce_shipping_zone_locations to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_product_meta_lookup to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table airportdb.airport_geo to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_options to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_woocommerce_shipping_zones to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table tpch.nation to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_woocommerce_order_itemmeta to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table pawn.customers to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_term_relationships to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wp.wp_wc_orders_operational_data to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_tax_rate_classes to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_webhooks to the list of capture schema tables 
[io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table tpch.supplier to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_woocommerce_shipping_zone_methods to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table pawn3.customers to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_woocommerce_order_items to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table lakehouse.test to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_woocommerce_tax_rates to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table pawn.loans to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wp.wp_wc_customer_lookup to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table airportdb.airport to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wp.wp_postmeta to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table mysql_audit.audit_log_user to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table wordpress.wp_wc_order_stats to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table airportdb.flightschedule to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,070 INFO MySQL|dbserver2|snapshot Adding table pawn.items to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,071 INFO MySQL|dbserver2|snapshot Adding table wp.wp_posts to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,078 INFO MySQL|dbserver2|snapshot Created connection pool with 1 threads [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,078 INFO MySQL|dbserver2|snapshot Snapshot step 3 - Locking captured tables [testcdc.employee] [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,086 INFO MySQL|dbserver2|snapshot Flush and obtain global read lock to prevent writes to database [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:23:17,097 INFO MySQL|dbserver2|snapshot Snapshot step 4 - Determining snapshot offset [io.debezium.relational.RelationalSnapshotChangeEventSource]
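The "global read lock" in steps 3 and 4 is MySQL's FLUSH TABLES WITH READ LOCK: Debezium holds it only long enough to record a consistent binlog offset and capture table schemas (about three seconds in this run, per the "Writes to MySQL tables prevented" entry further down). As a rough sketch, the snapshot/streaming user typically needs grants along these lines; the repl_s user name and 10.0.1.6 host are taken from later entries in this log, while the admin account and '%' host pattern are placeholders:

    # hypothetical grant set for the Debezium MySQL user (run as an administrator)
    mysql -h 10.0.1.6 -P 3306 -u admin -p -e \
      "GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repl_s'@'%';"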
2025-06-13 06:23:17,100 INFO MySQL|dbserver2|snapshot Read binlog position of MySQL primary server [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
2025-06-13 06:23:17,113 INFO MySQL|dbserver2|snapshot using binlog 'binary-log.005358' at position '768' and gtid '695f5ece-137c-11f0-b4b4-020017156476:1-5138208' [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
2025-06-13 06:23:17,115 INFO MySQL|dbserver2|snapshot Snapshot step 5 - Reading structure of captured tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:17,115 INFO MySQL|dbserver2|snapshot All eligible tables schema should be captured, capturing: [afc.budget, afc.class, airport.baggage, airport.cargo, airport.flights, airport.passengers, airportdb.airline, airportdb.airplane, airportdb.airplane_type, airportdb.airport, airportdb.airport_geo, airportdb.airport_reachable, airportdb.booking, airportdb.employee, airportdb.flight, airportdb.flight_log, airportdb.flightschedule, airportdb.passenger, airportdb.passengerdetails, airportdb.weatherdata, lakehouse.test, llm.FAQ_PCB, llm.b2c6eba9f6e15b53039ed17e740aedce, llm.policy, llm.ryan_vector, llm.shadab, llm.t1, llm.web_embeddings, llm.web_embeddings_trx, mydb.web_embeddings, mydb.web_embeddings_trx, mysql_audit.audit_log_filter, mysql_audit.audit_log_user, mysql_option.option_usage, mysql_task_management.task_id_impl, mysql_task_management.task_impl, mysql_task_management.task_log_impl, pawn.branches, pawn.customers, pawn.forfeited_items, pawn.items, pawn.loans, pawn.repayments, pawn2.branches, pawn2.customers, pawn2.forfeited_items, pawn2.items, pawn2.loans, pawn2.repayments, pawn3.branches, pawn3.customers, pawn3.forfeited_items, pawn3.items, pawn3.loans, pawn3.repayments, ryan.t1, ryan.t2, sakilacafe.branch, sakilacafe.product, sakilacafe.sales, sink.dbserver2_testcdc_employee, testcdc.employee, tpch.customer, tpch.lineitem, tpch.nation, tpch.orders, tpch.part, tpch.partsupp, tpch.region, tpch.supplier, wordpress.wp_actionscheduler_actions, wordpress.wp_actionscheduler_claims, wordpress.wp_actionscheduler_groups, wordpress.wp_actionscheduler_logs, wordpress.wp_commentmeta, wordpress.wp_comments, wordpress.wp_jetpack_sync_queue, wordpress.wp_links, wordpress.wp_options, wordpress.wp_postmeta, wordpress.wp_posts, wordpress.wp_term_relationships, wordpress.wp_term_taxonomy, wordpress.wp_termmeta, wordpress.wp_terms, wordpress.wp_usermeta, wordpress.wp_users, wordpress.wp_wc_admin_note_actions, wordpress.wp_wc_admin_notes, wordpress.wp_wc_category_lookup, wordpress.wp_wc_customer_lookup, wordpress.wp_wc_download_log, wordpress.wp_wc_order_addresses, wordpress.wp_wc_order_coupon_lookup, wordpress.wp_wc_order_operational_data, wordpress.wp_wc_order_product_lookup, wordpress.wp_wc_order_stats, wordpress.wp_wc_order_tax_lookup, wordpress.wp_wc_orders, wordpress.wp_wc_orders_meta, wordpress.wp_wc_product_attributes_lookup, wordpress.wp_wc_product_download_directories, wordpress.wp_wc_product_meta_lookup, wordpress.wp_wc_rate_limits, wordpress.wp_wc_reserved_stock, wordpress.wp_wc_tax_rate_classes, wordpress.wp_wc_webhooks, wordpress.wp_woocommerce_api_keys, wordpress.wp_woocommerce_attribute_taxonomies, wordpress.wp_woocommerce_downloadable_product_permissions, wordpress.wp_woocommerce_log, wordpress.wp_woocommerce_order_itemmeta, wordpress.wp_woocommerce_order_items, wordpress.wp_woocommerce_payment_tokenmeta, wordpress.wp_woocommerce_payment_tokens, wordpress.wp_woocommerce_sessions, wordpress.wp_woocommerce_shipping_zone_locations,
wordpress.wp_woocommerce_shipping_zone_methods, wordpress.wp_woocommerce_shipping_zones, wordpress.wp_woocommerce_tax_rate_locations, wordpress.wp_woocommerce_tax_rates, wordpress.wp_wt_iew_action_history, wordpress.wp_wt_iew_mapping_template, wp.wp_postmeta, wp.wp_posts, wp.wp_usermeta, wp.wp_users, wp.wp_wc_customer_lookup, wp.wp_wc_orders, wp.wp_wc_orders_addresses, wp.wp_wc_orders_operational_data, wp.wp_woocommerce_order_itemmeta, wp.wp_woocommerce_order_items] [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,260 INFO || HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) [org.hibernate.engine.transaction.jta.platform.internal.JtaPlatformInitiator] 2025-06-13 06:23:18,260 INFO || HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) [org.hibernate.engine.transaction.jta.platform.internal.JtaPlatformInitiator] 2025-06-13 06:23:18,262 INFO || HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) [org.hibernate.engine.transaction.jta.platform.internal.JtaPlatformInitiator] 2025-06-13 06:23:18,289 INFO || Using dialect io.debezium.connector.jdbc.dialect.mysql.MySqlDatabaseDialect [io.debezium.connector.jdbc.dialect.DatabaseDialectResolver] 2025-06-13 06:23:18,289 INFO || Using dialect io.debezium.connector.jdbc.dialect.mysql.MySqlDatabaseDialect [io.debezium.connector.jdbc.dialect.DatabaseDialectResolver] 2025-06-13 06:23:18,292 INFO || Using dialect io.debezium.connector.jdbc.dialect.mysql.MySqlDatabaseDialect [io.debezium.connector.jdbc.dialect.DatabaseDialectResolver] 2025-06-13 06:23:18,323 INFO || Database TimeZone: SYSTEM (global), SYSTEM (system) [io.debezium.connector.jdbc.dialect.GeneralDatabaseDialect] 2025-06-13 06:23:18,323 INFO || Database TimeZone: SYSTEM (global), SYSTEM (system) [io.debezium.connector.jdbc.dialect.GeneralDatabaseDialect] 2025-06-13 06:23:18,323 INFO || Database TimeZone: SYSTEM (global), SYSTEM (system) [io.debezium.connector.jdbc.dialect.GeneralDatabaseDialect] 2025-06-13 06:23:18,327 INFO || Database version 9.1.0 [io.debezium.connector.jdbc.JdbcChangeEventSink] 2025-06-13 06:23:18,327 INFO || WorkerSinkTask{id=mysql-sink-connector-1} Sink task finished initialization and start [org.apache.kafka.connect.runtime.WorkerSinkTask] 2025-06-13 06:23:18,328 INFO || Database version 9.1.0 [io.debezium.connector.jdbc.JdbcChangeEventSink] 2025-06-13 06:23:18,328 INFO || WorkerSinkTask{id=mysql-sink-connector-0} Sink task finished initialization and start [org.apache.kafka.connect.runtime.WorkerSinkTask] 2025-06-13 06:23:18,328 INFO || WorkerSinkTask{id=mysql-sink-connector-0} Executing sink task [org.apache.kafka.connect.runtime.WorkerSinkTask] 2025-06-13 06:23:18,329 INFO || Database version 9.1.0 [io.debezium.connector.jdbc.JdbcChangeEventSink] 2025-06-13 06:23:18,329 INFO || WorkerSinkTask{id=mysql-sink-connector-2} Sink task finished initialization and start [org.apache.kafka.connect.runtime.WorkerSinkTask] 2025-06-13 06:23:18,329 INFO || WorkerSinkTask{id=mysql-sink-connector-2} Executing sink task [org.apache.kafka.connect.runtime.WorkerSinkTask] 2025-06-13 06:23:18,333 INFO || WorkerSinkTask{id=mysql-sink-connector-1} Executing sink task [org.apache.kafka.connect.runtime.WorkerSinkTask] 2025-06-13 06:23:18,339 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Cluster ID: 
NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata] 2025-06-13 06:23:18,340 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata] 2025-06-13 06:23:18,341 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Discovered group coordinator 10.89.0.3:9092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,343 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Discovered group coordinator 10.89.0.3:9092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,344 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,343 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata] 2025-06-13 06:23:18,347 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Discovered group coordinator 10.89.0.3:9092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,350 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,350 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,352 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Request joining group due to: need to re-join with the given member-id: connector-consumer-mysql-sink-connector-0-062ba0b1-ec57-4557-9112-e9ea52c8d482 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,352 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,359 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Successfully joined group with generation Generation{generationId=1, memberId='connector-consumer-mysql-sink-connector-0-062ba0b1-ec57-4557-9112-e9ea52c8d482', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,360 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Request joining group due to: need to re-join with the given member-id: connector-consumer-mysql-sink-connector-2-56af335f-570d-4423-b46e-e76d703cc476 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,360 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,362 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, 
groupId=connect-mysql-sink-connector] Request joining group due to: need to re-join with the given member-id: connector-consumer-mysql-sink-connector-1-1fbf92d1-8798-43d1-9159-f0a9dcf5e603 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,362 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,363 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Finished assignment for group at generation 1: {connector-consumer-mysql-sink-connector-0-062ba0b1-ec57-4557-9112-e9ea52c8d482=Assignment(partitions=[])} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,368 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] SyncGroup failed: The group began another rebalance. Need to re-join the group. Sent generation was Generation{generationId=1, memberId='connector-consumer-mysql-sink-connector-0-062ba0b1-ec57-4557-9112-e9ea52c8d482', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,368 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Request joining group due to: rebalance failed due to 'The group is rebalancing, so a rejoin is needed.' (RebalanceInProgressException) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,368 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,370 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Successfully joined group with generation Generation{generationId=2, memberId='connector-consumer-mysql-sink-connector-2-56af335f-570d-4423-b46e-e76d703cc476', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,370 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Successfully joined group with generation Generation{generationId=2, memberId='connector-consumer-mysql-sink-connector-0-062ba0b1-ec57-4557-9112-e9ea52c8d482', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,370 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Successfully joined group with generation Generation{generationId=2, memberId='connector-consumer-mysql-sink-connector-1-1fbf92d1-8798-43d1-9159-f0a9dcf5e603', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,370 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Finished assignment for group at generation 2: {connector-consumer-mysql-sink-connector-2-56af335f-570d-4423-b46e-e76d703cc476=Assignment(partitions=[]), connector-consumer-mysql-sink-connector-1-1fbf92d1-8798-43d1-9159-f0a9dcf5e603=Assignment(partitions=[]), connector-consumer-mysql-sink-connector-0-062ba0b1-ec57-4557-9112-e9ea52c8d482=Assignment(partitions=[])} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,373 INFO || [Consumer 
clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Successfully synced group in generation Generation{generationId=2, memberId='connector-consumer-mysql-sink-connector-2-56af335f-570d-4423-b46e-e76d703cc476', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,374 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Notifying assignor about the new Assignment(partitions=[]) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,374 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Successfully synced group in generation Generation{generationId=2, memberId='connector-consumer-mysql-sink-connector-1-1fbf92d1-8798-43d1-9159-f0a9dcf5e603', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,374 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Notifying assignor about the new Assignment(partitions=[]) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,374 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Adding newly assigned partitions: [org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker] 2025-06-13 06:23:18,374 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Adding newly assigned partitions: [org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker] 2025-06-13 06:23:18,376 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Successfully synced group in generation Generation{generationId=2, memberId='connector-consumer-mysql-sink-connector-0-062ba0b1-ec57-4557-9112-e9ea52c8d482', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,376 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Notifying assignor about the new Assignment(partitions=[]) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:23:18,376 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Adding newly assigned partitions: [org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker] 2025-06-13 06:23:18,527 INFO MySQL|dbserver2|snapshot Reading structure of database 'afc' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,613 INFO MySQL|dbserver2|snapshot Reading structure of database 'airport' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,643 INFO MySQL|dbserver2|snapshot Reading structure of database 'airportdb' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,801 INFO MySQL|dbserver2|snapshot Reading structure of database 'lakehouse' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,804 INFO MySQL|dbserver2|snapshot Reading structure of database 'llm' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,832 INFO MySQL|dbserver2|snapshot Reading structure of database 'mydb' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,839 INFO 
MySQL|dbserver2|snapshot Reading structure of database 'mysql_audit' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,850 INFO MySQL|dbserver2|snapshot Reading structure of database 'mysql_option' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,856 INFO MySQL|dbserver2|snapshot Reading structure of database 'mysql_task_management' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,890 INFO MySQL|dbserver2|snapshot Reading structure of database 'pawn' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,905 INFO MySQL|dbserver2|snapshot Reading structure of database 'pawn2' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,919 INFO MySQL|dbserver2|snapshot Reading structure of database 'pawn3' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,933 INFO MySQL|dbserver2|snapshot Reading structure of database 'ryan' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,939 INFO MySQL|dbserver2|snapshot Reading structure of database 'sakilacafe' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,945 INFO MySQL|dbserver2|snapshot Reading structure of database 'sink' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,948 INFO MySQL|dbserver2|snapshot Reading structure of database 'testcdc' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:18,953 INFO MySQL|dbserver2|snapshot Reading structure of database 'tpch' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:19,017 INFO MySQL|dbserver2|snapshot Reading structure of database 'wordpress' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:19,243 INFO MySQL|dbserver2|snapshot Reading structure of database 'wp' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:19,262 INFO MySQL|dbserver2|snapshot Snapshot step 6 - Persisting schema history [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:19,277 INFO MySQL|dbserver2|snapshot Already applied 1 database changes [io.debezium.relational.history.SchemaHistoryMetrics] 2025-06-13 06:23:19,506 WARN || [Producer clientId=connector-producer-employee-connector-0] The metadata response from the cluster reported a recoverable issue with correlation id 4 : {dbserver2=LEADER_NOT_AVAILABLE} [org.apache.kafka.clients.NetworkClient] 2025-06-13 06:23:20,088 INFO MySQL|dbserver2|snapshot Snapshot step 7 - Snapshotting data [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:20,089 INFO MySQL|dbserver2|snapshot Creating snapshot worker pool with 1 worker thread(s) [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:20,091 INFO MySQL|dbserver2|snapshot For table 'testcdc.employee' using select statement: 'SELECT `id`, `lastname`, `firstname`, `age` FROM `testcdc`.`employee`' [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:20,095 INFO MySQL|dbserver2|snapshot Estimated row count for table testcdc.employee is OptionalLong[0] [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource] 2025-06-13 06:23:20,098 INFO MySQL|dbserver2|snapshot Exporting data from table 'testcdc.employee' (1 of 1 tables) [io.debezium.relational.RelationalSnapshotChangeEventSource] 2025-06-13 06:23:20,106 INFO MySQL|dbserver2|snapshot 
Finished exporting 1 records for table 'testcdc.employee' (1 of 1 tables); total duration '00:00:00.008' [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:23:20,107 INFO MySQL|dbserver2|snapshot Releasing global read lock to enable MySQL writes [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:23:20,108 INFO MySQL|dbserver2|snapshot Writes to MySQL tables prevented for a total of 00:00:03.013 [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:23:20,109 INFO MySQL|dbserver2|snapshot Snapshot - Final stage [io.debezium.pipeline.source.AbstractSnapshotChangeEventSource]
2025-06-13 06:23:20,109 INFO MySQL|dbserver2|snapshot Snapshot completed [io.debezium.pipeline.source.AbstractSnapshotChangeEventSource]
2025-06-13 06:23:20,132 INFO MySQL|dbserver2|snapshot Snapshot ended with SnapshotResult [status=COMPLETED, offset=BinlogOffsetContext{sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=BinlogSourceInfo{currentGtid='null', currentBinlogFilename='binary-log.005358', currentBinlogPosition=768, currentRowNumber=0, serverId=0, sourceTime=2025-06-13T06:23:20Z, threadId=-1, currentQuery='null', tableIds=[testcdc.employee], databaseName='wp'}, snapshotCompleted=true, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet='695f5ece-137c-11f0-b4b4-020017156476:1-5138208', currentGtidSet='695f5ece-137c-11f0-b4b4-020017156476:1-5138208', restartBinlogFilename='binary-log.005358', restartBinlogPosition=768, restartRowsToSkip=0, restartEventsToSkip=0, currentEventLengthInBytes=0, inTransaction=false, transactionId='null', incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]}] [io.debezium.pipeline.ChangeEventSourceCoordinator]
2025-06-13 06:23:20,136 INFO MySQL|dbserver2|streaming Requested thread factory for component MySqlConnector, id = dbserver2 named = binlog-client [io.debezium.util.Threads]
2025-06-13 06:23:20,138 INFO MySQL|dbserver2|streaming Enable ssl PREFERRED mode for connector dbserver2 [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
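The snapshot closed at binlog file 'binary-log.005358', position 768, with GTID set 695f5ece-137c-11f0-b4b4-020017156476:1-5138208; the streaming phase below resumes from exactly that offset. To cross-check such an offset by hand, one option is to query the server the same way the connector does (a later entry in this log shows Debezium issuing SHOW BINARY LOG STATUS; on MySQL releases before 8.2 the equivalent statement is SHOW MASTER STATUS):

    # compare File, Position and Executed_Gtid_Set against the offset logged above
    mysql -h 10.0.1.6 -P 3306 -u repl_s -p -e "SHOW BINARY LOG STATUS\G"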
2025-06-13 06:23:20,147 INFO MySQL|dbserver2|streaming SignalProcessor started. Scheduling it every 5000ms [io.debezium.pipeline.signal.SignalProcessor]
2025-06-13 06:23:20,147 INFO MySQL|dbserver2|streaming Creating thread debezium-mysqlconnector-dbserver2-SignalProcessor [io.debezium.util.Threads]
2025-06-13 06:23:20,147 INFO MySQL|dbserver2|streaming Starting streaming [io.debezium.pipeline.ChangeEventSourceCoordinator]
2025-06-13 06:23:20,157 INFO MySQL|dbserver2|streaming GTID set purged on server: '695f5ece-137c-11f0-b4b4-020017156476:1-5138187' [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
2025-06-13 06:23:20,157 INFO MySQL|dbserver2|streaming Attempting to generate a filtered GTID set [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:23:20,157 INFO MySQL|dbserver2|streaming GTID set from previous recorded offset: 695f5ece-137c-11f0-b4b4-020017156476:1-5138208 [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:23:20,157 INFO MySQL|dbserver2|streaming GTID set available on server: 695f5ece-137c-11f0-b4b4-020017156476:1-5138208 [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:23:20,158 INFO MySQL|dbserver2|streaming Using first available positions for new GTID channels [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:23:20,158 INFO MySQL|dbserver2|streaming Relevant GTID set available on server: 695f5ece-137c-11f0-b4b4-020017156476:1-5138208 [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:23:20,160 INFO MySQL|dbserver2|streaming Final merged GTID set to use when connecting to MySQL: 695f5ece-137c-11f0-b4b4-020017156476:1-5138208 [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:23:20,160 INFO MySQL|dbserver2|streaming Registering binlog reader with GTID set: '695f5ece-137c-11f0-b4b4-020017156476:1-5138208' [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
2025-06-13 06:23:20,172 INFO MySQL|dbserver2|streaming Skip 0 events on streaming start [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
2025-06-13 06:23:20,172 INFO MySQL|dbserver2|streaming Skip 0 rows on streaming start [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
2025-06-13 06:23:20,173 INFO MySQL|dbserver2|streaming Creating thread debezium-mysqlconnector-dbserver2-binlog-client [io.debezium.util.Threads]
2025-06-13 06:23:20,177 INFO MySQL|dbserver2|streaming Creating thread debezium-mysqlconnector-dbserver2-binlog-client [io.debezium.util.Threads]
2025-06-13 06:23:20,225 INFO MySQL|dbserver2|binlog Connected to binlog at 10.0.1.6:3306, starting at BinlogOffsetContext{sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=BinlogSourceInfo{currentGtid='null', currentBinlogFilename='binary-log.005358', currentBinlogPosition=768, currentRowNumber=0, serverId=0, sourceTime=2025-06-13T06:23:20Z, threadId=-1, currentQuery='null', tableIds=[testcdc.employee], databaseName='wp'}, snapshotCompleted=true, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet='695f5ece-137c-11f0-b4b4-020017156476:1-5138208', currentGtidSet='695f5ece-137c-11f0-b4b4-020017156476:1-5138208', restartBinlogFilename='binary-log.005358', restartBinlogPosition=768, restartRowsToSkip=0, restartEventsToSkip=0, currentEventLengthInBytes=0, inTransaction=false, transactionId='null', incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]} [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
2025-06-13 06:23:20,225 INFO MySQL|dbserver2|streaming Waiting for keepalive thread to start [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
2025-06-13 06:23:20,226 INFO MySQL|dbserver2|binlog Creating thread debezium-mysqlconnector-dbserver2-binlog-client [io.debezium.util.Threads]
2025-06-13 06:23:20,325 INFO MySQL|dbserver2|streaming Keepalive thread is running [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
2025-06-13 06:23:20,326 WARN || [Producer clientId=connector-producer-employee-connector-0] The metadata response from the cluster reported a recoverable issue with correlation id 100 : {dbserver2.testcdc.employee=LEADER_NOT_AVAILABLE} [org.apache.kafka.clients.NetworkClient]
2025-06-13 06:23:21,496 INFO || 10.89.0.4 - - [13/Jun/2025:06:23:21 +0000] "GET /connectors/ HTTP/1.1" 200 45 "-" "curl/7.61.1" 6 [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:23:21,512 INFO || 10.89.0.4 - - [13/Jun/2025:06:23:21 +0000] "GET /connectors/employee-connector/status HTTP/1.1" 200 172 "-" "curl/7.61.1" 8 [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:23:21,523 INFO || 10.89.0.4 - - [13/Jun/2025:06:23:21 +0000] "GET /connectors/mysql-sink-connector/status HTTP/1.1" 200 284 "-" "curl/7.61.1" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:24:16,387 INFO || WorkerSourceTask{id=employee-connector-0} Committing offsets for 325 acknowledged messages [org.apache.kafka.connect.runtime.WorkerSourceTask]
2025-06-13 06:25:36,522 INFO || Successfully processed removal of connector 'employee-connector' [org.apache.kafka.connect.storage.KafkaConfigBackingStore]
2025-06-13 06:25:36,522 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Connector employee-connector config removed [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:25:36,522 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:25:36,522 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:25:36,524 INFO || 10.89.0.4 - - [13/Jun/2025:06:25:36 +0000] "DELETE /connectors/employee-connector HTTP/1.1" 204 0 "-" "curl/7.61.1" 12 [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:25:36,525 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully joined group with generation Generation{generationId=6, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:25:36,528 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully synced group in generation Generation{generationId=6, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:25:36,530 INFO || Stopping connector employee-connector [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:25:36,530 INFO || Scheduled shutdown for WorkerConnector{id=employee-connector} [org.apache.kafka.connect.runtime.WorkerConnector]
2025-06-13 06:25:36,530 INFO || Stopping task employee-connector-0 [org.apache.kafka.connect.runtime.Worker]
2025-06-13 06:25:36,530 INFO || Completed shutdown for WorkerConnector{id=employee-connector} [org.apache.kafka.connect.runtime.WorkerConnector]
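The GET and DELETE entries above are ordinary Kafka Connect REST API calls against the worker's advertised address (10.89.0.4:8083 in this log). Reproduced as curl commands, they would look roughly like this:

    curl -s http://10.89.0.4:8083/connectors/                               # list deployed connectors
    curl -s http://10.89.0.4:8083/connectors/employee-connector/status      # connector and task states
    curl -s http://10.89.0.4:8083/connectors/mysql-sink-connector/status
    curl -s -X DELETE http://10.89.0.4:8083/connectors/employee-connector   # triggers the shutdown sequence below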
2025-06-13 06:25:36,714 INFO || Stopping down connector [io.debezium.connector.common.BaseSourceTask]
2025-06-13 06:25:36,754 INFO MySQL|dbserver2|streaming Finished streaming [io.debezium.pipeline.ChangeEventSourceCoordinator]
2025-06-13 06:25:36,755 INFO MySQL|dbserver2|binlog Stopped reading binlog after 10 events, last recorded offset: {ts_sec=1749795800, file=binary-log.005359, pos=198, gtids=695f5ece-137c-11f0-b4b4-020017156476:1-5138208, server_id=3316918221, event=1} [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
2025-06-13 06:25:36,755 INFO || SignalProcessor stopped [io.debezium.pipeline.signal.SignalProcessor]
2025-06-13 06:25:36,756 INFO || Debezium ServiceRegistry stopped. [io.debezium.service.DefaultServiceRegistry]
2025-06-13 06:25:36,757 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
2025-06-13 06:25:36,758 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
2025-06-13 06:25:36,759 INFO || [Producer clientId=dbserver2-schemahistory] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer]
2025-06-13 06:25:36,761 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:36,761 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:36,761 INFO || Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:36,761 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:36,762 INFO || App info kafka.producer for dbserver2-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:36,762 INFO || [Producer clientId=connector-producer-employee-connector-0] Closing the Kafka producer with timeoutMillis = 30000 ms.
[org.apache.kafka.clients.producer.KafkaProducer] 2025-06-13 06:25:36,766 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,766 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,766 INFO || Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,766 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,766 INFO || App info kafka.producer for connector-producer-employee-connector-0 unregistered [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:25:36,768 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished stopping tasks in preparation for rebalance [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:36,774 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished flushing status backing store in preparation for rebalance [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:36,774 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Joined group at generation 6 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', leaderUrl='http://10.89.0.4:8083/', offset=11, connectorIds=[mysql-sink-connector], taskIds=[mysql-sink-connector-0, mysql-sink-connector-1, mysql-sink-connector-2], revokedConnectorIds=[employee-connector], revokedTaskIds=[employee-connector-0], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:36,774 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connectors and tasks using config offset 11 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:36,775 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:36,775 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:36,775 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:36,778 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully joined group with generation Generation{generationId=7, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:36,781 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully synced group in generation Generation{generationId=7, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:36,781 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Joined group at generation 7 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', leaderUrl='http://10.89.0.4:8083/', offset=11, connectorIds=[mysql-sink-connector], taskIds=[mysql-sink-connector-0, mysql-sink-connector-1, mysql-sink-connector-2], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 
[org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:36,782 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connectors and tasks using config offset 11 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:36,782 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:36,785 INFO || Successfully processed removal of connector 'mysql-sink-connector' [org.apache.kafka.connect.storage.KafkaConfigBackingStore] 2025-06-13 06:25:36,785 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Connector mysql-sink-connector config removed [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:36,786 INFO || 10.89.0.4 - - [13/Jun/2025:06:25:36 +0000] "DELETE /connectors/mysql-sink-connector HTTP/1.1" 204 0 "-" "curl/7.61.1" 250 [org.apache.kafka.connect.runtime.rest.RestServer] 2025-06-13 06:25:36,789 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:36,790 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:36,792 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully joined group with generation Generation{generationId=8, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:36,794 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully synced group in generation Generation{generationId=8, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:36,795 INFO || Stopping connector mysql-sink-connector [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:36,795 INFO || Stopping task mysql-sink-connector-0 [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:36,795 INFO || Stopping task mysql-sink-connector-2 [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:36,795 INFO || Stopping task mysql-sink-connector-1 [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:36,795 INFO || Scheduled shutdown for WorkerConnector{id=mysql-sink-connector} [org.apache.kafka.connect.runtime.WorkerConnector] 2025-06-13 06:25:36,795 INFO || Closing session. [io.debezium.connector.jdbc.JdbcChangeEventSink] 2025-06-13 06:25:36,795 INFO || Closing session. [io.debezium.connector.jdbc.JdbcChangeEventSink] 2025-06-13 06:25:36,795 INFO || Closing the session factory [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:36,795 INFO || Closing session. 
[io.debezium.connector.jdbc.JdbcChangeEventSink] 2025-06-13 06:25:36,795 INFO || Completed shutdown for WorkerConnector{id=mysql-sink-connector} [org.apache.kafka.connect.runtime.WorkerConnector] 2025-06-13 06:25:36,795 INFO || Closing the session factory [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:36,795 INFO || Closing the session factory [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:36,807 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Member connector-consumer-mysql-sink-connector-1-1fbf92d1-8798-43d1-9159-f0a9dcf5e603 sending LeaveGroup request to coordinator 10.89.0.3:9092 (id: 2147483646 rack: null) due to the consumer is being closed [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:25:36,807 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Member connector-consumer-mysql-sink-connector-2-56af335f-570d-4423-b46e-e76d703cc476 sending LeaveGroup request to coordinator 10.89.0.3:9092 (id: 2147483646 rack: null) due to the consumer is being closed [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:25:36,808 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Member connector-consumer-mysql-sink-connector-0-062ba0b1-ec57-4557-9112-e9ea52c8d482 sending LeaveGroup request to coordinator 10.89.0.3:9092 (id: 2147483646 rack: null) due to the consumer is being closed [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:25:36,809 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:25:36,809 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:25:36,809 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:25:36,809 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:25:36,809 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:25:36,809 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:25:36,819 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,819 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,819 INFO || Closing 
reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,819 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,819 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,819 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,819 INFO || Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,820 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,823 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,823 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,823 INFO || Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,823 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:36,829 INFO || App info kafka.consumer for connector-consumer-mysql-sink-connector-1 unregistered [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:25:36,831 INFO || App info kafka.consumer for connector-consumer-mysql-sink-connector-2 unregistered [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:25:36,834 INFO || App info kafka.consumer for connector-consumer-mysql-sink-connector-0 unregistered [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:25:36,835 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished stopping tasks in preparation for rebalance [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:36,839 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished flushing status backing store in preparation for rebalance [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:36,839 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Joined group at generation 8 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', leaderUrl='http://10.89.0.4:8083/', offset=13, connectorIds=[], taskIds=[], revokedConnectorIds=[mysql-sink-connector], revokedTaskIds=[mysql-sink-connector-0, mysql-sink-connector-1, mysql-sink-connector-2], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:36,844 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connectors and tasks using config offset 13 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:36,844 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:36,844 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:36,844 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:36,846 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully joined group with generation Generation{generationId=9, 
memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:25:36,849 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully synced group in generation Generation{generationId=9, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:25:36,849 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Joined group at generation 9 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', leaderUrl='http://10.89.0.4:8083/', offset=13, connectorIds=[], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:25:36,849 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connectors and tasks using config offset 13 [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:25:36,849 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:25:41,402 INFO || Loading the custom source info struct maker plugin: io.debezium.connector.mysql.MySqlSourceInfoStructMaker [io.debezium.config.CommonConnectorConfig]
2025-06-13 06:25:41,424 INFO || Using 'SHOW BINARY LOG STATUS' to get binary log status [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:25:41,425 INFO || Successfully tested connection for jdbc:mysql://10.0.1.6:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=CONVERT_TO_NULL&connectTimeout=30000 with user 'repl_s' [io.debezium.connector.binlog.BinlogConnector]
2025-06-13 06:25:41,428 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
2025-06-13 06:25:41,429 INFO || AbstractConfig values: [org.apache.kafka.common.config.AbstractConfig]
2025-06-13 06:25:41,434 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Connector employee-connector config updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:25:41,434 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:25:41,435 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:25:41,436 INFO || 10.89.0.4 - - [13/Jun/2025:06:25:41 +0000] "POST /connectors/ HTTP/1.1" 201 640 "-" "curl/7.61.1" 39 [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:25:41,437 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully joined group with generation Generation{generationId=10, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:25:41,440 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully synced group in generation Generation{generationId=10, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
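The "POST /connectors/ ... 201" entry is the employee-connector being re-registered. A sketch of what that request may have looked like, reconstructed from values visible in this log (connector class and tasks.max from the config dump below; host, port and user from the connection test above; topic prefix dbserver2 and table testcdc.employee from the snapshot earlier); the password, server id, bootstrap servers and schema-history topic are placeholders, not taken from the log:

    curl -s -X POST -H 'Content-Type: application/json' http://10.89.0.4:8083/connectors/ -d '{
      "name": "employee-connector",
      "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "tasks.max": "1",
        "database.hostname": "10.0.1.6",
        "database.port": "3306",
        "database.user": "repl_s",
        "database.password": "********",
        "database.server.id": "184054",
        "topic.prefix": "dbserver2",
        "table.include.list": "testcdc.employee",
        "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
        "schema.history.internal.kafka.topic": "schemahistory.dbserver2"
      }
    }'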
[Worker clientId=connect-10.89.0.4:8083, groupId=1] Joined group at generation 10 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', leaderUrl='http://10.89.0.4:8083/', offset=14, connectorIds=[employee-connector], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,440 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connectors and tasks using config offset 14 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,440 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connector employee-connector [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,440 INFO || Creating connector employee-connector of type io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,440 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = employee-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.SourceConnectorConfig] 2025-06-13 06:25:41,440 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = employee-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2025-06-13 06:25:41,441 INFO || Instantiated connector employee-connector with version 3.1.1.Final of type class io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,441 INFO || Finished creating connector employee-connector [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,441 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,444 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = employee-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null 
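Note: the employee-connector being created above was registered through the worker's REST API (the "POST /connectors/" entries in this log, issued with curl). A minimal sketch of that registration request, reconstructed from the task configuration the worker prints a little further down; the password is masked in the log, so a placeholder stands in for it:

  curl -X POST -H "Content-Type: application/json" http://10.89.0.4:8083/connectors/ -d '{
    "name": "employee-connector",
    "config": {
      "connector.class": "io.debezium.connector.mysql.MySqlConnector",
      "tasks.max": "1",
      "database.hostname": "10.0.1.6",
      "database.port": "3306",
      "database.user": "repl_s",
      "database.password": "<placeholder>",
      "database.server.id": "1234",
      "database.ssl.mode": "preferred",
      "database.include.list": "testcdc",
      "table.include.list": "testcdc.employee",
      "topic.prefix": "dbserver2",
      "snapshot.mode": "always",
      "snapshot.locking.mode": "extended",
      "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
      "schema.history.internal.kafka.topic": "schema-changes.testcdc"
    }
  }'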
[org.apache.kafka.connect.runtime.SourceConnectorConfig] 2025-06-13 06:25:41,444 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = employee-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2025-06-13 06:25:41,454 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Tasks [employee-connector-0] configs updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,455 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:41,455 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:41,456 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully joined group with generation Generation{generationId=11, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:41,458 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully synced group in generation Generation{generationId=11, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:41,458 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Joined group at generation 11 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', leaderUrl='http://10.89.0.4:8083/', offset=16, connectorIds=[employee-connector], taskIds=[employee-connector-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,458 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connectors and tasks using config offset 16 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,458 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting task employee-connector-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,458 INFO || Creating task employee-connector-0 [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,459 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = employee-connector predicates = [] tasks.max = 1 tasks.max.enforce = true transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig] 2025-06-13 06:25:41,459 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = 
io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = null name = employee-connector predicates = [] tasks.max = 1 tasks.max.enforce = true transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2025-06-13 06:25:41,459 INFO || TaskConfig values: task.class = class io.debezium.connector.mysql.MySqlConnectorTask [org.apache.kafka.connect.runtime.TaskConfig] 2025-06-13 06:25:41,460 INFO || Instantiated task employee-connector-0 with version 3.1.1.Final of type io.debezium.connector.mysql.MySqlConnectorTask [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,460 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig] 2025-06-13 06:25:41,460 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task employee-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,460 INFO || JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig] 2025-06-13 06:25:41,460 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task employee-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,460 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task employee-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,461 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{} [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,461 INFO || SourceConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = employee-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.SourceConnectorConfig] 2025-06-13 06:25:41,461 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.mysql.MySqlConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none exactly.once.support = requested header.converter = null key.converter = null name = employee-connector offsets.storage.topic = null predicates = [] tasks.max = 1 tasks.max.enforce = true topic.creation.groups = [] transaction.boundary = poll transaction.boundary.interval.ms = null transforms = [] value.converter = null [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2025-06-13 06:25:41,461 INFO || ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 16384 bootstrap.servers = 
[kafka:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = connector-producer-employee-connector-0 compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 2147483647 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer linger.ms = 0 max.block.ms = 9223372036854775807 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer [org.apache.kafka.clients.producer.ProducerConfig] 2025-06-13 06:25:41,462 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector] 2025-06-13 06:25:41,470 INFO || These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. 
[org.apache.kafka.clients.producer.ProducerConfig]
2025-06-13 06:25:41,471 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,471 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,471 INFO || Kafka startTimeMs: 1749795941470 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,472 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:25:41,472 INFO || Starting MySqlConnectorTask with configuration: connector.class = io.debezium.connector.mysql.MySqlConnector snapshot.locking.mode = extended database.user = repl_s database.server.id = 1234 schema.history.internal.kafka.bootstrap.servers = kafka:9092 database.port = 3306 database.ssl.mode = preferred topic.prefix = dbserver2 schema.history.internal.kafka.topic = schema-changes.testcdc task.class = io.debezium.connector.mysql.MySqlConnectorTask database.hostname = 10.0.1.6 database.password = ******** name = employee-connector log.level = DEBUG table.include.list = testcdc.employee database.include.list = testcdc snapshot.mode = always [io.debezium.connector.common.BaseSourceTask]
2025-06-13 06:25:41,473 INFO || Loading the custom source info struct maker plugin: io.debezium.connector.mysql.MySqlSourceInfoStructMaker [io.debezium.config.CommonConnectorConfig]
2025-06-13 06:25:41,474 INFO || Loading the custom topic naming strategy plugin: io.debezium.schema.DefaultTopicNamingStrategy [io.debezium.config.CommonConnectorConfig]
2025-06-13 06:25:41,474 INFO || AbstractConfig values: [org.apache.kafka.common.config.AbstractConfig]
2025-06-13 06:25:41,480 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Connector mysql-sink-connector config updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:25:41,484 INFO || [Producer clientId=connector-producer-employee-connector-0] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata]
2025-06-13 06:25:41,484 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:25:41,484 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:25:41,486 INFO || 10.89.0.4 - - [13/Jun/2025:06:25:41 +0000] "POST /connectors/ HTTP/1.1" 201 783 "-" "curl/7.61.1" 39 [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:25:41,489 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully joined group with generation Generation{generationId=12, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:25:41,492 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully synced group in generation Generation{generationId=12, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator]
2025-06-13 06:25:41,492 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Joined group at generation 12 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', leaderUrl='http://10.89.0.4:8083/', offset=17,
connectorIds=[mysql-sink-connector, employee-connector], taskIds=[employee-connector-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,492 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connectors and tasks using config offset 17 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,492 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connector mysql-sink-connector [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,493 INFO || Creating connector mysql-sink-connector of type io.debezium.connector.jdbc.JdbcSinkConnector [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,493 INFO || SinkConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.SinkConnectorConfig] 2025-06-13 06:25:41,494 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2025-06-13 06:25:41,494 INFO || Instantiated connector mysql-sink-connector with version 3.1.1.Final of type class io.debezium.connector.jdbc.JdbcSinkConnector [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,494 INFO || Finished creating connector mysql-sink-connector [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,494 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,495 INFO || SinkConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] 
topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.SinkConnectorConfig] 2025-06-13 06:25:41,495 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2025-06-13 06:25:41,508 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Tasks [mysql-sink-connector-0, mysql-sink-connector-1, mysql-sink-connector-2] configs updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,508 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:41,508 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:41,508 INFO || Using 'SHOW BINARY LOG STATUS' to get binary log status [io.debezium.connector.mysql.jdbc.MySqlConnection] 2025-06-13 06:25:41,510 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully joined group with generation Generation{generationId=13, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:41,512 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Successfully synced group in generation Generation{generationId=13, memberId='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2025-06-13 06:25:41,512 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Joined group at generation 13 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-10.89.0.4:8083-f3adc88c-f6a4-4274-8f72-d32bffd4e1a2', leaderUrl='http://10.89.0.4:8083/', offset=21, connectorIds=[mysql-sink-connector, employee-connector], taskIds=[mysql-sink-connector-0, mysql-sink-connector-1, mysql-sink-connector-2, employee-connector-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,513 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting connectors and tasks using config offset 21 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,513 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting task mysql-sink-connector-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,513 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Starting task mysql-sink-connector-1 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,513 INFO || [Worker 
clientId=connect-10.89.0.4:8083, groupId=1] Starting task mysql-sink-connector-2 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2025-06-13 06:25:41,513 INFO || Creating task mysql-sink-connector-0 [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,513 INFO || Creating task mysql-sink-connector-1 [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,513 INFO || Creating task mysql-sink-connector-2 [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,514 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig] 2025-06-13 06:25:41,514 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig] 2025-06-13 06:25:41,514 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2025-06-13 06:25:41,514 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2025-06-13 06:25:41,514 INFO || TaskConfig values: task.class = class io.debezium.connector.jdbc.JdbcSinkConnectorTask [org.apache.kafka.connect.runtime.TaskConfig] 2025-06-13 06:25:41,515 INFO || TaskConfig values: task.class = class io.debezium.connector.jdbc.JdbcSinkConnectorTask [org.apache.kafka.connect.runtime.TaskConfig] 2025-06-13 06:25:41,515 INFO || New InternalSinkRecord class found [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,515 INFO || Instantiated task mysql-sink-connector-2 with version 3.1.1.Final of type 
io.debezium.connector.jdbc.JdbcSinkConnectorTask [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,515 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig] 2025-06-13 06:25:41,515 INFO || JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig] 2025-06-13 06:25:41,515 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task mysql-sink-connector-2 using the connector config [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,515 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task mysql-sink-connector-2 using the connector config [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,515 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task mysql-sink-connector-2 using the worker config [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,515 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{} [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,516 INFO || ConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig] 2025-06-13 06:25:41,516 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2025-06-13 06:25:41,516 INFO || New InternalSinkRecord class found [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,516 INFO || Instantiated task mysql-sink-connector-0 with version 3.1.1.Final of type io.debezium.connector.jdbc.JdbcSinkConnectorTask [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,516 INFO || TaskConfig values: task.class = class io.debezium.connector.jdbc.JdbcSinkConnectorTask [org.apache.kafka.connect.runtime.TaskConfig] 2025-06-13 06:25:41,516 INFO || New InternalSinkRecord class found [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,516 INFO || Instantiated task mysql-sink-connector-1 with version 3.1.1.Final of type io.debezium.connector.jdbc.JdbcSinkConnectorTask [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,516 INFO || SinkConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector 
errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.SinkConnectorConfig] 2025-06-13 06:25:41,517 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig] 2025-06-13 06:25:41,517 INFO || JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig] 2025-06-13 06:25:41,517 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task mysql-sink-connector-1 using the connector config [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,517 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task mysql-sink-connector-1 using the connector config [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,517 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task mysql-sink-connector-1 using the worker config [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,517 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2025-06-13 06:25:41,517 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{} [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,517 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = connector-consumer-mysql-sink-connector-2 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = connect-mysql-sink-connector group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false 
isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig] 2025-06-13 06:25:41,518 INFO || SinkConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter 
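Note: the three sink tasks below all consume under the group connect-mysql-sink-connector (the group.id Connect derives from the connector name) and subscribe by pattern to dbserver2.testcdc.*. Their lag and partition assignments can be inspected with the standard consumer-groups tool, assuming the usual Kafka CLI scripts are available in the broker container:

  kafka-consumer-groups.sh --bootstrap-server kafka:9092 \
    --group connect-mysql-sink-connector --describe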
[org.apache.kafka.connect.runtime.SinkConnectorConfig] 2025-06-13 06:25:41,518 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector] 2025-06-13 06:25:41,518 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2025-06-13 06:25:41,518 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = connector-consumer-mysql-sink-connector-1 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = connect-mysql-sink-connector group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 
sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig] 2025-06-13 06:25:41,519 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector] 2025-06-13 06:25:41,516 INFO || JsonConverterConfig values: converter.type = key decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig] 2025-06-13 06:25:41,519 INFO || JsonConverterConfig values: converter.type = value decimal.format = BASE64 replace.null.with.default = true schemas.cache.size = 1000 schemas.enable = true [org.apache.kafka.connect.json.JsonConverterConfig] 2025-06-13 06:25:41,519 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task mysql-sink-connector-0 using the connector config [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,519 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task mysql-sink-connector-0 using the connector config [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,519 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task mysql-sink-connector-0 using the worker config [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,520 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{} [org.apache.kafka.connect.runtime.Worker] 2025-06-13 06:25:41,520 INFO || SinkConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.SinkConnectorConfig] 2025-06-13 06:25:41,521 INFO || EnrichedConnectorConfig values: config.action.reload = restart connector.class = io.debezium.connector.jdbc.JdbcSinkConnector errors.deadletterqueue.context.headers.enable = false errors.deadletterqueue.topic.name = 
errors.deadletterqueue.topic.replication.factor = 3 errors.log.enable = false errors.log.include.messages = false errors.retry.delay.max.ms = 60000 errors.retry.timeout = 0 errors.tolerance = none header.converter = null key.converter = class org.apache.kafka.connect.json.JsonConverter name = mysql-sink-connector predicates = [] tasks.max = 3 tasks.max.enforce = true topics = [] topics.regex = dbserver2.testcdc.* transforms = [] value.converter = class org.apache.kafka.connect.json.JsonConverter [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2025-06-13 06:25:41,521 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = connector-consumer-mysql-sink-connector-0 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = connect-mysql-sink-connector group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 
ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-13 06:25:41,521 INFO || Found previous partition offset BinlogPartition{serverName='dbserver2'} io.debezium.connector.mysql.MySqlPartition@25324330: {file=binary-log.005358, pos=768, gtids=695f5ece-137c-11f0-b4b4-020017156476:1-5138208} [io.debezium.connector.common.BaseSourceTask]
2025-06-13 06:25:41,522 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:25:41,525 INFO || These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-13 06:25:41,525 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,525 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,525 INFO || Kafka startTimeMs: 1749795941525 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,526 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Subscribed to pattern: 'dbserver2.testcdc.*' [org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer]
2025-06-13 06:25:41,527 ERROR || The 'collection.name.format' value is invalid: Warning: Using deprecated config option "table.name.format".
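Note: despite the ERROR level, this message is a deprecation warning rather than a failure: the sink was configured with table.name.format, which current Debezium JDBC sink releases replace with collection.name.format. The non-deprecated equivalent of the setting echoed below would be:

  "table.name.format": "${topic}"         (deprecated spelling, as submitted)
  "collection.name.format": "${topic}"    (current replacement)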
[io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || Starting JdbcSinkConnectorConfig with configuration: [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || connector.class = io.debezium.connector.jdbc.JdbcSinkConnector [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || table.name.format = ${topic} [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || primary.key.mode = record_key [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || connection.password = ******** [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || tasks.max = 3 [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || connection.username = repl_t [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || quote.identifiers = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || heartbeat.interval.ms = 3000 [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || topics.regex = dbserver2.testcdc.* [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || autoReconnect = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || delete.enabled = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || auto.evolve = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || schema.evolution = basic [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || task.class = io.debezium.connector.jdbc.JdbcSinkConnectorTask [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || name = mysql-sink-connector [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || log.level = DEBUG [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || auto.create = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || value.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || connection.url = jdbc:mysql://10.0.0.142:3306/sink [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || insert.mode = upsert [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || key.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || pk.mode = record_key [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,527 INFO || pk.fields = id [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,528 INFO || These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. 
[org.apache.kafka.clients.consumer.ConsumerConfig] 2025-06-13 06:25:41,528 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:25:41,528 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:25:41,528 INFO || Kafka startTimeMs: 1749795941528 [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:25:41,529 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Subscribed to pattern: 'dbserver2.testcdc.*' [org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer] 2025-06-13 06:25:41,529 ERROR || The 'collection.name.format' value is invalid: Warning: Using deprecated config option "table.name.format". [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,529 INFO || Starting JdbcSinkConnectorConfig with configuration: [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,529 INFO || connector.class = io.debezium.connector.jdbc.JdbcSinkConnector [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,529 INFO || table.name.format = ${topic} [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,529 INFO || primary.key.mode = record_key [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || connection.password = ******** [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || tasks.max = 3 [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || connection.username = repl_t [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || quote.identifiers = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || heartbeat.interval.ms = 3000 [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || topics.regex = dbserver2.testcdc.* [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || autoReconnect = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || delete.enabled = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || auto.evolve = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || schema.evolution = basic [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || task.class = io.debezium.connector.jdbc.JdbcSinkConnectorTask [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || name = mysql-sink-connector [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || log.level = DEBUG [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || auto.create = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || value.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || connection.url = jdbc:mysql://10.0.0.142:3306/sink [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || insert.mode = upsert [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || key.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || pk.mode = record_key [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || pk.fields = id 
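Note: for reference, a sketch of the registration request behind mysql-sink-connector, reconstructed from the configuration dump above (password masked in the log; the deprecated table.name.format is kept as it was submitted, per the warning noted earlier):

  curl -X POST -H "Content-Type: application/json" http://10.89.0.4:8083/connectors/ -d '{
    "name": "mysql-sink-connector",
    "config": {
      "connector.class": "io.debezium.connector.jdbc.JdbcSinkConnector",
      "tasks.max": "3",
      "topics.regex": "dbserver2.testcdc.*",
      "connection.url": "jdbc:mysql://10.0.0.142:3306/sink",
      "connection.username": "repl_t",
      "connection.password": "<placeholder>",
      "insert.mode": "upsert",
      "delete.enabled": "true",
      "primary.key.mode": "record_key",
      "pk.fields": "id",
      "quote.identifiers": "true",
      "schema.evolution": "basic",
      "table.name.format": "${topic}",
      "key.converter": "org.apache.kafka.connect.json.JsonConverter",
      "value.converter": "org.apache.kafka.connect.json.JsonConverter"
    }
  }'

The dump also echoes pk.mode, auto.create and auto.evolve, which look like Confluent-JDBC-style leftovers in the submitted config; Debezium's JDBC sink uses primary.key.mode/primary.key.fields and schema.evolution instead, so those extra keys are ignored.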
[io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,530 INFO || HHH000026: Second-level cache disabled [org.hibernate.cache.internal.RegionFactoryInitiator] 2025-06-13 06:25:41,531 INFO || KafkaSchemaHistory Consumer config: {key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, enable.auto.commit=false, group.id=dbserver2-schemahistory, bootstrap.servers=kafka:9092, fetch.min.bytes=1, session.timeout.ms=10000, auto.offset.reset=earliest, client.id=dbserver2-schemahistory} [io.debezium.storage.kafka.history.KafkaSchemaHistory] 2025-06-13 06:25:41,531 INFO || KafkaSchemaHistory Producer config: {enable.idempotence=false, value.serializer=org.apache.kafka.common.serialization.StringSerializer, batch.size=32768, bootstrap.servers=kafka:9092, max.in.flight.requests.per.connection=1, buffer.memory=1048576, key.serializer=org.apache.kafka.common.serialization.StringSerializer, client.id=dbserver2-schemahistory} [io.debezium.storage.kafka.history.KafkaSchemaHistory] 2025-06-13 06:25:41,531 INFO || Requested thread factory for component MySqlConnector, id = dbserver2 named = db-history-config-check [io.debezium.util.Threads] 2025-06-13 06:25:41,532 INFO || HHH000130: Instantiating explicit connection provider: org.hibernate.c3p0.internal.C3P0ConnectionProvider [org.hibernate.engine.jdbc.connections.internal.ConnectionProviderInitiator] 2025-06-13 06:25:41,532 INFO || HHH000026: Second-level cache disabled [org.hibernate.cache.internal.RegionFactoryInitiator] 2025-06-13 06:25:41,532 INFO || HHH010002: C3P0 using driver: null at URL: jdbc:mysql://10.0.0.142:3306/sink [org.hibernate.orm.connections.pooling.c3p0] 2025-06-13 06:25:41,532 INFO || HHH10001001: Connection properties: {password=****, user=repl_t} [org.hibernate.orm.connections.pooling.c3p0] 2025-06-13 06:25:41,532 INFO || HHH10001003: Autocommit mode: false [org.hibernate.orm.connections.pooling.c3p0] 2025-06-13 06:25:41,532 WARN || HHH10001006: No JDBC Driver class was specified by property `jakarta.persistence.jdbc.driver`, `hibernate.driver` or `javax.persistence.jdbc.driver` [org.hibernate.orm.connections.pooling.c3p0] 2025-06-13 06:25:41,533 INFO || HHH000130: Instantiating explicit connection provider: org.hibernate.c3p0.internal.C3P0ConnectionProvider [org.hibernate.engine.jdbc.connections.internal.ConnectionProviderInitiator] 2025-06-13 06:25:41,533 INFO || HHH010002: C3P0 using driver: null at URL: jdbc:mysql://10.0.0.142:3306/sink [org.hibernate.orm.connections.pooling.c3p0] 2025-06-13 06:25:41,533 INFO || HHH10001001: Connection properties: {password=****, user=repl_t} [org.hibernate.orm.connections.pooling.c3p0] 2025-06-13 06:25:41,533 INFO || HHH10001003: Autocommit mode: false [org.hibernate.orm.connections.pooling.c3p0] 2025-06-13 06:25:41,533 WARN || HHH10001006: No JDBC Driver class was specified by property `jakarta.persistence.jdbc.driver`, `hibernate.driver` or `javax.persistence.jdbc.driver` [org.hibernate.orm.connections.pooling.c3p0] 2025-06-13 06:25:41,533 INFO || These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. 
[org.apache.kafka.clients.consumer.ConsumerConfig] 2025-06-13 06:25:41,534 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:25:41,534 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:25:41,534 INFO || Kafka startTimeMs: 1749795941533 [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:25:41,534 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Subscribed to pattern: 'dbserver2.testcdc.*' [org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer] 2025-06-13 06:25:41,535 ERROR || The 'collection.name.format' value is invalid: Warning: Using deprecated config option "table.name.format". [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || Starting JdbcSinkConnectorConfig with configuration: [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || connector.class = io.debezium.connector.jdbc.JdbcSinkConnector [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || table.name.format = ${topic} [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || primary.key.mode = record_key [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || connection.password = ******** [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || tasks.max = 3 [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || connection.username = repl_t [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || quote.identifiers = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || heartbeat.interval.ms = 3000 [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || topics.regex = dbserver2.testcdc.* [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || autoReconnect = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || delete.enabled = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || auto.evolve = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || schema.evolution = basic [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || task.class = io.debezium.connector.jdbc.JdbcSinkConnectorTask [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || name = mysql-sink-connector [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || log.level = DEBUG [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || auto.create = true [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || value.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || connection.url = jdbc:mysql://10.0.0.142:3306/sink [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || insert.mode = upsert [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || key.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || pk.mode = record_key [io.debezium.connector.jdbc.JdbcSinkConnectorTask] 2025-06-13 06:25:41,535 INFO || pk.fields = id 
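The ERROR-level entries at 06:25:41,529 and 06:25:41,535 flag "table.name.format" as deprecated; newer Debezium JDBC sink releases expect "collection.name.format" instead. One possible remediation, sketched under the same assumptions as above (the worker address is hypothetical, and the untouched properties are elided rather than repeated), is to replace the running connector's configuration via PUT:

    import json
    import requests

    # PUT /connectors/<name>/config replaces the whole configuration in place;
    # only the renamed key is spelled out here.
    config = {
        "connector.class": "io.debezium.connector.jdbc.JdbcSinkConnector",
        "collection.name.format": "${topic}",  # replaces deprecated table.name.format
        # ... remaining properties as in the dump above ...
    }
    requests.put(
        "http://localhost:8083/connectors/mysql-sink-connector/config",  # assumed address
        headers={"Content-Type": "application/json"},
        data=json.dumps(config),
    ).raise_for_status()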
2025-06-13 06:25:41,537 INFO || HHH000026: Second-level cache disabled [org.hibernate.cache.internal.RegionFactoryInitiator]
2025-06-13 06:25:41,538 INFO || HHH000130: Instantiating explicit connection provider: org.hibernate.c3p0.internal.C3P0ConnectionProvider [org.hibernate.engine.jdbc.connections.internal.ConnectionProviderInitiator]
2025-06-13 06:25:41,538 INFO || HHH010002: C3P0 using driver: null at URL: jdbc:mysql://10.0.0.142:3306/sink [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:25:41,539 INFO || HHH10001001: Connection properties: {password=****, user=repl_t} [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:25:41,539 INFO || [Worker clientId=connect-10.89.0.4:8083, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder]
2025-06-13 06:25:41,539 INFO || HHH10001003: Autocommit mode: false [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:25:41,539 WARN || HHH10001006: No JDBC Driver class was specified by property `jakarta.persistence.jdbc.driver`, `hibernate.driver` or `javax.persistence.jdbc.driver` [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:25:41,539 INFO || ProducerConfig values: acks = -1 auto.include.jmx.reporter = true batch.size = 32768 bootstrap.servers = [kafka:9092] buffer.memory = 1048576 client.dns.lookup = use_all_dns_ips client.id = dbserver2-schemahistory compression.gzip.level = -1 compression.lz4.level = 9 compression.type = none compression.zstd.level = 3 connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = false enable.metrics.push = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 1 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer [org.apache.kafka.clients.producer.ProducerConfig]
2025-06-13 06:25:41,539 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:25:41,546 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,546 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,546 INFO || Kafka startTimeMs: 1749795941546 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,549 INFO || [Producer clientId=dbserver2-schemahistory] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata]
2025-06-13 06:25:41,572 INFO || HHH10001007: JDBC isolation level: [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:25:41,574 INFO || Using 'SHOW BINARY LOG STATUS' to get binary log status [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:25:41,580 INFO || Closing connection before starting schema recovery [io.debezium.connector.mysql.MySqlConnectorTask]
2025-06-13 06:25:41,583 INFO || HHH10001007: JDBC isolation level: [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:25:41,587 INFO || HHH10001007: JDBC isolation level: [org.hibernate.orm.connections.pooling.c3p0]
2025-06-13 06:25:41,590 INFO || Initializing c3p0 pool... com.mchange.v2.c3p0.PoolBackedDataSource@2fbd1942 [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@4a18078 [ acquireIncrement -> 32, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, debugUnreturnedConnectionStackTraces -> false, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, identityToken -> 2vcydgbbsue9u6e525yw|4c602009, idleConnectionTestPeriod -> 0, initialPoolSize -> 5, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 32, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource@589c46d8 [ description -> null, driverClass -> null, factoryClassLocation -> null, forceUseNamedDriverClass -> false, identityToken -> 2vcydgbbsue9u6e525yw|2098aab5, jdbcUrl -> jdbc:mysql://10.0.0.142:3306/sink, properties -> {password=******, user=******} ], preferredTestQuery -> null, privilegeSpawnedThreads -> false, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> null, extensions -> {}, factoryClassLocation -> null, identityToken -> 2vcydgbbsue9u6e525yw|6c908dc9, numHelperThreads -> 3 ] [com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource]
2025-06-13 06:25:41,597 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
2025-06-13 06:25:41,598 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = dbserver2-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = dbserver2-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-13 06:25:41,598 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:25:41,601 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,601 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,601 INFO || Kafka startTimeMs: 1749795941601 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,603 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata]
2025-06-13 06:25:41,606 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,606 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,607 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:41,607 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:41,607 INFO || Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:41,607 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:41,609 INFO || App info kafka.consumer for dbserver2-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,609 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = dbserver2-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = dbserver2-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-13 06:25:41,610 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:25:41,614 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,614 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,614 INFO || Kafka startTimeMs: 1749795941614 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,614 INFO || Creating thread debezium-mysqlconnector-dbserver2-db-history-config-check [io.debezium.util.Threads]
2025-06-13 06:25:41,618 INFO || Initializing c3p0 pool... com.mchange.v2.c3p0.PoolBackedDataSource@72d92ff2 [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@2fb634c [ acquireIncrement -> 32, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, debugUnreturnedConnectionStackTraces -> false, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, identityToken -> 2vcydgbbsue9u6e525yw|2b8cc35f, idleConnectionTestPeriod -> 0, initialPoolSize -> 5, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 32, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource@e3267018 [ description -> null, driverClass -> null, factoryClassLocation -> null, forceUseNamedDriverClass -> false, identityToken -> 2vcydgbbsue9u6e525yw|7fb159d2, jdbcUrl -> jdbc:mysql://10.0.0.142:3306/sink, properties -> {password=******, user=******} ], preferredTestQuery -> null, privilegeSpawnedThreads -> false, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> null, extensions -> {}, factoryClassLocation -> null, identityToken -> 2vcydgbbsue9u6e525yw|264f22, numHelperThreads -> 3 ] [com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource]
2025-06-13 06:25:41,643 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata]
2025-06-13 06:25:41,645 INFO || AdminClientConfig values: auto.include.jmx.reporter = true bootstrap.controllers = [] bootstrap.servers = [kafka:9092] client.dns.lookup = use_all_dns_ips client.id = dbserver2-schemahistory-topic-check connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 enable.metrics.push = true metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS [org.apache.kafka.clients.admin.AdminClientConfig]
2025-06-13 06:25:41,647 INFO || These configurations '[enable.idempotence, value.serializer, batch.size, max.in.flight.requests.per.connection, buffer.memory, key.serializer]' were supplied but are not used yet. [org.apache.kafka.clients.admin.AdminClientConfig]
2025-06-13 06:25:41,647 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,647 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,647 INFO || Kafka startTimeMs: 1749795941647 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,647 INFO || Initializing c3p0 pool... com.mchange.v2.c3p0.PoolBackedDataSource@dc7cafb8 [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@6f192bfa [ acquireIncrement -> 32, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, debugUnreturnedConnectionStackTraces -> false, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, identityToken -> 2vcydgbbsue9u6e525yw|23e3f78f, idleConnectionTestPeriod -> 0, initialPoolSize -> 5, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 32, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource@84b468d [ description -> null, driverClass -> null, factoryClassLocation -> null, forceUseNamedDriverClass -> false, identityToken -> 2vcydgbbsue9u6e525yw|3e1dc889, jdbcUrl -> jdbc:mysql://10.0.0.142:3306/sink, properties -> {password=******, user=******} ], preferredTestQuery -> null, privilegeSpawnedThreads -> false, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> null, extensions -> {}, factoryClassLocation -> null, identityToken -> 2vcydgbbsue9u6e525yw|1e04392d, numHelperThreads -> 3 ] [com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource]
2025-06-13 06:25:41,650 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,650 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,651 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:41,651 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:41,651 INFO || Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:41,651 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:41,653 INFO || App info kafka.consumer for dbserver2-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,660 INFO || HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) [org.hibernate.engine.transaction.jta.platform.internal.JtaPlatformInitiator]
2025-06-13 06:25:41,662 INFO || Using dialect io.debezium.connector.jdbc.dialect.mysql.MySqlDatabaseDialect [io.debezium.connector.jdbc.dialect.DatabaseDialectResolver]
2025-06-13 06:25:41,681 INFO || Database TimeZone: SYSTEM (global), SYSTEM (system) [io.debezium.connector.jdbc.dialect.GeneralDatabaseDialect]
2025-06-13 06:25:41,701 INFO || HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) [org.hibernate.engine.transaction.jta.platform.internal.JtaPlatformInitiator]
2025-06-13 06:25:41,702 INFO || Using dialect io.debezium.connector.jdbc.dialect.mysql.MySqlDatabaseDialect [io.debezium.connector.jdbc.dialect.DatabaseDialectResolver]
2025-06-13 06:25:41,708 INFO || Database version 9.1.0 [io.debezium.connector.jdbc.JdbcChangeEventSink]
2025-06-13 06:25:41,708 INFO || WorkerSinkTask{id=mysql-sink-connector-2} Sink task finished initialization and start [org.apache.kafka.connect.runtime.WorkerSinkTask]
2025-06-13 06:25:41,710 INFO || WorkerSinkTask{id=mysql-sink-connector-2} Executing sink task [org.apache.kafka.connect.runtime.WorkerSinkTask]
2025-06-13 06:25:41,710 INFO || GTID Set retained: '695f5ece-137c-11f0-b4b4-020017156476:1-5138208' [io.debezium.connector.binlog.jdbc.BinlogConnectorConnection]
2025-06-13 06:25:41,711 INFO || The current GTID set '695f5ece-137c-11f0-b4b4-020017156476:1-5138208' does not contain the GTID set '695f5ece-137c-11f0-b4b4-020017156476:1-5138208' required by the connector [io.debezium.connector.binlog.jdbc.BinlogConnectorConnection]
2025-06-13 06:25:41,714 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata]
2025-06-13 06:25:41,714 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Discovered group coordinator 10.89.0.3:9092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,722 INFO || Database TimeZone: SYSTEM (global), SYSTEM (system) [io.debezium.connector.jdbc.dialect.GeneralDatabaseDialect]
2025-06-13 06:25:41,724 INFO || Database version 9.1.0 [io.debezium.connector.jdbc.JdbcChangeEventSink]
2025-06-13 06:25:41,724 INFO || WorkerSinkTask{id=mysql-sink-connector-1} Sink task finished initialization and start [org.apache.kafka.connect.runtime.WorkerSinkTask]
2025-06-13 06:25:41,725 INFO || WorkerSinkTask{id=mysql-sink-connector-1} Executing sink task [org.apache.kafka.connect.runtime.WorkerSinkTask]
2025-06-13 06:25:41,727 INFO || Server has already purged '695f5ece-137c-11f0-b4b4-020017156476:1-5138187' GTIDs [io.debezium.connector.binlog.jdbc.BinlogConnectorConnection]
2025-06-13 06:25:41,728 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,732 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata]
2025-06-13 06:25:41,732 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Discovered group coordinator 10.89.0.3:9092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,738 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,743 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Request joining group due to: need to re-join with the given member-id: connector-consumer-mysql-sink-connector-2-69611530-7420-49d6-b76a-e0702c430075 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
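The GTID messages above show the source connector comparing its recorded GTID set against the server's executed and purged sets before resuming from the binlog; the 'Server has already purged' line indicates GTIDs up to 1-5138187 are no longer on the server. The same containment check can be reproduced directly against MySQL with the built-in GTID_SUBSET() function. A minimal sketch, assuming the mysql-connector-python package and a placeholder password (the log redacts the real one):

    import mysql.connector  # assumption: mysql-connector-python is installed

    conn = mysql.connector.connect(
        host="10.0.0.142", port=3306, user="repl_t",
        password="<redacted>",  # placeholder; the real value is not in the log
        database="sink",
    )
    cur = conn.cursor()

    # GTID set the connector recorded, taken from the 'GTID Set retained' line.
    required = "695f5ece-137c-11f0-b4b4-020017156476:1-5138208"

    # GTID_SUBSET(a, b) returns 1 when every GTID in a is contained in b.
    cur.execute(
        "SELECT GTID_SUBSET(%s, @@GLOBAL.gtid_executed), @@GLOBAL.gtid_purged",
        (required,),
    )
    contained, purged = cur.fetchone()
    print("contained in gtid_executed:", bool(contained))
    print("already purged on server:", purged)
    conn.close()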
2025-06-13 06:25:41,743 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Request joining group due to: need to re-join with the given member-id: connector-consumer-mysql-sink-connector-1-ce209f88-357a-4d4e-9b37-eac3cfc6dfe1 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,743 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,744 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,745 INFO || GTIDs known by the server but not processed yet '', for replication are available only '' [io.debezium.connector.binlog.jdbc.BinlogConnectorConnection]
2025-06-13 06:25:41,746 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Successfully joined group with generation Generation{generationId=4, memberId='connector-consumer-mysql-sink-connector-2-69611530-7420-49d6-b76a-e0702c430075', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,746 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Successfully joined group with generation Generation{generationId=4, memberId='connector-consumer-mysql-sink-connector-1-ce209f88-357a-4d4e-9b37-eac3cfc6dfe1', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,748 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = dbserver2-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = dbserver2-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-13 06:25:41,752 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:25:41,751 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Finished assignment for group at generation 4: {connector-consumer-mysql-sink-connector-1-ce209f88-357a-4d4e-9b37-eac3cfc6dfe1=Assignment(partitions=[dbserver2.testcdc.employee-0]), connector-consumer-mysql-sink-connector-2-69611530-7420-49d6-b76a-e0702c430075=Assignment(partitions=[])} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,758 INFO || HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) [org.hibernate.engine.transaction.jta.platform.internal.JtaPlatformInitiator]
2025-06-13 06:25:41,759 INFO || Using dialect io.debezium.connector.jdbc.dialect.mysql.MySqlDatabaseDialect [io.debezium.connector.jdbc.dialect.DatabaseDialectResolver]
2025-06-13 06:25:41,763 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Successfully synced group in generation Generation{generationId=4, memberId='connector-consumer-mysql-sink-connector-2-69611530-7420-49d6-b76a-e0702c430075', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,763 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Notifying assignor about the new Assignment(partitions=[]) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,764 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Adding newly assigned partitions: [org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker]
2025-06-13 06:25:41,764 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Successfully synced group in generation Generation{generationId=4, memberId='connector-consumer-mysql-sink-connector-1-ce209f88-357a-4d4e-9b37-eac3cfc6dfe1', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,764 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Notifying assignor about the new Assignment(partitions=[dbserver2.testcdc.employee-0]) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,764 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Adding newly assigned partitions: dbserver2.testcdc.employee-0 [org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker]
2025-06-13 06:25:41,763 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,765 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,765 INFO || Kafka startTimeMs: 1749795941763 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,771 INFO || Database TimeZone: SYSTEM (global), SYSTEM (system) [io.debezium.connector.jdbc.dialect.GeneralDatabaseDialect]
2025-06-13 06:25:41,772 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Found no committed offset for partition dbserver2.testcdc.employee-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,773 INFO || Database schema history topic 'schema-changes.testcdc' has correct settings [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2025-06-13 06:25:41,773 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata]
2025-06-13 06:25:41,773 INFO || Database version 9.1.0 [io.debezium.connector.jdbc.JdbcChangeEventSink]
2025-06-13 06:25:41,773 INFO || WorkerSinkTask{id=mysql-sink-connector-0} Sink task finished initialization and start [org.apache.kafka.connect.runtime.WorkerSinkTask]
2025-06-13 06:25:41,773 INFO || WorkerSinkTask{id=mysql-sink-connector-0} Executing sink task [org.apache.kafka.connect.runtime.WorkerSinkTask]
2025-06-13 06:25:41,773 INFO || App info kafka.admin.client for dbserver2-schemahistory-topic-check unregistered [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,776 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,776 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,777 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:41,777 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:41,777 INFO || Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter [org.apache.kafka.common.metrics.Metrics]
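The 'schema-changes.testcdc' topic validated above is Debezium's schema history topic, which the source connector replays during the schema history recovery that follows. A minimal sketch for inspecting that topic, assuming the kafka-python package (any Kafka client would do) and reading without a consumer group so the dbserver2-schemahistory group Debezium uses is left untouched:

    from kafka import KafkaConsumer  # assumption: kafka-python is installed

    consumer = KafkaConsumer(
        "schema-changes.testcdc",
        bootstrap_servers="kafka:9092",
        group_id=None,                 # no group: read-only peek, no offsets committed
        auto_offset_reset="earliest",
        enable_auto_commit=False,
        consumer_timeout_ms=5000,      # stop iterating once the topic is drained
        value_deserializer=lambda b: b.decode("utf-8"),
    )
    for record in consumer:
        # Each record is typically a JSON-encoded DDL history entry.
        print(record.offset, record.value[:120])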
Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:41,777 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:41,778 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Resetting offset for partition dbserver2.testcdc.employee-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2025-06-13 06:25:41,778 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:41,778 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:41,778 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:41,780 INFO || App info kafka.consumer for dbserver2-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:25:41,780 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = dbserver2-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = dbserver2-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 
10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig] 2025-06-13 06:25:41,780 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector] 2025-06-13 06:25:41,783 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:25:41,783 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:25:41,783 INFO || Kafka startTimeMs: 1749795941783 [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:25:41,792 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata] 2025-06-13 06:25:41,792 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Discovered group coordinator 10.89.0.3:9092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:25:41,801 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata] 2025-06-13 06:25:41,802 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:25:41,805 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:25:41,806 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:25:41,806 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Request joining group due to: need to re-join with the given member-id: connector-consumer-mysql-sink-connector-0-cfdac19a-cdbc-4726-adfc-39843a7ec39f [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 2025-06-13 06:25:41,806 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] 
2025-06-13 06:25:41,806 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:41,806 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:41,806 INFO || Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:41,807 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2025-06-13 06:25:41,809 INFO || App info kafka.consumer for dbserver2-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser] 2025-06-13 06:25:41,809 INFO || Started database schema history recovery [io.debezium.relational.history.SchemaHistoryMetrics] 2025-06-13 06:25:41,811 INFO || ConsumerConfig values: allow.auto.create.topics = true auto.commit.interval.ms = 5000 auto.include.jmx.reporter = true auto.offset.reset = earliest bootstrap.servers = [kafka:9092] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = dbserver2-schemahistory client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = false enable.metrics.push = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = dbserver2-schemahistory group.instance.id = null group.protocol = classic group.remote.assignor = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metadata.recovery.strategy = none metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.max.ms = 1000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = GSSAPI sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.header.urlencode = false sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 10000 
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer [org.apache.kafka.clients.consumer.ConsumerConfig]
2025-06-13 06:25:41,811 INFO || initializing Kafka metrics collector [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
2025-06-13 06:25:41,815 INFO || Kafka version: 3.9.0 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,815 INFO || Kafka commitId: a60e31147e6b01ee [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,815 INFO || Kafka startTimeMs: 1749795941815 [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:41,815 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Subscribed to topic(s): schema-changes.testcdc [org.apache.kafka.clients.consumer.internals.ClassicKafkaConsumer]
2025-06-13 06:25:41,818 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Cluster ID: NNtE4sSuQo-kXgAZqjN_KA [org.apache.kafka.clients.Metadata]
2025-06-13 06:25:41,823 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Discovered group coordinator 10.89.0.3:9092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,824 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,836 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Request joining group due to: need to re-join with the given member-id: dbserver2-schemahistory-1719fe86-f1d9-4b6a-a341-9282fcb41bc2 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,836 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,838 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Successfully joined group with generation Generation{generationId=1, memberId='dbserver2-schemahistory-1719fe86-f1d9-4b6a-a341-9282fcb41bc2', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,838 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Finished assignment for group at generation 1: {dbserver2-schemahistory-1719fe86-f1d9-4b6a-a341-9282fcb41bc2=Assignment(partitions=[schema-changes.testcdc-0])} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,840 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Successfully synced group in generation Generation{generationId=1, memberId='dbserver2-schemahistory-1719fe86-f1d9-4b6a-a341-9282fcb41bc2', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,840 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Notifying assignor about the new Assignment(partitions=[schema-changes.testcdc-0]) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,840 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Adding newly assigned partitions: schema-changes.testcdc-0 [org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker]
2025-06-13 06:25:41,841 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Found no committed offset for partition schema-changes.testcdc-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:41,842 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Resetting offset for partition schema-changes.testcdc-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:25:42,075 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Revoke previously assigned partitions schema-changes.testcdc-0 [org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker]
2025-06-13 06:25:42,075 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Member dbserver2-schemahistory-1719fe86-f1d9-4b6a-a341-9282fcb41bc2 sending LeaveGroup request to coordinator 10.89.0.3:9092 (id: 2147483646 rack: null) due to the consumer is being closed [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:42,075 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:42,075 INFO || [Consumer clientId=dbserver2-schemahistory, groupId=dbserver2-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:42,374 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:42,374 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:42,375 INFO || Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:42,375 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2025-06-13 06:25:42,376 INFO || App info kafka.consumer for dbserver2-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2025-06-13 06:25:42,376 INFO || Finished database schema history recovery of 324 change(s) in 567 ms [io.debezium.relational.history.SchemaHistoryMetrics]
2025-06-13 06:25:42,377 INFO || Reconnecting after finishing schema recovery [io.debezium.connector.mysql.MySqlConnectorTask]
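The dbserver2-schemahistory consumer exists only for this recovery pass: it reads the DDL history topic from offset 0, replays all 324 recorded changes, and leaves the group. The same history can be inspected by hand with the console consumer; a sketch, again assuming the CLI scripts under /kafka/bin:

    $ /kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 \
          --topic schema-changes.testcdc --from-beginning --timeout-ms 5000
    # Each record is a JSON envelope carrying one DDL statement plus the
    # binlog position at which Debezium observed it.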
2025-06-13 06:25:42,378 INFO || Found previous offset BinlogOffsetContext{sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=BinlogSourceInfo{currentGtid='null', currentBinlogFilename='binary-log.005358', currentBinlogPosition=768, currentRowNumber=0, serverId=0, sourceTime=null, threadId=-1, currentQuery='null', tableIds=[], databaseName='null'}, snapshotCompleted=false, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet='695f5ece-137c-11f0-b4b4-020017156476:1-5138208', currentGtidSet='695f5ece-137c-11f0-b4b4-020017156476:1-5138208', restartBinlogFilename='binary-log.005358', restartBinlogPosition=768, restartRowsToSkip=0, restartEventsToSkip=0, currentEventLengthInBytes=0, inTransaction=false, transactionId='null', incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]} [io.debezium.connector.mysql.MySqlConnectorTask]
2025-06-13 06:25:42,379 INFO || Requested thread factory for component MySqlConnector, id = dbserver2 named = SignalProcessor [io.debezium.util.Threads]
2025-06-13 06:25:42,380 INFO || Requested thread factory for component MySqlConnector, id = dbserver2 named = change-event-source-coordinator [io.debezium.util.Threads]
2025-06-13 06:25:42,380 INFO || Requested thread factory for component MySqlConnector, id = dbserver2 named = blocking-snapshot [io.debezium.util.Threads]
2025-06-13 06:25:42,380 INFO || Creating thread debezium-mysqlconnector-dbserver2-change-event-source-coordinator [io.debezium.util.Threads]
2025-06-13 06:25:42,380 INFO || WorkerSourceTask{id=employee-connector-0} Source task finished initialization and start [org.apache.kafka.connect.runtime.AbstractWorkerSourceTask]
2025-06-13 06:25:42,382 INFO MySQL|dbserver2|snapshot Metrics registered [io.debezium.pipeline.ChangeEventSourceCoordinator]
2025-06-13 06:25:42,382 INFO MySQL|dbserver2|snapshot Context created [io.debezium.pipeline.ChangeEventSourceCoordinator]
2025-06-13 06:25:42,382 INFO MySQL|dbserver2|snapshot A previous offset indicating a completed snapshot has been found. [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:25:42,383 INFO MySQL|dbserver2|snapshot Snapshot mode is set to ALWAYS, not checking exiting offset. [io.debezium.snapshot.mode.AlwaysSnapshotter]
2025-06-13 06:25:42,383 INFO MySQL|dbserver2|snapshot According to the connector configuration both schema and data will be snapshot. [io.debezium.relational.RelationalSnapshotChangeEventSource]
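These entries pin down the source connector's effective settings: logical name dbserver2, snapshot mode always (every restart re-runs the snapshot, which is why a completed-snapshot offset is found and then deliberately ignored), and DDL history kept in schema-changes.testcdc. A registration request consistent with what the log shows might look like the sketch below; it is a reconstruction, not the config actually used here, and the credentials and database.server.id are placeholders:

    $ curl -X POST -H "Content-Type: application/json" http://10.89.0.4:8083/connectors -d '{
        "name": "employee-connector",
        "config": {
          "connector.class": "io.debezium.connector.mysql.MySqlConnector",
          "database.hostname": "10.0.1.6",
          "database.port": "3306",
          "database.user": "debezium",
          "database.password": "<secret>",
          "database.server.id": "184054",
          "topic.prefix": "dbserver2",
          "database.include.list": "testcdc",
          "table.include.list": "testcdc.employee",
          "snapshot.mode": "always",
          "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
          "schema.history.internal.kafka.topic": "schema-changes.testcdc"
        }
      }'

Note that although only testcdc.employee is snapshotted for data, the schema of every table on the server is captured below; that is Debezium's default behavior when the schema history is not restricted to captured tables.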
2025-06-13 06:25:42,383 INFO MySQL|dbserver2|snapshot Snapshot step 1 - Preparing [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:25:42,384 INFO MySQL|dbserver2|snapshot Snapshot step 2 - Determining captured tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:25:42,384 INFO MySQL|dbserver2|snapshot Read list of available databases [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,385 INFO MySQL|dbserver2|snapshot list of available databases is: [afc, airport, airportdb, information_schema, lakehouse, llm, mydb, mysql, mysql_audit, mysql_option, mysql_task_management, pawn, pawn2, pawn3, performance_schema, ryan, sakilacafe, sink, sys, testcdc, thaidb, tpch, wordpress, wp] [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,386 INFO MySQL|dbserver2|snapshot Read list of available tables in each database [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,407 INFO MySQL|dbserver2|snapshot snapshot continuing with database(s): [testcdc] [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,407 - 06:25:42,409 INFO MySQL|dbserver2|snapshot Adding table ... to the list of capture schema tables (133 entries, one per table, in log order: mysql_audit.audit_log_filter, testcdc.employee, sakilacafe.product, wordpress.wp_actionscheduler_groups, airport.baggage, wordpress.wp_woocommerce_downloadable_product_permissions, pawn2.customers, airportdb.flight_log, afc.budget, wordpress.wp_users, llm.ryan_vector, wordpress.wp_postmeta, wp.wp_usermeta, pawn2.forfeited_items, mysql_task_management.task_impl, pawn.branches, pawn3.branches, llm.b2c6eba9f6e15b53039ed17e740aedce, wordpress.wp_actionscheduler_actions, wordpress.wp_wc_category_lookup, sink.dbserver2_testcdc_employee, wordpress.wp_commentmeta, airportdb.airplane, wordpress.wp_wc_admin_note_actions, airport.flights, pawn2.repayments, wordpress.wp_wc_rate_limits, airportdb.airline, pawn3.items, llm.FAQ_PCB, pawn3.loans, pawn3.repayments, pawn.forfeited_items, mysql_option.option_usage, tpch.customer, wordpress.wp_woocommerce_payment_tokenmeta, airportdb.airport_reachable, wordpress.wp_wc_product_download_directories, wordpress.wp_wt_iew_mapping_template, sakilacafe.branch, pawn2.items, wordpress.wp_actionscheduler_logs, llm.t1, mysql_task_management.task_id_impl, airportdb.employee, wp.wp_woocommerce_order_itemmeta, mydb.web_embeddings_trx, afc.class, wordpress.wp_posts, wordpress.wp_terms, mysql_task_management.task_log_impl, wordpress.wp_jetpack_sync_queue, wp.wp_woocommerce_order_items, wordpress.wp_wc_orders, wordpress.wp_wc_admin_notes, llm.policy, airportdb.weatherdata, wordpress.wp_wt_iew_action_history, wordpress.wp_comments, llm.web_embeddings_trx, wordpress.wp_links, wordpress.wp_termmeta, airportdb.airplane_type, tpch.lineitem, tpch.region, wordpress.wp_woocommerce_api_keys, pawn.repayments, wordpress.wp_wc_product_attributes_lookup, wordpress.wp_wc_order_tax_lookup, llm.web_embeddings, wordpress.wp_wc_order_coupon_lookup, wordpress.wp_wc_order_product_lookup, wordpress.wp_wc_reserved_stock, tpch.part, airport.passengers, wordpress.wp_wc_order_addresses, wordpress.wp_woocommerce_log, pawn2.loans, pawn2.branches, ryan.t2, wordpress.wp_term_taxonomy, airport.cargo, ryan.t1, wordpress.wp_woocommerce_sessions, wordpress.wp_wc_order_operational_data, llm.shadab, tpch.partsupp, sakilacafe.sales, wordpress.wp_woocommerce_tax_rate_locations, wordpress.wp_wc_customer_lookup, mydb.web_embeddings, wordpress.wp_actionscheduler_claims, wordpress.wp_woocommerce_payment_tokens, airportdb.booking, wordpress.wp_usermeta, tpch.orders, wp.wp_wc_orders, pawn3.forfeited_items, airportdb.passengerdetails, wp.wp_users, wp.wp_wc_orders_addresses, wordpress.wp_woocommerce_attribute_taxonomies, airportdb.passenger, wordpress.wp_wc_download_log, wordpress.wp_wc_orders_meta, airportdb.flight, wordpress.wp_woocommerce_shipping_zone_locations, wordpress.wp_wc_product_meta_lookup, airportdb.airport_geo, wordpress.wp_options, wordpress.wp_woocommerce_shipping_zones, tpch.nation, wordpress.wp_woocommerce_order_itemmeta, pawn.customers, wordpress.wp_term_relationships, wp.wp_wc_orders_operational_data, wordpress.wp_wc_tax_rate_classes, wordpress.wp_wc_webhooks, tpch.supplier, wordpress.wp_woocommerce_shipping_zone_methods, pawn3.customers, wordpress.wp_woocommerce_order_items, lakehouse.test, wordpress.wp_woocommerce_tax_rates, pawn.loans, wp.wp_wc_customer_lookup, airportdb.airport, wp.wp_postmeta, mysql_audit.audit_log_user, wordpress.wp_wc_order_stats, airportdb.flightschedule, pawn.items, wp.wp_posts) [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:25:42,410 INFO MySQL|dbserver2|snapshot Created connection pool with 1 threads [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:25:42,410 INFO MySQL|dbserver2|snapshot Snapshot step 3 - Locking captured tables [testcdc.employee] [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:25:42,411 INFO MySQL|dbserver2|snapshot Flush and obtain global read lock to prevent writes to database [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,413 INFO MySQL|dbserver2|snapshot Snapshot step 4 - Determining snapshot offset [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:25:42,413 INFO MySQL|dbserver2|snapshot Snapshot step 5 - Reading structure of captured tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:25:42,413 INFO MySQL|dbserver2|snapshot All eligible tables schema should be captured, capturing: [afc.budget, afc.class, airport.baggage, airport.cargo, airport.flights, airport.passengers, airportdb.airline, airportdb.airplane, airportdb.airplane_type, airportdb.airport, airportdb.airport_geo, airportdb.airport_reachable, airportdb.booking, airportdb.employee, airportdb.flight, airportdb.flight_log, airportdb.flightschedule, airportdb.passenger, airportdb.passengerdetails, airportdb.weatherdata, lakehouse.test, llm.FAQ_PCB, llm.b2c6eba9f6e15b53039ed17e740aedce, llm.policy, llm.ryan_vector, llm.shadab, llm.t1, llm.web_embeddings, llm.web_embeddings_trx, mydb.web_embeddings, mydb.web_embeddings_trx, mysql_audit.audit_log_filter, mysql_audit.audit_log_user, mysql_option.option_usage, mysql_task_management.task_id_impl, mysql_task_management.task_impl, mysql_task_management.task_log_impl, pawn.branches, pawn.customers, pawn.forfeited_items, pawn.items, pawn.loans, pawn.repayments, pawn2.branches, pawn2.customers, pawn2.forfeited_items, pawn2.items, pawn2.loans, pawn2.repayments, pawn3.branches, pawn3.customers, pawn3.forfeited_items, pawn3.items, pawn3.loans, pawn3.repayments, ryan.t1, ryan.t2, sakilacafe.branch, sakilacafe.product, sakilacafe.sales, sink.dbserver2_testcdc_employee, testcdc.employee, tpch.customer, tpch.lineitem, tpch.nation, tpch.orders, tpch.part, tpch.partsupp, tpch.region, tpch.supplier, wordpress.wp_actionscheduler_actions, wordpress.wp_actionscheduler_claims, wordpress.wp_actionscheduler_groups, wordpress.wp_actionscheduler_logs, wordpress.wp_commentmeta, wordpress.wp_comments, wordpress.wp_jetpack_sync_queue, wordpress.wp_links, wordpress.wp_options, wordpress.wp_postmeta, wordpress.wp_posts, wordpress.wp_term_relationships, wordpress.wp_term_taxonomy, wordpress.wp_termmeta, wordpress.wp_terms, wordpress.wp_usermeta, wordpress.wp_users, wordpress.wp_wc_admin_note_actions, wordpress.wp_wc_admin_notes, wordpress.wp_wc_category_lookup, wordpress.wp_wc_customer_lookup, wordpress.wp_wc_download_log, wordpress.wp_wc_order_addresses, wordpress.wp_wc_order_coupon_lookup, wordpress.wp_wc_order_operational_data, wordpress.wp_wc_order_product_lookup, wordpress.wp_wc_order_stats, wordpress.wp_wc_order_tax_lookup, wordpress.wp_wc_orders, wordpress.wp_wc_orders_meta, wordpress.wp_wc_product_attributes_lookup, wordpress.wp_wc_product_download_directories, wordpress.wp_wc_product_meta_lookup, wordpress.wp_wc_rate_limits, wordpress.wp_wc_reserved_stock, wordpress.wp_wc_tax_rate_classes, wordpress.wp_wc_webhooks, wordpress.wp_woocommerce_api_keys, wordpress.wp_woocommerce_attribute_taxonomies, wordpress.wp_woocommerce_downloadable_product_permissions, wordpress.wp_woocommerce_log, wordpress.wp_woocommerce_order_itemmeta, wordpress.wp_woocommerce_order_items, wordpress.wp_woocommerce_payment_tokenmeta, wordpress.wp_woocommerce_payment_tokens, wordpress.wp_woocommerce_sessions, wordpress.wp_woocommerce_shipping_zone_locations, wordpress.wp_woocommerce_shipping_zone_methods, wordpress.wp_woocommerce_shipping_zones, wordpress.wp_woocommerce_tax_rate_locations, wordpress.wp_woocommerce_tax_rates, wordpress.wp_wt_iew_action_history, wordpress.wp_wt_iew_mapping_template, wp.wp_postmeta, wp.wp_posts, wp.wp_usermeta, wp.wp_users, wp.wp_wc_customer_lookup, wp.wp_wc_orders, wp.wp_wc_orders_addresses, wp.wp_wc_orders_operational_data, wp.wp_woocommerce_order_itemmeta, wp.wp_woocommerce_order_items] [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,445 INFO MySQL|dbserver2|snapshot Reading structure of database 'afc' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,451 INFO MySQL|dbserver2|snapshot Reading structure of database 'airport' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,459 INFO MySQL|dbserver2|snapshot Reading structure of database 'airportdb' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,526 INFO MySQL|dbserver2|snapshot Reading structure of database 'lakehouse' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,529 INFO MySQL|dbserver2|snapshot Reading structure of database 'llm' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,545 INFO MySQL|dbserver2|snapshot Reading structure of database 'mydb' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,549 INFO MySQL|dbserver2|snapshot Reading structure of database 'mysql_audit' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,556 INFO MySQL|dbserver2|snapshot Reading structure of database 'mysql_option' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,560 INFO MySQL|dbserver2|snapshot Reading structure of database 'mysql_task_management' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,571 INFO MySQL|dbserver2|snapshot Reading structure of database 'pawn' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,582 INFO MySQL|dbserver2|snapshot Reading structure of database 'pawn2' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,602 INFO MySQL|dbserver2|snapshot Reading structure of database 'pawn3' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,637 INFO MySQL|dbserver2|snapshot Reading structure of database 'ryan' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,644 INFO MySQL|dbserver2|snapshot Reading structure of database 'sakilacafe' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,649 INFO MySQL|dbserver2|snapshot Reading structure of database 'sink' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,652 INFO MySQL|dbserver2|snapshot Reading structure of database 'testcdc' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,655 INFO MySQL|dbserver2|snapshot Reading structure of database 'tpch' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,675 INFO MySQL|dbserver2|snapshot Reading structure of database 'wordpress' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,855 INFO MySQL|dbserver2|snapshot Reading structure of database 'wp' [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:42,872 INFO MySQL|dbserver2|snapshot Snapshot step 6 - Persisting schema history [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:25:43,276 INFO MySQL|dbserver2|snapshot Snapshot step 7 - Snapshotting data [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:25:43,277 INFO MySQL|dbserver2|snapshot Creating snapshot worker pool with 1 worker thread(s) [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:25:43,277 INFO MySQL|dbserver2|snapshot For table 'testcdc.employee' using select statement: 'SELECT `id`, `lastname`, `firstname`, `age` FROM `testcdc`.`employee`' [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:25:43,280 INFO MySQL|dbserver2|snapshot Estimated row count for table testcdc.employee is OptionalLong[0] [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:43,280 INFO MySQL|dbserver2|snapshot Exporting data from table 'testcdc.employee' (1 of 1 tables) [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:25:43,282 INFO MySQL|dbserver2|snapshot Finished exporting 1 records for table 'testcdc.employee' (1 of 1 tables); total duration '00:00:00.002' [io.debezium.relational.RelationalSnapshotChangeEventSource]
2025-06-13 06:25:43,283 INFO MySQL|dbserver2|snapshot Releasing global read lock to enable MySQL writes [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:43,283 INFO MySQL|dbserver2|snapshot Writes to MySQL tables prevented for a total of 00:00:00.87 [io.debezium.connector.binlog.BinlogSnapshotChangeEventSource]
2025-06-13 06:25:43,284 INFO MySQL|dbserver2|snapshot Snapshot - Final stage [io.debezium.pipeline.source.AbstractSnapshotChangeEventSource]
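Step 7 logs the exact per-table query Debezium generated for the only data-snapshotted table, and steps 3 and 7 together show the global read lock was held for roughly 0.87 s. For larger tables the generated SELECT can be replaced per table via the connector's snapshot.select.statement.overrides properties; a sketch of how they would appear in the connector config (the WHERE clause is an invented example, not something this setup uses):

    "snapshot.select.statement.overrides": "testcdc.employee",
    "snapshot.select.statement.overrides.testcdc.employee":
        "SELECT `id`, `lastname`, `firstname`, `age` FROM `testcdc`.`employee` WHERE `age` IS NOT NULL"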
2025-06-13 06:25:43,284 INFO MySQL|dbserver2|snapshot Snapshot completed [io.debezium.pipeline.source.AbstractSnapshotChangeEventSource]
2025-06-13 06:25:43,284 INFO MySQL|dbserver2|snapshot Snapshot ended with SnapshotResult [status=COMPLETED, offset=BinlogOffsetContext{sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=BinlogSourceInfo{currentGtid='null', currentBinlogFilename='binary-log.005358', currentBinlogPosition=768, currentRowNumber=0, serverId=0, sourceTime=2025-06-13T06:25:43Z, threadId=-1, currentQuery='null', tableIds=[testcdc.employee], databaseName='wp'}, snapshotCompleted=true, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet='695f5ece-137c-11f0-b4b4-020017156476:1-5138208', currentGtidSet='695f5ece-137c-11f0-b4b4-020017156476:1-5138208', restartBinlogFilename='binary-log.005358', restartBinlogPosition=768, restartRowsToSkip=0, restartEventsToSkip=0, currentEventLengthInBytes=0, inTransaction=false, transactionId='null', incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]}] [io.debezium.pipeline.ChangeEventSourceCoordinator]
2025-06-13 06:25:43,284 INFO MySQL|dbserver2|streaming Requested thread factory for component MySqlConnector, id = dbserver2 named = binlog-client [io.debezium.util.Threads]
2025-06-13 06:25:43,286 INFO MySQL|dbserver2|streaming Enable ssl PREFERRED mode for connector dbserver2 [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
2025-06-13 06:25:43,289 INFO MySQL|dbserver2|streaming SignalProcessor started. Scheduling it every 5000ms [io.debezium.pipeline.signal.SignalProcessor]
2025-06-13 06:25:43,289 INFO MySQL|dbserver2|streaming Creating thread debezium-mysqlconnector-dbserver2-SignalProcessor [io.debezium.util.Threads]
2025-06-13 06:25:43,289 INFO MySQL|dbserver2|streaming Starting streaming [io.debezium.pipeline.ChangeEventSourceCoordinator]
2025-06-13 06:25:43,291 INFO MySQL|dbserver2|streaming GTID set purged on server: '695f5ece-137c-11f0-b4b4-020017156476:1-5138187' [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
2025-06-13 06:25:43,291 INFO MySQL|dbserver2|streaming Attempting to generate a filtered GTID set [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:25:43,291 INFO MySQL|dbserver2|streaming GTID set from previous recorded offset: 695f5ece-137c-11f0-b4b4-020017156476:1-5138208 [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:25:43,291 INFO MySQL|dbserver2|streaming GTID set available on server: 695f5ece-137c-11f0-b4b4-020017156476:1-5138208 [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:25:43,291 INFO MySQL|dbserver2|streaming Using first available positions for new GTID channels [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:25:43,291 INFO MySQL|dbserver2|streaming Relevant GTID set available on server: 695f5ece-137c-11f0-b4b4-020017156476:1-5138208 [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:25:43,291 INFO MySQL|dbserver2|streaming Final merged GTID set to use when connecting to MySQL: 695f5ece-137c-11f0-b4b4-020017156476:1-5138208 [io.debezium.connector.mysql.jdbc.MySqlConnection]
2025-06-13 06:25:43,291 INFO MySQL|dbserver2|streaming Registering binlog reader with GTID set: '695f5ece-137c-11f0-b4b4-020017156476:1-5138208' [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
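The GTID arithmetic here is what makes the resume safe: the server has purged transactions up to :1-5138187, while the connector has already processed through :1-5138208, so everything purged was already seen and streaming can continue from the recorded position instead of forcing another snapshot. Had the recorded set fallen inside gtid_purged, the needed binlog segments would be gone. The server-side sets can be verified directly; a sketch, assuming the mysql CLI and a sufficiently privileged account (the debezium user name is a placeholder):

    $ mysql -h 10.0.1.6 -P 3306 -u debezium -p \
          -e "SELECT @@global.gtid_executed, @@global.gtid_purged\G"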
2025-06-13 06:25:43,292 INFO MySQL|dbserver2|streaming Skip 0 events on streaming start [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
2025-06-13 06:25:43,292 INFO MySQL|dbserver2|streaming Skip 0 rows on streaming start [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
2025-06-13 06:25:43,292 INFO MySQL|dbserver2|streaming Creating thread debezium-mysqlconnector-dbserver2-binlog-client [io.debezium.util.Threads]
2025-06-13 06:25:43,294 INFO MySQL|dbserver2|streaming Creating thread debezium-mysqlconnector-dbserver2-binlog-client [io.debezium.util.Threads]
2025-06-13 06:25:43,307 INFO MySQL|dbserver2|binlog Connected to binlog at 10.0.1.6:3306, starting at BinlogOffsetContext{sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=BinlogSourceInfo{currentGtid='null', currentBinlogFilename='binary-log.005358', currentBinlogPosition=768, currentRowNumber=0, serverId=0, sourceTime=2025-06-13T06:25:43Z, threadId=-1, currentQuery='null', tableIds=[testcdc.employee], databaseName='wp'}, snapshotCompleted=true, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet='695f5ece-137c-11f0-b4b4-020017156476:1-5138208', currentGtidSet='695f5ece-137c-11f0-b4b4-020017156476:1-5138208', restartBinlogFilename='binary-log.005358', restartBinlogPosition=768, restartRowsToSkip=0, restartEventsToSkip=0, currentEventLengthInBytes=0, inTransaction=false, transactionId='null', incrementalSnapshotContext=IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]} [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
2025-06-13 06:25:43,307 INFO MySQL|dbserver2|streaming Waiting for keepalive thread to start [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
2025-06-13 06:25:43,307 INFO MySQL|dbserver2|binlog Creating thread debezium-mysqlconnector-dbserver2-binlog-client [io.debezium.util.Threads]
2025-06-13 06:25:43,407 INFO MySQL|dbserver2|streaming Keepalive thread is running [io.debezium.connector.binlog.BinlogStreamingChangeEventSource]
2025-06-13 06:25:44,747 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Request joining group due to: group is already rebalancing [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:44,748 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Request joining group due to: group is already rebalancing [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:44,749 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Revoke previously assigned partitions [org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker]
2025-06-13 06:25:44,749 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:44,964 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Revoke previously assigned partitions dbserver2.testcdc.employee-0 [org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker]
2025-06-13 06:25:44,965 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] (Re-)joining group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:44,967 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Successfully joined group with generation Generation{generationId=5, memberId='connector-consumer-mysql-sink-connector-2-69611530-7420-49d6-b76a-e0702c430075', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:44,967 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Successfully joined group with generation Generation{generationId=5, memberId='connector-consumer-mysql-sink-connector-0-cfdac19a-cdbc-4726-adfc-39843a7ec39f', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:44,968 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Successfully joined group with generation Generation{generationId=5, memberId='connector-consumer-mysql-sink-connector-1-ce209f88-357a-4d4e-9b37-eac3cfc6dfe1', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:44,968 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Finished assignment for group at generation 5: {connector-consumer-mysql-sink-connector-1-ce209f88-357a-4d4e-9b37-eac3cfc6dfe1=Assignment(partitions=[]), connector-consumer-mysql-sink-connector-2-69611530-7420-49d6-b76a-e0702c430075=Assignment(partitions=[]), connector-consumer-mysql-sink-connector-0-cfdac19a-cdbc-4726-adfc-39843a7ec39f=Assignment(partitions=[dbserver2.testcdc.employee-0])} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:44,970 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Successfully synced group in generation Generation{generationId=5, memberId='connector-consumer-mysql-sink-connector-2-69611530-7420-49d6-b76a-e0702c430075', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:44,970 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Successfully synced group in generation Generation{generationId=5, memberId='connector-consumer-mysql-sink-connector-1-ce209f88-357a-4d4e-9b37-eac3cfc6dfe1', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:44,970 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Successfully synced group in generation Generation{generationId=5, memberId='connector-consumer-mysql-sink-connector-0-cfdac19a-cdbc-4726-adfc-39843a7ec39f', protocol='range'} [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:44,970 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Notifying assignor about the new Assignment(partitions=[]) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:44,970 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Notifying assignor about the new Assignment(partitions=[]) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:44,970 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-2, groupId=connect-mysql-sink-connector] Adding newly assigned partitions: [org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker]
2025-06-13 06:25:44,970 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-1, groupId=connect-mysql-sink-connector] Adding newly assigned partitions: [org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker]
2025-06-13 06:25:44,970 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Notifying assignor about the new Assignment(partitions=[dbserver2.testcdc.employee-0]) [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:44,970 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Adding newly assigned partitions: dbserver2.testcdc.employee-0 [org.apache.kafka.clients.consumer.internals.ConsumerRebalanceListenerInvoker]
2025-06-13 06:25:44,971 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Found no committed offset for partition dbserver2.testcdc.employee-0 [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2025-06-13 06:25:44,971 INFO || [Consumer clientId=connector-consumer-mysql-sink-connector-0, groupId=connect-mysql-sink-connector] Resetting offset for partition dbserver2.testcdc.employee-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[10.89.0.3:9092 (id: 1 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState]
2025-06-13 06:25:46,989 INFO || 10.89.0.4 - - [13/Jun/2025:06:25:46 +0000] "GET /connectors/ HTTP/1.1" 200 45 "-" "curl/7.61.1" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:25:47,001 INFO || 10.89.0.4 - - [13/Jun/2025:06:25:46 +0000] "GET /connectors/employee-connector/status HTTP/1.1" 200 172 "-" "curl/7.61.1" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:25:47,011 INFO || 10.89.0.4 - - [13/Jun/2025:06:25:47 +0000] "GET /connectors/mysql-sink-connector/status HTTP/1.1" 200 284 "-" "curl/7.61.1" 2 [org.apache.kafka.connect.runtime.rest.RestServer]
2025-06-13 06:26:41,472 INFO || WorkerSourceTask{id=employee-connector-0} Committing offsets for 325 acknowledged messages [org.apache.kafka.connect.runtime.WorkerSourceTask]
2025-06-13 06:27:00,598 INFO || 326 records sent during previous 00:01:19.138, last recorded offset of {server=dbserver2} partition is {ts_sec=1749796020, file=binary-log.005359, pos=277, gtids=695f5ece-137c-11f0-b4b4-020017156476:1-5138208, row=1, server_id=3316918221, event=3} [io.debezium.connector.common.BaseSourceTask]
2025-06-13 06:27:41,474 INFO || WorkerSourceTask{id=employee-connector-0} Committing offsets for 2 acknowledged messages [org.apache.kafka.connect.runtime.WorkerSourceTask]
2025-06-13 06:28:41,476 INFO || WorkerSourceTask{id=employee-connector-0} Committing offsets for 2 acknowledged messages [org.apache.kafka.connect.runtime.WorkerSourceTask]
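The three REST calls logged near the end are plain health checks against the worker's REST listener (8083 is the Kafka Connect default port); any HTTP client works:

    $ curl -s http://10.89.0.4:8083/connectors/
    $ curl -s http://10.89.0.4:8083/connectors/employee-connector/status
    $ curl -s http://10.89.0.4:8083/connectors/mysql-sink-connector/status
    # /connectors/ lists the registered connector names; each .../status
    # response reports the connector state and one entry per task
    # (RUNNING, FAILED, PAUSED), which is what produced the 200s above.

After that the pipeline is in steady state: the source task commits offsets roughly once a minute, and the 325 acknowledged messages plus the 326-records line account for the snapshot record and the backlog of binlog events replayed on startup.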