2024-02-14 11:50:32 Using BOOTSTRAP_SERVERS=kafka:29092
2024-02-14 11:50:32 Plugins are loaded from /kafka/connect
2024-02-14 11:50:32 Using the following environment variables:
2024-02-14 11:50:32 GROUP_ID=1
2024-02-14 11:50:32 CONFIG_STORAGE_TOPIC=my_connect_configs
2024-02-14 11:50:32 OFFSET_STORAGE_TOPIC=my_connect_offsets
2024-02-14 11:50:32 STATUS_STORAGE_TOPIC=my_connect_statuses
2024-02-14 11:50:32 BOOTSTRAP_SERVERS=kafka:29092
2024-02-14 11:50:32 REST_HOST_NAME=192.168.0.6
2024-02-14 11:50:32 REST_PORT=8083
2024-02-14 11:50:32 ADVERTISED_HOST_NAME=192.168.0.6
2024-02-14 11:50:32 ADVERTISED_PORT=8083
2024-02-14 11:50:32 KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
2024-02-14 11:50:32 VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
2024-02-14 11:50:32 OFFSET_FLUSH_INTERVAL_MS=60000
2024-02-14 11:50:32 OFFSET_FLUSH_TIMEOUT_MS=5000
2024-02-14 11:50:32 SHUTDOWN_TIMEOUT=10000
2024-02-14 11:50:32 --- Setting property from CONNECT_REST_ADVERTISED_PORT: rest.advertised.port=8083
2024-02-14 11:50:32 --- Setting property from CONNECT_OFFSET_STORAGE_TOPIC: offset.storage.topic=my_connect_offsets
2024-02-14 11:50:32 --- Setting property from CONNECT_KEY_CONVERTER: key.converter=org.apache.kafka.connect.json.JsonConverter
2024-02-14 11:50:32 --- Setting property from CONNECT_CONFIG_STORAGE_TOPIC: config.storage.topic=my_connect_configs
2024-02-14 11:50:32 --- Setting property from CONNECT_GROUP_ID: group.id=1
2024-02-14 11:50:33 --- Setting property from CONNECT_REST_ADVERTISED_HOST_NAME: rest.advertised.host.name=192.168.0.6
2024-02-14 11:50:33 --- Setting property from CONNECT_REST_HOST_NAME: rest.host.name=192.168.0.6
2024-02-14 11:50:33 --- Setting property from CONNECT_VALUE_CONVERTER: value.converter=org.apache.kafka.connect.json.JsonConverter
2024-02-14 11:50:33 --- Setting property from CONNECT_REST_PORT: rest.port=8083
2024-02-14 11:50:33 --- Setting property from CONNECT_STATUS_STORAGE_TOPIC: status.storage.topic=my_connect_statuses
2024-02-14 11:50:33 --- Setting property from CONNECT_OFFSET_FLUSH_TIMEOUT_MS: offset.flush.timeout.ms=5000
2024-02-14 11:50:33 --- Setting property from CONNECT_PLUGIN_PATH: plugin.path=/kafka/connect
2024-02-14 11:50:33 --- Setting property from CONNECT_OFFSET_FLUSH_INTERVAL_MS: offset.flush.interval.ms=60000
2024-02-14 11:50:33 --- Setting property from CONNECT_BOOTSTRAP_SERVERS: bootstrap.servers=kafka:29092
2024-02-14 11:50:33 --- Setting property from CONNECT_TASK_SHUTDOWN_GRACEFUL_TIMEOUT_MS: task.shutdown.graceful.timeout.ms=10000
2024-02-14 11:50:34 2024-02-14 10:50:34,120 INFO || Kafka Connect worker initializing ...
[org.apache.kafka.connect.cli.AbstractConnectCli] 2024-02-14 11:50:34 2024-02-14 10:50:34,122 INFO || WorkerInfo values: 2024-02-14 11:50:34 jvm.args = -Xms256M, -Xmx2G, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -XX:MaxInlineLevel=15, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=/kafka/logs, -Dlog4j.configuration=file:/kafka/config/log4j.properties 2024-02-14 11:50:34 jvm.spec = Red Hat, Inc., OpenJDK 64-Bit Server VM, 11.0.20, 11.0.20+8 2024-02-14 11:50:34 jvm.classpath = /kafka/libs/activation-1.1.1.jar:/kafka/libs/aopalliance-repackaged-2.6.1.jar:/kafka/libs/argparse4j-0.7.0.jar:/kafka/libs/audience-annotations-0.12.0.jar:/kafka/libs/caffeine-2.9.3.jar:/kafka/libs/checker-qual-3.19.0.jar:/kafka/libs/commons-beanutils-1.9.4.jar:/kafka/libs/commons-cli-1.4.jar:/kafka/libs/commons-collections-3.2.2.jar:/kafka/libs/commons-digester-2.1.jar:/kafka/libs/commons-io-2.11.0.jar:/kafka/libs/commons-lang3-3.8.1.jar:/kafka/libs/commons-logging-1.2.jar:/kafka/libs/commons-validator-1.7.jar:/kafka/libs/connect-api-3.6.1.jar:/kafka/libs/connect-basic-auth-extension-3.6.1.jar:/kafka/libs/connect-json-3.6.1.jar:/kafka/libs/connect-mirror-3.6.1.jar:/kafka/libs/connect-mirror-client-3.6.1.jar:/kafka/libs/connect-runtime-3.6.1.jar:/kafka/libs/connect-transforms-3.6.1.jar:/kafka/libs/error_prone_annotations-2.10.0.jar:/kafka/libs/hk2-api-2.6.1.jar:/kafka/libs/hk2-locator-2.6.1.jar:/kafka/libs/hk2-utils-2.6.1.jar:/kafka/libs/jackson-annotations-2.13.5.jar:/kafka/libs/jackson-core-2.13.5.jar:/kafka/libs/jackson-databind-2.13.5.jar:/kafka/libs/jackson-dataformat-csv-2.13.5.jar:/kafka/libs/jackson-datatype-jdk8-2.13.5.jar:/kafka/libs/jackson-jaxrs-base-2.13.5.jar:/kafka/libs/jackson-jaxrs-json-provider-2.13.5.jar:/kafka/libs/jackson-module-jaxb-annotations-2.13.5.jar:/kafka/libs/jackson-module-scala_2.13-2.13.5.jar:/kafka/libs/jakarta.activation-api-1.2.2.jar:/kafka/libs/jakarta.annotation-api-1.3.5.jar:/kafka/libs/jakarta.inject-2.6.1.jar:/kafka/libs/jakarta.validation-api-2.0.2.jar:/kafka/libs/jakarta.ws.rs-api-2.1.6.jar:/kafka/libs/jakarta.xml.bind-api-2.3.3.jar:/kafka/libs/javassist-3.29.2-GA.jar:/kafka/libs/javax.activation-api-1.2.0.jar:/kafka/libs/javax.annotation-api-1.3.2.jar:/kafka/libs/javax.servlet-api-3.1.0.jar:/kafka/libs/javax.ws.rs-api-2.1.1.jar:/kafka/libs/jaxb-api-2.3.1.jar:/kafka/libs/jersey-client-2.39.1.jar:/kafka/libs/jersey-common-2.39.1.jar:/kafka/libs/jersey-container-servlet-2.39.1.jar:/kafka/libs/jersey-container-servlet-core-2.39.1.jar:/kafka/libs/jersey-hk2-2.39.1.jar:/kafka/libs/jersey-server-2.39.1.jar:/kafka/libs/jetty-client-9.4.52.v20230823.jar:/kafka/libs/jetty-continuation-9.4.52.v20230823.jar:/kafka/libs/jetty-http-9.4.52.v20230823.jar:/kafka/libs/jetty-io-9.4.52.v20230823.jar:/kafka/libs/jetty-security-9.4.52.v20230823.jar:/kafka/libs/jetty-server-9.4.52.v20230823.jar:/kafka/libs/jetty-servlet-9.4.52.v20230823.jar:/kafka/libs/jetty-servlets-9.4.52.v20230823.jar:/kafka/libs/jetty-util-9.4.52.v20230823.jar:/kafka/libs/jetty-util-ajax-9.4.52.v20230823.jar:/kafka/libs/jline-3.22.0.jar:/kafka/libs/jolokia-jvm-1.7.2.jar:/kafka/libs/jopt-simple-5.0.4.jar:/kafka/libs/jose4j-0.9.3.jar:/kafka/libs/jsr305-3.0.2.jar:/kafka/libs/kafka-clients-3.6.1.jar:/kafka/libs/kafka-group-coordinator-3.6.1.jar:/kafka/libs/kafka-log4j-appender-3.6.1.jar:/kafka/libs/kafka-metadata-3.6.1.jar:/kafka/libs/kafka-raft
-3.6.1.jar:/kafka/libs/kafka-server-common-3.6.1.jar:/kafka/libs/kafka-shell-3.6.1.jar:/kafka/libs/kafka-storage-3.6.1.jar:/kafka/libs/kafka-storage-api-3.6.1.jar:/kafka/libs/kafka-streams-3.6.1.jar:/kafka/libs/kafka-streams-examples-3.6.1.jar:/kafka/libs/kafka-streams-scala_2.13-3.6.1.jar:/kafka/libs/kafka-streams-test-utils-3.6.1.jar:/kafka/libs/kafka-tools-3.6.1.jar:/kafka/libs/kafka-tools-api-3.6.1.jar:/kafka/libs/kafka_2.13-3.6.1.jar:/kafka/libs/lz4-java-1.8.0.jar:/kafka/libs/maven-artifact-3.8.8.jar:/kafka/libs/metrics-core-2.2.0.jar:/kafka/libs/metrics-core-4.1.12.1.jar:/kafka/libs/netty-buffer-4.1.100.Final.jar:/kafka/libs/netty-codec-4.1.100.Final.jar:/kafka/libs/netty-common-4.1.100.Final.jar:/kafka/libs/netty-handler-4.1.100.Final.jar:/kafka/libs/netty-resolver-4.1.100.Final.jar:/kafka/libs/netty-transport-4.1.100.Final.jar:/kafka/libs/netty-transport-classes-epoll-4.1.100.Final.jar:/kafka/libs/netty-transport-native-epoll-4.1.100.Final.jar:/kafka/libs/netty-transport-native-unix-common-4.1.100.Final.jar:/kafka/libs/osgi-resource-locator-1.0.3.jar:/kafka/libs/paranamer-2.8.jar:/kafka/libs/pcollections-4.0.1.jar:/kafka/libs/plexus-utils-3.3.1.jar:/kafka/libs/reflections-0.10.2.jar:/kafka/libs/reload4j-1.2.25.jar:/kafka/libs/rocksdbjni-7.9.2.jar:/kafka/libs/scala-collection-compat_2.13-2.10.0.jar:/kafka/libs/scala-java8-compat_2.13-1.0.2.jar:/kafka/libs/scala-library-2.13.11.jar:/kafka/libs/scala-logging_2.13-3.9.4.jar:/kafka/libs/scala-reflect-2.13.11.jar:/kafka/libs/slf4j-api-1.7.36.jar:/kafka/libs/slf4j-reload4j-1.7.36.jar:/kafka/libs/snappy-java-1.1.10.5.jar:/kafka/libs/swagger-annotations-2.2.8.jar:/kafka/libs/trogdor-3.6.1.jar:/kafka/libs/zookeeper-3.8.3.jar:/kafka/libs/zookeeper-jute-3.8.3.jar:/kafka/libs/zstd-jni-1.5.5-1.jar 2024-02-14 11:50:34 os.spec = Linux, aarch64, 5.15.49-linuxkit-pr 2024-02-14 11:50:34 os.vcpus = 6 2024-02-14 11:50:34 [org.apache.kafka.connect.runtime.WorkerInfo] 2024-02-14 11:50:34 2024-02-14 10:50:34,123 INFO || Scanning for plugin classes. This might take a moment ... 
[org.apache.kafka.connect.cli.AbstractConnectCli] 2024-02-14 11:50:34 2024-02-14 10:50:34,146 INFO || Loading plugin from: /kafka/connect/debezium-connector-postgres [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:34 2024-02-14 10:50:34,185 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2024-02-14 11:50:34 2024-02-14 10:50:34,337 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-postgres/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:34 2024-02-14 10:50:34,345 INFO || Loading plugin from: /kafka/connect/debezium-connector-jdbc [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:34 2024-02-14 10:50:34,393 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2024-02-14 11:50:34 2024-02-14 10:50:34,437 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-jdbc/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:34 2024-02-14 10:50:34,560 INFO || Loading plugin from: /kafka/connect/debezium-connector-spanner [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:34 2024-02-14 10:50:34,632 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2024-02-14 11:50:34 2024-02-14 10:50:34,691 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-spanner/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:34 2024-02-14 10:50:34,692 INFO || Loading plugin from: /kafka/connect/debezium-connector-sqlserver [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:34 2024-02-14 10:50:34,705 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2024-02-14 11:50:34 2024-02-14 10:50:34,758 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-sqlserver/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:34 2024-02-14 10:50:34,820 INFO || Loading plugin from: /kafka/connect/debezium-connector-oracle [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:34 2024-02-14 10:50:34,892 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2024-02-14 11:50:34 2024-02-14 10:50:34,960 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-oracle/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:34 2024-02-14 10:50:34,962 INFO || Loading plugin from: /kafka/connect/debezium-connector-mongodb [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:34 2024-02-14 10:50:34,971 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2024-02-14 11:50:35 2024-02-14 10:50:35,034 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mongodb/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:35 2024-02-14 10:50:35,034 INFO || Loading plugin from: /kafka/connect/debezium-connector-db2 [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:35 2024-02-14 10:50:35,046 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2024-02-14 
11:50:35 2024-02-14 10:50:35,097 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-db2/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:35 2024-02-14 10:50:35,097 INFO || Loading plugin from: /kafka/connect/debezium-connector-vitess [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:35 2024-02-14 10:50:35,108 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2024-02-14 11:50:35 2024-02-14 10:50:35,158 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-vitess/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:35 2024-02-14 10:50:35,159 INFO || Loading plugin from: /kafka/connect/debezium-connector-informix [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:35 2024-02-14 10:50:35,162 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2024-02-14 11:50:35 2024-02-14 10:50:35,186 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-informix/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:35 2024-02-14 10:50:35,186 INFO || Loading plugin from: /kafka/connect/debezium-connector-mysql [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:35 2024-02-14 10:50:35,197 INFO || Using up-to-date JsonConverter implementation [io.debezium.converters.CloudEventsConverter] 2024-02-14 11:50:35 2024-02-14 10:50:35,224 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mysql/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:35 2024-02-14 10:50:35,227 INFO || Loading plugin from: classpath [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:35 2024-02-14 10:50:35,229 INFO || Registered loader: jdk.internal.loader.ClassLoaders$AppClassLoader@3d4eac69 [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:35 2024-02-14 10:50:35,233 INFO || Scanning plugins with ServiceLoaderScanner took 1088 ms [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:35 2024-02-14 10:50:35,238 INFO || Loading plugin from: /kafka/connect/debezium-connector-postgres [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:35 2024-02-14 10:50:35,554 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-postgres/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:35 2024-02-14 10:50:35,554 INFO || Loading plugin from: /kafka/connect/debezium-connector-jdbc [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:36 2024-02-14 10:50:36,264 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-jdbc/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:36 2024-02-14 10:50:36,273 INFO || Loading plugin from: /kafka/connect/debezium-connector-spanner [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:36 2024-02-14 10:50:36,946 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-spanner/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:36 2024-02-14 10:50:36,947 INFO || Loading plugin from: /kafka/connect/debezium-connector-sqlserver 
[org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:37 2024-02-14 10:50:37,017 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-sqlserver/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:37 2024-02-14 10:50:37,023 INFO || Loading plugin from: /kafka/connect/debezium-connector-oracle [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:37 2024-02-14 10:50:37,442 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-oracle/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:37 2024-02-14 10:50:37,442 INFO || Loading plugin from: /kafka/connect/debezium-connector-mongodb [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:37 2024-02-14 10:50:37,529 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mongodb/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:37 2024-02-14 10:50:37,529 INFO || Loading plugin from: /kafka/connect/debezium-connector-db2 [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:37 2024-02-14 10:50:37,568 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-db2/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:37 2024-02-14 10:50:37,568 INFO || Loading plugin from: /kafka/connect/debezium-connector-vitess [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:38 2024-02-14 10:50:38,081 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-vitess/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:38 2024-02-14 10:50:38,081 INFO || Loading plugin from: /kafka/connect/debezium-connector-informix [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:38 2024-02-14 10:50:38,117 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-informix/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:38 2024-02-14 10:50:38,117 INFO || Loading plugin from: /kafka/connect/debezium-connector-mysql [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:38 2024-02-14 10:50:38,412 INFO || Registered loader: PluginClassLoader{pluginLocation=file:/kafka/connect/debezium-connector-mysql/} [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:38 2024-02-14 10:50:38,413 INFO || Loading plugin from: classpath [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:39 2024-02-14 10:50:39,055 INFO || Registered loader: jdk.internal.loader.ClassLoaders$AppClassLoader@3d4eac69 [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:39 2024-02-14 10:50:39,055 INFO || Scanning plugins with ReflectionScanner took 3817 ms [org.apache.kafka.connect.runtime.isolation.PluginScanner] 2024-02-14 11:50:39 2024-02-14 10:50:39,057 WARN || All plugins have ServiceLoader manifests, consider reconfiguring plugin.discovery=service_load [org.apache.kafka.connect.runtime.isolation.Plugins] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 
'org.apache.kafka.connect.transforms.Filter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.connector.spanner.SpannerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.DropHeaders' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertHeader' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.converters.BinaryDataConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.connector.postgresql.transforms.timescaledb.TimescaleDb' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.HeaderFrom$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 
11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.transforms.ExtractNewRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.transforms.partitions.PartitionRouting' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.connector.jdbc.JdbcSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.transforms.outbox.EventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.connector.vitess.VitessConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.connector.mongodb.transforms.outbox.MongoEventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.Cast$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.Flatten$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' 
[org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.connector.postgresql.rest.DebeziumPostgresConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.transforms.HeaderToValue' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.transforms.SchemaChangeEventFilter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'org.apache.kafka.connect.transforms.TimestampRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.transforms.ExtractSchemaToNewRecord' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,058 INFO || Added plugin 'io.debezium.connector.db2.Db2Connector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'io.debezium.transforms.ByLogicalTableRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'io.debezium.connector.mongodb.rest.DebeziumMongoDbConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 
'io.debezium.connector.jdbc.transforms.ConvertCloudEventToSaveableForm' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'io.debezium.transforms.TimezoneConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'io.debezium.connector.informix.InformixConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.connect.transforms.HeaderFrom$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Value' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.common.config.provider.EnvVarConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'io.debezium.transforms.ExtractChangedRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'io.debezium.connector.mongodb.transforms.ExtractNewDocumentState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'io.debezium.transforms.tracing.ActivateTracingSpan' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.connect.transforms.MaskField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'io.debezium.connector.mysql.rest.DebeziumMySqlConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'io.debezium.connector.mysql.transforms.ReadToInsertEvent' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.connect.transforms.InsertField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'io.debezium.connector.oracle.rest.DebeziumOracleConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 
10:50:39,059 INFO || Added plugin 'io.debezium.connector.sqlserver.rest.DebeziumSqlServerConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.connect.transforms.HoistField$Key' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,059 INFO || Added plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,060 INFO || Added alias 'VitessConnector' to plugin 'io.debezium.connector.vitess.VitessConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,060 INFO || Added alias 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,060 INFO || Added alias 'MySql' to plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,060 INFO || Added alias 'MirrorCheckpointConnector' to plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,060 INFO || Added alias 'HeaderToValue' to plugin 'io.debezium.transforms.HeaderToValue' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,060 INFO || Added alias 'Float' to plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,060 INFO || Added alias 'SimpleHeaderConverter' to plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,060 INFO || Added alias 'SqlServerConnector' to plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,060 INFO || Added alias 'DirectoryConfigProvider' to plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,060 INFO || Added alias 'TimezoneConverter' to plugin 'io.debezium.transforms.TimezoneConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,060 INFO || Added alias 'BasicAuthSecurityRestExtension' to plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,060 INFO || Added alias 'Simple' to plugin 'org.apache.kafka.connect.storage.SimpleHeaderConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 
'DebeziumPostgres' to plugin 'io.debezium.connector.postgresql.rest.DebeziumPostgresConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'AllConnectorClientConfigOverridePolicy' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'MirrorSource' to plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'Directory' to plugin 'org.apache.kafka.common.config.provider.DirectoryConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'MirrorHeartbeat' to plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'JsonConverter' to plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'DebeziumMySql' to plugin 'io.debezium.connector.mysql.rest.DebeziumMySqlConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'JdbcSinkConnector' to plugin 'io.debezium.connector.jdbc.JdbcSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'DebeziumPostgresConnectRestExtension' to plugin 'io.debezium.connector.postgresql.rest.DebeziumPostgresConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'SpannerConnector' to plugin 'io.debezium.connector.spanner.SpannerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'MongoDb' to plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'Postgres' to plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'Short' to plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'ByLogicalTableRouter' to plugin 'io.debezium.transforms.ByLogicalTableRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'FileConfigProvider' to plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'SchemaChangeEventFilter' to plugin 'io.debezium.transforms.SchemaChangeEventFilter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'ConvertCloudEventToSaveableForm' to plugin 
'io.debezium.connector.jdbc.transforms.ConvertCloudEventToSaveableForm' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'Long' to plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'FloatConverter' to plugin 'org.apache.kafka.connect.converters.FloatConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'Spanner' to plugin 'io.debezium.connector.spanner.SpannerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'ActivateTracingSpan' to plugin 'io.debezium.transforms.tracing.ActivateTracingSpan' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'DebeziumSqlServerConnectRestExtension' to plugin 'io.debezium.connector.sqlserver.rest.DebeziumSqlServerConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'MirrorHeartbeatConnector' to plugin 'org.apache.kafka.connect.mirror.MirrorHeartbeatConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'Oracle' to plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'PrincipalConnectorClientConfigOverridePolicy' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'Integer' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'Filter' to plugin 'org.apache.kafka.connect.transforms.Filter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'Informix' to plugin 'io.debezium.connector.informix.InformixConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'ExtractNewDocumentState' to plugin 'io.debezium.connector.mongodb.transforms.ExtractNewDocumentState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'RecordIsTombstone' to plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'CloudEventsConverter' to plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'DebeziumOracle' to plugin 
'io.debezium.connector.oracle.rest.DebeziumOracleConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'EnvVar' to plugin 'org.apache.kafka.common.config.provider.EnvVarConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'EnvVarConfigProvider' to plugin 'org.apache.kafka.common.config.provider.EnvVarConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'MySqlConnector' to plugin 'io.debezium.connector.mysql.MySqlConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'DebeziumSqlServer' to plugin 'io.debezium.connector.sqlserver.rest.DebeziumSqlServerConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,061 INFO || Added alias 'PartitionRouting' to plugin 'io.debezium.transforms.partitions.PartitionRouting' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'StringConverter' to plugin 'org.apache.kafka.connect.storage.StringConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'MongoDbConnector' to plugin 'io.debezium.connector.mongodb.MongoDbConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'IntegerConverter' to plugin 'org.apache.kafka.connect.converters.IntegerConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'LongConverter' to plugin 'org.apache.kafka.connect.converters.LongConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'DropHeaders' to plugin 'org.apache.kafka.connect.transforms.DropHeaders' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'ExtractSchemaToNewRecord' to plugin 'io.debezium.transforms.ExtractSchemaToNewRecord' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'BinaryData' to plugin 'io.debezium.converters.BinaryDataConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'ReadToInsertEvent' to plugin 'io.debezium.connector.mysql.transforms.ReadToInsertEvent' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'ShortConverter' to plugin 'org.apache.kafka.connect.converters.ShortConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'CloudEvents' to plugin 'io.debezium.converters.CloudEventsConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 
10:50:39,062 INFO || Added alias 'DebeziumOracleConnectRestExtension' to plugin 'io.debezium.connector.oracle.rest.DebeziumOracleConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'ExtractNewRecordState' to plugin 'io.debezium.transforms.ExtractNewRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'DebeziumMongoDb' to plugin 'io.debezium.connector.mongodb.rest.DebeziumMongoDbConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'Db2' to plugin 'io.debezium.connector.db2.Db2Connector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'Db2Connector' to plugin 'io.debezium.connector.db2.Db2Connector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'Vitess' to plugin 'io.debezium.connector.vitess.VitessConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'InformixConnector' to plugin 'io.debezium.connector.informix.InformixConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'DebeziumMongoDbConnectRestExtension' to plugin 'io.debezium.connector.mongodb.rest.DebeziumMongoDbConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'HasHeaderKey' to plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'MirrorCheckpoint' to plugin 'org.apache.kafka.connect.mirror.MirrorCheckpointConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'ExtractChangedRecordState' to plugin 'io.debezium.transforms.ExtractChangedRecordState' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'OracleConnector' to plugin 'io.debezium.connector.oracle.OracleConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'None' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'Principal' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'All' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 
'SqlServer' to plugin 'io.debezium.connector.sqlserver.SqlServerConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'DebeziumMySqlConnectRestExtension' to plugin 'io.debezium.connector.mysql.rest.DebeziumMySqlConnectRestExtension' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'JdbcSink' to plugin 'io.debezium.connector.jdbc.JdbcSinkConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'RegexRouter' to plugin 'org.apache.kafka.connect.transforms.RegexRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'NoneConnectorClientConfigOverridePolicy' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'Double' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'EventRouter' to plugin 'io.debezium.transforms.outbox.EventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'File' to plugin 'org.apache.kafka.common.config.provider.FileConfigProvider' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'DoubleConverter' to plugin 'org.apache.kafka.connect.converters.DoubleConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'BinaryDataConverter' to plugin 'io.debezium.converters.BinaryDataConverter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'TimescaleDb' to plugin 'io.debezium.connector.postgresql.transforms.timescaledb.TimescaleDb' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'TopicNameMatches' to plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'InsertHeader' to plugin 'org.apache.kafka.connect.transforms.InsertHeader' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'MirrorSourceConnector' to plugin 'org.apache.kafka.connect.mirror.MirrorSourceConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'PostgresConnector' to plugin 'io.debezium.connector.postgresql.PostgresConnector' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,062 INFO || Added alias 'MongoEventRouter' to plugin 'io.debezium.connector.mongodb.transforms.outbox.MongoEventRouter' [org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader] 2024-02-14 11:50:39 2024-02-14 10:50:39,083 INFO || DistributedConfig values: 2024-02-14 11:50:39 access.control.allow.methods = 2024-02-14 
11:50:39     access.control.allow.origin =
2024-02-14 11:50:39     admin.listeners = null
2024-02-14 11:50:39     auto.include.jmx.reporter = true
2024-02-14 11:50:39     bootstrap.servers = [kafka:29092]
2024-02-14 11:50:39     client.dns.lookup = use_all_dns_ips
2024-02-14 11:50:39     client.id =
2024-02-14 11:50:39     config.providers = []
2024-02-14 11:50:39     config.storage.replication.factor = 1
2024-02-14 11:50:39     config.storage.topic = my_connect_configs
2024-02-14 11:50:39     connect.protocol = sessioned
2024-02-14 11:50:39     connections.max.idle.ms = 540000
2024-02-14 11:50:39     connector.client.config.override.policy = All
2024-02-14 11:50:39     exactly.once.source.support = disabled
2024-02-14 11:50:39     group.id = 1
2024-02-14 11:50:39     header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
2024-02-14 11:50:39     heartbeat.interval.ms = 3000
2024-02-14 11:50:39     inter.worker.key.generation.algorithm = HmacSHA256
2024-02-14 11:50:39     inter.worker.key.size = null
2024-02-14 11:50:39     inter.worker.key.ttl.ms = 3600000
2024-02-14 11:50:39     inter.worker.signature.algorithm = HmacSHA256
2024-02-14 11:50:39     inter.worker.verification.algorithms = [HmacSHA256]
2024-02-14 11:50:39     key.converter = class org.apache.kafka.connect.json.JsonConverter
2024-02-14 11:50:39     listeners = [http://:8083]
2024-02-14 11:50:39     metadata.max.age.ms = 300000
2024-02-14 11:50:39     metric.reporters = []
2024-02-14 11:50:39     metrics.num.samples = 2
2024-02-14 11:50:39     metrics.recording.level = INFO
2024-02-14 11:50:39     metrics.sample.window.ms = 30000
2024-02-14 11:50:39     offset.flush.interval.ms = 60000
2024-02-14 11:50:39     offset.flush.timeout.ms = 5000
2024-02-14 11:50:39     offset.storage.partitions = 25
2024-02-14 11:50:39     offset.storage.replication.factor = 1
2024-02-14 11:50:39     offset.storage.topic = my_connect_offsets
2024-02-14 11:50:39     plugin.discovery = hybrid_warn
2024-02-14 11:50:39     plugin.path = [/kafka/connect]
2024-02-14 11:50:39     rebalance.timeout.ms = 60000
2024-02-14 11:50:39     receive.buffer.bytes = 32768
2024-02-14 11:50:39     reconnect.backoff.max.ms = 1000
2024-02-14 11:50:39     reconnect.backoff.ms = 50
2024-02-14 11:50:39     request.timeout.ms = 40000
2024-02-14 11:50:39     response.http.headers.config =
2024-02-14 11:50:39     rest.advertised.host.name = 192.168.0.6
2024-02-14 11:50:39     rest.advertised.listener = null
2024-02-14 11:50:39     rest.advertised.port = 8083
2024-02-14 11:50:39     rest.extension.classes = []
2024-02-14 11:50:39     retry.backoff.ms = 100
2024-02-14 11:50:39     sasl.client.callback.handler.class = null
2024-02-14 11:50:39     sasl.jaas.config = null
2024-02-14 11:50:39     sasl.kerberos.kinit.cmd = /usr/bin/kinit
2024-02-14 11:50:39     sasl.kerberos.min.time.before.relogin = 60000
2024-02-14 11:50:39     sasl.kerberos.service.name = null
2024-02-14 11:50:39     sasl.kerberos.ticket.renew.jitter = 0.05
2024-02-14 11:50:39     sasl.kerberos.ticket.renew.window.factor = 0.8
2024-02-14 11:50:39     sasl.login.callback.handler.class = null
2024-02-14 11:50:39     sasl.login.class = null
2024-02-14 11:50:39     sasl.login.connect.timeout.ms = null
2024-02-14 11:50:39     sasl.login.read.timeout.ms = null
2024-02-14 11:50:39     sasl.login.refresh.buffer.seconds = 300
2024-02-14 11:50:39     sasl.login.refresh.min.period.seconds = 60
2024-02-14 11:50:39     sasl.login.refresh.window.factor = 0.8
2024-02-14 11:50:39     sasl.login.refresh.window.jitter = 0.05
2024-02-14 11:50:39     sasl.login.retry.backoff.max.ms = 10000
2024-02-14 11:50:39     sasl.login.retry.backoff.ms = 100
2024-02-14 11:50:39     sasl.mechanism = GSSAPI
2024-02-14 11:50:39     sasl.oauthbearer.clock.skew.seconds = 30
2024-02-14 11:50:39     sasl.oauthbearer.expected.audience = null
2024-02-14 11:50:39     sasl.oauthbearer.expected.issuer = null
2024-02-14 11:50:39     sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
2024-02-14 11:50:39     sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
2024-02-14 11:50:39     sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
2024-02-14 11:50:39     sasl.oauthbearer.jwks.endpoint.url = null
2024-02-14 11:50:39     sasl.oauthbearer.scope.claim.name = scope
2024-02-14 11:50:39     sasl.oauthbearer.sub.claim.name = sub
2024-02-14 11:50:39     sasl.oauthbearer.token.endpoint.url = null
2024-02-14 11:50:39     scheduled.rebalance.max.delay.ms = 300000
2024-02-14 11:50:39     security.protocol = PLAINTEXT
2024-02-14 11:50:39     send.buffer.bytes = 131072
2024-02-14 11:50:39     session.timeout.ms = 10000
2024-02-14 11:50:39     socket.connection.setup.timeout.max.ms = 30000
2024-02-14 11:50:39     socket.connection.setup.timeout.ms = 10000
2024-02-14 11:50:39     ssl.cipher.suites = null
2024-02-14 11:50:39     ssl.client.auth = none
2024-02-14 11:50:39     ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
2024-02-14 11:50:39     ssl.endpoint.identification.algorithm = https
2024-02-14 11:50:39     ssl.engine.factory.class = null
2024-02-14 11:50:39     ssl.key.password = null
2024-02-14 11:50:39     ssl.keymanager.algorithm = SunX509
2024-02-14 11:50:39     ssl.keystore.certificate.chain = null
2024-02-14 11:50:39     ssl.keystore.key = null
2024-02-14 11:50:39     ssl.keystore.location = null
2024-02-14 11:50:39     ssl.keystore.password = null
2024-02-14 11:50:39     ssl.keystore.type = JKS
2024-02-14 11:50:39     ssl.protocol = TLSv1.3
2024-02-14 11:50:39     ssl.provider = null
2024-02-14 11:50:39     ssl.secure.random.implementation = null
2024-02-14 11:50:39     ssl.trustmanager.algorithm = PKIX
2024-02-14 11:50:39     ssl.truststore.certificates = null
2024-02-14 11:50:39     ssl.truststore.location = null
2024-02-14 11:50:39     ssl.truststore.password = null
2024-02-14 11:50:39     ssl.truststore.type = JKS
2024-02-14 11:50:39     status.storage.partitions = 5
2024-02-14 11:50:39     status.storage.replication.factor = 1
2024-02-14 11:50:39     status.storage.topic = my_connect_statuses
2024-02-14 11:50:39     task.shutdown.graceful.timeout.ms = 10000
2024-02-14 11:50:39     topic.creation.enable = true
2024-02-14 11:50:39     topic.tracking.allow.reset = true
2024-02-14 11:50:39     topic.tracking.enable = true
2024-02-14 11:50:39     value.converter = class org.apache.kafka.connect.json.JsonConverter
2024-02-14 11:50:39     worker.sync.timeout.ms = 3000
2024-02-14 11:50:39     worker.unsync.backoff.ms = 300000
2024-02-14 11:50:39  [org.apache.kafka.connect.runtime.distributed.DistributedConfig]
2024-02-14 11:50:39 2024-02-14 10:50:39,084 INFO || Creating Kafka admin client [org.apache.kafka.connect.runtime.WorkerConfig]
2024-02-14 11:50:39 2024-02-14 10:50:39,090 INFO || AdminClientConfig values:
2024-02-14 11:50:39     auto.include.jmx.reporter = true
2024-02-14 11:50:39     bootstrap.servers = [kafka:29092]
2024-02-14 11:50:39     client.dns.lookup = use_all_dns_ips
2024-02-14 11:50:39     client.id =
2024-02-14 11:50:39     connections.max.idle.ms = 300000
2024-02-14 11:50:39     default.api.timeout.ms = 60000
2024-02-14 11:50:39     metadata.max.age.ms = 300000
2024-02-14 11:50:39     metric.reporters = []
2024-02-14 11:50:39     metrics.num.samples = 2
2024-02-14 11:50:39     metrics.recording.level = INFO
2024-02-14 11:50:39     metrics.sample.window.ms = 30000
2024-02-14 11:50:39     receive.buffer.bytes = 65536
2024-02-14 11:50:39     reconnect.backoff.max.ms = 1000
2024-02-14 11:50:39     reconnect.backoff.ms = 50
2024-02-14 11:50:39     request.timeout.ms = 30000
2024-02-14 11:50:39     retries = 2147483647
2024-02-14 11:50:39     retry.backoff.ms = 100
2024-02-14 11:50:39     sasl.client.callback.handler.class = null
2024-02-14 11:50:39     sasl.jaas.config = null
2024-02-14 11:50:39     sasl.kerberos.kinit.cmd = /usr/bin/kinit
2024-02-14 11:50:39     sasl.kerberos.min.time.before.relogin = 60000
2024-02-14 11:50:39     sasl.kerberos.service.name = null
2024-02-14 11:50:39     sasl.kerberos.ticket.renew.jitter = 0.05
2024-02-14 11:50:39     sasl.kerberos.ticket.renew.window.factor = 0.8
2024-02-14 11:50:39     sasl.login.callback.handler.class = null
2024-02-14 11:50:39     sasl.login.class = null
2024-02-14 11:50:39     sasl.login.connect.timeout.ms = null
2024-02-14 11:50:39     sasl.login.read.timeout.ms = null
2024-02-14 11:50:39     sasl.login.refresh.buffer.seconds = 300
2024-02-14 11:50:39     sasl.login.refresh.min.period.seconds = 60
2024-02-14 11:50:39     sasl.login.refresh.window.factor = 0.8
2024-02-14 11:50:39     sasl.login.refresh.window.jitter = 0.05
2024-02-14 11:50:39     sasl.login.retry.backoff.max.ms = 10000
2024-02-14 11:50:39     sasl.login.retry.backoff.ms = 100
2024-02-14 11:50:39     sasl.mechanism = GSSAPI
2024-02-14 11:50:39     sasl.oauthbearer.clock.skew.seconds = 30
2024-02-14 11:50:39     sasl.oauthbearer.expected.audience = null
2024-02-14 11:50:39     sasl.oauthbearer.expected.issuer = null
2024-02-14 11:50:39     sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
2024-02-14 11:50:39     sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
2024-02-14 11:50:39     sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
2024-02-14 11:50:39     sasl.oauthbearer.jwks.endpoint.url = null
2024-02-14 11:50:39     sasl.oauthbearer.scope.claim.name = scope
2024-02-14 11:50:39     sasl.oauthbearer.sub.claim.name = sub
2024-02-14 11:50:39     sasl.oauthbearer.token.endpoint.url = null
2024-02-14 11:50:39     security.protocol = PLAINTEXT
2024-02-14 11:50:39     security.providers = null
2024-02-14 11:50:39     send.buffer.bytes = 131072
2024-02-14 11:50:39     socket.connection.setup.timeout.max.ms = 30000
2024-02-14 11:50:39     socket.connection.setup.timeout.ms = 10000
2024-02-14 11:50:39     ssl.cipher.suites = null
2024-02-14 11:50:39     ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
2024-02-14 11:50:39     ssl.endpoint.identification.algorithm = https
2024-02-14 11:50:39     ssl.engine.factory.class = null
2024-02-14 11:50:39     ssl.key.password = null
2024-02-14 11:50:39     ssl.keymanager.algorithm = SunX509
2024-02-14 11:50:39     ssl.keystore.certificate.chain = null
2024-02-14 11:50:39     ssl.keystore.key = null
2024-02-14 11:50:39     ssl.keystore.location = null
2024-02-14 11:50:39     ssl.keystore.password = null
2024-02-14 11:50:39     ssl.keystore.type = JKS
2024-02-14 11:50:39     ssl.protocol = TLSv1.3
2024-02-14 11:50:39     ssl.provider = null
2024-02-14 11:50:39     ssl.secure.random.implementation = null
2024-02-14 11:50:39     ssl.trustmanager.algorithm = PKIX
2024-02-14 11:50:39     ssl.truststore.certificates = null
2024-02-14 11:50:39     ssl.truststore.location = null
2024-02-14 11:50:39     ssl.truststore.password = null
2024-02-14 11:50:39     ssl.truststore.type = JKS
2024-02-14 11:50:39  [org.apache.kafka.clients.admin.AdminClientConfig]
2024-02-14 11:50:39 2024-02-14 10:50:39,119 INFO || These configurations '[config.storage.topic, rest.advertised.host.name, status.storage.topic, group.id, rest.advertised.port, rest.host.name, task.shutdown.graceful.timeout.ms, plugin.path, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor,
offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. [org.apache.kafka.clients.admin.AdminClientConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,120 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,120 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,120 INFO || Kafka startTimeMs: 1707907839119 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,264 INFO || Kafka cluster ID: rDanNOBZT4KnWXmQRvMjlQ [org.apache.kafka.connect.runtime.WorkerConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,265 INFO || App info kafka.admin.client for adminclient-1 unregistered [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,268 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics] 2024-02-14 11:50:39 2024-02-14 10:50:39,268 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics] 2024-02-14 11:50:39 2024-02-14 10:50:39,268 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics] 2024-02-14 11:50:39 2024-02-14 10:50:39,271 INFO || PublicConfig values: 2024-02-14 11:50:39 access.control.allow.methods = 2024-02-14 11:50:39 access.control.allow.origin = 2024-02-14 11:50:39 admin.listeners = null 2024-02-14 11:50:39 listeners = [http://:8083] 2024-02-14 11:50:39 response.http.headers.config = 2024-02-14 11:50:39 rest.advertised.host.name = 192.168.0.6 2024-02-14 11:50:39 rest.advertised.listener = null 2024-02-14 11:50:39 rest.advertised.port = 8083 2024-02-14 11:50:39 rest.extension.classes = [] 2024-02-14 11:50:39 ssl.cipher.suites = null 2024-02-14 11:50:39 ssl.client.auth = none 2024-02-14 11:50:39 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 2024-02-14 11:50:39 ssl.endpoint.identification.algorithm = https 2024-02-14 11:50:39 ssl.engine.factory.class = null 2024-02-14 11:50:39 ssl.key.password = null 2024-02-14 11:50:39 ssl.keymanager.algorithm = SunX509 2024-02-14 11:50:39 ssl.keystore.certificate.chain = null 2024-02-14 11:50:39 ssl.keystore.key = null 2024-02-14 11:50:39 ssl.keystore.location = null 2024-02-14 11:50:39 ssl.keystore.password = null 2024-02-14 11:50:39 ssl.keystore.type = JKS 2024-02-14 11:50:39 ssl.protocol = TLSv1.3 2024-02-14 11:50:39 ssl.provider = null 2024-02-14 11:50:39 ssl.secure.random.implementation = null 2024-02-14 11:50:39 ssl.trustmanager.algorithm = PKIX 2024-02-14 11:50:39 ssl.truststore.certificates = null 2024-02-14 11:50:39 ssl.truststore.location = null 2024-02-14 11:50:39 ssl.truststore.password = null 2024-02-14 11:50:39 ssl.truststore.type = JKS 2024-02-14 11:50:39 topic.tracking.allow.reset = true 2024-02-14 11:50:39 topic.tracking.enable = true 2024-02-14 11:50:39 [org.apache.kafka.connect.runtime.rest.RestServerConfig$PublicConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,276 INFO || Logging initialized @5523ms to org.eclipse.jetty.util.log.Slf4jLog [org.eclipse.jetty.util.log] 2024-02-14 11:50:39 2024-02-14 10:50:39,298 INFO || Added connector for http://:8083 [org.apache.kafka.connect.runtime.rest.RestServer] 2024-02-14 11:50:39 2024-02-14 10:50:39,298 INFO || Initializing REST server [org.apache.kafka.connect.runtime.rest.RestServer] 2024-02-14 11:50:39 2024-02-14 10:50:39,312 INFO || jetty-9.4.52.v20230823; built: 2023-08-23T19:29:37.669Z; git: abdcda73818a1a2c705da276edb0bf6581e7997e; jvm 11.0.20+8 
[org.eclipse.jetty.server.Server] 2024-02-14 11:50:39 2024-02-14 10:50:39,331 INFO || Started http_8083@6c54bc3{HTTP/1.1, (http/1.1)}{0.0.0.0:8083} [org.eclipse.jetty.server.AbstractConnector] 2024-02-14 11:50:39 2024-02-14 10:50:39,332 INFO || Started @5579ms [org.eclipse.jetty.server.Server] 2024-02-14 11:50:39 2024-02-14 10:50:39,344 INFO || Advertised URI: http://192.168.0.6:8083/ [org.apache.kafka.connect.runtime.rest.RestServer] 2024-02-14 11:50:39 2024-02-14 10:50:39,344 INFO || REST server listening at http://192.168.0.6:8083/, advertising URL http://192.168.0.6:8083/ [org.apache.kafka.connect.runtime.rest.RestServer] 2024-02-14 11:50:39 2024-02-14 10:50:39,344 INFO || Advertised URI: http://192.168.0.6:8083/ [org.apache.kafka.connect.runtime.rest.RestServer] 2024-02-14 11:50:39 2024-02-14 10:50:39,344 INFO || REST admin endpoints at http://192.168.0.6:8083/ [org.apache.kafka.connect.runtime.rest.RestServer] 2024-02-14 11:50:39 2024-02-14 10:50:39,344 INFO || Advertised URI: http://192.168.0.6:8083/ [org.apache.kafka.connect.runtime.rest.RestServer] 2024-02-14 11:50:39 2024-02-14 10:50:39,344 INFO || Setting up All Policy for ConnectorClientConfigOverride. This will allow all client configurations to be overridden [org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy] 2024-02-14 11:50:39 2024-02-14 10:50:39,348 INFO || JsonConverterConfig values: 2024-02-14 11:50:39 converter.type = key 2024-02-14 11:50:39 decimal.format = BASE64 2024-02-14 11:50:39 replace.null.with.default = true 2024-02-14 11:50:39 schemas.cache.size = 1000 2024-02-14 11:50:39 schemas.enable = false 2024-02-14 11:50:39 [org.apache.kafka.connect.json.JsonConverterConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,363 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,363 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,363 INFO || Kafka startTimeMs: 1707907839363 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,367 INFO || JsonConverterConfig values: 2024-02-14 11:50:39 converter.type = key 2024-02-14 11:50:39 decimal.format = BASE64 2024-02-14 11:50:39 replace.null.with.default = true 2024-02-14 11:50:39 schemas.cache.size = 1000 2024-02-14 11:50:39 schemas.enable = false 2024-02-14 11:50:39 [org.apache.kafka.connect.json.JsonConverterConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,367 INFO || JsonConverterConfig values: 2024-02-14 11:50:39 converter.type = value 2024-02-14 11:50:39 decimal.format = BASE64 2024-02-14 11:50:39 replace.null.with.default = true 2024-02-14 11:50:39 schemas.cache.size = 1000 2024-02-14 11:50:39 schemas.enable = false 2024-02-14 11:50:39 [org.apache.kafka.connect.json.JsonConverterConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,377 INFO || Advertised URI: http://192.168.0.6:8083/ [org.apache.kafka.connect.runtime.rest.RestServer] 2024-02-14 11:50:39 2024-02-14 10:50:39,392 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,392 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,392 INFO || Kafka startTimeMs: 1707907839392 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,394 INFO || Kafka Connect worker initialization took 5273ms [org.apache.kafka.connect.cli.AbstractConnectCli] 2024-02-14 11:50:39 2024-02-14 
10:50:39,394 INFO || Kafka Connect starting [org.apache.kafka.connect.runtime.Connect] 2024-02-14 11:50:39 2024-02-14 10:50:39,395 INFO || Initializing REST resources [org.apache.kafka.connect.runtime.rest.RestServer] 2024-02-14 11:50:39 2024-02-14 10:50:39,396 INFO || [Worker clientId=connect-1, groupId=1] Herder starting [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:39 2024-02-14 10:50:39,397 INFO || Worker starting [org.apache.kafka.connect.runtime.Worker] 2024-02-14 11:50:39 2024-02-14 10:50:39,397 INFO || Starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore] 2024-02-14 11:50:39 2024-02-14 10:50:39,397 INFO || Starting KafkaBasedLog with topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog] 2024-02-14 11:50:39 2024-02-14 10:50:39,398 INFO || AdminClientConfig values: 2024-02-14 11:50:39 auto.include.jmx.reporter = true 2024-02-14 11:50:39 bootstrap.servers = [kafka:29092] 2024-02-14 11:50:39 client.dns.lookup = use_all_dns_ips 2024-02-14 11:50:39 client.id = 1-shared-admin 2024-02-14 11:50:39 connections.max.idle.ms = 300000 2024-02-14 11:50:39 default.api.timeout.ms = 60000 2024-02-14 11:50:39 metadata.max.age.ms = 300000 2024-02-14 11:50:39 metric.reporters = [] 2024-02-14 11:50:39 metrics.num.samples = 2 2024-02-14 11:50:39 metrics.recording.level = INFO 2024-02-14 11:50:39 metrics.sample.window.ms = 30000 2024-02-14 11:50:39 receive.buffer.bytes = 65536 2024-02-14 11:50:39 reconnect.backoff.max.ms = 1000 2024-02-14 11:50:39 reconnect.backoff.ms = 50 2024-02-14 11:50:39 request.timeout.ms = 30000 2024-02-14 11:50:39 retries = 2147483647 2024-02-14 11:50:39 retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.client.callback.handler.class = null 2024-02-14 11:50:39 sasl.jaas.config = null 2024-02-14 11:50:39 sasl.kerberos.kinit.cmd = /usr/bin/kinit 2024-02-14 11:50:39 sasl.kerberos.min.time.before.relogin = 60000 2024-02-14 11:50:39 sasl.kerberos.service.name = null 2024-02-14 11:50:39 sasl.kerberos.ticket.renew.jitter = 0.05 2024-02-14 11:50:39 sasl.kerberos.ticket.renew.window.factor = 0.8 2024-02-14 11:50:39 sasl.login.callback.handler.class = null 2024-02-14 11:50:39 sasl.login.class = null 2024-02-14 11:50:39 sasl.login.connect.timeout.ms = null 2024-02-14 11:50:39 sasl.login.read.timeout.ms = null 2024-02-14 11:50:39 sasl.login.refresh.buffer.seconds = 300 2024-02-14 11:50:39 sasl.login.refresh.min.period.seconds = 60 2024-02-14 11:50:39 sasl.login.refresh.window.factor = 0.8 2024-02-14 11:50:39 sasl.login.refresh.window.jitter = 0.05 2024-02-14 11:50:39 sasl.login.retry.backoff.max.ms = 10000 2024-02-14 11:50:39 sasl.login.retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.mechanism = GSSAPI 2024-02-14 11:50:39 sasl.oauthbearer.clock.skew.seconds = 30 2024-02-14 11:50:39 sasl.oauthbearer.expected.audience = null 2024-02-14 11:50:39 sasl.oauthbearer.expected.issuer = null 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.url = null 2024-02-14 11:50:39 sasl.oauthbearer.scope.claim.name = scope 2024-02-14 11:50:39 sasl.oauthbearer.sub.claim.name = sub 2024-02-14 11:50:39 sasl.oauthbearer.token.endpoint.url = null 2024-02-14 11:50:39 security.protocol = PLAINTEXT 2024-02-14 11:50:39 security.providers = null 2024-02-14 11:50:39 send.buffer.bytes = 131072 2024-02-14 11:50:39 
socket.connection.setup.timeout.max.ms = 30000 2024-02-14 11:50:39 socket.connection.setup.timeout.ms = 10000 2024-02-14 11:50:39 ssl.cipher.suites = null 2024-02-14 11:50:39 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 2024-02-14 11:50:39 ssl.endpoint.identification.algorithm = https 2024-02-14 11:50:39 ssl.engine.factory.class = null 2024-02-14 11:50:39 ssl.key.password = null 2024-02-14 11:50:39 ssl.keymanager.algorithm = SunX509 2024-02-14 11:50:39 ssl.keystore.certificate.chain = null 2024-02-14 11:50:39 ssl.keystore.key = null 2024-02-14 11:50:39 ssl.keystore.location = null 2024-02-14 11:50:39 ssl.keystore.password = null 2024-02-14 11:50:39 ssl.keystore.type = JKS 2024-02-14 11:50:39 ssl.protocol = TLSv1.3 2024-02-14 11:50:39 ssl.provider = null 2024-02-14 11:50:39 ssl.secure.random.implementation = null 2024-02-14 11:50:39 ssl.trustmanager.algorithm = PKIX 2024-02-14 11:50:39 ssl.truststore.certificates = null 2024-02-14 11:50:39 ssl.truststore.location = null 2024-02-14 11:50:39 ssl.truststore.password = null 2024-02-14 11:50:39 ssl.truststore.type = JKS 2024-02-14 11:50:39 [org.apache.kafka.clients.admin.AdminClientConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,402 INFO || These configurations '[config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, group.id, rest.advertised.port, rest.host.name, task.shutdown.graceful.timeout.ms, plugin.path, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, value.converter.schemas.enable, offset.storage.replication.factor, offset.storage.topic, value.converter, key.converter]' were supplied but are not used yet. [org.apache.kafka.clients.admin.AdminClientConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,402 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,402 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,402 INFO || Kafka startTimeMs: 1707907839402 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,426 INFO || Adding admin resources to main listener [org.apache.kafka.connect.runtime.rest.RestServer] 2024-02-14 11:50:39 2024-02-14 10:50:39,454 INFO || DefaultSessionIdManager workerName=node0 [org.eclipse.jetty.server.session] 2024-02-14 11:50:39 2024-02-14 10:50:39,454 INFO || No SessionScavenger set, using defaults [org.eclipse.jetty.server.session] 2024-02-14 11:50:39 2024-02-14 10:50:39,455 INFO || node0 Scavenging every 600000ms [org.eclipse.jetty.server.session] 2024-02-14 11:50:39 Feb 14, 2024 10:50:39 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime 2024-02-14 11:50:39 WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.RootResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.RootResource will be ignored. 2024-02-14 11:50:39 Feb 14, 2024 10:50:39 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime 2024-02-14 11:50:39 WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. 
Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource will be ignored. 2024-02-14 11:50:39 Feb 14, 2024 10:50:39 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime 2024-02-14 11:50:39 WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.InternalConnectResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.InternalConnectResource will be ignored. 2024-02-14 11:50:39 Feb 14, 2024 10:50:39 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime 2024-02-14 11:50:39 WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource will be ignored. 2024-02-14 11:50:39 Feb 14, 2024 10:50:39 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime 2024-02-14 11:50:39 WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.LoggingResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.LoggingResource will be ignored. 2024-02-14 11:50:39 Feb 14, 2024 10:50:39 AM org.glassfish.jersey.internal.Errors logErrors 2024-02-14 11:50:39 WARNING: The following warnings have been detected: WARNING: The (sub)resource method listLoggers in org.apache.kafka.connect.runtime.rest.resources.LoggingResource contains empty path annotation. 2024-02-14 11:50:39 WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation. 2024-02-14 11:50:39 WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation. 2024-02-14 11:50:39 WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation. 2024-02-14 11:50:39 WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation. 
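At this point the worker's REST server is up on the advertised address (http://192.168.0.6:8083 per the entries above), and the Jersey "provider ... will be ignored" and "empty path annotation" warnings are routine registration noise for Kafka Connect's built-in REST resources, not errors. As a minimal sketch (assuming the worker is reachable from wherever you run this; the base URL and the probe script are illustrative and not part of this deployment), the resources named in the warnings can be exercised directly:

    import json
    import urllib.request

    BASE_URL = "http://192.168.0.6:8083"  # advertised URI from the log; adjust for your host

    # GET /                  -> RootResource (server info: version, commit, Kafka cluster id)
    # GET /connectors        -> ConnectorsResource (connectors currently registered on the worker)
    # GET /connector-plugins -> ConnectorPluginsResource (installed plugins, e.g. PostgresConnector)
    for path in ("/", "/connectors", "/connector-plugins"):
        with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
            body = json.loads(resp.read().decode("utf-8"))
            print(path, "->", json.dumps(body)[:200])

An empty list from /connectors is expected on a fresh worker like this one, since no connector has been registered yet.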
2024-02-14 11:50:39 2024-02-14 11:50:39 2024-02-14 10:50:39,713 INFO || Started o.e.j.s.ServletContextHandler@62b9757d{/,null,AVAILABLE} [org.eclipse.jetty.server.handler.ContextHandler] 2024-02-14 11:50:39 2024-02-14 10:50:39,713 INFO || REST resources initialized; server is started and ready to handle requests [org.apache.kafka.connect.runtime.rest.RestServer] 2024-02-14 11:50:39 2024-02-14 10:50:39,713 INFO || Kafka Connect started [org.apache.kafka.connect.runtime.Connect] 2024-02-14 11:50:39 2024-02-14 10:50:39,724 INFO || Created topic (name=my_connect_offsets, numPartitions=25, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at kafka:29092 [org.apache.kafka.connect.util.TopicAdmin] 2024-02-14 11:50:39 2024-02-14 10:50:39,729 INFO || ProducerConfig values: 2024-02-14 11:50:39 acks = -1 2024-02-14 11:50:39 auto.include.jmx.reporter = true 2024-02-14 11:50:39 batch.size = 16384 2024-02-14 11:50:39 bootstrap.servers = [kafka:29092] 2024-02-14 11:50:39 buffer.memory = 33554432 2024-02-14 11:50:39 client.dns.lookup = use_all_dns_ips 2024-02-14 11:50:39 client.id = 1-offsets 2024-02-14 11:50:39 compression.type = none 2024-02-14 11:50:39 connections.max.idle.ms = 540000 2024-02-14 11:50:39 delivery.timeout.ms = 2147483647 2024-02-14 11:50:39 enable.idempotence = false 2024-02-14 11:50:39 interceptor.classes = [] 2024-02-14 11:50:39 key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer 2024-02-14 11:50:39 linger.ms = 0 2024-02-14 11:50:39 max.block.ms = 60000 2024-02-14 11:50:39 max.in.flight.requests.per.connection = 1 2024-02-14 11:50:39 max.request.size = 1048576 2024-02-14 11:50:39 metadata.max.age.ms = 300000 2024-02-14 11:50:39 metadata.max.idle.ms = 300000 2024-02-14 11:50:39 metric.reporters = [] 2024-02-14 11:50:39 metrics.num.samples = 2 2024-02-14 11:50:39 metrics.recording.level = INFO 2024-02-14 11:50:39 metrics.sample.window.ms = 30000 2024-02-14 11:50:39 partitioner.adaptive.partitioning.enable = true 2024-02-14 11:50:39 partitioner.availability.timeout.ms = 0 2024-02-14 11:50:39 partitioner.class = null 2024-02-14 11:50:39 partitioner.ignore.keys = false 2024-02-14 11:50:39 receive.buffer.bytes = 32768 2024-02-14 11:50:39 reconnect.backoff.max.ms = 1000 2024-02-14 11:50:39 reconnect.backoff.ms = 50 2024-02-14 11:50:39 request.timeout.ms = 30000 2024-02-14 11:50:39 retries = 2147483647 2024-02-14 11:50:39 retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.client.callback.handler.class = null 2024-02-14 11:50:39 sasl.jaas.config = null 2024-02-14 11:50:39 sasl.kerberos.kinit.cmd = /usr/bin/kinit 2024-02-14 11:50:39 sasl.kerberos.min.time.before.relogin = 60000 2024-02-14 11:50:39 sasl.kerberos.service.name = null 2024-02-14 11:50:39 sasl.kerberos.ticket.renew.jitter = 0.05 2024-02-14 11:50:39 sasl.kerberos.ticket.renew.window.factor = 0.8 2024-02-14 11:50:39 sasl.login.callback.handler.class = null 2024-02-14 11:50:39 sasl.login.class = null 2024-02-14 11:50:39 sasl.login.connect.timeout.ms = null 2024-02-14 11:50:39 sasl.login.read.timeout.ms = null 2024-02-14 11:50:39 sasl.login.refresh.buffer.seconds = 300 2024-02-14 11:50:39 sasl.login.refresh.min.period.seconds = 60 2024-02-14 11:50:39 sasl.login.refresh.window.factor = 0.8 2024-02-14 11:50:39 sasl.login.refresh.window.jitter = 0.05 2024-02-14 11:50:39 sasl.login.retry.backoff.max.ms = 10000 2024-02-14 11:50:39 sasl.login.retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.mechanism = GSSAPI 2024-02-14 11:50:39 sasl.oauthbearer.clock.skew.seconds = 30 
2024-02-14 11:50:39 sasl.oauthbearer.expected.audience = null 2024-02-14 11:50:39 sasl.oauthbearer.expected.issuer = null 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.url = null 2024-02-14 11:50:39 sasl.oauthbearer.scope.claim.name = scope 2024-02-14 11:50:39 sasl.oauthbearer.sub.claim.name = sub 2024-02-14 11:50:39 sasl.oauthbearer.token.endpoint.url = null 2024-02-14 11:50:39 security.protocol = PLAINTEXT 2024-02-14 11:50:39 security.providers = null 2024-02-14 11:50:39 send.buffer.bytes = 131072 2024-02-14 11:50:39 socket.connection.setup.timeout.max.ms = 30000 2024-02-14 11:50:39 socket.connection.setup.timeout.ms = 10000 2024-02-14 11:50:39 ssl.cipher.suites = null 2024-02-14 11:50:39 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 2024-02-14 11:50:39 ssl.endpoint.identification.algorithm = https 2024-02-14 11:50:39 ssl.engine.factory.class = null 2024-02-14 11:50:39 ssl.key.password = null 2024-02-14 11:50:39 ssl.keymanager.algorithm = SunX509 2024-02-14 11:50:39 ssl.keystore.certificate.chain = null 2024-02-14 11:50:39 ssl.keystore.key = null 2024-02-14 11:50:39 ssl.keystore.location = null 2024-02-14 11:50:39 ssl.keystore.password = null 2024-02-14 11:50:39 ssl.keystore.type = JKS 2024-02-14 11:50:39 ssl.protocol = TLSv1.3 2024-02-14 11:50:39 ssl.provider = null 2024-02-14 11:50:39 ssl.secure.random.implementation = null 2024-02-14 11:50:39 ssl.trustmanager.algorithm = PKIX 2024-02-14 11:50:39 ssl.truststore.certificates = null 2024-02-14 11:50:39 ssl.truststore.location = null 2024-02-14 11:50:39 ssl.truststore.password = null 2024-02-14 11:50:39 ssl.truststore.type = JKS 2024-02-14 11:50:39 transaction.timeout.ms = 60000 2024-02-14 11:50:39 transactional.id = null 2024-02-14 11:50:39 value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer 2024-02-14 11:50:39 [org.apache.kafka.clients.producer.ProducerConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,737 INFO || These configurations '[group.id, rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. 
[org.apache.kafka.clients.producer.ProducerConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,737 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,737 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,737 INFO || Kafka startTimeMs: 1707907839737 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,741 INFO || ConsumerConfig values: 2024-02-14 11:50:39 allow.auto.create.topics = true 2024-02-14 11:50:39 auto.commit.interval.ms = 5000 2024-02-14 11:50:39 auto.include.jmx.reporter = true 2024-02-14 11:50:39 auto.offset.reset = earliest 2024-02-14 11:50:39 bootstrap.servers = [kafka:29092] 2024-02-14 11:50:39 check.crcs = true 2024-02-14 11:50:39 client.dns.lookup = use_all_dns_ips 2024-02-14 11:50:39 client.id = 1-offsets 2024-02-14 11:50:39 client.rack = 2024-02-14 11:50:39 connections.max.idle.ms = 540000 2024-02-14 11:50:39 default.api.timeout.ms = 60000 2024-02-14 11:50:39 enable.auto.commit = false 2024-02-14 11:50:39 exclude.internal.topics = true 2024-02-14 11:50:39 fetch.max.bytes = 52428800 2024-02-14 11:50:39 fetch.max.wait.ms = 500 2024-02-14 11:50:39 fetch.min.bytes = 1 2024-02-14 11:50:39 group.id = 1 2024-02-14 11:50:39 group.instance.id = null 2024-02-14 11:50:39 heartbeat.interval.ms = 3000 2024-02-14 11:50:39 interceptor.classes = [] 2024-02-14 11:50:39 internal.leave.group.on.close = true 2024-02-14 11:50:39 internal.throw.on.fetch.stable.offset.unsupported = false 2024-02-14 11:50:39 isolation.level = read_uncommitted 2024-02-14 11:50:39 key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer 2024-02-14 11:50:39 max.partition.fetch.bytes = 1048576 2024-02-14 11:50:39 max.poll.interval.ms = 300000 2024-02-14 11:50:39 max.poll.records = 500 2024-02-14 11:50:39 metadata.max.age.ms = 300000 2024-02-14 11:50:39 metric.reporters = [] 2024-02-14 11:50:39 metrics.num.samples = 2 2024-02-14 11:50:39 metrics.recording.level = INFO 2024-02-14 11:50:39 metrics.sample.window.ms = 30000 2024-02-14 11:50:39 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 2024-02-14 11:50:39 receive.buffer.bytes = 65536 2024-02-14 11:50:39 reconnect.backoff.max.ms = 1000 2024-02-14 11:50:39 reconnect.backoff.ms = 50 2024-02-14 11:50:39 request.timeout.ms = 30000 2024-02-14 11:50:39 retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.client.callback.handler.class = null 2024-02-14 11:50:39 sasl.jaas.config = null 2024-02-14 11:50:39 sasl.kerberos.kinit.cmd = /usr/bin/kinit 2024-02-14 11:50:39 sasl.kerberos.min.time.before.relogin = 60000 2024-02-14 11:50:39 sasl.kerberos.service.name = null 2024-02-14 11:50:39 sasl.kerberos.ticket.renew.jitter = 0.05 2024-02-14 11:50:39 sasl.kerberos.ticket.renew.window.factor = 0.8 2024-02-14 11:50:39 sasl.login.callback.handler.class = null 2024-02-14 11:50:39 sasl.login.class = null 2024-02-14 11:50:39 sasl.login.connect.timeout.ms = null 2024-02-14 11:50:39 sasl.login.read.timeout.ms = null 2024-02-14 11:50:39 sasl.login.refresh.buffer.seconds = 300 2024-02-14 11:50:39 sasl.login.refresh.min.period.seconds = 60 2024-02-14 11:50:39 sasl.login.refresh.window.factor = 0.8 2024-02-14 11:50:39 sasl.login.refresh.window.jitter = 0.05 2024-02-14 11:50:39 sasl.login.retry.backoff.max.ms = 10000 2024-02-14 11:50:39 sasl.login.retry.backoff.ms = 100 2024-02-14 11:50:39 
sasl.mechanism = GSSAPI 2024-02-14 11:50:39 sasl.oauthbearer.clock.skew.seconds = 30 2024-02-14 11:50:39 sasl.oauthbearer.expected.audience = null 2024-02-14 11:50:39 sasl.oauthbearer.expected.issuer = null 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.url = null 2024-02-14 11:50:39 sasl.oauthbearer.scope.claim.name = scope 2024-02-14 11:50:39 sasl.oauthbearer.sub.claim.name = sub 2024-02-14 11:50:39 sasl.oauthbearer.token.endpoint.url = null 2024-02-14 11:50:39 security.protocol = PLAINTEXT 2024-02-14 11:50:39 security.providers = null 2024-02-14 11:50:39 send.buffer.bytes = 131072 2024-02-14 11:50:39 session.timeout.ms = 45000 2024-02-14 11:50:39 socket.connection.setup.timeout.max.ms = 30000 2024-02-14 11:50:39 socket.connection.setup.timeout.ms = 10000 2024-02-14 11:50:39 ssl.cipher.suites = null 2024-02-14 11:50:39 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 2024-02-14 11:50:39 ssl.endpoint.identification.algorithm = https 2024-02-14 11:50:39 ssl.engine.factory.class = null 2024-02-14 11:50:39 ssl.key.password = null 2024-02-14 11:50:39 ssl.keymanager.algorithm = SunX509 2024-02-14 11:50:39 ssl.keystore.certificate.chain = null 2024-02-14 11:50:39 ssl.keystore.key = null 2024-02-14 11:50:39 ssl.keystore.location = null 2024-02-14 11:50:39 ssl.keystore.password = null 2024-02-14 11:50:39 ssl.keystore.type = JKS 2024-02-14 11:50:39 ssl.protocol = TLSv1.3 2024-02-14 11:50:39 ssl.provider = null 2024-02-14 11:50:39 ssl.secure.random.implementation = null 2024-02-14 11:50:39 ssl.trustmanager.algorithm = PKIX 2024-02-14 11:50:39 ssl.truststore.certificates = null 2024-02-14 11:50:39 ssl.truststore.location = null 2024-02-14 11:50:39 ssl.truststore.password = null 2024-02-14 11:50:39 ssl.truststore.type = JKS 2024-02-14 11:50:39 value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer 2024-02-14 11:50:39 [org.apache.kafka.clients.consumer.ConsumerConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,743 INFO || [Producer clientId=1-offsets] Cluster ID: rDanNOBZT4KnWXmQRvMjlQ [org.apache.kafka.clients.Metadata] 2024-02-14 11:50:39 2024-02-14 10:50:39,755 INFO || These configurations '[rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. 
[org.apache.kafka.clients.consumer.ConsumerConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,755 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,755 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,755 INFO || Kafka startTimeMs: 1707907839755 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,759 INFO || [Consumer clientId=1-offsets, groupId=1] Cluster ID: rDanNOBZT4KnWXmQRvMjlQ [org.apache.kafka.clients.Metadata] 2024-02-14 11:50:39 2024-02-14 10:50:39,763 INFO || [Consumer clientId=1-offsets, groupId=1] Assigned to partition(s): my_connect_offsets-0, my_connect_offsets-5, my_connect_offsets-10, my_connect_offsets-20, my_connect_offsets-15, my_connect_offsets-9, my_connect_offsets-11, my_connect_offsets-4, my_connect_offsets-16, my_connect_offsets-17, my_connect_offsets-3, my_connect_offsets-24, my_connect_offsets-23, my_connect_offsets-13, my_connect_offsets-18, my_connect_offsets-22, my_connect_offsets-2, my_connect_offsets-8, my_connect_offsets-12, my_connect_offsets-19, my_connect_offsets-14, my_connect_offsets-1, my_connect_offsets-6, my_connect_offsets-7, my_connect_offsets-21 [org.apache.kafka.clients.consumer.KafkaConsumer] 2024-02-14 11:50:39 2024-02-14 10:50:39,764 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,764 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-5 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,764 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-10 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,764 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-20 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,764 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-15 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,764 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-9 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,764 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-11 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,764 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-4 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,764 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-16 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,764 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-17 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 
10:50:39,764 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-3 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,764 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-24 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,764 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-23 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,764 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-13 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,765 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-18 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,765 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-22 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,765 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-2 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,765 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-8 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,765 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-12 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,765 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-19 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,765 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-14 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,765 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-1 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,765 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-6 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,765 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-7 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,765 INFO || [Consumer clientId=1-offsets, groupId=1] Seeking to earliest offset of partition my_connect_offsets-21 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-2 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. 
[org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-4 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-6 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-8 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-18 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-20 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-22 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-24 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-10 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. 
[org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-12 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-14 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-16 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-3 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-5 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-7 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-9 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,785 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,786 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-19 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. 
[org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,786 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-21 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,786 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-23 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,786 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-11 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,786 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-13 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,786 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-15 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,786 INFO || [Consumer clientId=1-offsets, groupId=1] Resetting offset for partition my_connect_offsets-17 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. 
[org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,786 INFO || Finished reading KafkaBasedLog for topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog] 2024-02-14 11:50:39 2024-02-14 10:50:39,786 INFO || Started KafkaBasedLog for topic my_connect_offsets [org.apache.kafka.connect.util.KafkaBasedLog] 2024-02-14 11:50:39 2024-02-14 10:50:39,786 INFO || Finished reading offsets topic and starting KafkaOffsetBackingStore [org.apache.kafka.connect.storage.KafkaOffsetBackingStore] 2024-02-14 11:50:39 2024-02-14 10:50:39,787 INFO || Worker started [org.apache.kafka.connect.runtime.Worker] 2024-02-14 11:50:39 2024-02-14 10:50:39,788 INFO || Starting KafkaBasedLog with topic my_connect_statuses [org.apache.kafka.connect.util.KafkaBasedLog] 2024-02-14 11:50:39 2024-02-14 10:50:39,829 INFO || Created topic (name=my_connect_statuses, numPartitions=5, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at kafka:29092 [org.apache.kafka.connect.util.TopicAdmin] 2024-02-14 11:50:39 2024-02-14 10:50:39,830 INFO || ProducerConfig values: 2024-02-14 11:50:39 acks = -1 2024-02-14 11:50:39 auto.include.jmx.reporter = true 2024-02-14 11:50:39 batch.size = 16384 2024-02-14 11:50:39 bootstrap.servers = [kafka:29092] 2024-02-14 11:50:39 buffer.memory = 33554432 2024-02-14 11:50:39 client.dns.lookup = use_all_dns_ips 2024-02-14 11:50:39 client.id = 1-statuses 2024-02-14 11:50:39 compression.type = none 2024-02-14 11:50:39 connections.max.idle.ms = 540000 2024-02-14 11:50:39 delivery.timeout.ms = 120000 2024-02-14 11:50:39 enable.idempotence = false 2024-02-14 11:50:39 interceptor.classes = [] 2024-02-14 11:50:39 key.serializer = class org.apache.kafka.common.serialization.StringSerializer 2024-02-14 11:50:39 linger.ms = 0 2024-02-14 11:50:39 max.block.ms = 60000 2024-02-14 11:50:39 max.in.flight.requests.per.connection = 1 2024-02-14 11:50:39 max.request.size = 1048576 2024-02-14 11:50:39 metadata.max.age.ms = 300000 2024-02-14 11:50:39 metadata.max.idle.ms = 300000 2024-02-14 11:50:39 metric.reporters = [] 2024-02-14 11:50:39 metrics.num.samples = 2 2024-02-14 11:50:39 metrics.recording.level = INFO 2024-02-14 11:50:39 metrics.sample.window.ms = 30000 2024-02-14 11:50:39 partitioner.adaptive.partitioning.enable = true 2024-02-14 11:50:39 partitioner.availability.timeout.ms = 0 2024-02-14 11:50:39 partitioner.class = null 2024-02-14 11:50:39 partitioner.ignore.keys = false 2024-02-14 11:50:39 receive.buffer.bytes = 32768 2024-02-14 11:50:39 reconnect.backoff.max.ms = 1000 2024-02-14 11:50:39 reconnect.backoff.ms = 50 2024-02-14 11:50:39 request.timeout.ms = 30000 2024-02-14 11:50:39 retries = 0 2024-02-14 11:50:39 retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.client.callback.handler.class = null 2024-02-14 11:50:39 sasl.jaas.config = null 2024-02-14 11:50:39 sasl.kerberos.kinit.cmd = /usr/bin/kinit 2024-02-14 11:50:39 sasl.kerberos.min.time.before.relogin = 60000 2024-02-14 11:50:39 sasl.kerberos.service.name = null 2024-02-14 11:50:39 sasl.kerberos.ticket.renew.jitter = 0.05 2024-02-14 11:50:39 sasl.kerberos.ticket.renew.window.factor = 0.8 2024-02-14 11:50:39 sasl.login.callback.handler.class = null 2024-02-14 11:50:39 sasl.login.class = null 2024-02-14 11:50:39 sasl.login.connect.timeout.ms = null 2024-02-14 11:50:39 sasl.login.read.timeout.ms = null 2024-02-14 11:50:39 sasl.login.refresh.buffer.seconds = 300 2024-02-14 11:50:39 sasl.login.refresh.min.period.seconds = 60 2024-02-14 11:50:39 
sasl.login.refresh.window.factor = 0.8 2024-02-14 11:50:39 sasl.login.refresh.window.jitter = 0.05 2024-02-14 11:50:39 sasl.login.retry.backoff.max.ms = 10000 2024-02-14 11:50:39 sasl.login.retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.mechanism = GSSAPI 2024-02-14 11:50:39 sasl.oauthbearer.clock.skew.seconds = 30 2024-02-14 11:50:39 sasl.oauthbearer.expected.audience = null 2024-02-14 11:50:39 sasl.oauthbearer.expected.issuer = null 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.url = null 2024-02-14 11:50:39 sasl.oauthbearer.scope.claim.name = scope 2024-02-14 11:50:39 sasl.oauthbearer.sub.claim.name = sub 2024-02-14 11:50:39 sasl.oauthbearer.token.endpoint.url = null 2024-02-14 11:50:39 security.protocol = PLAINTEXT 2024-02-14 11:50:39 security.providers = null 2024-02-14 11:50:39 send.buffer.bytes = 131072 2024-02-14 11:50:39 socket.connection.setup.timeout.max.ms = 30000 2024-02-14 11:50:39 socket.connection.setup.timeout.ms = 10000 2024-02-14 11:50:39 ssl.cipher.suites = null 2024-02-14 11:50:39 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 2024-02-14 11:50:39 ssl.endpoint.identification.algorithm = https 2024-02-14 11:50:39 ssl.engine.factory.class = null 2024-02-14 11:50:39 ssl.key.password = null 2024-02-14 11:50:39 ssl.keymanager.algorithm = SunX509 2024-02-14 11:50:39 ssl.keystore.certificate.chain = null 2024-02-14 11:50:39 ssl.keystore.key = null 2024-02-14 11:50:39 ssl.keystore.location = null 2024-02-14 11:50:39 ssl.keystore.password = null 2024-02-14 11:50:39 ssl.keystore.type = JKS 2024-02-14 11:50:39 ssl.protocol = TLSv1.3 2024-02-14 11:50:39 ssl.provider = null 2024-02-14 11:50:39 ssl.secure.random.implementation = null 2024-02-14 11:50:39 ssl.trustmanager.algorithm = PKIX 2024-02-14 11:50:39 ssl.truststore.certificates = null 2024-02-14 11:50:39 ssl.truststore.location = null 2024-02-14 11:50:39 ssl.truststore.password = null 2024-02-14 11:50:39 ssl.truststore.type = JKS 2024-02-14 11:50:39 transaction.timeout.ms = 60000 2024-02-14 11:50:39 transactional.id = null 2024-02-14 11:50:39 value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer 2024-02-14 11:50:39 [org.apache.kafka.clients.producer.ProducerConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,832 INFO || These configurations '[group.id, rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. 
[org.apache.kafka.clients.producer.ProducerConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,832 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,832 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,833 INFO || Kafka startTimeMs: 1707907839832 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,833 INFO || ConsumerConfig values: 2024-02-14 11:50:39 allow.auto.create.topics = true 2024-02-14 11:50:39 auto.commit.interval.ms = 5000 2024-02-14 11:50:39 auto.include.jmx.reporter = true 2024-02-14 11:50:39 auto.offset.reset = earliest 2024-02-14 11:50:39 bootstrap.servers = [kafka:29092] 2024-02-14 11:50:39 check.crcs = true 2024-02-14 11:50:39 client.dns.lookup = use_all_dns_ips 2024-02-14 11:50:39 client.id = 1-statuses 2024-02-14 11:50:39 client.rack = 2024-02-14 11:50:39 connections.max.idle.ms = 540000 2024-02-14 11:50:39 default.api.timeout.ms = 60000 2024-02-14 11:50:39 enable.auto.commit = false 2024-02-14 11:50:39 exclude.internal.topics = true 2024-02-14 11:50:39 fetch.max.bytes = 52428800 2024-02-14 11:50:39 fetch.max.wait.ms = 500 2024-02-14 11:50:39 fetch.min.bytes = 1 2024-02-14 11:50:39 group.id = 1 2024-02-14 11:50:39 group.instance.id = null 2024-02-14 11:50:39 heartbeat.interval.ms = 3000 2024-02-14 11:50:39 interceptor.classes = [] 2024-02-14 11:50:39 internal.leave.group.on.close = true 2024-02-14 11:50:39 internal.throw.on.fetch.stable.offset.unsupported = false 2024-02-14 11:50:39 isolation.level = read_uncommitted 2024-02-14 11:50:39 key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 2024-02-14 11:50:39 max.partition.fetch.bytes = 1048576 2024-02-14 11:50:39 max.poll.interval.ms = 300000 2024-02-14 11:50:39 max.poll.records = 500 2024-02-14 11:50:39 metadata.max.age.ms = 300000 2024-02-14 11:50:39 metric.reporters = [] 2024-02-14 11:50:39 metrics.num.samples = 2 2024-02-14 11:50:39 metrics.recording.level = INFO 2024-02-14 11:50:39 metrics.sample.window.ms = 30000 2024-02-14 11:50:39 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 2024-02-14 11:50:39 receive.buffer.bytes = 65536 2024-02-14 11:50:39 reconnect.backoff.max.ms = 1000 2024-02-14 11:50:39 reconnect.backoff.ms = 50 2024-02-14 11:50:39 request.timeout.ms = 30000 2024-02-14 11:50:39 retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.client.callback.handler.class = null 2024-02-14 11:50:39 sasl.jaas.config = null 2024-02-14 11:50:39 sasl.kerberos.kinit.cmd = /usr/bin/kinit 2024-02-14 11:50:39 sasl.kerberos.min.time.before.relogin = 60000 2024-02-14 11:50:39 sasl.kerberos.service.name = null 2024-02-14 11:50:39 sasl.kerberos.ticket.renew.jitter = 0.05 2024-02-14 11:50:39 sasl.kerberos.ticket.renew.window.factor = 0.8 2024-02-14 11:50:39 sasl.login.callback.handler.class = null 2024-02-14 11:50:39 sasl.login.class = null 2024-02-14 11:50:39 sasl.login.connect.timeout.ms = null 2024-02-14 11:50:39 sasl.login.read.timeout.ms = null 2024-02-14 11:50:39 sasl.login.refresh.buffer.seconds = 300 2024-02-14 11:50:39 sasl.login.refresh.min.period.seconds = 60 2024-02-14 11:50:39 sasl.login.refresh.window.factor = 0.8 2024-02-14 11:50:39 sasl.login.refresh.window.jitter = 0.05 2024-02-14 11:50:39 sasl.login.retry.backoff.max.ms = 10000 2024-02-14 11:50:39 sasl.login.retry.backoff.ms = 100 2024-02-14 11:50:39 
sasl.mechanism = GSSAPI 2024-02-14 11:50:39 sasl.oauthbearer.clock.skew.seconds = 30 2024-02-14 11:50:39 sasl.oauthbearer.expected.audience = null 2024-02-14 11:50:39 sasl.oauthbearer.expected.issuer = null 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.url = null 2024-02-14 11:50:39 sasl.oauthbearer.scope.claim.name = scope 2024-02-14 11:50:39 sasl.oauthbearer.sub.claim.name = sub 2024-02-14 11:50:39 sasl.oauthbearer.token.endpoint.url = null 2024-02-14 11:50:39 security.protocol = PLAINTEXT 2024-02-14 11:50:39 security.providers = null 2024-02-14 11:50:39 send.buffer.bytes = 131072 2024-02-14 11:50:39 session.timeout.ms = 45000 2024-02-14 11:50:39 socket.connection.setup.timeout.max.ms = 30000 2024-02-14 11:50:39 socket.connection.setup.timeout.ms = 10000 2024-02-14 11:50:39 ssl.cipher.suites = null 2024-02-14 11:50:39 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 2024-02-14 11:50:39 ssl.endpoint.identification.algorithm = https 2024-02-14 11:50:39 ssl.engine.factory.class = null 2024-02-14 11:50:39 ssl.key.password = null 2024-02-14 11:50:39 ssl.keymanager.algorithm = SunX509 2024-02-14 11:50:39 ssl.keystore.certificate.chain = null 2024-02-14 11:50:39 ssl.keystore.key = null 2024-02-14 11:50:39 ssl.keystore.location = null 2024-02-14 11:50:39 ssl.keystore.password = null 2024-02-14 11:50:39 ssl.keystore.type = JKS 2024-02-14 11:50:39 ssl.protocol = TLSv1.3 2024-02-14 11:50:39 ssl.provider = null 2024-02-14 11:50:39 ssl.secure.random.implementation = null 2024-02-14 11:50:39 ssl.trustmanager.algorithm = PKIX 2024-02-14 11:50:39 ssl.truststore.certificates = null 2024-02-14 11:50:39 ssl.truststore.location = null 2024-02-14 11:50:39 ssl.truststore.password = null 2024-02-14 11:50:39 ssl.truststore.type = JKS 2024-02-14 11:50:39 value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer 2024-02-14 11:50:39 [org.apache.kafka.clients.consumer.ConsumerConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,835 INFO || These configurations '[rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. 
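The matching ConsumerConfig (client.id=1-statuses) reads the status topic from the beginning with auto-commit disabled; this is how KafkaBasedLog replays the whole topic on every worker start. A minimal sketch of reading the same topic that way from Python (same kafka-python assumption, illustrative only):

    # Replay my_connect_statuses from the earliest offset, mirroring
    # auto.offset.reset=earliest and enable.auto.commit=false above.
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "my_connect_statuses",
        bootstrap_servers="kafka:29092",
        auto_offset_reset="earliest",
        enable_auto_commit=False,
        consumer_timeout_ms=5000,   # stop iterating once the topic has been drained
    )
    for record in consumer:
        # Keys are strings, values JSON bytes (String/ByteArray deserializers above).
        print(record.partition, record.offset, record.key, record.value)
    consumer.close()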
[org.apache.kafka.clients.consumer.ConsumerConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,835 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,835 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,835 INFO || Kafka startTimeMs: 1707907839835 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,835 INFO || [Producer clientId=1-statuses] Cluster ID: rDanNOBZT4KnWXmQRvMjlQ [org.apache.kafka.clients.Metadata] 2024-02-14 11:50:39 2024-02-14 10:50:39,839 INFO || [Consumer clientId=1-statuses, groupId=1] Cluster ID: rDanNOBZT4KnWXmQRvMjlQ [org.apache.kafka.clients.Metadata] 2024-02-14 11:50:39 2024-02-14 10:50:39,839 INFO || [Consumer clientId=1-statuses, groupId=1] Assigned to partition(s): my_connect_statuses-0, my_connect_statuses-1, my_connect_statuses-4, my_connect_statuses-2, my_connect_statuses-3 [org.apache.kafka.clients.consumer.KafkaConsumer] 2024-02-14 11:50:39 2024-02-14 10:50:39,839 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition my_connect_statuses-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,839 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition my_connect_statuses-1 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,839 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition my_connect_statuses-4 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,839 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition my_connect_statuses-2 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,839 INFO || [Consumer clientId=1-statuses, groupId=1] Seeking to earliest offset of partition my_connect_statuses-3 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,844 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition my_connect_statuses-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,844 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition my_connect_statuses-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,844 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition my_connect_statuses-2 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,844 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition my_connect_statuses-3 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. 
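Here the consumer has been handed all five my_connect_statuses partitions and every FetchPosition is offset 0, i.e. the freshly created topic is still empty. One way to confirm that from outside the worker is to diff beginning and end offsets per partition (sketch, same kafka-python assumptions as above):

    # Count the records currently in my_connect_statuses; on a fresh topic this
    # prints 0 for every partition, matching the offset=0 positions above.
    from kafka import KafkaConsumer, TopicPartition

    consumer = KafkaConsumer(bootstrap_servers="kafka:29092")
    parts = [TopicPartition("my_connect_statuses", p)
             for p in sorted(consumer.partitions_for_topic("my_connect_statuses"))]
    begin = consumer.beginning_offsets(parts)
    end = consumer.end_offsets(parts)
    for tp in parts:
        print(tp.partition, end[tp] - begin[tp])
    consumer.close()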
[org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,844 INFO || [Consumer clientId=1-statuses, groupId=1] Resetting offset for partition my_connect_statuses-4 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,845 INFO || Finished reading KafkaBasedLog for topic my_connect_statuses [org.apache.kafka.connect.util.KafkaBasedLog] 2024-02-14 11:50:39 2024-02-14 10:50:39,845 INFO || Started KafkaBasedLog for topic my_connect_statuses [org.apache.kafka.connect.util.KafkaBasedLog] 2024-02-14 11:50:39 2024-02-14 10:50:39,848 INFO || Starting KafkaConfigBackingStore [org.apache.kafka.connect.storage.KafkaConfigBackingStore] 2024-02-14 11:50:39 2024-02-14 10:50:39,848 INFO || Starting KafkaBasedLog with topic my_connect_configs [org.apache.kafka.connect.util.KafkaBasedLog] 2024-02-14 11:50:39 2024-02-14 10:50:39,876 INFO || Created topic (name=my_connect_configs, numPartitions=1, replicationFactor=1, replicasAssignments=null, configs={cleanup.policy=compact}) on brokers at kafka:29092 [org.apache.kafka.connect.util.TopicAdmin] 2024-02-14 11:50:39 2024-02-14 10:50:39,876 INFO || ProducerConfig values: 2024-02-14 11:50:39 acks = -1 2024-02-14 11:50:39 auto.include.jmx.reporter = true 2024-02-14 11:50:39 batch.size = 16384 2024-02-14 11:50:39 bootstrap.servers = [kafka:29092] 2024-02-14 11:50:39 buffer.memory = 33554432 2024-02-14 11:50:39 client.dns.lookup = use_all_dns_ips 2024-02-14 11:50:39 client.id = 1-configs 2024-02-14 11:50:39 compression.type = none 2024-02-14 11:50:39 connections.max.idle.ms = 540000 2024-02-14 11:50:39 delivery.timeout.ms = 2147483647 2024-02-14 11:50:39 enable.idempotence = false 2024-02-14 11:50:39 interceptor.classes = [] 2024-02-14 11:50:39 key.serializer = class org.apache.kafka.common.serialization.StringSerializer 2024-02-14 11:50:39 linger.ms = 0 2024-02-14 11:50:39 max.block.ms = 60000 2024-02-14 11:50:39 max.in.flight.requests.per.connection = 1 2024-02-14 11:50:39 max.request.size = 1048576 2024-02-14 11:50:39 metadata.max.age.ms = 300000 2024-02-14 11:50:39 metadata.max.idle.ms = 300000 2024-02-14 11:50:39 metric.reporters = [] 2024-02-14 11:50:39 metrics.num.samples = 2 2024-02-14 11:50:39 metrics.recording.level = INFO 2024-02-14 11:50:39 metrics.sample.window.ms = 30000 2024-02-14 11:50:39 partitioner.adaptive.partitioning.enable = true 2024-02-14 11:50:39 partitioner.availability.timeout.ms = 0 2024-02-14 11:50:39 partitioner.class = null 2024-02-14 11:50:39 partitioner.ignore.keys = false 2024-02-14 11:50:39 receive.buffer.bytes = 32768 2024-02-14 11:50:39 reconnect.backoff.max.ms = 1000 2024-02-14 11:50:39 reconnect.backoff.ms = 50 2024-02-14 11:50:39 request.timeout.ms = 30000 2024-02-14 11:50:39 retries = 2147483647 2024-02-14 11:50:39 retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.client.callback.handler.class = null 2024-02-14 11:50:39 sasl.jaas.config = null 2024-02-14 11:50:39 sasl.kerberos.kinit.cmd = /usr/bin/kinit 2024-02-14 11:50:39 sasl.kerberos.min.time.before.relogin = 60000 2024-02-14 11:50:39 sasl.kerberos.service.name = null 2024-02-14 11:50:39 sasl.kerberos.ticket.renew.jitter = 0.05 2024-02-14 11:50:39 sasl.kerberos.ticket.renew.window.factor = 0.8 2024-02-14 11:50:39 sasl.login.callback.handler.class = null 2024-02-14 11:50:39 sasl.login.class = null 2024-02-14 11:50:39 
sasl.login.connect.timeout.ms = null 2024-02-14 11:50:39 sasl.login.read.timeout.ms = null 2024-02-14 11:50:39 sasl.login.refresh.buffer.seconds = 300 2024-02-14 11:50:39 sasl.login.refresh.min.period.seconds = 60 2024-02-14 11:50:39 sasl.login.refresh.window.factor = 0.8 2024-02-14 11:50:39 sasl.login.refresh.window.jitter = 0.05 2024-02-14 11:50:39 sasl.login.retry.backoff.max.ms = 10000 2024-02-14 11:50:39 sasl.login.retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.mechanism = GSSAPI 2024-02-14 11:50:39 sasl.oauthbearer.clock.skew.seconds = 30 2024-02-14 11:50:39 sasl.oauthbearer.expected.audience = null 2024-02-14 11:50:39 sasl.oauthbearer.expected.issuer = null 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.url = null 2024-02-14 11:50:39 sasl.oauthbearer.scope.claim.name = scope 2024-02-14 11:50:39 sasl.oauthbearer.sub.claim.name = sub 2024-02-14 11:50:39 sasl.oauthbearer.token.endpoint.url = null 2024-02-14 11:50:39 security.protocol = PLAINTEXT 2024-02-14 11:50:39 security.providers = null 2024-02-14 11:50:39 send.buffer.bytes = 131072 2024-02-14 11:50:39 socket.connection.setup.timeout.max.ms = 30000 2024-02-14 11:50:39 socket.connection.setup.timeout.ms = 10000 2024-02-14 11:50:39 ssl.cipher.suites = null 2024-02-14 11:50:39 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 2024-02-14 11:50:39 ssl.endpoint.identification.algorithm = https 2024-02-14 11:50:39 ssl.engine.factory.class = null 2024-02-14 11:50:39 ssl.key.password = null 2024-02-14 11:50:39 ssl.keymanager.algorithm = SunX509 2024-02-14 11:50:39 ssl.keystore.certificate.chain = null 2024-02-14 11:50:39 ssl.keystore.key = null 2024-02-14 11:50:39 ssl.keystore.location = null 2024-02-14 11:50:39 ssl.keystore.password = null 2024-02-14 11:50:39 ssl.keystore.type = JKS 2024-02-14 11:50:39 ssl.protocol = TLSv1.3 2024-02-14 11:50:39 ssl.provider = null 2024-02-14 11:50:39 ssl.secure.random.implementation = null 2024-02-14 11:50:39 ssl.trustmanager.algorithm = PKIX 2024-02-14 11:50:39 ssl.truststore.certificates = null 2024-02-14 11:50:39 ssl.truststore.location = null 2024-02-14 11:50:39 ssl.truststore.password = null 2024-02-14 11:50:39 ssl.truststore.type = JKS 2024-02-14 11:50:39 transaction.timeout.ms = 60000 2024-02-14 11:50:39 transactional.id = null 2024-02-14 11:50:39 value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer 2024-02-14 11:50:39 [org.apache.kafka.clients.producer.ProducerConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,877 INFO || These configurations '[group.id, rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. 
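The config storage topic my_connect_configs is created with a single partition and cleanup.policy=compact (connector configuration records need one total order), and its producer (client.id=1-configs) is set to retry effectively forever (retries and delivery.timeout.ms both 2147483647). A quick sketch for checking the partition layout of the internal topics (illustrative, kafka-python assumed; the my_connect_offsets partition count is not shown in this part of the log, so it is only printed):

    # Confirm partition counts of the Connect internal topics: per the creation
    # messages above, my_connect_configs has 1 partition, my_connect_statuses has 5.
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(bootstrap_servers="kafka:29092")
    for topic in ("my_connect_configs", "my_connect_statuses", "my_connect_offsets"):
        parts = consumer.partitions_for_topic(topic) or set()
        print(topic, len(parts))
    consumer.close()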
[org.apache.kafka.clients.producer.ProducerConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,877 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,877 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,877 INFO || Kafka startTimeMs: 1707907839877 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,878 INFO || ConsumerConfig values: 2024-02-14 11:50:39 allow.auto.create.topics = true 2024-02-14 11:50:39 auto.commit.interval.ms = 5000 2024-02-14 11:50:39 auto.include.jmx.reporter = true 2024-02-14 11:50:39 auto.offset.reset = earliest 2024-02-14 11:50:39 bootstrap.servers = [kafka:29092] 2024-02-14 11:50:39 check.crcs = true 2024-02-14 11:50:39 client.dns.lookup = use_all_dns_ips 2024-02-14 11:50:39 client.id = 1-configs 2024-02-14 11:50:39 client.rack = 2024-02-14 11:50:39 connections.max.idle.ms = 540000 2024-02-14 11:50:39 default.api.timeout.ms = 60000 2024-02-14 11:50:39 enable.auto.commit = false 2024-02-14 11:50:39 exclude.internal.topics = true 2024-02-14 11:50:39 fetch.max.bytes = 52428800 2024-02-14 11:50:39 fetch.max.wait.ms = 500 2024-02-14 11:50:39 fetch.min.bytes = 1 2024-02-14 11:50:39 group.id = 1 2024-02-14 11:50:39 group.instance.id = null 2024-02-14 11:50:39 heartbeat.interval.ms = 3000 2024-02-14 11:50:39 interceptor.classes = [] 2024-02-14 11:50:39 internal.leave.group.on.close = true 2024-02-14 11:50:39 internal.throw.on.fetch.stable.offset.unsupported = false 2024-02-14 11:50:39 isolation.level = read_uncommitted 2024-02-14 11:50:39 key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 2024-02-14 11:50:39 max.partition.fetch.bytes = 1048576 2024-02-14 11:50:39 max.poll.interval.ms = 300000 2024-02-14 11:50:39 max.poll.records = 500 2024-02-14 11:50:39 metadata.max.age.ms = 300000 2024-02-14 11:50:39 metric.reporters = [] 2024-02-14 11:50:39 metrics.num.samples = 2 2024-02-14 11:50:39 metrics.recording.level = INFO 2024-02-14 11:50:39 metrics.sample.window.ms = 30000 2024-02-14 11:50:39 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 2024-02-14 11:50:39 receive.buffer.bytes = 65536 2024-02-14 11:50:39 reconnect.backoff.max.ms = 1000 2024-02-14 11:50:39 reconnect.backoff.ms = 50 2024-02-14 11:50:39 request.timeout.ms = 30000 2024-02-14 11:50:39 retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.client.callback.handler.class = null 2024-02-14 11:50:39 sasl.jaas.config = null 2024-02-14 11:50:39 sasl.kerberos.kinit.cmd = /usr/bin/kinit 2024-02-14 11:50:39 sasl.kerberos.min.time.before.relogin = 60000 2024-02-14 11:50:39 sasl.kerberos.service.name = null 2024-02-14 11:50:39 sasl.kerberos.ticket.renew.jitter = 0.05 2024-02-14 11:50:39 sasl.kerberos.ticket.renew.window.factor = 0.8 2024-02-14 11:50:39 sasl.login.callback.handler.class = null 2024-02-14 11:50:39 sasl.login.class = null 2024-02-14 11:50:39 sasl.login.connect.timeout.ms = null 2024-02-14 11:50:39 sasl.login.read.timeout.ms = null 2024-02-14 11:50:39 sasl.login.refresh.buffer.seconds = 300 2024-02-14 11:50:39 sasl.login.refresh.min.period.seconds = 60 2024-02-14 11:50:39 sasl.login.refresh.window.factor = 0.8 2024-02-14 11:50:39 sasl.login.refresh.window.jitter = 0.05 2024-02-14 11:50:39 sasl.login.retry.backoff.max.ms = 10000 2024-02-14 11:50:39 sasl.login.retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.mechanism 
= GSSAPI 2024-02-14 11:50:39 sasl.oauthbearer.clock.skew.seconds = 30 2024-02-14 11:50:39 sasl.oauthbearer.expected.audience = null 2024-02-14 11:50:39 sasl.oauthbearer.expected.issuer = null 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 2024-02-14 11:50:39 sasl.oauthbearer.jwks.endpoint.url = null 2024-02-14 11:50:39 sasl.oauthbearer.scope.claim.name = scope 2024-02-14 11:50:39 sasl.oauthbearer.sub.claim.name = sub 2024-02-14 11:50:39 sasl.oauthbearer.token.endpoint.url = null 2024-02-14 11:50:39 security.protocol = PLAINTEXT 2024-02-14 11:50:39 security.providers = null 2024-02-14 11:50:39 send.buffer.bytes = 131072 2024-02-14 11:50:39 session.timeout.ms = 45000 2024-02-14 11:50:39 socket.connection.setup.timeout.max.ms = 30000 2024-02-14 11:50:39 socket.connection.setup.timeout.ms = 10000 2024-02-14 11:50:39 ssl.cipher.suites = null 2024-02-14 11:50:39 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 2024-02-14 11:50:39 ssl.endpoint.identification.algorithm = https 2024-02-14 11:50:39 ssl.engine.factory.class = null 2024-02-14 11:50:39 ssl.key.password = null 2024-02-14 11:50:39 ssl.keymanager.algorithm = SunX509 2024-02-14 11:50:39 ssl.keystore.certificate.chain = null 2024-02-14 11:50:39 ssl.keystore.key = null 2024-02-14 11:50:39 ssl.keystore.location = null 2024-02-14 11:50:39 ssl.keystore.password = null 2024-02-14 11:50:39 ssl.keystore.type = JKS 2024-02-14 11:50:39 ssl.protocol = TLSv1.3 2024-02-14 11:50:39 ssl.provider = null 2024-02-14 11:50:39 ssl.secure.random.implementation = null 2024-02-14 11:50:39 ssl.trustmanager.algorithm = PKIX 2024-02-14 11:50:39 ssl.truststore.certificates = null 2024-02-14 11:50:39 ssl.truststore.location = null 2024-02-14 11:50:39 ssl.truststore.password = null 2024-02-14 11:50:39 ssl.truststore.type = JKS 2024-02-14 11:50:39 value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer 2024-02-14 11:50:39 [org.apache.kafka.clients.consumer.ConsumerConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,880 INFO || These configurations '[rest.advertised.port, task.shutdown.graceful.timeout.ms, plugin.path, metrics.context.connect.kafka.cluster.id, status.storage.replication.factor, offset.storage.topic, value.converter, key.converter, config.storage.topic, metrics.context.connect.group.id, rest.advertised.host.name, status.storage.topic, rest.host.name, offset.flush.timeout.ms, config.storage.replication.factor, offset.flush.interval.ms, rest.port, key.converter.schemas.enable, value.converter.schemas.enable, offset.storage.replication.factor]' were supplied but are not used yet. 
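The recurring "These configurations ... were supplied but are not used yet" notices are informational: the worker hands its full configuration to every internal client it builds, and the Java client simply lists the keys it does not recognise. Note that group.id appears in the producer's list but not in this consumer's list, because the consumer does consume that key. A toy illustration of the same bookkeeping (the key sets below are abbreviated stand-ins, not the real client key sets):

    # Toy illustration of the "supplied but are not used yet" report.
    consumer_known = {"bootstrap.servers", "group.id", "client.id",
                      "auto.offset.reset", "enable.auto.commit"}
    supplied = {"bootstrap.servers", "group.id", "client.id",
                "plugin.path", "rest.port", "offset.flush.interval.ms",
                "key.converter", "value.converter"}
    print(sorted(supplied - consumer_known))
    # -> ['key.converter', 'offset.flush.interval.ms', 'plugin.path', 'rest.port', 'value.converter']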
[org.apache.kafka.clients.consumer.ConsumerConfig] 2024-02-14 11:50:39 2024-02-14 10:50:39,880 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,880 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,880 INFO || Kafka startTimeMs: 1707907839880 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:39 2024-02-14 10:50:39,881 INFO || [Producer clientId=1-configs] Cluster ID: rDanNOBZT4KnWXmQRvMjlQ [org.apache.kafka.clients.Metadata] 2024-02-14 11:50:39 2024-02-14 10:50:39,882 INFO || [Consumer clientId=1-configs, groupId=1] Cluster ID: rDanNOBZT4KnWXmQRvMjlQ [org.apache.kafka.clients.Metadata] 2024-02-14 11:50:39 2024-02-14 10:50:39,883 INFO || [Consumer clientId=1-configs, groupId=1] Assigned to partition(s): my_connect_configs-0 [org.apache.kafka.clients.consumer.KafkaConsumer] 2024-02-14 11:50:39 2024-02-14 10:50:39,883 INFO || [Consumer clientId=1-configs, groupId=1] Seeking to earliest offset of partition my_connect_configs-0 [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,887 INFO || [Consumer clientId=1-configs, groupId=1] Resetting offset for partition my_connect_configs-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:29092 (id: 1001 rack: null)], epoch=0}}. [org.apache.kafka.clients.consumer.internals.SubscriptionState] 2024-02-14 11:50:39 2024-02-14 10:50:39,887 INFO || Finished reading KafkaBasedLog for topic my_connect_configs [org.apache.kafka.connect.util.KafkaBasedLog] 2024-02-14 11:50:39 2024-02-14 10:50:39,887 INFO || Started KafkaBasedLog for topic my_connect_configs [org.apache.kafka.connect.util.KafkaBasedLog] 2024-02-14 11:50:39 2024-02-14 10:50:39,887 INFO || Started KafkaConfigBackingStore [org.apache.kafka.connect.storage.KafkaConfigBackingStore] 2024-02-14 11:50:39 2024-02-14 10:50:39,887 INFO || [Worker clientId=connect-1, groupId=1] Herder started [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:39 2024-02-14 10:50:39,892 INFO || [Worker clientId=connect-1, groupId=1] Cluster ID: rDanNOBZT4KnWXmQRvMjlQ [org.apache.kafka.clients.Metadata] 2024-02-14 11:50:40 2024-02-14 10:50:40,201 INFO || [Worker clientId=connect-1, groupId=1] Discovered group coordinator kafka:29092 (id: 2147482646 rack: null) [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2024-02-14 11:50:40 2024-02-14 10:50:40,203 INFO || [Worker clientId=connect-1, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2024-02-14 11:50:40 2024-02-14 10:50:40,203 INFO || [Worker clientId=connect-1, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2024-02-14 11:50:40 2024-02-14 10:50:40,214 INFO || [Worker clientId=connect-1, groupId=1] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2024-02-14 11:50:40 2024-02-14 10:50:40,214 INFO || [Worker clientId=connect-1, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2024-02-14 11:50:43 2024-02-14 10:50:43,223 INFO || [Worker clientId=connect-1, groupId=1] Successfully joined group with generation Generation{generationId=1, memberId='connect-1-15b502fc-41f1-44c7-a722-4e2a5c0d6e60', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2024-02-14 11:50:43 2024-02-14 10:50:43,257 INFO || [Worker clientId=connect-1, groupId=1] Successfully synced group in generation Generation{generationId=1, memberId='connect-1-15b502fc-41f1-44c7-a722-4e2a5c0d6e60', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2024-02-14 11:50:43 2024-02-14 10:50:43,257 INFO || [Worker clientId=connect-1, groupId=1] Joined group at generation 1 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-15b502fc-41f1-44c7-a722-4e2a5c0d6e60', leaderUrl='http://192.168.0.6:8083/', offset=-1, connectorIds=[], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:43 2024-02-14 10:50:43,257 INFO || [Worker clientId=connect-1, groupId=1] Starting connectors and tasks using config offset -1 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:43 2024-02-14 10:50:43,257 INFO || [Worker clientId=connect-1, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:43 2024-02-14 10:50:43,296 INFO || [Worker clientId=connect-1, groupId=1] Session key updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:57 2024-02-14 10:50:57,606 INFO || Loading the custom source info struct maker plugin: io.debezium.connector.mysql.MySqlSourceInfoStructMaker [io.debezium.config.CommonConnectorConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,611 INFO || Using io.debezium.connector.mysql.strategy.mysql.MySqlConnectorAdapter [io.debezium.connector.mysql.MySqlConnectorConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,686 INFO || Successfully tested connection for jdbc:mysql://mariadb-slave:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=CONVERT_TO_NULL&connectTimeout=30000 with user 'replication' [io.debezium.connector.mysql.MySqlConnector] 2024-02-14 11:50:57 2024-02-14 10:50:57,688 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection] 2024-02-14 11:50:57 2024-02-14 10:50:57,690 INFO || AbstractConfig values: 2024-02-14 11:50:57 [org.apache.kafka.common.config.AbstractConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,696 INFO || [Worker clientId=connect-1, groupId=1] Connector employee config updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:57 2024-02-14 10:50:57,697 INFO || [Worker clientId=connect-1, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2024-02-14 11:50:57 2024-02-14 10:50:57,697 INFO || [Worker clientId=connect-1, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2024-02-14 11:50:57 2024-02-14 10:50:57,699 INFO || [Worker clientId=connect-1, groupId=1] 
Successfully joined group with generation Generation{generationId=2, memberId='connect-1-15b502fc-41f1-44c7-a722-4e2a5c0d6e60', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2024-02-14 11:50:57 2024-02-14 10:50:57,702 INFO || [Worker clientId=connect-1, groupId=1] Successfully synced group in generation Generation{generationId=2, memberId='connect-1-15b502fc-41f1-44c7-a722-4e2a5c0d6e60', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2024-02-14 11:50:57 2024-02-14 10:50:57,702 INFO || [Worker clientId=connect-1, groupId=1] Joined group at generation 2 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-15b502fc-41f1-44c7-a722-4e2a5c0d6e60', leaderUrl='http://192.168.0.6:8083/', offset=2, connectorIds=[employee], taskIds=[], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:57 2024-02-14 10:50:57,702 INFO || [Worker clientId=connect-1, groupId=1] Starting connectors and tasks using config offset 2 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:57 2024-02-14 10:50:57,703 INFO || [Worker clientId=connect-1, groupId=1] Starting connector employee [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:57 2024-02-14 10:50:57,704 INFO || Creating connector employee of type io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker] 2024-02-14 11:50:57 2024-02-14 10:50:57,705 INFO || SourceConnectorConfig values: 2024-02-14 11:50:57 config.action.reload = restart 2024-02-14 11:50:57 connector.class = io.debezium.connector.mysql.MySqlConnector 2024-02-14 11:50:57 errors.log.enable = false 2024-02-14 11:50:57 errors.log.include.messages = false 2024-02-14 11:50:57 errors.retry.delay.max.ms = 60000 2024-02-14 11:50:57 errors.retry.timeout = 0 2024-02-14 11:50:57 errors.tolerance = none 2024-02-14 11:50:57 exactly.once.support = requested 2024-02-14 11:50:57 header.converter = null 2024-02-14 11:50:57 key.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 11:50:57 name = employee 2024-02-14 11:50:57 offsets.storage.topic = null 2024-02-14 11:50:57 predicates = [] 2024-02-14 11:50:57 tasks.max = 1 2024-02-14 11:50:57 topic.creation.groups = [] 2024-02-14 11:50:57 transaction.boundary = poll 2024-02-14 11:50:57 transaction.boundary.interval.ms = null 2024-02-14 11:50:57 transforms = [topicName] 2024-02-14 11:50:57 value.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 11:50:57 [org.apache.kafka.connect.runtime.SourceConnectorConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,705 INFO || EnrichedConnectorConfig values: 2024-02-14 11:50:57 config.action.reload = restart 2024-02-14 11:50:57 connector.class = io.debezium.connector.mysql.MySqlConnector 2024-02-14 11:50:57 errors.log.enable = false 2024-02-14 11:50:57 errors.log.include.messages = false 2024-02-14 11:50:57 errors.retry.delay.max.ms = 60000 2024-02-14 11:50:57 errors.retry.timeout = 0 2024-02-14 11:50:57 errors.tolerance = none 2024-02-14 11:50:57 exactly.once.support = requested 2024-02-14 11:50:57 header.converter = null 2024-02-14 11:50:57 key.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 11:50:57 name = employee 2024-02-14 11:50:57 offsets.storage.topic = null 2024-02-14 11:50:57 predicates = [] 2024-02-14 11:50:57 tasks.max = 1 2024-02-14 11:50:57 topic.creation.groups 
= [] 2024-02-14 11:50:57 transaction.boundary = poll 2024-02-14 11:50:57 transaction.boundary.interval.ms = null 2024-02-14 11:50:57 transforms = [topicName] 2024-02-14 11:50:57 transforms.topicName.negate = false 2024-02-14 11:50:57 transforms.topicName.predicate = null 2024-02-14 11:50:57 transforms.topicName.regex = (.*)\.(.*)\.(.*) 2024-02-14 11:50:57 transforms.topicName.replacement = cdc.employee.$3.v1 2024-02-14 11:50:57 transforms.topicName.type = class org.apache.kafka.connect.transforms.RegexRouter 2024-02-14 11:50:57 value.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 11:50:57 [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,708 INFO || Instantiated connector employee with version 2.5.1.Final of type class io.debezium.connector.mysql.MySqlConnector [org.apache.kafka.connect.runtime.Worker] 2024-02-14 11:50:57 2024-02-14 10:50:57,708 INFO || Finished creating connector employee [org.apache.kafka.connect.runtime.Worker] 2024-02-14 11:50:57 2024-02-14 10:50:57,708 INFO || [Worker clientId=connect-1, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:57 2024-02-14 10:50:57,715 INFO || SourceConnectorConfig values: 2024-02-14 11:50:57 config.action.reload = restart 2024-02-14 11:50:57 connector.class = io.debezium.connector.mysql.MySqlConnector 2024-02-14 11:50:57 errors.log.enable = false 2024-02-14 11:50:57 errors.log.include.messages = false 2024-02-14 11:50:57 errors.retry.delay.max.ms = 60000 2024-02-14 11:50:57 errors.retry.timeout = 0 2024-02-14 11:50:57 errors.tolerance = none 2024-02-14 11:50:57 exactly.once.support = requested 2024-02-14 11:50:57 header.converter = null 2024-02-14 11:50:57 key.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 11:50:57 name = employee 2024-02-14 11:50:57 offsets.storage.topic = null 2024-02-14 11:50:57 predicates = [] 2024-02-14 11:50:57 tasks.max = 1 2024-02-14 11:50:57 topic.creation.groups = [] 2024-02-14 11:50:57 transaction.boundary = poll 2024-02-14 11:50:57 transaction.boundary.interval.ms = null 2024-02-14 11:50:57 transforms = [topicName] 2024-02-14 11:50:57 value.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 11:50:57 [org.apache.kafka.connect.runtime.SourceConnectorConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,715 INFO || EnrichedConnectorConfig values: 2024-02-14 11:50:57 config.action.reload = restart 2024-02-14 11:50:57 connector.class = io.debezium.connector.mysql.MySqlConnector 2024-02-14 11:50:57 errors.log.enable = false 2024-02-14 11:50:57 errors.log.include.messages = false 2024-02-14 11:50:57 errors.retry.delay.max.ms = 60000 2024-02-14 11:50:57 errors.retry.timeout = 0 2024-02-14 11:50:57 errors.tolerance = none 2024-02-14 11:50:57 exactly.once.support = requested 2024-02-14 11:50:57 header.converter = null 2024-02-14 11:50:57 key.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 11:50:57 name = employee 2024-02-14 11:50:57 offsets.storage.topic = null 2024-02-14 11:50:57 predicates = [] 2024-02-14 11:50:57 tasks.max = 1 2024-02-14 11:50:57 topic.creation.groups = [] 2024-02-14 11:50:57 transaction.boundary = poll 2024-02-14 11:50:57 transaction.boundary.interval.ms = null 2024-02-14 11:50:57 transforms = [topicName] 2024-02-14 11:50:57 transforms.topicName.negate = false 2024-02-14 11:50:57 transforms.topicName.predicate = null 2024-02-14 11:50:57 transforms.topicName.regex = 
(.*)\.(.*)\.(.*) 2024-02-14 11:50:57 transforms.topicName.replacement = cdc.employee.$3.v1 2024-02-14 11:50:57 transforms.topicName.type = class org.apache.kafka.connect.transforms.RegexRouter 2024-02-14 11:50:57 value.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 11:50:57 [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,729 INFO || 192.168.0.1 - - [14/Feb/2024:10:50:57 +0000] "POST /connectors HTTP/1.1" 201 982 "-" "curl/8.4.0" 205 [org.apache.kafka.connect.runtime.rest.RestServer] 2024-02-14 11:50:57 2024-02-14 10:50:57,735 INFO || [Worker clientId=connect-1, groupId=1] Tasks [employee-0] configs updated [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:57 2024-02-14 10:50:57,736 INFO || [Worker clientId=connect-1, groupId=1] Rebalance started [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2024-02-14 11:50:57 2024-02-14 10:50:57,736 INFO || [Worker clientId=connect-1, groupId=1] (Re-)joining group [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2024-02-14 11:50:57 2024-02-14 10:50:57,737 INFO || [Worker clientId=connect-1, groupId=1] Successfully joined group with generation Generation{generationId=3, memberId='connect-1-15b502fc-41f1-44c7-a722-4e2a5c0d6e60', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2024-02-14 11:50:57 2024-02-14 10:50:57,740 INFO || [Worker clientId=connect-1, groupId=1] Successfully synced group in generation Generation{generationId=3, memberId='connect-1-15b502fc-41f1-44c7-a722-4e2a5c0d6e60', protocol='sessioned'} [org.apache.kafka.connect.runtime.distributed.WorkerCoordinator] 2024-02-14 11:50:57 2024-02-14 10:50:57,740 INFO || [Worker clientId=connect-1, groupId=1] Joined group at generation 3 with protocol version 2 and got assignment: Assignment{error=0, leader='connect-1-15b502fc-41f1-44c7-a722-4e2a5c0d6e60', leaderUrl='http://192.168.0.6:8083/', offset=4, connectorIds=[employee], taskIds=[employee-0], revokedConnectorIds=[], revokedTaskIds=[], delay=0} with rebalance delay: 0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:57 2024-02-14 10:50:57,740 INFO || [Worker clientId=connect-1, groupId=1] Starting connectors and tasks using config offset 4 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:57 2024-02-14 10:50:57,741 INFO || [Worker clientId=connect-1, groupId=1] Starting task employee-0 [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:57 2024-02-14 10:50:57,743 INFO || Creating task employee-0 [org.apache.kafka.connect.runtime.Worker] 2024-02-14 11:50:57 2024-02-14 10:50:57,743 INFO || ConnectorConfig values: 2024-02-14 11:50:57 config.action.reload = restart 2024-02-14 11:50:57 connector.class = io.debezium.connector.mysql.MySqlConnector 2024-02-14 11:50:57 errors.log.enable = false 2024-02-14 11:50:57 errors.log.include.messages = false 2024-02-14 11:50:57 errors.retry.delay.max.ms = 60000 2024-02-14 11:50:57 errors.retry.timeout = 0 2024-02-14 11:50:57 errors.tolerance = none 2024-02-14 11:50:57 header.converter = null 2024-02-14 11:50:57 key.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 11:50:57 name = employee 2024-02-14 11:50:57 predicates = [] 2024-02-14 11:50:57 tasks.max = 1 2024-02-14 11:50:57 transforms = [topicName] 2024-02-14 11:50:57 value.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 
11:50:57 [org.apache.kafka.connect.runtime.ConnectorConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,743 INFO || EnrichedConnectorConfig values: 2024-02-14 11:50:57 config.action.reload = restart 2024-02-14 11:50:57 connector.class = io.debezium.connector.mysql.MySqlConnector 2024-02-14 11:50:57 errors.log.enable = false 2024-02-14 11:50:57 errors.log.include.messages = false 2024-02-14 11:50:57 errors.retry.delay.max.ms = 60000 2024-02-14 11:50:57 errors.retry.timeout = 0 2024-02-14 11:50:57 errors.tolerance = none 2024-02-14 11:50:57 header.converter = null 2024-02-14 11:50:57 key.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 11:50:57 name = employee 2024-02-14 11:50:57 predicates = [] 2024-02-14 11:50:57 tasks.max = 1 2024-02-14 11:50:57 transforms = [topicName] 2024-02-14 11:50:57 transforms.topicName.negate = false 2024-02-14 11:50:57 transforms.topicName.predicate = null 2024-02-14 11:50:57 transforms.topicName.regex = (.*)\.(.*)\.(.*) 2024-02-14 11:50:57 transforms.topicName.replacement = cdc.employee.$3.v1 2024-02-14 11:50:57 transforms.topicName.type = class org.apache.kafka.connect.transforms.RegexRouter 2024-02-14 11:50:57 value.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 11:50:57 [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,745 INFO || TaskConfig values: 2024-02-14 11:50:57 task.class = class io.debezium.connector.mysql.MySqlConnectorTask 2024-02-14 11:50:57 [org.apache.kafka.connect.runtime.TaskConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,746 INFO || Instantiated task employee-0 with version 2.5.1.Final of type io.debezium.connector.mysql.MySqlConnectorTask [org.apache.kafka.connect.runtime.Worker] 2024-02-14 11:50:57 2024-02-14 10:50:57,747 INFO || JsonConverterConfig values: 2024-02-14 11:50:57 converter.type = key 2024-02-14 11:50:57 decimal.format = BASE64 2024-02-14 11:50:57 replace.null.with.default = true 2024-02-14 11:50:57 schemas.cache.size = 1000 2024-02-14 11:50:57 schemas.enable = true 2024-02-14 11:50:57 [org.apache.kafka.connect.json.JsonConverterConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,747 INFO || JsonConverterConfig values: 2024-02-14 11:50:57 converter.type = value 2024-02-14 11:50:57 decimal.format = BASE64 2024-02-14 11:50:57 replace.null.with.default = true 2024-02-14 11:50:57 schemas.cache.size = 1000 2024-02-14 11:50:57 schemas.enable = true 2024-02-14 11:50:57 [org.apache.kafka.connect.json.JsonConverterConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,747 INFO || Set up the key converter class org.apache.kafka.connect.json.JsonConverter for task employee-0 using the connector config [org.apache.kafka.connect.runtime.Worker] 2024-02-14 11:50:57 2024-02-14 10:50:57,747 INFO || Set up the value converter class org.apache.kafka.connect.json.JsonConverter for task employee-0 using the connector config [org.apache.kafka.connect.runtime.Worker] 2024-02-14 11:50:57 2024-02-14 10:50:57,747 INFO || Set up the header converter class org.apache.kafka.connect.storage.SimpleHeaderConverter for task employee-0 using the worker config [org.apache.kafka.connect.runtime.Worker] 2024-02-14 11:50:57 2024-02-14 10:50:57,749 INFO || SourceConnectorConfig values: 2024-02-14 11:50:57 config.action.reload = restart 2024-02-14 11:50:57 connector.class = io.debezium.connector.mysql.MySqlConnector 2024-02-14 11:50:57 errors.log.enable = false 2024-02-14 11:50:57 errors.log.include.messages = false 2024-02-14 11:50:57 errors.retry.delay.max.ms = 
60000 2024-02-14 11:50:57 errors.retry.timeout = 0 2024-02-14 11:50:57 errors.tolerance = none 2024-02-14 11:50:57 exactly.once.support = requested 2024-02-14 11:50:57 header.converter = null 2024-02-14 11:50:57 key.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 11:50:57 name = employee 2024-02-14 11:50:57 offsets.storage.topic = null 2024-02-14 11:50:57 predicates = [] 2024-02-14 11:50:57 tasks.max = 1 2024-02-14 11:50:57 topic.creation.groups = [] 2024-02-14 11:50:57 transaction.boundary = poll 2024-02-14 11:50:57 transaction.boundary.interval.ms = null 2024-02-14 11:50:57 transforms = [topicName] 2024-02-14 11:50:57 value.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 11:50:57 [org.apache.kafka.connect.runtime.SourceConnectorConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,749 INFO || EnrichedConnectorConfig values: 2024-02-14 11:50:57 config.action.reload = restart 2024-02-14 11:50:57 connector.class = io.debezium.connector.mysql.MySqlConnector 2024-02-14 11:50:57 errors.log.enable = false 2024-02-14 11:50:57 errors.log.include.messages = false 2024-02-14 11:50:57 errors.retry.delay.max.ms = 60000 2024-02-14 11:50:57 errors.retry.timeout = 0 2024-02-14 11:50:57 errors.tolerance = none 2024-02-14 11:50:57 exactly.once.support = requested 2024-02-14 11:50:57 header.converter = null 2024-02-14 11:50:57 key.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 11:50:57 name = employee 2024-02-14 11:50:57 offsets.storage.topic = null 2024-02-14 11:50:57 predicates = [] 2024-02-14 11:50:57 tasks.max = 1 2024-02-14 11:50:57 topic.creation.groups = [] 2024-02-14 11:50:57 transaction.boundary = poll 2024-02-14 11:50:57 transaction.boundary.interval.ms = null 2024-02-14 11:50:57 transforms = [topicName] 2024-02-14 11:50:57 transforms.topicName.negate = false 2024-02-14 11:50:57 transforms.topicName.predicate = null 2024-02-14 11:50:57 transforms.topicName.regex = (.*)\.(.*)\.(.*) 2024-02-14 11:50:57 transforms.topicName.replacement = cdc.employee.$3.v1 2024-02-14 11:50:57 transforms.topicName.type = class org.apache.kafka.connect.transforms.RegexRouter 2024-02-14 11:50:57 value.converter = class org.apache.kafka.connect.json.JsonConverter 2024-02-14 11:50:57 [org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,750 INFO || Initializing: org.apache.kafka.connect.runtime.TransformationChain{org.apache.kafka.connect.transforms.RegexRouter} [org.apache.kafka.connect.runtime.Worker] 2024-02-14 11:50:57 2024-02-14 10:50:57,750 INFO || ProducerConfig values: 2024-02-14 11:50:57 acks = -1 2024-02-14 11:50:57 auto.include.jmx.reporter = true 2024-02-14 11:50:57 batch.size = 16384 2024-02-14 11:50:57 bootstrap.servers = [kafka:29092] 2024-02-14 11:50:57 buffer.memory = 33554432 2024-02-14 11:50:57 client.dns.lookup = use_all_dns_ips 2024-02-14 11:50:57 client.id = connector-producer-employee-0 2024-02-14 11:50:57 compression.type = none 2024-02-14 11:50:57 connections.max.idle.ms = 540000 2024-02-14 11:50:57 delivery.timeout.ms = 2147483647 2024-02-14 11:50:57 enable.idempotence = false 2024-02-14 11:50:57 interceptor.classes = [] 2024-02-14 11:50:57 key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer 2024-02-14 11:50:57 linger.ms = 0 2024-02-14 11:50:57 max.block.ms = 9223372036854775807 2024-02-14 11:50:57 max.in.flight.requests.per.connection = 1 2024-02-14 11:50:57 max.request.size = 1048576 2024-02-14 11:50:57 metadata.max.age.ms = 300000 
2024-02-14 11:50:57 metadata.max.idle.ms = 300000 2024-02-14 11:50:57 metric.reporters = [] 2024-02-14 11:50:57 metrics.num.samples = 2 2024-02-14 11:50:57 metrics.recording.level = INFO 2024-02-14 11:50:57 metrics.sample.window.ms = 30000 2024-02-14 11:50:57 partitioner.adaptive.partitioning.enable = true 2024-02-14 11:50:57 partitioner.availability.timeout.ms = 0 2024-02-14 11:50:57 partitioner.class = null 2024-02-14 11:50:57 partitioner.ignore.keys = false 2024-02-14 11:50:57 receive.buffer.bytes = 32768 2024-02-14 11:50:57 reconnect.backoff.max.ms = 1000 2024-02-14 11:50:57 reconnect.backoff.ms = 50 2024-02-14 11:50:57 request.timeout.ms = 30000 2024-02-14 11:50:57 retries = 2147483647 2024-02-14 11:50:57 retry.backoff.ms = 100 2024-02-14 11:50:57 sasl.client.callback.handler.class = null 2024-02-14 11:50:57 sasl.jaas.config = null 2024-02-14 11:50:57 sasl.kerberos.kinit.cmd = /usr/bin/kinit 2024-02-14 11:50:57 sasl.kerberos.min.time.before.relogin = 60000 2024-02-14 11:50:57 sasl.kerberos.service.name = null 2024-02-14 11:50:57 sasl.kerberos.ticket.renew.jitter = 0.05 2024-02-14 11:50:57 sasl.kerberos.ticket.renew.window.factor = 0.8 2024-02-14 11:50:57 sasl.login.callback.handler.class = null 2024-02-14 11:50:57 sasl.login.class = null 2024-02-14 11:50:57 sasl.login.connect.timeout.ms = null 2024-02-14 11:50:57 sasl.login.read.timeout.ms = null 2024-02-14 11:50:57 sasl.login.refresh.buffer.seconds = 300 2024-02-14 11:50:57 sasl.login.refresh.min.period.seconds = 60 2024-02-14 11:50:57 sasl.login.refresh.window.factor = 0.8 2024-02-14 11:50:57 sasl.login.refresh.window.jitter = 0.05 2024-02-14 11:50:57 sasl.login.retry.backoff.max.ms = 10000 2024-02-14 11:50:57 sasl.login.retry.backoff.ms = 100 2024-02-14 11:50:57 sasl.mechanism = GSSAPI 2024-02-14 11:50:57 sasl.oauthbearer.clock.skew.seconds = 30 2024-02-14 11:50:57 sasl.oauthbearer.expected.audience = null 2024-02-14 11:50:57 sasl.oauthbearer.expected.issuer = null 2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.url = null 2024-02-14 11:50:57 sasl.oauthbearer.scope.claim.name = scope 2024-02-14 11:50:57 sasl.oauthbearer.sub.claim.name = sub 2024-02-14 11:50:57 sasl.oauthbearer.token.endpoint.url = null 2024-02-14 11:50:57 security.protocol = PLAINTEXT 2024-02-14 11:50:57 security.providers = null 2024-02-14 11:50:57 send.buffer.bytes = 131072 2024-02-14 11:50:57 socket.connection.setup.timeout.max.ms = 30000 2024-02-14 11:50:57 socket.connection.setup.timeout.ms = 10000 2024-02-14 11:50:57 ssl.cipher.suites = null 2024-02-14 11:50:57 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 2024-02-14 11:50:57 ssl.endpoint.identification.algorithm = https 2024-02-14 11:50:57 ssl.engine.factory.class = null 2024-02-14 11:50:57 ssl.key.password = null 2024-02-14 11:50:57 ssl.keymanager.algorithm = SunX509 2024-02-14 11:50:57 ssl.keystore.certificate.chain = null 2024-02-14 11:50:57 ssl.keystore.key = null 2024-02-14 11:50:57 ssl.keystore.location = null 2024-02-14 11:50:57 ssl.keystore.password = null 2024-02-14 11:50:57 ssl.keystore.type = JKS 2024-02-14 11:50:57 ssl.protocol = TLSv1.3 2024-02-14 11:50:57 ssl.provider = null 2024-02-14 11:50:57 ssl.secure.random.implementation = null 2024-02-14 11:50:57 ssl.trustmanager.algorithm = PKIX 2024-02-14 11:50:57 ssl.truststore.certificates = null 2024-02-14 11:50:57 
ssl.truststore.location = null 2024-02-14 11:50:57 ssl.truststore.password = null 2024-02-14 11:50:57 ssl.truststore.type = JKS 2024-02-14 11:50:57 transaction.timeout.ms = 60000 2024-02-14 11:50:57 transactional.id = null 2024-02-14 11:50:57 value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer 2024-02-14 11:50:57 [org.apache.kafka.clients.producer.ProducerConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,752 INFO || These configurations '[metrics.context.connect.kafka.cluster.id, metrics.context.connect.group.id]' were supplied but are not used yet. [org.apache.kafka.clients.producer.ProducerConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,752 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:57 2024-02-14 10:50:57,752 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:57 2024-02-14 10:50:57,752 INFO || Kafka startTimeMs: 1707907857752 [org.apache.kafka.common.utils.AppInfoParser] 2024-02-14 11:50:57 2024-02-14 10:50:57,754 INFO || [Producer clientId=connector-producer-employee-0] Cluster ID: rDanNOBZT4KnWXmQRvMjlQ [org.apache.kafka.clients.Metadata] 2024-02-14 11:50:57 2024-02-14 10:50:57,757 INFO || [Worker clientId=connect-1, groupId=1] Finished starting connectors and tasks [org.apache.kafka.connect.runtime.distributed.DistributedHerder] 2024-02-14 11:50:57 2024-02-14 10:50:57,758 INFO || Starting MySqlConnectorTask with configuration: [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,758 INFO || connector.class = io.debezium.connector.mysql.MySqlConnector [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,758 INFO || database.user = replication [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,758 INFO || database.server.id = 184054 [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,758 INFO || transforms = topicName [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,758 INFO || transforms.topicName.regex = (.*)\.(.*)\.(.*) [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,758 INFO || schema.history.internal.kafka.bootstrap.servers = kafka:9092 [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,758 INFO || database.port = 3306 [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,758 INFO || include.schema.changes = true [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,758 INFO || key.converter.schemas.enable = true [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,758 INFO || topic.prefix = employee [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,758 INFO || schema.history.internal.kafka.topic = schema-changes.employee [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,758 INFO || task.class = io.debezium.connector.mysql.MySqlConnectorTask [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,759 INFO || database.hostname = mariadb-slave [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,759 INFO || database.password = ******** [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,759 INFO || value.converter.schemas.enable = true 
[io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,759 INFO || name = employee [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,759 INFO || transforms.topicName.type = org.apache.kafka.connect.transforms.RegexRouter [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,759 INFO || transforms.topicName.replacement = cdc.employee.$3.v1 [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,759 INFO || table.include.list = mydb.employee [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,759 INFO || value.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,759 INFO || key.converter = org.apache.kafka.connect.json.JsonConverter [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,759 INFO || database.include.list = mydb [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,759 INFO || snapshot.mode = when_needed [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,759 INFO || Loading the custom source info struct maker plugin: io.debezium.connector.mysql.MySqlSourceInfoStructMaker [io.debezium.config.CommonConnectorConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,759 INFO || Using io.debezium.connector.mysql.strategy.mysql.MySqlConnectorAdapter [io.debezium.connector.mysql.MySqlConnectorConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,760 INFO || Loading the custom topic naming strategy plugin: io.debezium.schema.DefaultTopicNamingStrategy [io.debezium.config.CommonConnectorConfig] 2024-02-14 11:50:57 2024-02-14 10:50:57,771 INFO || No previous offsets found [io.debezium.connector.common.BaseSourceTask] 2024-02-14 11:50:57 2024-02-14 10:50:57,791 INFO || KafkaSchemaHistory Consumer config: {key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, enable.auto.commit=false, group.id=employee-schemahistory, bootstrap.servers=kafka:9092, fetch.min.bytes=1, session.timeout.ms=10000, auto.offset.reset=earliest, client.id=employee-schemahistory} [io.debezium.storage.kafka.history.KafkaSchemaHistory] 2024-02-14 11:50:57 2024-02-14 10:50:57,791 INFO || KafkaSchemaHistory Producer config: {retries=1, value.serializer=org.apache.kafka.common.serialization.StringSerializer, acks=1, batch.size=32768, max.block.ms=10000, bootstrap.servers=kafka:9092, buffer.memory=1048576, key.serializer=org.apache.kafka.common.serialization.StringSerializer, client.id=employee-schemahistory, linger.ms=0} [io.debezium.storage.kafka.history.KafkaSchemaHistory] 2024-02-14 11:50:57 2024-02-14 10:50:57,792 INFO || Requested thread factory for connector MySqlConnector, id = employee named = db-history-config-check [io.debezium.util.Threads] 2024-02-14 11:50:57 2024-02-14 10:50:57,793 INFO || Idempotence will be disabled because acks is set to 1, not set to 'all'. 
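The MySqlConnectorTask configuration printed above (together with the earlier "POST /connectors ... 201" line) corresponds to a registration request along the following lines. This is a reconstruction from the logged values, not the exact payload that was submitted; the database password is redacted in the log, so a placeholder stands in for it, and the REST address is the advertised one seen in this log:

    # Reconstruction (illustrative) of an "employee" registration payload consistent
    # with the configuration dumped above, plus a status check over the Connect REST API.
    import json
    import re
    import urllib.request

    connect_url = "http://192.168.0.6:8083"

    employee = {
        "name": "employee",
        "config": {
            "connector.class": "io.debezium.connector.mysql.MySqlConnector",
            "tasks.max": "1",
            "database.hostname": "mariadb-slave",
            "database.port": "3306",
            "database.user": "replication",
            "database.password": "REPLACE_ME",          # redacted as ******** in the log
            "database.server.id": "184054",
            "database.include.list": "mydb",
            "table.include.list": "mydb.employee",
            "topic.prefix": "employee",
            "include.schema.changes": "true",
            "snapshot.mode": "when_needed",
            "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
            "schema.history.internal.kafka.topic": "schema-changes.employee",
            "key.converter": "org.apache.kafka.connect.json.JsonConverter",
            "value.converter": "org.apache.kafka.connect.json.JsonConverter",
            "key.converter.schemas.enable": "true",
            "value.converter.schemas.enable": "true",
            "transforms": "topicName",
            "transforms.topicName.type": "org.apache.kafka.connect.transforms.RegexRouter",
            "transforms.topicName.regex": r"(.*)\.(.*)\.(.*)",
            "transforms.topicName.replacement": "cdc.employee.$3.v1",
        },
    }

    # The RegexRouter rewrites Debezium's default topic names, e.g.
    # employee.mydb.employee -> cdc.employee.employee.v1 (Python uses \3 where the
    # Connect replacement string uses $3):
    print(re.sub(r"(.*)\.(.*)\.(.*)", r"cdc.employee.\3.v1", "employee.mydb.employee"))

    # POSTing the payload to /connectors is what produced the 201 response earlier;
    # the call is left commented out here to avoid re-submitting the connector.
    request = urllib.request.Request(
        connect_url + "/connectors",
        data=json.dumps(employee).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # urllib.request.urlopen(request)

    # Connector and task state can then be checked over the same REST API:
    with urllib.request.urlopen(connect_url + "/connectors/employee/status") as resp:
        print(json.load(resp))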
2024-02-14 11:50:57 2024-02-14 10:50:57,793 INFO || ProducerConfig values:
2024-02-14 11:50:57 acks = 1
2024-02-14 11:50:57 auto.include.jmx.reporter = true
2024-02-14 11:50:57 batch.size = 32768
2024-02-14 11:50:57 bootstrap.servers = [kafka:9092]
2024-02-14 11:50:57 buffer.memory = 1048576
2024-02-14 11:50:57 client.dns.lookup = use_all_dns_ips
2024-02-14 11:50:57 client.id = employee-schemahistory
2024-02-14 11:50:57 compression.type = none
2024-02-14 11:50:57 connections.max.idle.ms = 540000
2024-02-14 11:50:57 delivery.timeout.ms = 120000
2024-02-14 11:50:57 enable.idempotence = false
2024-02-14 11:50:57 interceptor.classes = []
2024-02-14 11:50:57 key.serializer = class org.apache.kafka.common.serialization.StringSerializer
2024-02-14 11:50:57 linger.ms = 0
2024-02-14 11:50:57 max.block.ms = 10000
2024-02-14 11:50:57 max.in.flight.requests.per.connection = 5
2024-02-14 11:50:57 max.request.size = 1048576
2024-02-14 11:50:57 metadata.max.age.ms = 300000
2024-02-14 11:50:57 metadata.max.idle.ms = 300000
2024-02-14 11:50:57 metric.reporters = []
2024-02-14 11:50:57 metrics.num.samples = 2
2024-02-14 11:50:57 metrics.recording.level = INFO
2024-02-14 11:50:57 metrics.sample.window.ms = 30000
2024-02-14 11:50:57 partitioner.adaptive.partitioning.enable = true
2024-02-14 11:50:57 partitioner.availability.timeout.ms = 0
2024-02-14 11:50:57 partitioner.class = null
2024-02-14 11:50:57 partitioner.ignore.keys = false
2024-02-14 11:50:57 receive.buffer.bytes = 32768
2024-02-14 11:50:57 reconnect.backoff.max.ms = 1000
2024-02-14 11:50:57 reconnect.backoff.ms = 50
2024-02-14 11:50:57 request.timeout.ms = 30000
2024-02-14 11:50:57 retries = 1
2024-02-14 11:50:57 retry.backoff.ms = 100
2024-02-14 11:50:57 sasl.client.callback.handler.class = null
2024-02-14 11:50:57 sasl.jaas.config = null
2024-02-14 11:50:57 sasl.kerberos.kinit.cmd = /usr/bin/kinit
2024-02-14 11:50:57 sasl.kerberos.min.time.before.relogin = 60000
2024-02-14 11:50:57 sasl.kerberos.service.name = null
2024-02-14 11:50:57 sasl.kerberos.ticket.renew.jitter = 0.05
2024-02-14 11:50:57 sasl.kerberos.ticket.renew.window.factor = 0.8
2024-02-14 11:50:57 sasl.login.callback.handler.class = null
2024-02-14 11:50:57 sasl.login.class = null
2024-02-14 11:50:57 sasl.login.connect.timeout.ms = null
2024-02-14 11:50:57 sasl.login.read.timeout.ms = null
2024-02-14 11:50:57 sasl.login.refresh.buffer.seconds = 300
2024-02-14 11:50:57 sasl.login.refresh.min.period.seconds = 60
2024-02-14 11:50:57 sasl.login.refresh.window.factor = 0.8
2024-02-14 11:50:57 sasl.login.refresh.window.jitter = 0.05
2024-02-14 11:50:57 sasl.login.retry.backoff.max.ms = 10000
2024-02-14 11:50:57 sasl.login.retry.backoff.ms = 100
2024-02-14 11:50:57 sasl.mechanism = GSSAPI
2024-02-14 11:50:57 sasl.oauthbearer.clock.skew.seconds = 30
2024-02-14 11:50:57 sasl.oauthbearer.expected.audience = null
2024-02-14 11:50:57 sasl.oauthbearer.expected.issuer = null
2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.url = null
2024-02-14 11:50:57 sasl.oauthbearer.scope.claim.name = scope
2024-02-14 11:50:57 sasl.oauthbearer.sub.claim.name = sub
2024-02-14 11:50:57 sasl.oauthbearer.token.endpoint.url = null
2024-02-14 11:50:57 security.protocol = PLAINTEXT
2024-02-14 11:50:57 security.providers = null
2024-02-14 11:50:57 send.buffer.bytes = 131072
2024-02-14 11:50:57 socket.connection.setup.timeout.max.ms = 30000
2024-02-14 11:50:57 socket.connection.setup.timeout.ms = 10000
2024-02-14 11:50:57 ssl.cipher.suites = null
2024-02-14 11:50:57 ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
2024-02-14 11:50:57 ssl.endpoint.identification.algorithm = https
2024-02-14 11:50:57 ssl.engine.factory.class = null
2024-02-14 11:50:57 ssl.key.password = null
2024-02-14 11:50:57 ssl.keymanager.algorithm = SunX509
2024-02-14 11:50:57 ssl.keystore.certificate.chain = null
2024-02-14 11:50:57 ssl.keystore.key = null
2024-02-14 11:50:57 ssl.keystore.location = null
2024-02-14 11:50:57 ssl.keystore.password = null
2024-02-14 11:50:57 ssl.keystore.type = JKS
2024-02-14 11:50:57 ssl.protocol = TLSv1.3
2024-02-14 11:50:57 ssl.provider = null
2024-02-14 11:50:57 ssl.secure.random.implementation = null
2024-02-14 11:50:57 ssl.trustmanager.algorithm = PKIX
2024-02-14 11:50:57 ssl.truststore.certificates = null
2024-02-14 11:50:57 ssl.truststore.location = null
2024-02-14 11:50:57 ssl.truststore.password = null
2024-02-14 11:50:57 ssl.truststore.type = JKS
2024-02-14 11:50:57 transaction.timeout.ms = 60000
2024-02-14 11:50:57 transactional.id = null
2024-02-14 11:50:57 value.serializer = class org.apache.kafka.common.serialization.StringSerializer
2024-02-14 11:50:57 [org.apache.kafka.clients.producer.ProducerConfig]
2024-02-14 11:50:57 2024-02-14 10:50:57,794 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-02-14 11:50:57 2024-02-14 10:50:57,794 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-02-14 11:50:57 2024-02-14 10:50:57,794 INFO || Kafka startTimeMs: 1707907857794 [org.apache.kafka.common.utils.AppInfoParser]
2024-02-14 11:50:57 2024-02-14 10:50:57,797 INFO || [Producer clientId=employee-schemahistory] Cluster ID: rDanNOBZT4KnWXmQRvMjlQ [org.apache.kafka.clients.Metadata]
2024-02-14 11:50:57 2024-02-14 10:50:57,810 INFO || Closing connection before starting schema recovery [io.debezium.connector.mysql.MySqlConnectorTask]
2024-02-14 11:50:57 2024-02-14 10:50:57,811 INFO || Connection gracefully closed [io.debezium.jdbc.JdbcConnection]
2024-02-14 11:50:57 2024-02-14 10:50:57,811 INFO || Connector started for the first time, database schema history recovery will not be executed [io.debezium.connector.mysql.MySqlConnectorTask]
2024-02-14 11:50:57 2024-02-14 10:50:57,812 INFO || ConsumerConfig values:
2024-02-14 11:50:57 allow.auto.create.topics = true
2024-02-14 11:50:57 auto.commit.interval.ms = 5000
2024-02-14 11:50:57 auto.include.jmx.reporter = true
2024-02-14 11:50:57 auto.offset.reset = earliest
2024-02-14 11:50:57 bootstrap.servers = [kafka:9092]
2024-02-14 11:50:57 check.crcs = true
2024-02-14 11:50:57 client.dns.lookup = use_all_dns_ips
2024-02-14 11:50:57 client.id = employee-schemahistory
2024-02-14 11:50:57 client.rack =
2024-02-14 11:50:57 connections.max.idle.ms = 540000
2024-02-14 11:50:57 default.api.timeout.ms = 60000
2024-02-14 11:50:57 enable.auto.commit = false
2024-02-14 11:50:57 exclude.internal.topics = true
2024-02-14 11:50:57 fetch.max.bytes = 52428800
2024-02-14 11:50:57 fetch.max.wait.ms = 500
2024-02-14 11:50:57 fetch.min.bytes = 1
2024-02-14 11:50:57 group.id = employee-schemahistory
2024-02-14 11:50:57 group.instance.id = null
2024-02-14 11:50:57 heartbeat.interval.ms = 3000
2024-02-14 11:50:57 interceptor.classes = []
2024-02-14 11:50:57 internal.leave.group.on.close = true
2024-02-14 11:50:57 internal.throw.on.fetch.stable.offset.unsupported = false
2024-02-14 11:50:57 isolation.level = read_uncommitted
2024-02-14 11:50:57 key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
2024-02-14 11:50:57 max.partition.fetch.bytes = 1048576
2024-02-14 11:50:57 max.poll.interval.ms = 300000
2024-02-14 11:50:57 max.poll.records = 500
2024-02-14 11:50:57 metadata.max.age.ms = 300000
2024-02-14 11:50:57 metric.reporters = []
2024-02-14 11:50:57 metrics.num.samples = 2
2024-02-14 11:50:57 metrics.recording.level = INFO
2024-02-14 11:50:57 metrics.sample.window.ms = 30000
2024-02-14 11:50:57 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
2024-02-14 11:50:57 receive.buffer.bytes = 65536
2024-02-14 11:50:57 reconnect.backoff.max.ms = 1000
2024-02-14 11:50:57 reconnect.backoff.ms = 50
2024-02-14 11:50:57 request.timeout.ms = 30000
2024-02-14 11:50:57 retry.backoff.ms = 100
2024-02-14 11:50:57 sasl.client.callback.handler.class = null
2024-02-14 11:50:57 sasl.jaas.config = null
2024-02-14 11:50:57 sasl.kerberos.kinit.cmd = /usr/bin/kinit
2024-02-14 11:50:57 sasl.kerberos.min.time.before.relogin = 60000
2024-02-14 11:50:57 sasl.kerberos.service.name = null
2024-02-14 11:50:57 sasl.kerberos.ticket.renew.jitter = 0.05
2024-02-14 11:50:57 sasl.kerberos.ticket.renew.window.factor = 0.8
2024-02-14 11:50:57 sasl.login.callback.handler.class = null
2024-02-14 11:50:57 sasl.login.class = null
2024-02-14 11:50:57 sasl.login.connect.timeout.ms = null
2024-02-14 11:50:57 sasl.login.read.timeout.ms = null
2024-02-14 11:50:57 sasl.login.refresh.buffer.seconds = 300
2024-02-14 11:50:57 sasl.login.refresh.min.period.seconds = 60
2024-02-14 11:50:57 sasl.login.refresh.window.factor = 0.8
2024-02-14 11:50:57 sasl.login.refresh.window.jitter = 0.05
2024-02-14 11:50:57 sasl.login.retry.backoff.max.ms = 10000
2024-02-14 11:50:57 sasl.login.retry.backoff.ms = 100
2024-02-14 11:50:57 sasl.mechanism = GSSAPI
2024-02-14 11:50:57 sasl.oauthbearer.clock.skew.seconds = 30
2024-02-14 11:50:57 sasl.oauthbearer.expected.audience = null
2024-02-14 11:50:57 sasl.oauthbearer.expected.issuer = null
2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.url = null
2024-02-14 11:50:57 sasl.oauthbearer.scope.claim.name = scope
2024-02-14 11:50:57 sasl.oauthbearer.sub.claim.name = sub
2024-02-14 11:50:57 sasl.oauthbearer.token.endpoint.url = null
2024-02-14 11:50:57 security.protocol = PLAINTEXT
2024-02-14 11:50:57 security.providers = null
2024-02-14 11:50:57 send.buffer.bytes = 131072
2024-02-14 11:50:57 session.timeout.ms = 10000
2024-02-14 11:50:57 socket.connection.setup.timeout.max.ms = 30000
2024-02-14 11:50:57 socket.connection.setup.timeout.ms = 10000
2024-02-14 11:50:57 ssl.cipher.suites = null
2024-02-14 11:50:57 ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
2024-02-14 11:50:57 ssl.endpoint.identification.algorithm = https
2024-02-14 11:50:57 ssl.engine.factory.class = null
2024-02-14 11:50:57 ssl.key.password = null
2024-02-14 11:50:57 ssl.keymanager.algorithm = SunX509
2024-02-14 11:50:57 ssl.keystore.certificate.chain = null
2024-02-14 11:50:57 ssl.keystore.key = null
2024-02-14 11:50:57 ssl.keystore.location = null
2024-02-14 11:50:57 ssl.keystore.password = null
2024-02-14 11:50:57 ssl.keystore.type = JKS
2024-02-14 11:50:57 ssl.protocol = TLSv1.3
2024-02-14 11:50:57 ssl.provider = null
2024-02-14 11:50:57 ssl.secure.random.implementation = null
2024-02-14 11:50:57 ssl.trustmanager.algorithm = PKIX
2024-02-14 11:50:57 ssl.truststore.certificates = null
2024-02-14 11:50:57 ssl.truststore.location = null
2024-02-14 11:50:57 ssl.truststore.password = null
2024-02-14 11:50:57 ssl.truststore.type = JKS
2024-02-14 11:50:57 value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
2024-02-14 11:50:57 [org.apache.kafka.clients.consumer.ConsumerConfig]
2024-02-14 11:50:57 2024-02-14 10:50:57,814 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-02-14 11:50:57 2024-02-14 10:50:57,814 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-02-14 11:50:57 2024-02-14 10:50:57,814 INFO || Kafka startTimeMs: 1707907857814 [org.apache.kafka.common.utils.AppInfoParser]
2024-02-14 11:50:57 2024-02-14 10:50:57,816 INFO || [Consumer clientId=employee-schemahistory, groupId=employee-schemahistory] Cluster ID: rDanNOBZT4KnWXmQRvMjlQ [org.apache.kafka.clients.Metadata]
2024-02-14 11:50:57 2024-02-14 10:50:57,818 INFO || [Consumer clientId=employee-schemahistory, groupId=employee-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-14 11:50:57 2024-02-14 10:50:57,818 INFO || [Consumer clientId=employee-schemahistory, groupId=employee-schemahistory] Request joining group due to: consumer pro-actively leaving the group [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
2024-02-14 11:50:57 2024-02-14 10:50:57,818 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2024-02-14 11:50:57 2024-02-14 10:50:57,818 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2024-02-14 11:50:57 2024-02-14 10:50:57,819 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2024-02-14 11:50:57 2024-02-14 10:50:57,819 INFO || App info kafka.consumer for employee-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2024-02-14 11:50:57 2024-02-14 10:50:57,819 INFO || AdminClientConfig values:
2024-02-14 11:50:57 auto.include.jmx.reporter = true
2024-02-14 11:50:57 bootstrap.servers = [kafka:9092]
2024-02-14 11:50:57 client.dns.lookup = use_all_dns_ips
2024-02-14 11:50:57 client.id = employee-schemahistory
2024-02-14 11:50:57 connections.max.idle.ms = 300000
2024-02-14 11:50:57 default.api.timeout.ms = 60000
2024-02-14 11:50:57 metadata.max.age.ms = 300000
2024-02-14 11:50:57 metric.reporters = []
2024-02-14 11:50:57 metrics.num.samples = 2
2024-02-14 11:50:57 metrics.recording.level = INFO
2024-02-14 11:50:57 metrics.sample.window.ms = 30000
2024-02-14 11:50:57 receive.buffer.bytes = 65536
2024-02-14 11:50:57 reconnect.backoff.max.ms = 1000
2024-02-14 11:50:57 reconnect.backoff.ms = 50
2024-02-14 11:50:57 request.timeout.ms = 30000
2024-02-14 11:50:57 retries = 1
2024-02-14 11:50:57 retry.backoff.ms = 100
2024-02-14 11:50:57 sasl.client.callback.handler.class = null
2024-02-14 11:50:57 sasl.jaas.config = null
2024-02-14 11:50:57 sasl.kerberos.kinit.cmd = /usr/bin/kinit
2024-02-14 11:50:57 sasl.kerberos.min.time.before.relogin = 60000
2024-02-14 11:50:57 sasl.kerberos.service.name = null
2024-02-14 11:50:57 sasl.kerberos.ticket.renew.jitter = 0.05
2024-02-14 11:50:57 sasl.kerberos.ticket.renew.window.factor = 0.8
2024-02-14 11:50:57 sasl.login.callback.handler.class = null
2024-02-14 11:50:57 sasl.login.class = null
2024-02-14 11:50:57 sasl.login.connect.timeout.ms = null
2024-02-14 11:50:57 sasl.login.read.timeout.ms = null
2024-02-14 11:50:57 sasl.login.refresh.buffer.seconds = 300
2024-02-14 11:50:57 sasl.login.refresh.min.period.seconds = 60
2024-02-14 11:50:57 sasl.login.refresh.window.factor = 0.8
2024-02-14 11:50:57 sasl.login.refresh.window.jitter = 0.05
2024-02-14 11:50:57 sasl.login.retry.backoff.max.ms = 10000
2024-02-14 11:50:57 sasl.login.retry.backoff.ms = 100
2024-02-14 11:50:57 sasl.mechanism = GSSAPI
2024-02-14 11:50:57 sasl.oauthbearer.clock.skew.seconds = 30
2024-02-14 11:50:57 sasl.oauthbearer.expected.audience = null
2024-02-14 11:50:57 sasl.oauthbearer.expected.issuer = null
2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
2024-02-14 11:50:57 sasl.oauthbearer.jwks.endpoint.url = null
2024-02-14 11:50:57 sasl.oauthbearer.scope.claim.name = scope
2024-02-14 11:50:57 sasl.oauthbearer.sub.claim.name = sub
2024-02-14 11:50:57 sasl.oauthbearer.token.endpoint.url = null
2024-02-14 11:50:57 security.protocol = PLAINTEXT
2024-02-14 11:50:57 security.providers = null
2024-02-14 11:50:57 send.buffer.bytes = 131072
2024-02-14 11:50:57 socket.connection.setup.timeout.max.ms = 30000
2024-02-14 11:50:57 socket.connection.setup.timeout.ms = 10000
2024-02-14 11:50:57 ssl.cipher.suites = null
2024-02-14 11:50:57 ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
2024-02-14 11:50:57 ssl.endpoint.identification.algorithm = https
2024-02-14 11:50:57 ssl.engine.factory.class = null
2024-02-14 11:50:57 ssl.key.password = null
2024-02-14 11:50:57 ssl.keymanager.algorithm = SunX509
2024-02-14 11:50:57 ssl.keystore.certificate.chain = null
2024-02-14 11:50:57 ssl.keystore.key = null
2024-02-14 11:50:57 ssl.keystore.location = null
2024-02-14 11:50:57 ssl.keystore.password = null
2024-02-14 11:50:57 ssl.keystore.type = JKS
2024-02-14 11:50:57 ssl.protocol = TLSv1.3
2024-02-14 11:50:57 ssl.provider = null
2024-02-14 11:50:57 ssl.secure.random.implementation = null
2024-02-14 11:50:57 ssl.trustmanager.algorithm = PKIX
2024-02-14 11:50:57 ssl.truststore.certificates = null
2024-02-14 11:50:57 ssl.truststore.location = null
2024-02-14 11:50:57 ssl.truststore.password = null
2024-02-14 11:50:57 ssl.truststore.type = JKS
2024-02-14 11:50:57 [org.apache.kafka.clients.admin.AdminClientConfig]
2024-02-14 11:50:57 2024-02-14 10:50:57,820 INFO || These configurations '[value.serializer, acks, batch.size, max.block.ms, buffer.memory, key.serializer, linger.ms]' were supplied but are not used yet. [org.apache.kafka.clients.admin.AdminClientConfig]
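
Editor's note: the "supplied but are not used yet" notice above is expected here. Debezium's KafkaSchemaHistory builds its producer, consumer, and admin client from one shared property set, so producer-only keys (acks, batch.size, linger.ms, ...) are simply ignored by the admin client. If the schema history clients need different settings, Debezium exposes pass-through prefixes for them in the connector configuration; a sketch with illustrative values that do not come from this log:

schema.history.internal.producer.acks=all
schema.history.internal.producer.retries=10
schema.history.internal.consumer.session.timeout.ms=30000
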
2024-02-14 11:50:57 2024-02-14 10:50:57,820 INFO || Kafka version: 3.6.1 [org.apache.kafka.common.utils.AppInfoParser]
2024-02-14 11:50:57 2024-02-14 10:50:57,820 INFO || Kafka commitId: 5e3c2b738d253ff5 [org.apache.kafka.common.utils.AppInfoParser]
2024-02-14 11:50:57 2024-02-14 10:50:57,820 INFO || Kafka startTimeMs: 1707907857820 [org.apache.kafka.common.utils.AppInfoParser]
2024-02-14 11:50:57 2024-02-14 10:50:57,841 INFO || Database schema history topic '(name=schema-changes.employee, numPartitions=1, replicationFactor=default, replicasAssignments=null, configs={cleanup.policy=delete, retention.ms=9223372036854775807, retention.bytes=-1})' created [io.debezium.storage.kafka.history.KafkaSchemaHistory]
2024-02-14 11:50:57 2024-02-14 10:50:57,841 INFO || App info kafka.admin.client for employee-schemahistory unregistered [org.apache.kafka.common.utils.AppInfoParser]
2024-02-14 11:50:57 2024-02-14 10:50:57,842 INFO || Metrics scheduler closed [org.apache.kafka.common.metrics.Metrics]
2024-02-14 11:50:57 2024-02-14 10:50:57,842 INFO || Closing reporter org.apache.kafka.common.metrics.JmxReporter [org.apache.kafka.common.metrics.Metrics]
2024-02-14 11:50:57 2024-02-14 10:50:57,842 INFO || Metrics reporters closed [org.apache.kafka.common.metrics.Metrics]
2024-02-14 11:50:57 2024-02-14 10:50:57,842 INFO || Reconnecting after finishing schema recovery [io.debezium.connector.mysql.MySqlConnectorTask]
2024-02-14 11:50:57 2024-02-14 10:50:57,854 INFO || Requested thread factory for connector MySqlConnector, id = employee named = SignalProcessor [io.debezium.util.Threads]
2024-02-14 11:50:57 2024-02-14 10:50:57,861 INFO || Requested thread factory for connector MySqlConnector, id = employee named = change-event-source-coordinator [io.debezium.util.Threads]
2024-02-14 11:50:57 2024-02-14 10:50:57,861 INFO || Requested thread factory for connector MySqlConnector, id = employee named = blocking-snapshot [io.debezium.util.Threads]
2024-02-14 11:50:57 2024-02-14 10:50:57,862 INFO || Creating thread debezium-mysqlconnector-employee-change-event-source-coordinator [io.debezium.util.Threads]
2024-02-14 11:50:57 2024-02-14 10:50:57,864 INFO MySQL|employee|snapshot Metrics registered [io.debezium.pipeline.ChangeEventSourceCoordinator]
2024-02-14 11:50:57 2024-02-14 10:50:57,864 INFO MySQL|employee|snapshot Context created [io.debezium.pipeline.ChangeEventSourceCoordinator]
2024-02-14 11:50:57 2024-02-14 10:50:57,865 INFO || SignalProcessor started. Scheduling it every 5000ms [io.debezium.pipeline.signal.SignalProcessor]
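
Editor's note: the schema history topic created above uses cleanup.policy=delete with retention.ms=9223372036854775807 (effectively infinite retention), which matches Debezium's guidance that the history topic must keep every DDL record and must not be compacted. If the topic needs checking from inside the broker container, something along these lines should work; the script path and the kafka:9092 listener are assumptions based on this log, not part of it:

kafka-topics.sh --bootstrap-server kafka:9092 --describe --topic schema-changes.employee
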
2024-02-14 11:50:57 2024-02-14 10:50:57,865 INFO || Creating thread debezium-mysqlconnector-employee-SignalProcessor [io.debezium.util.Threads]
2024-02-14 11:50:57 2024-02-14 10:50:57,866 INFO || WorkerSourceTask{id=employee-0} Source task finished initialization and start [org.apache.kafka.connect.runtime.AbstractWorkerSourceTask]
2024-02-14 11:50:57 2024-02-14 10:50:57,867 INFO MySQL|employee|snapshot No previous offset has been found [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
2024-02-14 11:50:57 2024-02-14 10:50:57,867 INFO MySQL|employee|snapshot According to the connector configuration both schema and data will be snapshotted [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
2024-02-14 11:50:57 2024-02-14 10:50:57,868 INFO MySQL|employee|snapshot Snapshot step 1 - Preparing [io.debezium.relational.RelationalSnapshotChangeEventSource]
2024-02-14 11:50:57 2024-02-14 10:50:57,871 INFO MySQL|employee|snapshot Snapshot step 2 - Determining captured tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2024-02-14 11:50:57 2024-02-14 10:50:57,871 INFO MySQL|employee|snapshot Read list of available databases [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
2024-02-14 11:50:57 2024-02-14 10:50:57,872 INFO MySQL|employee|snapshot list of available databases is: [information_schema, mydb, mysql, performance_schema] [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
2024-02-14 11:50:57 2024-02-14 10:50:57,872 INFO MySQL|employee|snapshot Read list of available tables in each database [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
2024-02-14 11:50:57 2024-02-14 10:50:57,874 INFO MySQL|employee|snapshot snapshot continuing with database(s): [mydb] [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
2024-02-14 11:50:57 2024-02-14 10:50:57,874 INFO MySQL|employee|snapshot Adding table mydb.employee to the list of capture schema tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2024-02-14 11:50:57 2024-02-14 10:50:57,875 INFO MySQL|employee|snapshot Created connection pool with 1 threads [io.debezium.relational.RelationalSnapshotChangeEventSource]
2024-02-14 11:50:57 2024-02-14 10:50:57,875 INFO MySQL|employee|snapshot Snapshot step 3 - Locking captured tables [mydb.employee] [io.debezium.relational.RelationalSnapshotChangeEventSource]
2024-02-14 11:50:57 2024-02-14 10:50:57,877 INFO MySQL|employee|snapshot Flush and obtain global read lock to prevent writes to database [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
2024-02-14 11:50:57 2024-02-14 10:50:57,879 INFO MySQL|employee|snapshot Snapshot step 4 - Determining snapshot offset [io.debezium.relational.RelationalSnapshotChangeEventSource]
2024-02-14 11:50:57 2024-02-14 10:50:57,881 INFO MySQL|employee|snapshot Read binlog position of MySQL primary server [io.debezium.connector.mysql.strategy.mysql.MySqlConnectorAdapter]
2024-02-14 11:50:57 2024-02-14 10:50:57,882 INFO MySQL|employee|snapshot Snapshot step 5 - Reading structure of captured tables [io.debezium.relational.RelationalSnapshotChangeEventSource]
2024-02-14 11:50:57 2024-02-14 10:50:57,882 INFO MySQL|employee|snapshot All eligible tables schema should be captured, capturing: [mydb.employee] [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,080 INFO MySQL|employee|snapshot Reading structure of database 'mydb' [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
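
Editor's note: snapshot steps 3 and 4 above correspond, roughly, to the connector taking a global read lock, reading the binlog coordinates, and later releasing the lock once the table structures have been captured (the release shows up a few lines further down). In SQL terms this is approximately the following sequence; it is a sketch of the statements involved, not a verbatim trace from this log:

FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
-- table structures are read and written to the schema history topic while the lock is held, then:
UNLOCK TABLES;
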
2024-02-14 11:50:58 2024-02-14 10:50:58,102 INFO MySQL|employee|snapshot Snapshot step 6 - Persisting schema history [io.debezium.relational.RelationalSnapshotChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,123 INFO MySQL|employee|snapshot Releasing global read lock to enable MySQL writes [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,124 INFO MySQL|employee|snapshot Writes to MySQL tables prevented for a total of 00:00:00.245 [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,124 INFO MySQL|employee|snapshot Snapshot step 7 - Snapshotting data [io.debezium.relational.RelationalSnapshotChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,124 INFO MySQL|employee|snapshot Creating snapshot worker pool with 1 worker thread(s) [io.debezium.relational.RelationalSnapshotChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,125 INFO MySQL|employee|snapshot For table 'mydb.employee' using select statement: 'SELECT `id`, `name` FROM `mydb`.`employee`' [io.debezium.relational.RelationalSnapshotChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,126 INFO MySQL|employee|snapshot Estimated row count for table mydb.employee is OptionalLong[2] [io.debezium.connector.mysql.MySqlSnapshotChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,127 INFO MySQL|employee|snapshot Exporting data from table 'mydb.employee' (1 of 1 tables) [io.debezium.relational.RelationalSnapshotChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,134 INFO MySQL|employee|snapshot Finished exporting 2 records for table 'mydb.employee' (1 of 1 tables); total duration '00:00:00.007' [io.debezium.relational.RelationalSnapshotChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,136 INFO MySQL|employee|snapshot Snapshot - Final stage [io.debezium.pipeline.source.AbstractSnapshotChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,136 INFO MySQL|employee|snapshot Snapshot completed [io.debezium.pipeline.source.AbstractSnapshotChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,156 INFO MySQL|employee|snapshot Snapshot ended with SnapshotResult [status=COMPLETED, offset=MySqlOffsetContext [sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=SourceInfo [currentGtid=null, currentBinlogFilename=mysql-bin.000002, currentBinlogPosition=1643, currentRowNumber=0, serverId=0, sourceTime=2024-02-14T10:50:58Z, threadId=-1, currentQuery=null, tableIds=[mydb.employee], databaseName=mydb], snapshotCompleted=true, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet=null, currentGtidSet=null, restartBinlogFilename=mysql-bin.000002, restartBinlogPosition=1643, restartRowsToSkip=0, restartEventsToSkip=0, currentEventLengthInBytes=0, inTransaction=false, transactionId=null, incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]]] [io.debezium.pipeline.ChangeEventSourceCoordinator]
2024-02-14 11:50:58 2024-02-14 10:50:58,158 INFO MySQL|employee|streaming Requested thread factory for connector MySqlConnector, id = employee named = binlog-client [io.debezium.util.Threads]
2024-02-14 11:50:58 2024-02-14 10:50:58,161 INFO MySQL|employee|streaming Starting streaming [io.debezium.pipeline.ChangeEventSourceCoordinator]
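
Editor's note: with JsonConverter and schemas.enable=true on both converters, each of the two snapshotted rows reported above is emitted to the routed topic as a Debezium change event wrapped in a schema/payload pair. A trimmed sketch of the payload for one such snapshot record follows; the row values and ts_ms are hypothetical, while the envelope shape, op = "r" for snapshot reads, and the source fields (logical name, database, table, binlog file/position) follow from this log:

{
  "payload": {
    "before": null,
    "after": { "id": 1, "name": "..." },
    "source": {
      "connector": "mysql",
      "name": "employee",
      "db": "mydb",
      "table": "employee",
      "file": "mysql-bin.000002",
      "pos": 1643,
      "snapshot": "true"
    },
    "op": "r",
    "ts_ms": 1707907858000
  }
}
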
2024-02-14 11:50:58 2024-02-14 10:50:58,162 INFO MySQL|employee|streaming Skip 0 events on streaming start [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,162 INFO MySQL|employee|streaming Skip 0 rows on streaming start [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,162 INFO MySQL|employee|streaming Creating thread debezium-mysqlconnector-employee-binlog-client [io.debezium.util.Threads]
2024-02-14 11:50:58 2024-02-14 10:50:58,163 INFO MySQL|employee|streaming Creating thread debezium-mysqlconnector-employee-binlog-client [io.debezium.util.Threads]
2024-02-14 11:50:58 Feb 14, 2024 10:50:58 AM com.github.shyiko.mysql.binlog.BinaryLogClient connect
2024-02-14 11:50:58 INFO: Connected to mariadb-slave:3306 at mysql-bin.000002/1643 (sid:184054, cid:10)
2024-02-14 11:50:58 2024-02-14 10:50:58,168 INFO MySQL|employee|binlog Connected to binlog at mariadb-slave:3306, starting at MySqlOffsetContext [sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=SourceInfo [currentGtid=null, currentBinlogFilename=mysql-bin.000002, currentBinlogPosition=1643, currentRowNumber=0, serverId=0, sourceTime=2024-02-14T10:50:58Z, threadId=-1, currentQuery=null, tableIds=[mydb.employee], databaseName=mydb], snapshotCompleted=true, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet=null, currentGtidSet=null, restartBinlogFilename=mysql-bin.000002, restartBinlogPosition=1643, restartRowsToSkip=0, restartEventsToSkip=0, currentEventLengthInBytes=0, inTransaction=false, transactionId=null, incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]] [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,168 INFO MySQL|employee|streaming Waiting for keepalive thread to start [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,168 INFO MySQL|employee|binlog Creating thread debezium-mysqlconnector-employee-binlog-client [io.debezium.util.Threads]
2024-02-14 11:50:58 2024-02-14 10:50:58,271 INFO MySQL|employee|streaming Keepalive thread is running [io.debezium.connector.mysql.MySqlStreamingChangeEventSource]
2024-02-14 11:50:58 2024-02-14 10:50:58,381 WARN || [Producer clientId=connector-producer-employee-0] Error while fetching metadata with correlation id 3 : {employee=LEADER_NOT_AVAILABLE} [org.apache.kafka.clients.NetworkClient]
2024-02-14 11:50:58 2024-02-14 10:50:58,507 WARN || [Producer clientId=connector-producer-employee-0] Error while fetching metadata with correlation id 6 : {cdc.employee.employee.v1=LEADER_NOT_AVAILABLE} [org.apache.kafka.clients.NetworkClient]
2024-02-14 11:50:58 2024-02-14 10:50:58,610 INFO || [Producer clientId=connector-producer-employee-0] Resetting the last seen epoch of partition employee-0 to 0 since the associated topicId changed from null to pDdAuB1OQyCIfhHd87PTUg [org.apache.kafka.clients.Metadata]
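
Editor's note: the two LEADER_NOT_AVAILABLE warnings are transient. The "employee" topic (which receives schema-change events because include.schema.changes=true) and the routed data topic cdc.employee.employee.v1 are being auto-created on first write, and the producer retries until a partition leader is available; the subsequent "Resetting the last seen epoch" line shows the metadata settling. Once the task is running, the routed topic can be inspected with the console consumer. The kafka:29092 listener below is an assumption based on the worker's bootstrap setting and only resolves from inside the same Docker network; adjust the address for wherever the command is run:

kafka-console-consumer.sh --bootstrap-server kafka:29092 --topic cdc.employee.employee.v1 --from-beginning
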